Orchestrator router #1761
Conversation
Force-pushed from 12197ab to 2f44433
Force-pushed from f43f8d0 to 77f23aa
@@ -18,6 +18,10 @@ RUN apt-get update \
RUN update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-8 30 \
    && update-alternatives --install /usr/bin/clang clang /usr/bin/clang-8 30

RUN GRPC_HEALTH_PROBE_VERSION=v0.3.6 && \
This CLI tool can be used in k8s environments for liveness and startup probes instead of vanilla HTTP probes, which won't work out of the box for a gRPC server. See livepeer/livepeer-infra#416.
r.srv = s

net.RegisterOrchestratorServer(s, r)
grpc_health_v1.RegisterHealthServer(s, health.NewServer())
This is required for the gRPC server to implement the gRPC health checking protocol (https://github.com/grpc/grpc/blob/master/doc/health-checking.md), which allows us to use tools like grpc_health_probe to check the health of the server.
	return nil
}

func getOrchestratorInfo(ctx context.Context, uris []*url.URL, req *net.OrchestratorRequest) (*net.OrchestratorInfo, error) {
This function is pretty similar to some of the code in the discovery package, so an initial thought was to reuse the code there that implements the common.OrchestratorPool interface. But, that approach brings some additional dependencies, such as a broadcaster interface, a capabilities interface and a suspender interface, which are not requirements here (yet). So I chose to just add this relatively small piece of code for now.
LGTM after squashing the fixup commits 👍
Force-pushed from ec7a640 to 5b0a1ee
One small comment, otherwise looks good.
LGTM, though CircleCI is failing.
Force-pushed from 5b0a1ee to c1af467
Ah, looks like the intermediate Docker image was overwritten by another PR. Fixed by re-building to make sure the intermediate Docker image for CI contains the livepeer_router binary.
What does this pull request do? Explain your changes. (required)
This PR adds a new buildable binary, livepeer_router. The binary runs a gRPC server that accepts GetOrchestrator requests from Bs and forwards OrchestratorInfo responses from Os. A router is configured with a set of Os and is responsible for routing a requesting B to one of the Os by passing the OrchestratorInfo response from that O to the B. The B will then submit segments to the URI indicated in the Transcoder field of the OrchestratorInfo response. If any Os go offline, are at capacity or have network transit issues, the router can route the B to a different O. This router should also help mainnet node operators start running multiple Os with a single on-chain registered ETH address instead of running each O with a different on-chain registered ETH address.

At the moment, the router sends a received GetOrchestrator request to all of its Os and forwards the first OrchestratorInfo response back to the B. As a result, a requesting B will be routed to the O that is closest to the router. A more sophisticated routing algorithm can be introduced separately. For example, an improved routing algorithm might involve doing a geoip lookup on the Os and the requesting B and routing the B to the O with the smallest distance based on lat/lng coordinates.
I chose to include this router functionality in a separate binary because the livepeer binary already has a lot of configuration options and I was hesitant to introduce another "node type" there, especially since this is a relatively early concept. Plus, this allows for some experimentation with this functionality without having to incorporate it into the code for the livepeer binary just yet. If there is a desire to move this functionality into the livepeer binary, I think we can consider doing so later.

Specific updates (required)
See commit history.
How did you test each of these updates (required)
Tested manually.
Checklist:
- All tests in ./test.sh pass