cmd, common, eth: cache orchestrator stake #1216
Conversation
Changed the base branch to #1182 for now and added caching of the stake when starting the node.
Dropped the commits and rebased on the latest #1182.
for _, o := range orchs {
	go getStake(o)
}
Creating a goroutine for each query feels like a premature optimization at this point IMO, since it introduces a piece of code that needs to be concurrency-safe (and ideally we would test for it). Caching the stake for each active O isn't that time-sensitive since it just happens on startup. Additionally, a simpler optimization we could use in the future is to write a wrapper contract that aggregates all the stake values for a list of Os, so that we can fetch the results with a single query.
If I'm not mistaken, go-sqlite uses an RWMutex, so this should be thread-safe regardless, since we're only writing the value to the DB and not to memory.
Yeah, good point. On second thought, I'm fine with the concurrent requests used here 👍 with the wrapper contract as a nice-to-have optimization later on that removes the need for any concurrency code.
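For illustration only, here is a minimal sketch of what a synchronized version of the loop above could look like. `getStake` and the orchestrator list come from the diff; the `sync.WaitGroup` wiring, the `[]string` type for the orchestrator list, and the error logging are assumptions, not the PR's actual code.

```go
package watcher

import (
	"log"
	"sync"
)

// cacheStakes fires one stake query per orchestrator and waits for all of
// them to finish. getStake is assumed to make the RPC call for the current
// round and write the result to the DB (serialized by the SQLite driver).
func cacheStakes(orchs []string, getStake func(addr string) error) {
	var wg sync.WaitGroup
	for _, o := range orchs {
		o := o // capture the loop variable for the goroutine
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := getStake(o); err != nil {
				log.Printf("error caching stake for %s: %v", o, err)
			}
		}()
	}
	wg.Wait()
}
```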
LGTM
What does this pull request do? Explain your changes. (required)
This PR caches the stake for orchestrators in the active set whenever a new round event is emitted.
The event can be either a removed log (re-org) or an added log (new round). In both cases we make an RPC call for the current round number and a subsequent RPC call for each O to fetch its stake for that specific round, and then update the orchestrator in the DB with the new stake.
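As a rough sketch of that flow (the field names and helper signatures below are illustrative assumptions, not the PR's exact API; the PR implements this via `OrchestratorWatcher`, `handleRoundEvent`, and `cacheOrchestratorStake`):

```go
package watcher

import (
	"log"
	"math/big"
)

// roundStakeCacher sketches the flow described above; the function fields
// stand in for the real RPC and DB calls.
type roundStakeCacher struct {
	currentRound func() (*big.Int, error)                            // RPC: current round number
	stakeFor     func(addr string, round *big.Int) (*big.Int, error) // RPC: one O's stake for a round
	updateOrch   func(addr string, stake *big.Int) error             // DB: persist the cached stake
	activeOrchs  func() ([]string, error)                            // active orchestrator set
}

// handleRoundEvent runs for every new round log, whether added or removed.
func (c *roundStakeCacher) handleRoundEvent() error {
	round, err := c.currentRound()
	if err != nil {
		return err
	}
	orchs, err := c.activeOrchs()
	if err != nil {
		return err
	}
	for _, o := range orchs {
		stake, err := c.stakeFor(o, round)
		if err != nil {
			log.Printf("could not fetch stake for %s: %v", o, err)
			continue
		}
		if err := c.updateOrch(o, stake); err != nil {
			return err
		}
	}
	return nil
}
```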
Specific updates (required)
- Added a `stake` column to the `orchestrators` table and updated the `UpdateOrch` and `SelectOrchs` methods accordingly (see the sketch after this list)
- Added a `RoundsWatcher` subscription to `OrchestratorWatcher`
- Added `handleRoundEvent` and `cacheOrchestratorStake` to `OrchestratorWatcher`
- Added `StubEthClient.GetTranscoderPoolForRound`
- Cache stake when the `DBOrchestratorPoolCache` is created
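A minimal sketch of the kind of schema and query change the first bullet describes. The exact SQL, column type, and key column used by the PR may differ; `ethereumAddr` as the key column and `INTEGER` as the stake type are assumptions.

```go
package db

import "database/sql"

// The orchestrators table gains a stake column; INTEGER is illustrative and
// the PR may store the value differently.
const addStakeColumn = "ALTER TABLE orchestrators ADD COLUMN stake INTEGER"

// updateOrchStake persists a cached stake value for a single orchestrator,
// roughly what the updated UpdateOrch method would do for the new column.
func updateOrchStake(dbh *sql.DB, ethereumAddr string, stake int64) error {
	_, err := dbh.Exec(
		"UPDATE orchestrators SET stake = ? WHERE ethereumAddr = ?",
		stake, ethereumAddr,
	)
	return err
}
```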
How did you test each of these updates (required)
Ran unit tests.
Does this pull request close any open issues?
Fixes #1213
Checklist:
- `./test.sh` tests pass