
Addition of prepare_beacon_proposer endpoint. #178

Merged
merged 5 commits into from
Feb 8, 2022

Conversation

mcdee
Contributor

@mcdee mcdee commented Nov 14, 2021

This adds the prepare_beacon_proposer endpoint, required for the merge. It follows on from discussion at sigp/lighthouse#2715

There has been some debate over how to handle the fact that this information is not persistent over beacon node restarts. The three options suggested are:

  • implicit expiry, e.g. the information expires 2 epochs after being provided or when the beacon node restarts, whichever is earlier
  • explicit expiry, e.g. each request object has an until_epoch field such that the information expires at the given epoch or when the beacon node restarts, whichever is earlier
  • no expiry, i.e. the information expires only when the beacon node restarts

Given that the first two options still lose the data when the beacon node restarts, the initial implementation uses the 'no expiry' option; however, this could be changed if there is a compelling reason to do so.

There is a fourth option, which is to persist the supplied information in the beacon node such that it survives beacon node restarts; however, this was generally considered to add unnecessary complexity to beacon nodes.
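For context on the options above, here is a minimal sketch of how a validator client might call the endpoint this PR adds. The path and body shape follow the spec text in this PR; the base URL, validator indices, and fee recipient address are placeholder values.

```python
import json
import urllib.request

# Placeholder values: the indices and recipient address are illustrative only.
preparations = [
    {"validator_index": "1", "fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09"},
    {"validator_index": "2", "fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09"},
]

def prepare_beacon_proposer(base_url: str, preparations: list) -> int:
    """POST the proposer preparations to the beacon node; returns the HTTP status."""
    req = urllib.request.Request(
        url=f"{base_url}/eth/v1/validator/prepare_beacon_proposer",
        data=json.dumps(preparations).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A VC would typically call this with the local beacon node's API address, e.g. `prepare_beacon_proposer("http://localhost:5052", preparations)`.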

@arnetheduck
Contributor

arnetheduck commented Nov 14, 2021

It seems harmless for validator clients to call this repeatedly? If so, it would alleviate any concerns about the beacon node "forgetting" - the VC can simply keep polling.

@mcdee
Contributor Author

mcdee commented Nov 14, 2021

Yes, from the description:

Note that because the information is not persistent across beacon node restarts it is recommended that either the beacon node is monitored for restarts or this information is refreshed by resending this request periodically (for example, each epoch).

It doesn't address all concerns about the beacon node restarting, as the validator could send the request and have the beacon node restart immediately afterwards, but given the various options available this seems to be the best approach.
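The per-epoch refresh recommended above can be sketched as simple bookkeeping on the VC side. `SECONDS_PER_SLOT` and `SLOTS_PER_EPOCH` are the mainnet preset values; the function names are illustrative, not part of the spec.

```python
SECONDS_PER_SLOT = 12   # mainnet preset
SLOTS_PER_EPOCH = 32

def current_epoch(genesis_time: int, now: int) -> int:
    """Epoch number for a given wall-clock time (seconds since the Unix epoch)."""
    return (now - genesis_time) // (SECONDS_PER_SLOT * SLOTS_PER_EPOCH)

def should_resend(last_sent_epoch, epoch: int) -> bool:
    """Resend the preparation request at least once per epoch, and
    immediately if it has never been sent (e.g. just after VC startup)."""
    return last_sent_epoch is None or epoch > last_sent_epoch
```

This repopulates a restarted beacon node within at most one epoch, at the cost of one small request per epoch per VC.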

@paulhauner
Contributor

This looks good to me, thanks for taking the initiative to write it up. I'll be implementing this in the coming days/weeks so I'll be sure to come back with any feedback.

@tbenr
Contributor

tbenr commented Nov 15, 2021

In Teku we were discussing the implementation of this API and some thoughts came up regarding the fee_recipient.

In particular, in scenarios where VCs use BC-as-a-service (i.e. Infura), the ability to hit this endpoint without any validator authentication would open up the possibility for an attacker to call that API and set a fee_recipient address they control, trying to “steal” the fees for each upcoming block proposer using that BC-as-a-service.

@mcdee
Contributor Author

mcdee commented Nov 15, 2021

Services such as infura already have authentication, so they could provide controls within their system to manage this (e.g. provide a list of allowed fee recipients for the given user).

I believe that there is also already a weak trust assumption for this API as a whole, as it's trivial to create a denial of service attack on any node that exposes the API publicly.

@tbenr
Contributor

tbenr commented Nov 15, 2021

Services such as infura already have authentication, so they could provide controls within their system to manage this (e.g. provide a list of allowed fee recipients for the given user).

I think it is more that you have to prove that you are in control of a given validator, because I can always open a new Infura account, trust my eth1 address, and then call the API passing the trusted address but providing any validator_index.

@mcdee
Contributor Author

mcdee commented Nov 15, 2021

Well, you could supply Infura with the list of your validator indices and sign them as proof. But this feels much more like an implementation detail for the service provider rather than a direct issue with the suggested endpoint (and I believe that the use of outsourced execution nodes is going to cause other issues as well, anyway).

@mcdee
Contributor Author

mcdee commented Nov 15, 2021

Sorry, one other point I meant to mention earlier. Even if the fee recipient information is corrupted in some way at the service provider, when the validator client receives the block proposal it should be confirming that it's happy with the fee recipient within the execution payload before signing the block. So although this could potentially be used to sabotage a block producer it wouldn't create a situation where an incorrect fee recipient automatically flows through to stolen transaction fees.
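A sketch of that pre-signing check; the function name and the case-insensitive hex comparison are illustrative choices, not spec requirements.

```python
def acceptable_fee_recipient(payload_fee_recipient: str, configured: str) -> bool:
    """Before signing a proposed block, confirm the execution payload's
    fee recipient matches the address this validator configured.
    Hex addresses are compared case-insensitively so that checksummed
    and lowercase forms of the same address are treated as equal."""
    return payload_fee_recipient.lower() == configured.lower()
```

A VC would refuse to sign (and could fall back to another beacon node) when this returns False.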

@mpetrunic
Collaborator

@mcdee should we merge it to a separate branch (for example merge) with separate rc releases?

@mcdee
Contributor Author

mcdee commented Nov 15, 2021

We could, or we could leave this floating as a PR until it has been implemented on a few testnets and people are happy with it (or tweaked it until they are). Happy either way.

@mpetrunic
Collaborator

Let's leave it floating unless more PRs show up.

@ajsutton
Contributor

Well, you could supply Infura with the list of your validator indices and sign them as proof. But this feels much more like an implementation detail for the service provider rather than a direct issue with the suggested endpoint (and I believe that the use of outsourced execution nodes is going to cause other issues as well, anyway).

Yeah, I think there's definitely a question here around how much this setup matters, but Infura probably isn't the best example of a problem: they're primarily exposing an API that's mostly there for read access, and it's kind of been a nice coincidence that it has supported being used as a backup for validator clients so far.

I think more interesting are cases like Blox Staking (I think) where they run the beacon node but the client runs the VC, as a way to provide a staking service that is less custodial. Maybe we still just leave it to them to sort out, but I hadn't seen any discussion of this kind of thing and I do think it's worth raising and being deliberate about the decision.

@mcdee
Contributor Author

mcdee commented Nov 15, 2021

Perhaps the answer here is to make the issue clear in the API documentation. Something along the lines of "there is no guarantee that the beacon node will use the supplied fee recipient, so on receipt of a proposed block the validator should confirm that it finds the fee recipient within the block acceptable before signing it" perhaps?

@ajsutton
Contributor

ajsutton commented Nov 15, 2021

Perhaps the answer here is to make the issue clear in the API documentation. Something along the lines of "there is no guarantee that the beacon node will use the supplied fee recipient, so on receipt of a proposed block the validator should confirm that it finds the fee recipient within the block acceptable before signing it" perhaps?

Yep, definitely worth mentioning.

I think if we had gone with the VC-driven model this wouldn't be an issue, because the preparePayload call would include the randao reveal, which is signed by the proposing validator. You could achieve the same thing here by also providing an aggregate signature from the included validators over the coinbase address to confirm you do own all the validators. I don't think that needs to be in the standard API as it adds overhead, but it could be a simple extension to solve this for anyone who needs it. I'll leave it up to them to convince VC implementors it's worth supporting. :)

edit: Fee recipient is per-validator so would just be an individual signature but otherwise same concept.

Comment on lines 6 to 9
required when proposing blocks for the given validators. The information
supplied for each validator index is considered persistent until overwritten
by new information for the given validator index, or until the beacon node
restarts.
Contributor


I'm not sure we should require this information to be persisted until restart. It seems reasonable for a beacon node to hold this information for a couple of epochs but expect it to be regularly updated - otherwise it winds up preparing blocks for validators that may have long since stopped using this node and the only way to stop that would be to restart the beacon node.

I'd suggest requiring it be maintained for a minimum of 2 epochs - if the VC refreshes each epoch it then handles restarts pretty well but there's "overlap" so the previous notification doesn't expire before the next one arrives.

Contributor Author


The concern with implicit expiry is that it isn't obvious to the validator client that it happens, or when it happens. Is the relatively low chance that the beacon node prepares a block for a validator that no longer cares enough of a concern compared to that of the beacon node not knowing which fee recipient to provide due to a tardy validator client refresh? I would also assume that most validators would be unlikely to change their fee recipient address so it would seem that we would be resending the same data every epoch for no real benefit.

If implicit expiry is considered desirable then we should be clear in the documentation as to exactly what that means. I'm not opposed to the change, just want to make sure that we're happy with the resultant requirement.

Contributor


I think that's why we'd want to have a minimum time the information is persisted for - that way validators do know that they have to resend the info within that amount of time. And a tardy information refresh isn't going to be an issue as long as there's some overlap - bear in mind the VC already has to call the BN at very precise timings to create blocks and attestations.

I don't mind if it's a reasonably long period but I do feel quite uncomfortable about persisting this with no way to reset it other than a restart. The key management API is built around the assumption that keys will be added and removed while the VC is running so it seems wrong to then design the BN API on the assumption that validators never move nodes. I agree it's not common and the impact is just extra load on the EL but it also seems pretty simple to avoid.

Contributor Author


I have updated the wording so that the data is transient, lasting for 2 full epochs plus whatever is left of the epoch in which the submission is made. This gives validator clients time to refresh the information each epoch, with the ability to miss one (or two) refreshes and still keep the beacon node up to date, but it also decays quickly enough that nodes shouldn't waste lots of cycles generating payloads for blocks that will never be requested.
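The expiry rule described here can be sketched as follows, assuming epoch-granular bookkeeping on the beacon node side: data submitted during epoch N remains valid through epochs N+1 and N+2 and expires at the start of epoch N+3. The function names are illustrative.

```python
def expiry_epoch(submission_epoch: int) -> int:
    # Valid for the remainder of the submission epoch plus 2 full epochs,
    # so an entry submitted in epoch N expires at the start of epoch N+3.
    return submission_epoch + 3

def is_expired(submission_epoch: int, current_epoch: int) -> bool:
    """True once the beacon node may discard the preparation data."""
    return current_epoch >= expiry_epoch(submission_epoch)
```

With the VC resending each epoch, the entry is always refreshed well before its expiry, and one or two missed refreshes are tolerated.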

@djrtwo
Contributor

djrtwo commented Feb 7, 2022

Where do we stand on this PR @mcdee?

@mcdee
Contributor Author

mcdee commented Feb 7, 2022

I think that we're happy with it, but would like a confirmation from @paulhauner and @ajsutton before we merge.

@ajsutton
Contributor

ajsutton commented Feb 7, 2022

Oh I didn't realise this hadn't merged. We've implemented it and it seems to be working well.

@paulhauner
Contributor

We've implemented this over in sigp/lighthouse#2924 and will merge today or tomorrow.

I also think we should merge it 🚀

bors bot pushed a commit to sigp/lighthouse that referenced this pull request Feb 8, 2022
…t (similar to graffiti / graffiti-file) (#2924)

## Issue Addressed

#2883 

## Proposed Changes

* Added `suggested-fee-recipient` & `suggested-fee-recipient-file` flags to validator client (similar to graffiti / graffiti-file implementation).
* Added proposer preparation service to VC, which sends the fee-recipient of all known validators to the BN via [/eth/v1/validator/prepare_beacon_proposer](ethereum/beacon-APIs#178) api once per slot
* Added [/eth/v1/validator/prepare_beacon_proposer](ethereum/beacon-APIs#178) api endpoint and preparation data caching
* Added cleanup routine to remove cached proposer preparations when not updated for 2 epochs

## Additional Info

Changed the Implementation following the discussion in #2883.



Co-authored-by: pk910 <[email protected]>
Co-authored-by: Paul Hauner <[email protected]>
Co-authored-by: Philipp K <[email protected]>
@mpetrunic mpetrunic merged commit e12ffc1 into ethereum:master Feb 8, 2022

7 participants