Commit dd512a2
replace some straggling html links
rphmeier committed Jun 20, 2020
1 parent 12c3a42 commit dd512a2
Showing 5 changed files with 12 additions and 12 deletions.
@@ -26,7 +26,7 @@ Register on startup an event producer with `NetworkBridge::RegisterEventProduce

For each relay-parent in our local view update, look at all backed candidates pending availability. Distribute via gossip all erasure chunks for all candidates that we have to peers.

- We define an operation `live_candidates(relay_heads) -> Set<CommittedCandidateReceipt>` which, for a given set of relay chain heads, returns the set of [`CommittedCandidateReceipt`s](../../types/candidate.html#committed-candidate-receipt) whose availability chunks should currently be gossiped. This is defined as all candidates pending availability in any of those relay-chain heads or any of their last `K` ancestors. We assume that state is not pruned within `K` blocks of the chain-head.
+ We define an operation `live_candidates(relay_heads) -> Set<CommittedCandidateReceipt>` which, for a given set of relay chain heads, returns the set of [`CommittedCandidateReceipt`s](../../types/candidate.md#committed-candidate-receipt) whose availability chunks should currently be gossiped. This is defined as all candidates pending availability in any of those relay-chain heads or any of their last `K` ancestors. We assume that state is not pruned within `K` blocks of the chain-head.

We will send any erasure-chunks that correspond to candidates in `live_candidates(peer_most_recent_view_update)`. Likewise, we only accept and forward messages pertaining to a candidate in `live_candidates(current_heads)`. Each erasure chunk should be accompanied by a merkle proof that it is committed to by the erasure trie root in the candidate receipt, and this gossip system is responsible for checking such proof.
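A rough sketch of the `live_candidates` operation this hunk describes: `ancestors` and `pending_availability` are hypothetical helpers, and `Hash`/`CommittedCandidateReceipt` stand in for the guide's types (with `Hash: Copy` assumed).

```rust
use std::collections::HashSet;

// Sketch only: `ancestors(head, k)` and `pending_availability(block)`
// are assumed helpers, not defined in this guide or commit.
fn live_candidates(
    relay_heads: &[Hash],
    k: usize,
) -> HashSet<CommittedCandidateReceipt> {
    let mut live = HashSet::new();
    for head in relay_heads {
        // The head itself plus its last `k` ancestors; state is assumed
        // not to be pruned within `k` blocks of the chain head.
        for block in std::iter::once(*head).chain(ancestors(*head, k)) {
            live.extend(pending_availability(&block));
        }
    }
    live
}
```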

14 changes: 7 additions & 7 deletions roadmap/implementors-guide/src/node/backing/pov-distribution.md
@@ -6,7 +6,7 @@ This subsystem is responsible for distributing PoV blocks. For now, unified with

`ProtocolId`: `b"povd"`

- Input: [`PoVDistributionMessage`](../../types/overseer-protocol.html#pov-distribution-message)
+ Input: [`PoVDistributionMessage`](../../types/overseer-protocol.md#pov-distribution-message)


Output:
@@ -18,17 +18,17 @@ Output:

## Functionality

- This network protocol is responsible for distributing [`PoV`s](../../types/availability.html#proof-of-validity) by gossip. Since PoVs are heavy in practice, gossip is far from the most efficient way to distribute them. In the future, this should be replaced by a better network protocol that finds validators who have validated the block and connects to them directly.
+ This network protocol is responsible for distributing [`PoV`s](../../types/availability.md#proof-of-validity) by gossip. Since PoVs are heavy in practice, gossip is far from the most efficient way to distribute them. In the future, this should be replaced by a better network protocol that finds validators who have validated the block and connects to them directly.

This protocol is described in terms of "us" and our peers, with the understanding that this is the procedure that any honest node will run. It has the following goals:
- We never have to buffer an unbounded amount of data
- PoVs will flow transitively across a network of honest nodes, stemming from the validators that originate them.

As we are gossiping, we need to track which PoVs our peers are waiting for to avoid sending them data that they are not expecting. It is not reasonable to expect our peers to buffer unexpected PoVs, just as we will not buffer unexpected PoVs. So notification to our peers about what is being awaited is key. However, it is important that the notification system is also bounded.

- For this, in order to avoid reaching into the internals of the [Statement Distribution](statement-distribution.html) Subsystem, we can rely on an expected property of candidate backing: that each validator can only second one candidate at each chain head. So we can set a cap on the number of PoVs each peer is allowed to notify us that they are waiting for at a given relay-parent. This cap will be the number of validators at that relay-parent. And the view update mechanism of the [Network Bridge](../utility/network-bridge.html) ensures that peers are only allowed to consider a certain set of relay-parents as live. So this bounding mechanism caps the amount of data we need to store per peer at any time at `sum({ n_validators_at_head(head) | head in view_heads })`. Additionally, peers should only be allowed to notify us of PoV hashes they are waiting for in the context of relay-parents in our own local view, which means that `n_validators_at_head` is implied to be `0` for relay-parents not in our own local view.
+ For this, in order to avoid reaching into the internals of the [Statement Distribution](statement-distribution.md) Subsystem, we can rely on an expected property of candidate backing: that each validator can only second one candidate at each chain head. So we can set a cap on the number of PoVs each peer is allowed to notify us that they are waiting for at a given relay-parent. This cap will be the number of validators at that relay-parent. And the view update mechanism of the [Network Bridge](../utility/network-bridge.md) ensures that peers are only allowed to consider a certain set of relay-parents as live. So this bounding mechanism caps the amount of data we need to store per peer at any time at `sum({ n_validators_at_head(head) | head in view_heads })`. Additionally, peers should only be allowed to notify us of PoV hashes they are waiting for in the context of relay-parents in our own local view, which means that `n_validators_at_head` is implied to be `0` for relay-parents not in our own local view.
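A minimal sketch of this per-peer bound; the field and helper names are illustrative, and only the `n_validators_at_head` cap semantics come from the text above (with `Hash: Copy + Eq + std::hash::Hash` assumed):

```rust
use std::collections::{HashMap, HashSet};

// Returns true if the peer's "awaiting" notification is accepted.
fn note_peer_awaiting(
    peer_awaited: &mut HashMap<Hash, HashSet<Hash>>, // relay-parent -> awaited PoV hashes
    our_view: &HashSet<Hash>,                        // relay-parents in our local view
    n_validators_at_head: impl Fn(&Hash) -> usize,
    relay_parent: Hash,
    pov_hash: Hash,
) -> bool {
    // Peers may only await PoVs under relay-parents in our own view;
    // elsewhere the cap is effectively zero.
    if !our_view.contains(&relay_parent) {
        return false;
    }
    let cap = n_validators_at_head(&relay_parent);
    let awaited = peer_awaited.entry(relay_parent).or_default();
    // Cap the awaited set at the number of validators at this head.
    awaited.len() < cap && awaited.insert(pov_hash)
}
```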

- View updates from peers and our own view updates are received from the network bridge. These will lag somewhat behind the `StartWork` and `StopWork` messages received from the overseer, which will influence the actual data we store. The `OurViewUpdate`s from the [`NetworkBridgeEvent`](../../types/overseer-protocol.html#network-bridge-update) must be considered canonical in terms of our peers' perception of us.
+ View updates from peers and our own view updates are received from the network bridge. These will lag somewhat behind the `StartWork` and `StopWork` messages received from the overseer, which will influence the actual data we store. The `OurViewUpdate`s from the [`NetworkBridgeEvent`](../../types/overseer-protocol.md#network-bridge-update) must be considered canonical in terms of our peers' perception of us.

Lastly, the system needs to be bootstrapped with our own perception of which PoVs we are cognizant of but awaiting data for. This is done by receipt of the [`PoVDistributionMessage`](../../types/overseer-protocol.md#pov-distribution-message)::ValidatorStatement variant. We can ignore anything except for `Seconded` statements.

@@ -55,7 +55,7 @@ struct PeerState {
}
```

- We also assume the following network messages, which are sent and received by the [Network Bridge](../utility/network-bridge.html)
+ We also assume the following network messages, which are sent and received by the [Network Bridge](../utility/network-bridge.md)

```rust
enum NetworkMessage {
@@ -72,7 +72,7 @@ Here is the logic of the state machine:

*Overseer Signals*
- On `StartWork(relay_parent)`:
-   - Get the number of validators at that relay parent by querying the [Runtime API](../utility/runtime-api.html) for the validators and then counting them.
+   - Get the number of validators at that relay parent by querying the [Runtime API](../utility/runtime-api.md) for the validators and then counting them.
- Create a blank entry in `relay_parent_state` under `relay_parent` with correct `n_validators` set.
- On `StopWork(relay_parent)`:
- Remove the entry for `relay_parent` from `relay_parent_state`.
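The two signal handlers above might look like the following sketch, where `State`, `query_n_validators`, and `BlockBasedState` are assumed shapes rather than code from this commit:

```rust
// Sketch only. On StartWork we query the Runtime API for the validator
// set and record its size; on StopWork we drop the entry.
async fn handle_signal(state: &mut State, signal: OverseerSignal) {
    match signal {
        OverseerSignal::StartWork(relay_parent) => {
            // Assumed helper wrapping a Runtime API request for the
            // validators, returning their count.
            let n_validators = query_n_validators(relay_parent).await;
            // Blank entry with only `n_validators` set.
            state
                .relay_parent_state
                .insert(relay_parent, BlockBasedState::blank(n_validators));
        }
        OverseerSignal::StopWork(relay_parent) => {
            state.relay_parent_state.remove(&relay_parent);
        }
    }
}
```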
@@ -81,7 +81,7 @@ Here is the logic of the state machine:
*PoV Distribution Messages*
- On `ValidatorStatement(relay_parent, statement)`
- If this is not `Statement::Seconded`, ignore.
-   - If there is an entry under `relay_parent` in `relay_parent_state`, add the `pov_hash` of the seconded Candidate's [`CandidateDescriptor`](../../types/candidate.html#candidate-descriptor) to the `awaited` set of the entry.
+   - If there is an entry under `relay_parent` in `relay_parent_state`, add the `pov_hash` of the seconded Candidate's [`CandidateDescriptor`](../../types/candidate.md#candidate-descriptor) to the `awaited` set of the entry.
- If the `pov_hash` was not previously awaited and there are `n_validators` or fewer entries in the `awaited` set, send `NetworkMessage::Awaiting(relay_parent, vec![pov_hash])` to all peers.
- On `FetchPoV(relay_parent, descriptor, response_channel)`
- If there is no entry in `relay_parent_state` under `relay_parent`, ignore.
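A sketch of the `ValidatorStatement` handling just described; `State`, `Statement`, and `send_to_all_peers` are assumed shapes, not code from this commit:

```rust
// Sketch only: ignore anything but `Seconded`, record the awaited
// `pov_hash`, and announce it if newly awaited and within the bound.
fn handle_validator_statement(
    state: &mut State,
    relay_parent: Hash,
    statement: Statement,
) {
    let Statement::Seconded(candidate) = statement else { return };
    let Some(entry) = state.relay_parent_state.get_mut(&relay_parent) else { return };
    let pov_hash = candidate.descriptor.pov_hash;
    // Announce only if not previously awaited and the awaited set holds
    // `n_validators` or fewer entries.
    if entry.awaited.insert(pov_hash) && entry.awaited.len() <= entry.n_validators {
        // Assumed helper routed through the Network Bridge.
        send_to_all_peers(NetworkMessage::Awaiting(relay_parent, vec![pov_hash]));
    }
}
```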
4 changes: 2 additions & 2 deletions roadmap/implementors-guide/src/runtime/inclusion.md
@@ -65,7 +65,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
1. If the core assignment includes a specific collator, ensure the backed candidate is issued by that collator.
1. Ensure that any code upgrade scheduled by the candidate does not happen within `config.validation_upgrade_frequency` of `Paras::last_code_upgrade(para_id, true)`, if any, comparing against the value of `Paras::FutureCodeUpgrades` for the given para ID.
1. Check the collator's signature on the candidate data.
- 1. Transform each [`CommittedCandidateReceipt`](../../types/candidate.html#committed-candidate-receipt) into the corresponding [`CandidateReceipt`](../../types/candidate.html#candidate-receipt), setting the commitments aside.
+ 1. Transform each [`CommittedCandidateReceipt`](../../types/candidate.md#committed-candidate-receipt) into the corresponding [`CandidateReceipt`](../../types/candidate.md#candidate-receipt), setting the commitments aside.
1. check the backing of the candidate using the signatures and the bitfields, comparing against the validators assigned to the groups, fetched with the `group_validators` lookup.
1. check that the upward messages, when combined with the existing queue size, are not exceeding `config.max_upward_queue_count` and `config.watermark_upward_queue_size` parameters.
1. create an entry in the `PendingAvailability` map for each backed candidate with a blank `availability_votes` bitfield.
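As an illustration of the code-upgrade spacing rule in the checks above, here is a sketch with a concrete `BlockNumber` stand-in; the names are illustrative, not the runtime's actual functions:

```rust
type BlockNumber = u32; // concrete stand-in for the runtime's block number

// Reject a code upgrade scheduled within the configured window of the
// previous one (`config.validation_upgrade_frequency` against
// `Paras::last_code_upgrade(para_id, true)`).
fn check_upgrade_spacing(
    last_code_upgrade: Option<BlockNumber>,
    now: BlockNumber,
    validation_upgrade_frequency: BlockNumber,
) -> Result<(), &'static str> {
    if let Some(last) = last_code_upgrade {
        if now.saturating_sub(last) < validation_upgrade_frequency {
            return Err("code upgrade scheduled too soon after the previous one");
        }
    }
    Ok(())
}
```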
@@ -74,7 +74,7 @@ All failed checks should lead to an unrecoverable error making the block invalid
* `enact_candidate(relay_parent_number: BlockNumber, CommittedCandidateReceipt)`:
1. If the receipt contains a code upgrade, call `Paras::schedule_code_upgrade(para_id, code, relay_parent_number + config.validation_upgrade_delay)`.
> TODO: Note that this is safe as long as we never enact candidates where the relay parent is across a session boundary. In that case, which we should be careful to avoid with contextual execution, the configuration might have changed and the para may de-sync from the host's understanding of it.
- 1. call `Router::queue_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../../types/messages.html#upward-message) from the [`CandidateCommitments`](../../types/candidate.html#candidate-commitments).
+ 1. call `Router::queue_upward_messages` for each backed candidate, using the [`UpwardMessage`s](../../types/messages.md#upward-message) from the [`CandidateCommitments`](../../types/candidate.md#candidate-commitments).
1. Call `Paras::note_new_head` using the `HeadData` from the receipt and `relay_parent_number`.
* `collect_pending`:

2 changes: 1 addition & 1 deletion roadmap/implementors-guide/src/types/availability.md
@@ -5,7 +5,7 @@ candidates for the duration of a challenge period. This is done via an erasure-c

## Signed Availability Bitfield

- A bitfield [signed](backing.html#signed-wrapper) by a particular validator about the availability of pending candidates.
+ A bitfield [signed](backing.md#signed-wrapper) by a particular validator about the availability of pending candidates.


```rust
2 changes: 1 addition & 1 deletion roadmap/implementors-guide/src/types/backing.md
@@ -44,7 +44,7 @@ impl<Payload: EncodeAs<RealPayload>, RealPayload: Encode> Signed<Payload, RealPa
}
```

- Note the presence of the [`SigningContext`](../types/candidate.html#signing-context) in the signatures of the `sign` and `validate` methods. To ensure cryptographic security, the actual signed payload is always the SCALE encoding of `(payload.into(), signing_context)`. Including the signing context prevents replay attacks.
+ Note the presence of the [`SigningContext`](../types/candidate.md#signing-context) in the signatures of the `sign` and `validate` methods. To ensure cryptographic security, the actual signed payload is always the SCALE encoding of `(payload.into(), signing_context)`. Including the signing context prevents replay attacks.

`EncodeAs` is a helper trait with a blanket impl which ensures that any `T` can `EncodeAs<T>`. Therefore, for the generic case where `RealPayload = Payload`, it changes nothing. However, we `impl EncodeAs<CompactStatement> for Statement`, which helps efficiency.
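A sketch of how the signed payload described above can be built; the `SigningContext` here is an abridged stand-in for the guide's type, and this is not the guide's actual `Signed` implementation:

```rust
use parity_scale_codec::Encode;

// Abridged stand-in for the guide's `SigningContext` (session index
// plus parent hash); field types are assumptions for the sketch.
#[derive(Encode)]
struct SigningContext {
    session_index: u32,
    parent_hash: [u8; 32],
}

// The signed bytes are the SCALE encoding of the real payload together
// with the signing context, which is what prevents replaying the
// signature in another context.
fn signing_payload<RealPayload: Encode>(
    real_payload: &RealPayload,
    signing_context: &SigningContext,
) -> Vec<u8> {
    (real_payload, signing_context).encode()
}
```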

