
Add multi-router support in LP gateway. #1893

Status: Open. Wants to merge 8 commits into base: main.
Conversation

@cdamian (Contributor) commented Jun 28, 2024:

Description

Adds a new storage item and a new extrinsic for adding multiple routers for a specific domain.

The outbound queue processing logic now sends a message using all stored routers.

Checklist:

  • I have added Rust doc comments to structs, enums, traits and functions
  • I have made corresponding changes to the documentation
  • I have performed a self-review of my code
  • I have added tests that prove my fix is effective or that my feature works

/// This can only be set by an admin.
#[pallet::storage]
#[pallet::getter(fn domain_multi_routers)]
pub type DomainMultiRouters<T: Config> =
    // NOTE: the declaration is truncated in the excerpt; the value type
    // below is inferred from context, not copied from the diff.
    StorageMap<_, Blake2_128Concat, Domain, BoundedVec<T::Router, T::MaxRouterCount>>;
@cdamian (Contributor, Author):
Added a separate storage and extrinsic since I'm not sure how to proceed in this case - do we eventually remove the old storage after a migration or can we nuke/replace it now?

@lemunozm (Contributor) commented Jul 1, 2024:

You can replace it now. If needed, the migration can go along with this PR; I think it is quite straightforward: just add the current router, if it exists, into the vector and purge the old storage.
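The suggested migration can be modeled in plain Rust. This is only a sketch of the idea (the real thing would be a Substrate `OnRuntimeUpgrade` implementation against the pallet's storage); the `Domain`/`Router` aliases and `migrate` name are illustrative assumptions, not the PR's actual types.

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the on-chain types.
type Domain = u32;
type Router = String;

/// Move each existing single router into the new multi-router storage,
/// then purge the old map (drain empties it as we iterate).
fn migrate(
    old: &mut HashMap<Domain, Router>,
    new: &mut HashMap<Domain, Vec<Router>>,
) {
    for (domain, router) in old.drain() {
        new.entry(domain).or_default().push(router);
    }
}
```

After running, the old storage is empty and every domain's previous router is the first entry of its new router vector.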

@cdamian cdamian marked this pull request as ready for review June 28, 2024 10:28
@cdamian (Contributor, Author) commented Jun 28, 2024:

Re-basing on main.

@cdamian force-pushed the feat/add-lp-gateway-multi-router-support branch from cdc56a2 to a5e30b1 on June 28, 2024 12:06
codecov bot commented Jun 28, 2024:

Codecov Report

Attention: Patch coverage is 69.44444% with 11 lines in your changes missing coverage. Please review.

Project coverage is 47.38%. Comparing base (d034784) to head (22a8f03).

Files Patch % Lines
pallets/liquidity-pools-gateway/src/lib.rs 69.44% 11 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1893      +/-   ##
==========================================
+ Coverage   47.36%   47.38%   +0.01%     
==========================================
  Files         176      176              
  Lines       13305    13325      +20     
==========================================
+ Hits         6302     6314      +12     
- Misses       7003     7011       +8     


@cdamian force-pushed the feat/add-lp-gateway-multi-router-support branch from c145538 to 22a8f03 on June 28, 2024 14:39
@mustermeiszer (Collaborator) left a comment:

Pretty straightforward change. Wanted to propose a few things since we are working on this anyway:

  • Add a dedicated Queue for Inbound and Outbound that is currently served by on_idle, so that we can easily switch to Tasks in the future.
  • (NOT SURE IF GOOD IDEA) Instead of iterating over all routers, we could have a DoubleMap, and then serving the message in a queue takes care of
      1. Ensuring that a message can be sent over each Router only once

Furthermore
We will need the same for the inbound logic. So the Gateway must take care of a few things:

  • Outgoing

    • Each Message is sent only once over each existing Router
  • Incoming

    • Each Message is only forwarded to the LiquidityPools pallet once some information has been verified.
      NOTE: It is to be determined how this will happen, but we should already take care right now that messages are "queued" until this verification has happened.
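The "send over each router only once" bookkeeping behind the DoubleMap idea can be sketched in plain Rust. A DoubleMap keyed by (message, router) is modeled here with a `HashSet` of pairs; all names and key types are illustrative assumptions, not the pallet's API.

```rust
use std::collections::HashSet;

#[derive(Default)]
struct OutboundQueue {
    // Models a DoubleMap keyed by (message id, router index):
    // presence means "already served over that router".
    sent: HashSet<(u64, u8)>,
}

impl OutboundQueue {
    /// Returns true the first time a (message, router) pair is served and
    /// false on any repeat, so a message goes over each router only once.
    fn try_send(&mut self, message: u64, router: u8) -> bool {
        self.sent.insert((message, router))
    }
}
```

Serving the queue then just skips any pair for which `try_send` returns false, which also makes retries after a partial failure idempotent.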

@mustermeiszer (Collaborator) commented:

I think https://github.com/paritytech/polkadot-sdk/blame/master/substrate/frame/message-queue/src/lib.rs could be a solution, but I think the queue itself is overly complicated and in general very opaque and raw.

trait ProcessMessage is using a &[u8] as its message type for some reason.

@cdamian (Contributor, Author) commented Jul 1, 2024:

> (quotes @mustermeiszer's proposal above in full)

@mustermeiszer we already have a queue in place for outbound messages - do you mean that we should replace that with a queue implementation that will serve both the LP gateway and the main LP pallet?

@mustermeiszer (Collaborator) commented:

we already have a queue in place for outbound messages - do you mean that we should replace that with a queue implementation that will serve both the LP gateway and the main LP pallet?

I think now it makes sense, because there will be much more processing needed. Having a general Inbound/Outbound queue would make sense.

The general idea of the multi-router logic is:

Sending:

    1. Send the full message over one Router
    2. Send MessageProofs over the other Routers

Receiving
NOTE: Proofs CAN be received prior to the message itself

  • Receive Message/MessageProof
  • Check whether the message can already be forwarded to pallet-liquidity-pools

I am really, really strongly against exposing the MessageProofs and any multi-router aggregation to pallet-liquidity-pools. The pallet itself should always simply assume it is processing the next message w.r.t. business logic.
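The inbound aggregation described above can be sketched in plain Rust: the gateway buffers a message and its proofs (which may arrive in either order) and only releases the message for forwarding once enough proofs have arrived. The hash key, proof threshold, and names are illustrative assumptions.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Pending {
    message: Option<Vec<u8>>, // full message, if it has arrived yet
    proof_count: usize,       // proofs received so far
}

#[derive(Default)]
struct InboundQueue {
    pending: HashMap<[u8; 32], Pending>, // keyed by message hash
    threshold: usize,                    // proofs required besides the message
}

impl InboundQueue {
    fn ready(p: &Pending, threshold: usize) -> bool {
        p.message.is_some() && p.proof_count >= threshold
    }

    /// Buffer a full message; return it only once it is fully verified.
    fn receive_message(&mut self, hash: [u8; 32], msg: Vec<u8>) -> Option<Vec<u8>> {
        let p = self.pending.entry(hash).or_default();
        p.message = Some(msg);
        if Self::ready(p, self.threshold) {
            self.pending.remove(&hash).and_then(|p| p.message)
        } else {
            None
        }
    }

    /// Buffer a proof; works even if the message itself has not arrived yet.
    fn receive_proof(&mut self, hash: [u8; 32]) -> Option<Vec<u8>> {
        let p = self.pending.entry(hash).or_default();
        p.proof_count += 1;
        if Self::ready(p, self.threshold) {
            self.pending.remove(&hash).and_then(|p| p.message)
        } else {
            None
        }
    }
}
```

Only the `Some(message)` result ever reaches pallet-liquidity-pools, so proofs and aggregation stay entirely inside the gateway.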

I would have split the enum into an enum LpProtocol containing messages related to ensuring the transport and also the batching, and one enum LpMessages containing only the business-logic-related parts. For now, we could ensure that LpMessages is a subset of LpProtocol so that pallet-liquidity-pools only handles the business parts.

For sending batches I would also say we have an fn batch() in pallet-liquidity-pools, and the gateway takes care of creating such a batch. Easiest for all purposes would be to have a Gateway::start_pack_batch() on the gateway and a Gateway::end_pack_batch(). All calls to Gateway::submit() that happen between those two will then be bundled into a batch by the gateway. This way we can keep the current non-batch extrinsic logic and just iterate over the Vec<LiquidityPools::Call> that we receive in fn batch().
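The proposed batching API can be sketched in plain Rust: between start_pack_batch() and end_pack_batch(), every submit() is collected instead of dispatched, and the closing call hands back one bundle. The method names follow the comment; the struct, message representation, and return types are illustrative guesses.

```rust
#[derive(Default)]
struct Gateway {
    packing: bool,
    batch: Vec<Vec<u8>>, // serialized messages collected while packing
}

impl Gateway {
    /// Open a batch: subsequent submits are collected, not dispatched.
    fn start_pack_batch(&mut self) {
        self.packing = true;
    }

    /// Either queues the message into the open batch or "sends" it directly.
    /// Returns what was dispatched immediately, if anything.
    fn submit(&mut self, message: Vec<u8>) -> Option<Vec<u8>> {
        if self.packing {
            self.batch.push(message);
            None
        } else {
            Some(message) // current non-batch path stays unchanged
        }
    }

    /// Close the batch and return the bundled messages for a single send.
    fn end_pack_batch(&mut self) -> Vec<Vec<u8>> {
        self.packing = false;
        std::mem::take(&mut self.batch)
    }
}
```

The non-batch path is untouched, which matches the comment's goal of keeping the current extrinsic logic while letting the gateway bundle everything submitted inside the bracket.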

Comment on lines +788 to 805
match router.send(sender.clone(), message.serialize()) {
    Ok(dispatch_info) => Self::update_total_post_dispatch_info_weight(
        &mut post_dispatch_info,
        dispatch_info.actual_weight,
        read_weight,
    ),
    Err(e) => {
        Self::update_total_post_dispatch_info_weight(
            &mut post_dispatch_info,
            e.post_info.actual_weight,
            read_weight,
        );

        return Err(DispatchErrorWithPostInfo {
            post_info: post_dispatch_info,
            error: e.error,
        });
    }
}
@mustermeiszer (Collaborator) commented:

Also for that:

  • The first router must send the message
  • The next routers must send a MessageProof of said message
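The review point above (full message over the first router, a MessageProof over each subsequent one) can be sketched in plain Rust, with the proof modeled as a simple hash of the message. The `Payload` enum and `payloads` helper are illustrative assumptions, not the pallet's types.

```rust
// One payload per router: the full message first, proofs after.
enum Payload {
    Full(Vec<u8>),
    Proof(u64), // stand-in for a MessageProof (here: a hash of the message)
}

fn proof_of(message: &[u8]) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    message.hash(&mut h);
    h.finish()
}

/// Build the per-router payloads: router 0 carries the message itself,
/// every other router carries only its proof.
fn payloads(message: &[u8], routers: usize) -> Vec<Payload> {
    (0..routers)
        .map(|i| {
            if i == 0 {
                Payload::Full(message.to_vec())
            } else {
                Payload::Proof(proof_of(message))
            }
        })
        .collect()
}
```

On the receiving side this pairs with the inbound aggregation: the one full message supplies the content, and the proofs from the other routers supply the confirmations.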
