
Makes Reno CC slightly loss tolerant #445

Open · wants to merge 3 commits into master
Conversation

janaiyengar (Collaborator)

The key idea here is that a single loss is unlikely to be a good indicator of congestion. I considered using a percentage of the congestion window as the threshold, but that might be too aggressive. This PR sets the threshold to two: there must be at least two losses in a recovery episode before congestion action is taken.

Additionally, the congestion window continues to increase during the recovery episode as long as the number of losses stays below the threshold.

Note that the recovery episode still starts and ends as before: it starts at the first loss and ends an RTT later. This means the window can keep increasing until the threshold number of losses occurs, which could be near the end of the recovery RTT. In the worst case, this leaves the window slightly larger (by roughly Beta × MaxPacketSize) at the end of a congestion event.

@janaiyengar janaiyengar requested a review from kazuho March 22, 2021 01:13
kazuho (Member) left a comment


Thank you for the PR. The logic looks fine, though I wonder if we can hard-code this.

How about adding a new factory object alongside quicly_cc_reno_init that uses the loss-tolerant variant, calling it reno-lt or something? Then, we can test the efficacy of the proposed approach.
