Deprecate and remove AsyncBackingParameters
#5079
This issue has been mentioned on the Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/elastic-scaling-mvp-launched/9392/4
We've been discussing implementation options in the context of collation fetching fairness and elastic scaling. The gist of it is that we target a few things with this change:
I have looked a bit at the code and I want to propose the following:
The high-level diagram below shows the current flow and the throttling points (red arrows).
It seems both of them (max_candidate_depth and allowed_ancestry_len) have been superseded by the claim queue. The claim queue already provides everything needed to enforce these limits, but more accurately: in particular, it also accounts for parachains sharing a core and for elastic scaling.
How can we enforce those limits via the claim queue?
max_candidate_depth
For a given core it does not make sense to provide more candidates than there are entries in the claim queue for that parachain, as they could never make it on chain. Backers should keep track of candidates already provided for claim queue entries, even across relay parents, and reject candidates if there is no free spot left:
E.g. consider the following claim queue: `[A,B,A,B]`. If a collation for `B` was already provided at the previous relay chain block, it is still valid in this one; hence we should consider the first `B` in the queue already occupied and only accept one more collation for `B`. Now we can precisely limit the number of provided candidates, also accounting for other paras sharing the core. Because the claim queue is per core, this naturally covers elastic scaling: more cores mean more candidates can be provided.
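The occupancy tracking described above could be sketched roughly as follows. This is an illustrative sketch only, not the actual collator-protocol code; the type and method names (`ClaimQueueState`, `can_accept`, `note_accepted`) are hypothetical:

```rust
use std::collections::HashMap;

type ParaId = u32;

/// Hypothetical occupancy tracker for a single core's claim queue.
struct ClaimQueueState {
    /// The claim queue for this core, e.g. [A, B, A, B].
    claim_queue: Vec<ParaId>,
    /// Candidates already provided per para, possibly at earlier relay parents.
    occupied: HashMap<ParaId, usize>,
}

impl ClaimQueueState {
    fn new(claim_queue: Vec<ParaId>) -> Self {
        Self { claim_queue, occupied: HashMap::new() }
    }

    /// A new collation for `para` is acceptable only while the number of
    /// already-provided candidates is below the number of claim-queue
    /// entries for that para.
    fn can_accept(&self, para: ParaId) -> bool {
        let entries = self.claim_queue.iter().filter(|&&p| p == para).count();
        let used = self.occupied.get(&para).copied().unwrap_or(0);
        used < entries
    }

    /// Record that a collation for `para` was accepted.
    fn note_accepted(&mut self, para: ParaId) {
        *self.occupied.entry(para).or_insert(0) += 1;
    }
}
```

With the queue `[A,B,A,B]` and one `B` already provided at the previous relay chain block, exactly one more collation for `B` would be accepted and a third rejected, while both `A` entries remain free.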
allowed_ancestry_len
If you still have a free spot in the claim queue from the point of view of the provided relay parent, your collation will be accepted. This is sufficient for the collator protocol. We will still need to track the allowed relay parents in the runtime, but that buffer's size can be determined from the claim queue length.
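The acceptance rule above could be sketched as: accept a collation iff the claim queue, as seen from the provided relay parent, still has a free entry for the para. Again a hypothetical sketch under assumed names (`Acceptor`, `snapshots`), not the actual runtime API:

```rust
use std::collections::HashMap;

type ParaId = u32;
type RelayParent = u64;

/// Hypothetical acceptor: claim-queue snapshots per allowed relay parent,
/// plus occupancy of claim-queue entries across relay parents.
struct Acceptor {
    snapshots: HashMap<RelayParent, Vec<ParaId>>,
    occupied: HashMap<ParaId, usize>,
}

impl Acceptor {
    /// Accept iff `relay_parent` is still allowed and the claim queue seen
    /// from it has a free entry for `para`.
    fn accept(&mut self, relay_parent: RelayParent, para: ParaId) -> bool {
        let Some(queue) = self.snapshots.get(&relay_parent) else {
            // Relay parent is no longer in the allowed set.
            return false;
        };
        let entries = queue.iter().filter(|&&p| p == para).count();
        let used = self.occupied.entry(para).or_insert(0);
        if *used < entries {
            *used += 1;
            true
        } else {
            false
        }
    }
}
```

Note that the set of snapshots to keep is bounded by how long a claim-queue entry visible from a relay parent can still be filled, which is why the claim queue length can bound the allowed-relay-parents buffer size.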
Prerequisite: #4776 - otherwise entries are valid longer than they should be and above reasoning is void.