
Improve APIs for Tries in Runtime #5756

Open · wants to merge 84 commits into master
Conversation

@shawntabrizi (Member) commented Sep 18, 2024

This is a refactor and improvement from: #3881

  • sp_runtime::proving_trie now exposes a BasicProvingTrie for both base2 and base16.
  • APIs for base16 are now more focused on single-value proofs, aligning them with the base2 trie's APIs.
  • A ProvingTrie trait is included which wraps both the base2 and base16 tries and exposes all APIs needed for an end-to-end scenario.
    • This is important because when writing benchmarks, you need abstractions to be able to create and prove tries of different sizes.
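The end-to-end abstraction described above can be illustrated with a toy sketch. Everything below is hypothetical: the trait methods (`generate_for`, `root`, `create_proof`, `verify_proof`), the 64-bit `DefaultHasher` standing in for a real trie hasher, and the power-of-two leaf restriction are simplifications for illustration, not the actual `sp_runtime::proving_trie` API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy 64-bit "hash" standing in for a real 256-bit trie hash.
fn hash_two(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

fn hash_leaf(key: u32, value: u64) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    value.hash(&mut h);
    h.finish()
}

/// Hypothetical interface in the spirit of the PR's `ProvingTrie` trait:
/// build a trie, expose its root, prove a single value, verify against a root.
trait ProvingTrie: Sized {
    fn generate_for(items: Vec<(u32, u64)>) -> Self;
    fn root(&self) -> u64;
    /// Returns the sibling hashes on the path from the leaf to the root.
    fn create_proof(&self, key: u32) -> Option<Vec<u64>>;
    fn verify_proof(root: u64, proof: &[u64], leaf_index: usize, key: u32, value: u64) -> bool;
}

/// A minimal base-2 Merkle tree over a power-of-two number of leaves.
struct Base2Trie {
    leaves: Vec<(u32, u64)>,
}

impl Base2Trie {
    /// All layers of hashes, from the leaf layer up to the single root hash.
    fn layers(&self) -> Vec<Vec<u64>> {
        let mut layers: Vec<Vec<u64>> =
            vec![self.leaves.iter().map(|(k, v)| hash_leaf(*k, *v)).collect()];
        while layers.last().unwrap().len() > 1 {
            let next = layers
                .last()
                .unwrap()
                .chunks(2)
                .map(|pair| hash_two(pair[0], pair[1]))
                .collect();
            layers.push(next);
        }
        layers
    }
}

impl ProvingTrie for Base2Trie {
    fn generate_for(items: Vec<(u32, u64)>) -> Self {
        assert!(items.len().is_power_of_two());
        Base2Trie { leaves: items }
    }

    fn root(&self) -> u64 {
        self.layers().last().unwrap()[0]
    }

    fn create_proof(&self, key: u32) -> Option<Vec<u64>> {
        let mut idx = self.leaves.iter().position(|(k, _)| *k == key)?;
        let mut proof = Vec::new();
        for layer in self.layers() {
            if layer.len() == 1 {
                break;
            }
            proof.push(layer[idx ^ 1]); // the sibling hash at this level
            idx /= 2;
        }
        Some(proof)
    }

    fn verify_proof(root: u64, proof: &[u64], mut leaf_index: usize, key: u32, value: u64) -> bool {
        let mut acc = hash_leaf(key, value);
        for sibling in proof {
            // Order the pair by whether our node is the left or right child.
            acc = if leaf_index % 2 == 0 {
                hash_two(acc, *sibling)
            } else {
                hash_two(*sibling, acc)
            };
            leaf_index /= 2;
        }
        acc == root
    }
}

fn main() {
    let trie = Base2Trie::generate_for(vec![(1, 10), (2, 20), (3, 30), (4, 40)]);
    let proof = trie.create_proof(3).unwrap();
    assert_eq!(proof.len(), 2); // depth of a 4-leaf base-2 trie
    assert!(Base2Trie::verify_proof(trie.root(), &proof, 2, 3, 30));
    assert!(!Base2Trie::verify_proof(trie.root(), &proof, 2, 3, 31));
    println!("ok");
}
```

A benchmark can be written against the trait alone, generating tries of different sizes with `generate_for` and proving each one, which is exactly the kind of abstraction the description calls for.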

@shawntabrizi marked this pull request as ready for review September 23, 2024 21:36
@shawntabrizi requested a review from a team as a code owner September 23, 2024 21:36
Review comment on:

```rust
#[test]
fn proof_size_to_hashes() {
```
@shawntabrizi (Member, Author) commented:

Overall, the assumption here is that the bytes of the proof are mostly hashes.

Based on how many hashes there are, and the structure of the trie, we can then determine how deep the trie is.

For example, if we have a base 2 trie, there should be 1 hash for every level of the trie. For a base 16 trie, there should be 15 hashes per level.

Funnily enough, for the worst-case scenario, we would need the minimum_encoded_len of the types to get a more accurate result.

To get the most accurate result, we also want to subtract the key and value bytes and any other extraneous information from the proof size.

If we use MaxEncodedLen, we may overestimate how many bytes are actually being used for the values, and then underestimate the number of hashes we will actually do.

In this case, I assume an even more pessimistic worst case: that all bytes are hashes.
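The reasoning above can be sketched as a small standalone function. The 32-byte `HASH_LEN` and the function signature are assumptions for illustration (Substrate commonly uses 32-byte hashes), not the actual implementation in the PR.

```rust
/// Assumed hash width: 32 bytes, e.g. a Blake2-256 hash.
const HASH_LEN: usize = 32;

/// Worst-case number of hashing operations implied by a proof of
/// `proof_size` bytes, pessimistically assuming every byte in the
/// proof is part of a hash.
fn proof_size_to_hashes(proof_size: usize, arity: usize) -> usize {
    // Round up so we overestimate rather than underestimate the work.
    let hashes_in_proof = proof_size.div_ceil(HASH_LEN);
    // Each level of the trie contributes `arity - 1` sibling hashes to the
    // proof: 1 per level for base-2, 15 per level for base-16.
    let siblings_per_level = arity - 1;
    // The verifier recomputes one hash per level, i.e. per depth step.
    hashes_in_proof.div_ceil(siblings_per_level)
}

fn main() {
    // A 256-byte proof holds at most 8 assumed 32-byte hashes.
    assert_eq!(proof_size_to_hashes(256, 2), 8); // base-2: depth 8
    assert_eq!(proof_size_to_hashes(256, 16), 1); // base-16: depth 1
    println!("ok");
}
```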

@ggwpez (Member) commented Sep 24, 2024:


> In this case, I assume an even worst case scenario that all bytes are hashes.

I only read the code and figured that's what's going on. It is a pragmatic solution and I think it should be fine =)

@shawntabrizi (Member, Author) replied:

I made the trait more general: just ProofToHashes.

For the Base2 trie, we can look at the number of items directly to get the depth. For Base16, we can still use this length trick.
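A hypothetical sketch of what that generalization might look like. The trait shape, proof types, and constants below are illustrative assumptions, not the exact sp_runtime definitions.

```rust
/// Illustrative shape of a generalized `ProofToHashes`-style trait.
trait ProofToHashes {
    type Proof;
    /// Worst-case number of hashes needed to verify this proof.
    fn proof_to_hashes(proof: &Self::Proof) -> u32;
}

// For base-2, the proof can expose its item count directly: one sibling
// hash per level means the number of items *is* the depth.
struct Base2Proof {
    items: u32,
}

// For base-16, treat the proof as opaque bytes and use the length trick.
struct Base16Proof {
    encoded_len: u32,
}

struct Base2Trie;
struct Base16Trie;

impl ProofToHashes for Base2Trie {
    type Proof = Base2Proof;
    fn proof_to_hashes(proof: &Base2Proof) -> u32 {
        // Depth == number of sibling hashes in a base-2 proof.
        proof.items
    }
}

impl ProofToHashes for Base16Trie {
    type Proof = Base16Proof;
    fn proof_to_hashes(proof: &Base16Proof) -> u32 {
        const HASH_LEN: u32 = 32; // assumed 32-byte hashes
        const SIBLINGS_PER_LEVEL: u32 = 15;
        // Assume every proof byte is a hash, 15 sibling hashes per level.
        proof.encoded_len.div_ceil(HASH_LEN).div_ceil(SIBLINGS_PER_LEVEL)
    }
}

fn main() {
    assert_eq!(Base2Trie::proof_to_hashes(&Base2Proof { items: 8 }), 8);
    // 960 bytes / 32 = 30 hashes; 30 / 15 siblings per level = depth 2.
    assert_eq!(Base16Trie::proof_to_hashes(&Base16Proof { encoded_len: 960 }), 2);
    println!("ok");
}
```

Keeping the estimate behind a trait lets benchmarking code stay generic over the trie arity, the same motivation given for the `ProvingTrie` abstraction in the PR description.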
