How to run/play with it #2
Some questions about the architecture. Thanks.
Hey @hustnn - PoC isn't done yet. Just waiting on the prover (https://github.com/maxgillett/giza) to be functional (should be by end of next week / early next). Check the docs here, just added them so you can play with it - https://github.com/liamzebedee/quark-blockchain/blob/master/proof-of-concept.md
Thanks. Will play with it.
Does it mean each prover will get all transactions from the sequencer and try to compute the proof, and the one who gets the proof first wins and executes the transaction, similar to a miner?
Does it mean two types of nodes are needed, one for execution and one for storage? Can users decide which kind of node they want to maintain? One more question: how are the sharding and replication logic implemented?
Nope, there will be some sort of scheduler that determines which txs can be executed in parallel, and then distributes the work items to the prover network. I haven't written this out yet, but it's not too sophisticated. No competition like a PoW miner.
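As a purely illustrative sketch of the idea above (the scheduler isn't designed yet, per the comment, so every name and detail here is hypothetical, not code from this repo): one simple way to find txs that can run in parallel is to batch them by disjoint state-key access, deferring conflicting txs to a later batch.

```python
def schedule_batches(txs):
    """Group txs into batches whose members touch disjoint state keys.

    Each tx is a (tx_id, set_of_state_keys) pair. Each batch could then be
    distributed as independent work items to the prover network.
    This is a hypothetical sketch, not the actual scheduler.
    """
    batches = []
    remaining = list(txs)
    while remaining:
        used_keys = set()
        batch, deferred = [], []
        for tx_id, keys in remaining:
            if used_keys & keys:
                deferred.append((tx_id, keys))  # conflicts with this batch
            else:
                batch.append(tx_id)
                used_keys |= keys
        batches.append(batch)
        remaining = deferred
    return batches
```

For example, two transfers touching different accounts land in the same batch, while a tx touching an account already written in the batch is pushed to the next one.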
Well, yes haha. There are three networks here - the sequencer, executor/prover, and storage networks. Each has a different node software and does a different job. I don't know what you refer to by "users" here?
Well, the blockchain state is stored in the storage network, and is sharded and replicated like Google's Bigtable. To be honest, this part isn't implemented at all yet, but I can try to give a rough explainer. Basically, Google's Bigtable is a big sorted table, where each row has a key and any number of columns. The table is always sorted by key. Bigtable partitions the table by row range, so a table of 100K rows with a max partition size of 20K rows would be split into 5 partitions: rows 0..20K, 20..40K, 40..60K, 60..80K, 80..100K. Each partition of the table is called a "tablet", and this is the basic unit of load balancing. One tablet is always hosted by one node. When you insert a row, you basically do a binary search over the keyspace to find the tablet the row belongs in, and then you send it to the owner of that tablet. The genius of this approach is that it's really easy to scale: if a tablet grows too large, it gets split into two tablets, and you add one more node to the system. This is done by the Bigtable master. Some resources:
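The tablet routing and splitting described above can be sketched roughly as follows. All names and the size constant are hypothetical, just illustrating the Bigtable-style scheme, not code from this repo:

```python
import bisect
from dataclasses import dataclass, field

MAX_TABLET_ROWS = 20_000  # hypothetical max partition size before a split


@dataclass
class Tablet:
    start_key: str                     # inclusive lower bound of this tablet's key range
    rows: dict = field(default_factory=dict)


class Table:
    """A sorted table partitioned into tablets by row-key range."""

    def __init__(self):
        # One tablet initially covers the whole keyspace.
        self.tablets = [Tablet(start_key="")]

    def _find_tablet(self, key: str) -> Tablet:
        # Binary search over tablet start keys: the owning tablet is the
        # rightmost one whose start_key <= key.
        starts = [t.start_key for t in self.tablets]
        i = bisect.bisect_right(starts, key) - 1
        return self.tablets[i]

    def insert(self, key: str, value):
        t = self._find_tablet(key)
        t.rows[key] = value
        if len(t.rows) > MAX_TABLET_ROWS:
            self._split(t)

    def _split(self, t: Tablet):
        # What the Bigtable master does: split an oversized tablet at the
        # median key, so the new half can be handed to a newly added node.
        keys = sorted(t.rows)
        mid = len(keys) // 2
        new = Tablet(start_key=keys[mid],
                     rows={k: t.rows.pop(k) for k in keys[mid:]})
        self.tablets.insert(self.tablets.index(t) + 1, new)
```

Inserting the 20,001st row into a tablet triggers a split into two tablets of roughly 10K rows each, and subsequent lookups route to the correct half via the binary search.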
I mean everyone can join, if it's a permissionless setting like Ethereum.
I see. Will the scheduler act as the master of the execution network?
Eventually, yeah - though this design is for maximising storage. I think the node requirements will be higher than Eth, lower than Solana. Networking speed matters most for storage nodes, and compute matters most for the prover.
I guess so - though that part of the design is still open! Just waiting to test the core ideas of the STARK performance right now.
Thanks for answering. I will play with it first. 😁
Looks like the cairo-lang package you released has an issue. Should the open file mode be 'w'?
@hustnn legend! Yes it should, my bad for forgetting to commit |
Quite interesting, any guidelines on how to run it?