How is ZCash going to handle the questions of scalability, specifically block size?
I found some old posts about this, basically saying that the devs will handle this problem later. Will that be easy to do, or will we find ourselves in a similar situation to Bitcoin in 5 years time, trying to implement a controversial hard fork that the miners don’t like?
What lessons can be drawn from Bitcoin in regards to this? How will the same fate be avoided?
I’ve been following the development of ZCash / zerocoin for years now, and I think this is incredibly important work. I’m in awe of you guys! I really want to see this coin succeed.
There was a hardfork between the z5 and z6 releases that took the Zcash blocksize from 1MB to 2MB.
I don’t follow Bitcoin closely, but I’ve long assumed that part of the reason it has eschewed bigger blocks is that doing so would obsolete current generations of mining hardware that have been designed around 1MB blocks before they’ve either paid for themselves or generated their projected profit.
For as long as Zcash mining remains free of such hardware constraints, I doubt that particular kind of hard fork will cause controversy.
[quote=“Voluntary, post:2, topic:1036, full:true”]current generations of mining hardware that have been designed around 1MB blocks
[/quote]
Seems like you’re misinformed: ASIC miners do not depend on a particular block size. The current debate in Bitcoin is about how to massively scale the network, not just a trivial increase to a 2MB limit. The two approaches are either a trustless second layer such as Lightning/Thunder, which keeps the main chain decentralized, or prioritizing main-chain transactions by inflating the block size heavily.
Currently Zcash is testing a 2MB block size every 2.5 minutes. However, a private Zcash transaction is much larger than a Bitcoin transaction, so having bigger and faster blocks does not mean Zcash will have higher throughput than Bitcoin.
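To make that trade-off concrete, here is a rough back-of-the-envelope sketch. The average transaction sizes below are illustrative assumptions (not measured values), picked only to show how larger shielded transactions can eat up the gains from bigger, faster blocks:

```python
# Back-of-the-envelope throughput comparison (illustrative numbers only).
# Assumed average sizes: ~250 bytes for a typical Bitcoin transaction,
# ~2000 bytes for a shielded Zcash transaction. Real sizes vary.

def tps(block_size_bytes, block_interval_secs, avg_tx_bytes):
    """Transactions per second a chain sustains at these parameters."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_secs

btc_tps = tps(1_000_000, 600, 250)    # 1MB blocks every 10 minutes
zec_tps = tps(2_000_000, 150, 2000)   # 2MB blocks every 2.5 minutes

print(f"Bitcoin: ~{btc_tps:.1f} tx/s, Zcash (all shielded): ~{zec_tps:.1f} tx/s")
```

Under these assumed sizes the two come out roughly even, which is the point: 2x the block size at 4x the block frequency is offset by transactions that are an order of magnitude larger.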
I know it is too soon to talk about massive scalability for Zcash at the moment, but I still want to know the Zcash team’s opinion on long-term scaling solutions. The Lightning Network (which relies on multisig transactions) does not seem applicable to Zcash’s JoinSplit. I think the only option left for Zcash in the foreseeable future would be increasing the max block size.
How will Zcash address this problem in the future? Right now we have a block size of 2MB, while BCH’s is 32MB. Will we eventually need to consider raising the block size to keep fees low?
I saw this post from daria about scaling via zk-rollups… is this an alternative to raising the block size?
3TB isn’t that much these days. The sync latency isn’t a big issue if you plan on keeping your node running for years.
We are talking about competing with the Visas of the world.
It is worth it to invest a bit in supporting that effort…
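For a sense of scale, here is a quick sketch of how long just the raw download of a 3TB chain would take at a few common line speeds. This is pure transfer arithmetic; in practice block validation, not bandwidth, tends to dominate initial sync time:

```python
# Time to download a 3TB chain at various line speeds (download only;
# validating the blocks usually takes far longer than the transfer).

def download_days(size_tb, mbit_per_sec):
    bits = size_tb * 1e12 * 8              # decimal TB -> bits
    seconds = bits / (mbit_per_sec * 1e6)  # Mbit/s -> bit/s
    return seconds / 86400

for speed in (50, 100, 500):
    print(f"{speed} Mbit/s: ~{download_days(3, speed):.1f} days")
```

At 100 Mbit/s the transfer alone is under three days, which supports the point that this is a one-time cost for a node you keep running for years.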
Where can I find the most developer-friendly documentation on Zcash? Specifically, I’m looking for the following (before I dive into the actual Zcash codebase):
An example transaction (a real payload emitted by wallets) to see what the fields & values look like (all the YouTube videos are conceptual; I can’t find a real example)
Design diagrams on how the software (run by a node) is partitioned
Steady-state logs for a single block (what a node does during the 75-second window). Or just step-by-step documentation of the logic; that helps too.
What IDE do zcash devs use?
Right now I’m looking through z_sendmany - Zcash 6.1.0 RPC Docs, but I’m more interested in the underlying implementation of the RPC, and in how ZK proof generation and verification are implemented in code.
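As a concrete starting point before reading the implementation, this is roughly the shape of the JSON-RPC request a client sends for z_sendmany. The addresses and amounts below are made-up placeholders, and newer node releases may accept additional optional parameters, so treat the linked RPC docs as authoritative:

```python
import json

# Sketch of a z_sendmany JSON-RPC request body. All values here are
# hypothetical placeholders, not real addresses or amounts.
request = {
    "jsonrpc": "1.0",
    "id": "example",
    "method": "z_sendmany",
    "params": [
        "ztestsapling1...",                                # fromaddress (placeholder)
        [{"address": "ztestsapling1...", "amount": 0.5}],  # list of recipients
        1,                                                 # minconf
        0.0001,                                            # fee
    ],
}
payload = json.dumps(request)
print(payload)
```

The node parses this payload, builds the shielded transaction, and returns an operation id you can poll, which is why tracing z_sendmany through the codebase is a reasonable way to find where proof generation is invoked.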
If you’re a dev that recently onboarded to zcash, what was the best starting point for you when getting familiar with how zcash implements zk proofs?
The ZK part is mostly handled by crates, e.g. orchard and sapling_crypto. They are a separate and huge can of worms; see Orchard - The Orchard Book for a place to start.
I think most devs use VS Code (but there are always the hardcore ones who use vim or emacs).