Scaling Zcash: Tachyon. Ragu

I have a question related to the latest paper. If the nullifiers are posted to another data availability layer, then we are effectively trusting that DA layer to make them available, and assuming it's in the economic interest of said network to do so.

Would it be reasonable to assume that Zcash moves to a rollup-like trust model if it posts nullifiers to another data availability layer?

Tachyon will not involve a new DA layer; the blockchain will continue to be used to post nullifiers and note commitments, but not much else involving Tachyon. The assumption we’re currently making about Zcash’s blockchain being highly available (for full nodes) is maintained, which allows us to cheat a bit because oblivious syncing services are effectively just full nodes with more CPUs.

The reason we explored the relevance of a DA layer in the paper is that users become more sensitive to the availability of the chain history in order to keep their funds spendable, at least compared to something like Bitcoin, where access to the active UTXO set is sufficient. In fact, a “DA layer” would probably exist solely to incentivize replication of the historic nullifiers/commitments, and more importantly to incentivize running oblivious syncing services, rather than as a substitute for placing those objects in the blockchain.


Thanks for the clarification!

@ebfull, do I understand correctly that the amount of computation to keep the wallet synced, which Tachyon would require every user with funds in the dark pool to perform (or to pay for), is on the order of O(N), where N is the number of all new notes (i.e. transactions) in any period of time?

So it’s essentially a trade-off that replaces the ever-growing storage with the continuous computation of syncing – i.e. still ever-growing costs for each user?

Or are there accumulators that allow batching of non-membership proofs more efficiently?


So it’s essentially a trade-off that replaces the ever-growing storage with the continuous computation of syncing – i.e. still ever-growing costs for each user?

The costs can be heavily amortized across users and across the number of notes in their wallets.

Imagine using a sparse binary Merkle trie to prove non-inclusion: the tree’s computation is done in advance (during insertion steps) to make the witnesses for non-inclusion less expensive later (O(log n)). We’re not going to be using that kind of accumulator structure for this – ours will use PCD to perform the precomputations, sharding the nullifier space by replaying the insertions and filtering – but it’s the same kind of amortization in principle.
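To make that concrete, here's a minimal Rust sketch of the sparse-Merkle-style amortization. This is purely illustrative, not Tachyon's accumulator: a toy 16-bit key space and std's non-cryptographic DefaultHasher stand in for a real nullifier space and a collision-resistant hash. The point is the cost split: insertion does O(depth) hashing up front, and afterwards a non-membership witness for any absent key is just a Merkle path that verifies in O(depth) hashes against the root.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

const DEPTH: usize = 16; // bits of the toy key space (a real nullifier space is much larger)

// Non-cryptographic stand-in for a collision-resistant hash; illustration only.
fn h(data: &[u64]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

struct SparseMerkleTree {
    // Hashes of non-default nodes, keyed by (level, index); level 0 = leaves.
    nodes: HashMap<(usize, u64), u64>,
    // Precomputed hash of an all-empty subtree at each level.
    empty: Vec<u64>,
}

impl SparseMerkleTree {
    fn new() -> Self {
        let mut empty = vec![h(&[0])]; // hash of an empty leaf
        for level in 0..DEPTH {
            let e = empty[level];
            empty.push(h(&[e, e]));
        }
        SparseMerkleTree { nodes: HashMap::new(), empty }
    }

    fn node(&self, level: usize, index: u64) -> u64 {
        *self.nodes.get(&(level, index)).unwrap_or(&self.empty[level])
    }

    fn root(&self) -> u64 {
        self.node(DEPTH, 0)
    }

    // The amortized work happens here, at insertion time: O(DEPTH) hashes.
    fn insert(&mut self, key: u16) {
        let mut index = key as u64;
        self.nodes.insert((0, index), h(&[1, key as u64])); // mark the leaf occupied
        for level in 0..DEPTH {
            let parent = index >> 1;
            let left = self.node(level, parent << 1);
            let right = self.node(level, (parent << 1) | 1);
            self.nodes.insert((level + 1, parent), h(&[left, right]));
            index = parent;
        }
    }

    // A non-membership witness is just the Merkle path to the key's (still empty) leaf.
    fn non_membership_witness(&self, key: u16) -> Option<Vec<u64>> {
        let mut index = key as u64;
        if self.node(0, index) != self.empty[0] {
            return None; // key is present, so no non-membership proof exists
        }
        let mut path = Vec::with_capacity(DEPTH);
        for level in 0..DEPTH {
            path.push(self.node(level, index ^ 1)); // sibling at this level
            index >>= 1;
        }
        Some(path)
    }

    // Anyone holding only the root checks the witness in O(DEPTH) hashes.
    fn verify_non_membership(root: u64, key: u16, path: &[u64], empty_leaf: u64) -> bool {
        let mut acc = empty_leaf;
        let mut index = key as u64;
        for sibling in path {
            acc = if index & 1 == 0 { h(&[acc, *sibling]) } else { h(&[*sibling, acc]) };
            index >>= 1;
        }
        acc == root
    }
}

fn main() {
    let mut tree = SparseMerkleTree::new();
    tree.insert(7);
    tree.insert(1234);

    let absent = 42u16; // never inserted
    let path = tree.non_membership_witness(absent).expect("key should be absent");
    assert!(SparseMerkleTree::verify_non_membership(
        tree.root(),
        absent,
        &path,
        tree.empty[0],
    ));
    println!("verified non-membership of {absent} against root {:x}", tree.root());
}
```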

It would definitely be unacceptable for the oblivious syncing service to need to perform a gigantic computation for every user request, given that it would scale with the number of historic payments. We will perform (horizontally scalable) computations and storage in advance, by processing the history, so that the service needs to fold as few recursive proofs as possible for each user request.
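Setting aside obliviousness and the proofs themselves, the cost structure we're after looks roughly like the following sketch, where every name and parameter is hypothetical: the nullifier history is sharded by prefix once (and updated incrementally as blocks arrive), and each request then touches one shard's precomputed summary instead of replaying the whole history. In the actual service the per-shard lookup would instead amount to folding a small number of recursive proofs, and the service would not learn which nullifier is being queried.

```rust
use std::collections::{BTreeSet, HashMap};

// Hypothetical sizing: 256 prefix shards over a toy u64 "nullifier" space.
const SHARD_BITS: u32 = 8;

struct ShardedIndex {
    // Built once from the history, then updated incrementally per block.
    shards: HashMap<u64, BTreeSet<u64>>,
}

impl ShardedIndex {
    fn build(historic_nullifiers: impl IntoIterator<Item = u64>) -> Self {
        let mut shards: HashMap<u64, BTreeSet<u64>> = HashMap::new();
        for nf in historic_nullifiers {
            shards.entry(nf >> (64 - SHARD_BITS)).or_default().insert(nf);
        }
        ShardedIndex { shards }
    }

    // Per-request cost is bounded by one shard, not by the whole history.
    // (In the real service this would be folding a few recursive proofs,
    // done obliviously, rather than a plaintext lookup.)
    fn is_unspent(&self, nf: u64) -> bool {
        self.shards
            .get(&(nf >> (64 - SHARD_BITS)))
            .map_or(true, |shard| !shard.contains(&nf))
    }
}

fn main() {
    let index = ShardedIndex::build([0x11aa, 0x22bb, 0xdead_beef]);
    assert!(!index.is_unspent(0x22bb));
    assert!(index.is_unspent(0x1234_5678));
    println!("answered two queries by touching at most one shard each");
}
```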

I believe the syncing services will ultimately perform a similar amount of computational effort (asymptotically) to what validators currently do to maintain a nullifier set for the entire chain history.


I’m looking forward to learning more about the infrastructure incentives for scaling Tachyon. Is there a thread specifically for that?

Tachyon is Zcash’s real shot at interplanetary money scale. Privacy without compromise, finally scalable.