Working on Scalability

Does ECC plan to start work on transaction aggregation (Shielded transaction aggregates · Issue #4946 · zcash/zcash · GitHub) in the next 6 months? @zooko @joshs

3 Likes

I think that the transaction detection (trial decryption) problem is a more urgent bottleneck to scalability and should be worked on first. The transaction aggregation ticket acknowledges this:

Light client bandwidth (receiving)
The client has to scan the chain for note ciphertexts, commitments, and nullifiers as it does now, so in the worst case the bandwidth would be the same. However, if many notes are sent out-of-band then the bandwidth needed for scanning will be reduced accordingly.

Mobile wallets couldn’t really take advantage of transaction aggregation for extra performance without also having some method for privately sending transaction data out-of-band. Transaction aggregation on its own would mainly save on storage and processing costs for full nodes, which currently isn’t what’s bottlenecking transaction throughput.
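For anyone unfamiliar with why detection is inherently linear, here’s a minimal sketch of what a scanning wallet does today; the types and `try_decrypt` are hypothetical stand-ins, not the actual librustzcash API:

```rust
// Sketch of linear transaction detection (trial decryption).
// `NoteCiphertext`, `IncomingViewingKey`, and `try_decrypt` are
// hypothetical stand-ins, not the real librustzcash API.

struct NoteCiphertext(Vec<u8>);
struct IncomingViewingKey([u8; 32]);
struct Note {
    value: u64,
}

impl IncomingViewingKey {
    /// Succeeds only for ciphertexts addressed to this key; the real
    /// version does a Diffie-Hellman agreement plus an AEAD open.
    fn try_decrypt(&self, _ct: &NoteCiphertext) -> Option<Note> {
        None
    }
}

/// The wallet must attempt decryption of every shielded output on
/// chain, so detection costs Θ(# of new notes) no matter how many of
/// them actually belong to this wallet.
fn scan(ivk: &IncomingViewingKey, outputs: &[NoteCiphertext]) -> Vec<Note> {
    outputs.iter().filter_map(|ct| ivk.try_decrypt(ct)).collect()
}
```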

The transaction detection problem is also what I think is blocking the development of super-usable libraries. The need to have a background process that keeps state and continually scans for transactions makes it hard to e.g. make a Python or npm package that has a fast light wallet API. Something like this protocol sketch could make it possible to have Zcash libraries that work more like PayPal’s API, while still having pretty good privacy.

I think ECC’s cryptographic engineers could come up with something much better than my protocol sketch, so I’d like to see that prioritized as well. It might also be something that ZCG could put out an RFP for so that ECC can focus on PoS.

6 Likes

Another idea:

The other huge performance bottleneck for light wallets is the need to update unspent note witnesses. For those unfamiliar, the witness is a Merkle path that’s used to prove the note exists in the current note commitment tree. Wallets need to continually update the path to match the latest tree root, for privacy. They could choose not to update the witnesses, but then they would reveal approximately which block the note they’re spending was mined in.
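To make that concrete, here’s a toy sketch (tiny depth, toy hash, nothing like the real Sapling/Orchard code) of a wallet refreshing one note’s Merkle path as commitments arrive. Real wallets update the path incrementally rather than recomputing it, but either way each new commitment triggers work per tracked note:

```rust
// Toy witness maintenance. `hash` stands in for the real tree hash
// (Pedersen/Sinsemilla in Zcash) and DEPTH is tiny for illustration;
// Zcash's shielded trees have depth 32.

const DEPTH: usize = 8;
type Hash = u64;

fn hash(l: Hash, r: Hash) -> Hash {
    l.wrapping_mul(0x9e3779b97f4a7c15).wrapping_add(r) // toy hash only
}

/// Merkle path (sibling hashes, leaf to root) for the leaf at `pos`.
fn merkle_path(leaves: &[Hash], pos: usize) -> Vec<Hash> {
    let mut level: Vec<Hash> = leaves.to_vec();
    level.resize(1 << DEPTH, 0); // pad with empty leaves
    let mut path = Vec::with_capacity(DEPTH);
    let mut idx = pos;
    for _ in 0..DEPTH {
        path.push(level[idx ^ 1]); // sibling at this level
        level = level.chunks(2).map(|p| hash(p[0], p[1])).collect();
        idx /= 2;
    }
    path
}

fn main() {
    let mut leaves: Vec<Hash> = vec![11, 22, 33]; // our note is leaf 1
    for new_cm in [44, 55, 66] {
        leaves.push(new_cm);
        // Refresh the path after every new commitment so spends keep
        // proving against the latest tree root.
        let witness = merkle_path(&leaves, 1);
        println!("witness after commitment {new_cm}: {witness:?}");
    }
}
```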

A lot of that work could be avoided by adding another Merkle tree whose elements are each block’s note commitment tree root (a Merkle tree of Merkle trees). Then a wallet could safely avoid having to update each note’s witness, instead proving (a) “my note is in a Merkle tree with root T” and (b) “T is the correct note commitment tree root at some block”, where T is a private input to the circuit.

This gives a constant-factor processing speedup: wallets only need to update their witness each time a new block is mined, rather than for each new commitment. I feel like you could also do this recursively (maybe using recursive proofs to keep the proof size small) to make the total work O(log(n)) or something like that. (Actually, it’s not obvious how to make that work; I’m still trying to work it out.)
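To spell out statements (a) and (b), here’s an out-of-circuit sketch; in a real circuit these equalities would be constraints over the proof system’s field, and every name here is illustrative:

```rust
// The "tree of trees" statement in plain Rust. `t` (a block's note
// commitment tree root) is a private input; only `r` (the root of the
// tree of roots) is public, so which block the note landed in stays
// hidden inside the proof. Toy hash, illustrative names.

type Hash = u64;

fn hash(l: Hash, r: Hash) -> Hash {
    l.wrapping_mul(31).wrapping_add(r) // toy hash only
}

/// Recompute a root from a leaf, its position, and its Merkle path.
fn root_from_path(leaf: Hash, pos: u64, path: &[Hash]) -> Hash {
    path.iter().enumerate().fold(leaf, |acc, (i, &sib)| {
        if (pos >> i) & 1 == 0 {
            hash(acc, sib)
        } else {
            hash(sib, acc)
        }
    })
}

/// (a) the note commitment `cm` is under some block's root `t`, and
/// (b) `t` is itself a leaf of the tree of roots with public root `r`.
fn statement(
    cm: Hash, note_pos: u64, note_path: &[Hash], // witness for (a)
    t: Hash, t_pos: u64, roots_path: &[Hash],    // witness for (b)
    r: Hash,                                     // public input
) -> bool {
    root_from_path(cm, note_pos, note_path) == t
        && root_from_path(t, t_pos, roots_path) == r
}
```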

1 Like

Generally, what you’re describing is unnecessary: the cost of updating a Merkle tree authentication path from an old state of the tree to a new state is roughly logarithmic in the number of new note commitments (in both communication and computation), given advice from a third party that caches deltas (or what we call “bridges”). In fact, we’ve already implemented this in the bridgetree crate that we use for the zcashd Orchard wallet code.
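For readers who haven’t seen bridges before, here is the concept in miniature; this is only the idea, not the bridgetree crate’s actual API:

```rust
// Conceptual sketch of a "bridge": a digest of everything appended to
// the tree between two states, just enough to advance a witness
// without replaying the individual commitments. NOT the bridgetree API.

type Hash = u64;

struct Bridge {
    from: u64,                 // tree size the bridge starts at
    to: u64,                   // tree size the bridge ends at
    frontier_nodes: Vec<Hash>, // O(log n) subtree roots covering the gap
}

impl Bridge {
    /// A bridge applies only if it starts exactly where the witness
    /// currently is.
    fn can_apply(&self, witness_synced_to: u64) -> bool {
        self.from == witness_synced_to
    }

    /// Bridges compose end-to-end like path segments.
    fn join(self, next: Bridge) -> Option<Bridge> {
        if self.to != next.from {
            return None;
        }
        Some(Bridge {
            from: self.from,
            to: next.to,
            // The real structure merges frontiers with O(log n)
            // hashing; concatenation here is just a placeholder.
            frontier_nodes: [self.frontier_nodes, next.frontier_nodes].concat(),
        })
    }
}
```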

3 Likes

This is exactly my point. As far as I know, ZCG hasn’t been told whether or not the ball is now in their court for this kind of work.

I don’t think Zcash can afford to go into another “emergency mode” because we failed to prioritise scaling again. :person_facepalming:

2 Likes

Can the third-party gather information about the user (such as the position of their notes)?

3 Likes

Bridges go from any point in the commitment tree to any future point. It’s possible that block headers themselves could contain bridges at different granularities (one block, ten blocks, a hundred blocks, etc.), and the wallet could just use those to rapidly sync up to the latest tree state, but it would need to download every block header to do this privately. (This isn’t so bad in a blockchain that has large blocks but fewer of them.) Alternatively, the wallet could ask a third party (like a lightwalletd or whatever) for the bridge(s) it needs to sync to the latest tree state, avoiding downloading all the headers, at the cost of potentially revealing where the wallet was last synced to, though even that could be obscured fairly well.
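As a toy illustration of the header-bridge option (granularities, alignment rules, and names all invented for the example), a wallet could greedily plan which published bridges to apply:

```rust
// Greedy bridge planning over fixed granularities (e.g. bridges
// spanning 100, 10, and 1 blocks, each aligned to a multiple of its
// span). Hypothetical scheme, for illustration only.

fn plan_bridges(mut synced: u64, tip: u64, granularities: &[u64]) -> Vec<(u64, u64)> {
    let mut plan = Vec::new();
    while synced < tip {
        // Take the largest aligned bridge that doesn't overshoot.
        let g = granularities
            .iter()
            .copied()
            .find(|g| synced % g == 0 && synced + g <= tip)
            .unwrap_or(1);
        plan.push((synced, synced + g));
        synced += g;
    }
    plan
}

fn main() {
    // Sync a witness from block 370 to block 1000.
    let plan = plan_bridges(370, 1000, &[100, 10, 1]);
    println!("{} bridges: {:?}", plan.len(), plan);
}
```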

In any sensible implementation of bridgetree support for wallets, it could be made practically impossible for the third party to identify where your note is actually located in the tree.

5 Likes

What about the first time a note’s witness is bridged? In that case, ‘where the wallet last synced to’ is the block the note appeared in. So to avoid revealing the block height of its note, should the wallet advance the witness by itself to the current block height (or something) and only then start bridging? Or is there a way to avoid that?

2 Likes

Thanks, this looks compatible with the representation of commitment trees used in Ywallet, so I think we can use it too.

@earthrise The wallet processes blocks from its latest sync point to the oldest usable bridge and then merges the bridge.

3 Likes

So say I open my wallet and it has a bunch of notes whose witnesses are all synced to 1000 positions back, and my wallet finds a new note that’s 500 positions back. Naively, it feels like I’m stuck between either (a) requesting a bridge from -1000 to -500 and a bridge from -500 to 0 (which reveals the index of the new note) or (b) requesting a bridge from -1000 to 0 and incrementing the new note’s witness from -500 to 0 myself, which is Θ(# of new notes that have appeared since I last opened my wallet) work. The -1000 to 0 bridge might be missing the root of a huge subtree that I’d need in order to update my new note’s witness.

A wallet in the same situation except that it did not get a new note would just request a bridge from -1000 to 0, so if I do anything other than that, I’m revealing the fact that I received a new note.

I probably just don’t understand bridging enough to have a grasp of its privacy properties, so I’ll do more research. :slight_smile:

1 Like

I’d use a passive model where a certain set of bridges is published but not explicitly requested:
[-2048, 0], [-1024, 0], [-512, 0], [-256, 0], [-128, 0], etc.
My wallet receives them all, so there is no telling which one it is going to use, if any.

In your example, the old notes would miss the first two bridges because they are synced to -1000. They sync from [-1000, -512] incrementally and then use [-512, 0] directly.

PS: Another distribution of bridges may be better in practice.
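Here’s a sketch of the wallet-side choice under that model (parameters illustrative): since all bridges in the set end at the tip, the wallet just picks the largest one whose start it has already reached, advancing incrementally to that start first if needed:

```rust
// Passive model: every block publishes bridges [tip - 2^k, tip]. The
// wallet downloads the whole set, so its choice reveals nothing to
// the server. Hypothetical scheme, for illustration only.

fn pick_bridge(synced: u64, tip: u64, max_log: u32) -> Option<(u64, u64)> {
    (0..=max_log)
        .rev()                               // largest span first
        .map(|k| tip.saturating_sub(1 << k)) // bridge start points
        .find(|&start| start >= synced)      // first usable bridge
        .map(|start| (start, tip))
}

fn main() {
    // Witness at height 9000, tip at 10000, bridge spans up to 2^11.
    // The -2048 and -1024 bridges start before 9000, so we get the
    // -512 one: sync incrementally to 9488, then jump to the tip.
    let (start, tip) = pick_bridge(9000, 10000, 11).unwrap();
    println!("advance to {start} by hand, then bridge [{start}, {tip}]");
}
```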

3 Likes

With any exponential distribution of bridges like that, it’s still Θ(# of new notes) work on average to catch up to the next power of the base. I guess the wallet could use an old bridge set to catch up to the next power of the base, but then requesting all of those would make the bandwidth linear again. I think I need to do some whiteboard drawings to figure it out.

1 Like

Big-O notation is a bit misleading here, in my opinion, since it describes asymptotic behavior. In other words, processing 500 notes vs. 1000, or 1000 vs. 2000, is a major win even though the complexity is the same.

Btw, you still need to request every note for trial decryption.

2 Likes

I agree completely! I’m commenting under the assumption that we’re going to work out a way to privately send notes out-of-band to avoid the Θ(# of new notes) trial decryption and that the next step is to figure out how to update witnesses privately in much less than Θ(# of new notes) work, so that the total work wallets have to do is much less than Θ(# of new notes). In the meantime, constant factor improvements are totally worthwhile.

(It’s not currently possible to do better than Θ(# of new notes) because of the need to sync the whole nullifier set (to privately figure out which of your notes are unspent), but that’s pretty cheap in terms of concrete performance.)
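For context, the spentness check mentioned in that parenthetical is a local membership test per wallet note (hypothetical types below), which is why it’s concretely cheap even though downloading the set is linear:

```rust
use std::collections::HashSet;

// Spentness detection: download the full nullifier set (one small
// value per on-chain spend) and test our own notes' nullifiers
// locally, revealing nothing about which notes we own.

type Nullifier = [u8; 32];

fn unspent(mine: &[Nullifier], chain_set: &HashSet<Nullifier>) -> Vec<Nullifier> {
    mine.iter()
        .filter(|nf| !chain_set.contains(*nf))
        .copied()
        .collect()
}
```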

1 Like

Have you got a few minutes to give some remarks about how recursion at the protocol level could positively impact scalability?
Is that feature on the roadmap as a Zcash upgrade in the next year or two?

Recursive Proof Composition without a Trusted Setup (iacr.org)

1 Like

Oh, I get it now: if you had something like -2048, -1024, -512, -256, -128 bridges from each block, then you can use an old bridge set to become aligned with the largest power of two (logarithmic), use the largest bridge repeatedly to become aligned with the latest bridge set (linear), and then use the latest bridge set (logarithmic). It works out to logarithmic complexity as long as there are enough bridges, or if your wallet isn’t too far behind, at the cost of just having to download some small number of bridges at fixed intervals (e.g. at each block).

So yeah, a wallet can maintain its privacy by just downloading all of the old published bridges, which isn’t too much overhead.
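One way to realize that plan, assuming every block at height h publishes bridges [h - 2^k, h] for k up to some maximum (all numbers illustrative): greedily jump forward by the largest power of two that fits, which costs about distance / 2^MAX_LOG jumps plus a logarithmic tail, matching the linear-middle, logarithmic-ends description above:

```rust
// Greedy catch-up over per-block bridge sets [h - 2^k, h]. Jumping
// from `pos` to `pos + 2^k` uses the bridge published by the block at
// height pos + 2^k. Hypothetical scheme, for illustration only.

const MAX_LOG: u32 = 11; // largest published bridge spans 2^11 blocks

fn catch_up(mut pos: u64, tip: u64) -> Vec<(u64, u64)> {
    let mut jumps = Vec::new();
    while pos < tip {
        // Largest k whose bridge doesn't overshoot the tip; k never
        // underflows because a 2^0 bridge always fits while pos < tip.
        let mut k = MAX_LOG;
        while pos + (1u64 << k) > tip {
            k -= 1;
        }
        let next = pos + (1u64 << k);
        jumps.push((pos, next));
        pos = next;
    }
    jumps
}

fn main() {
    // 9300 blocks behind: four 2048-block jumps, then a short tail.
    let jumps = catch_up(100_000, 109_300);
    println!("{} jumps: {:?}", jumps.len(), jumps);
}
```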

1 Like

There’s a tradeoff that can be made: notes that are first witnessed by the node can be advanced manually, or by bridges supplied from the blocks themselves, until they reach a Schelling point of some kind where the user can blend in with everyone else updating their witnesses; from there, they can begin to use bridges from lightwalletd or whatever.

2 Likes

Following up on this note: ECC’s near-term plans are aligned with what was described in the restructure announcement from two weeks ago. There are four key focus areas:

  • Delivery of a solid SDK as a foundation for future Zcash builders/devs
  • Delivery of a wallet focused on ease of use and accessibility
  • Research and support for Proof-of-Stake
  • US-based policy work

Having said this, chain scalability is not a primary focus area. However, as we work towards delivering on all four of these key areas, scalability will be an ongoing, overarching concern that can be addressed much the same way we are addressing emergency mode: through a combination of technology and process changes.

3 Likes

@ZcashGrants you’re on notice. :grimacing:

2 Likes

Sorry if I lack understanding, but doesn’t https://zerosync.org/ describe something similar to transaction aggregation in their paper (see here for a short summary: zkCoins: A payment system with strong privacy and scalability, combining a client-side validation protocol with validity proofs · GitHub) that doesn’t run into these wallet scalability issues? Basically, what if two clients exchanged the data out-of-band and the chain held only a minimal proof “root” or something? Then no scanning would be required at all.

Not only does this seem like the ultimate answer to scaling, but it would also turbo-boost privacy, as the data is literally never committed to the chain; the result is more like an online exchange of cash notes. The chain really only provides double-spend protection to this system and nothing more.

Correct me if I’ve got this completely wrong.