Improving UX with detection keys

WS2 doesn’t do any hashing for blocks that have no new notes. I’m not sure it’s the same logic as yours.

WS2 just provides a bridge for every block, right? I think it’s very similar, or at least compatible; the equivalent using tree sharding would be to provide the “bridge” (or rather just half the bridge) up to level 16 for the “right-hand” side of the subtree as of the end of each block, and then use the subtree roots from there on up. The tree states already provide the left-hand side data that’s required, which is what #982 is looking to exploit.
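To make that concrete, here is a minimal sketch of what “half a bridge up to level 16” could look like. This is not the actual `shardtree`/`bridgetree` API; all type and function names are hypothetical, and SHA-256 stands in for the protocol’s real node hashes.

```rust
// Illustrative sketch only; the real implementation lives in the
// `incrementalmerkletree` / `shardtree` crates and uses the protocol's own
// node hashes, not SHA-256. All names below are hypothetical.
use sha2::{Digest, Sha256};

type Node = [u8; 32];

fn combine(level: u8, left: &Node, right: &Node) -> Node {
    // Placeholder node hash, domain-separated by level.
    let mut h = Sha256::new();
    h.update([level]);
    h.update(left);
    h.update(right);
    h.finalize().into()
}

/// The "half bridge" a server could publish per block: the right-hand
/// siblings for levels 0..16 of the newest, still-incomplete subtree as of
/// the end of that block.
struct HalfBridge {
    right_ommers_below_16: Vec<Node>, // one entry per level, levels 0..16
}

/// Recompute the level-16 subtree root for a note at `position`, combining
/// the wallet's own left-hand siblings (already available from its tree
/// state) with the server-provided right-hand siblings. Everything above
/// level 16 is then derived from the completed subtree roots.
fn subtree_root(
    mut node: Node,
    mut position: u64,
    left_ommers: &[Node],
    bridge: &HalfBridge,
) -> Node {
    for level in 0..16u8 {
        node = if position & 1 == 1 {
            combine(level, &left_ommers[level as usize], &node)
        } else {
            combine(level, &node, &bridge.right_ommers_below_16[level as usize])
        };
        position >>= 1;
    }
    node
}

fn main() {
    let leaf = [0u8; 32];
    let siblings = vec![[1u8; 32]; 16];
    let bridge = HalfBridge { right_ommers_below_16: siblings.clone() };
    let root = subtree_root(leaf, 5, &siblings, &bridge);
    println!("toy level-16 subtree root starts with {:02x?}", &root[..4]);
}
```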

The bridge is provided for any block range, but not inside a block. Basically, anchor point to anchor point.

1 Like

Yup, that’s how I understood it; it makes sense. So does the lightwalletd server “maintain” the bridge for each historic block as each new block arrives? Or do you use bridge fusion like bridgetree does to compute the final bridge needed by the wallet on the fly?

It is both: there are extra fields on the compact block, and a bridge server calculates bridges on demand. I experimented with both approaches:

  1. The extra compact block fields are easier to implement.
  2. The on-demand bridge server is faster and can leverage the existing infra; it is low CPU and low bandwidth (see the sketch below).
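A hypothetical sketch of option 2, the low-CPU on-demand bridge server. The real fusion logic lives in the `bridgetree` crate; the types, fields, and function names below are invented purely for illustration.

```rust
// Hypothetical sketch of option 2 (an on-demand bridge server); nothing here
// is the real bridgetree API.

/// A bridge summarising how the note commitment tree advances across a span
/// of blocks (anchor point to anchor point).
#[derive(Clone)]
struct Bridge {
    start_height: u32,
    end_height: u32,
    // ... frontier / ommer data elided ...
}

impl Bridge {
    /// Fuse two adjacent bridges into one covering the combined range.
    fn fuse(self, next: Bridge) -> Option<Bridge> {
        if self.end_height + 1 != next.start_height {
            return None; // ranges must be contiguous
        }
        Some(Bridge { start_height: self.start_height, end_height: next.end_height })
    }
}

/// Per request, the server folds its stored per-block bridges for the
/// wallet's range into a single bridge. There is no per-wallet state and no
/// trial decryption, which is why it stays cheap in CPU and bandwidth.
fn bridge_for_range(per_block: &[Bridge], from: u32, to: u32) -> Option<Bridge> {
    per_block
        .iter()
        .filter(|b| b.start_height >= from && b.end_height <= to)
        .cloned()
        .reduce(|acc, b| acc.fuse(b).expect("per-block bridges are contiguous"))
}

fn main() {
    let per_block: Vec<Bridge> =
        (100..110).map(|h| Bridge { start_height: h, end_height: h }).collect();
    let fused = bridge_for_range(&per_block, 102, 107).unwrap();
    println!("bridge covers {}..={}", fused.start_height, fused.end_height);
}
```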

I couldn’t agree more. There are likely other reasons holding back adoption, e.g. price volatility, merchant acceptance, and broad wallet support. However, users won’t even get to consider those limitations given the current sync issues.

1 Like

Could you comment on the exact privacy compromise caused by using old anchors?

Could an implementation simply “try” to sync the latest anchors, but if the user tries to spend, have the wallet simply use the latest anchors it has?

There are a few issues. First, using old anchors demonstrates that the source of funds from the transaction is no more recent than the chosen anchor. Second, the fact that the wallet is choosing an old anchor can reveal which wallet is being used, as most of the wallets I’m aware of require recent anchors. Finally, depending upon sync behavior, one can infer the time at which the user last synced their wallet. None of these are individually identifying, but it’s still a lot of information to leak to all observers of the chain. Now, for a lot of users, this level of leakage might not matter, but for users who are being specifically targeted (particularly if their counterparties are malicious and/or they’ve been sent notes at known heights with the intent of using this leaked information to help deanonymize them), it could be a problem.

With respect to your second question, yes, absolutely. Also, once a subtree has been fully scanned, there’s no need to perform any redundant work - at that point, all of the notes in that subtree can be made immediately spendable with O(1) work, and it’s only notes in subtrees that were incomplete at the time of the last wallet sync where it may be necessary to wait for tree updates.
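A minimal sketch of the O(1) check, assuming (hypothetically) that the wallet tracks which level-16 subtrees have been fully scanned; the real bookkeeping lives in `shardtree` / `zcash_client_backend`, and these names are made up.

```rust
// Hypothetical sketch of "notes in completed subtrees are immediately
// spendable"; not the real shardtree API.

const SUBTREE_DEPTH: u64 = 16;

/// Index of the level-16 subtree containing a given note commitment position.
fn subtree_index(position: u64) -> u64 {
    position >> SUBTREE_DEPTH
}

/// A note whose subtree is already complete can be made spendable with O(1)
/// work: its authentication path below level 16 is fixed forever, and the
/// levels above it come from the already-known subtree roots. Only notes in
/// a still-incomplete subtree need to wait for further tree updates.
fn immediately_spendable(position: u64, completed_subtrees: &[u64]) -> bool {
    completed_subtrees.contains(&subtree_index(position))
}

fn main() {
    let completed: Vec<u64> = (0..10).collect(); // subtrees 0..=9 fully scanned
    assert!(immediately_spendable(3 * (1 << 16) + 42, &completed));
    assert!(!immediately_spendable(12 * (1 << 16), &completed));
    println!("ok");
}
```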

Using a system of bridges provided by the lightwallet server as @hanh suggests can make the latter also a non-issue. Discovery of new notes is always going to be the thing that takes longer, absent detection keys or liberated payments, but as @hanh says at the current post-sandblasting level of chain usage, this isn’t really a bottleneck either. In the ECC SDK we need to change the scanning strategy to use adaptive block range sizing based upon how full blocks are; that was something that we wanted for the original 2.0 release but didn’t have time to implement, and that will also help.
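As a rough illustration of what adaptive block range sizing could look like: the target of 10,000 outputs per batch and the min/max bounds below are invented tuning parameters, not anything from the ECC SDK.

```rust
// Hedged sketch of adaptive block-range sizing; all constants are made up.

/// Pick how many blocks to request next so that each scanning batch contains
/// roughly the same number of shielded outputs, based on the output density
/// observed in the batch that was just processed.
fn next_batch_size(outputs_seen: u64, blocks_seen: u64) -> u64 {
    const TARGET_OUTPUTS_PER_BATCH: u64 = 10_000;
    const MIN_BLOCKS: u64 = 10;
    const MAX_BLOCKS: u64 = 10_000;

    let outputs_per_block = (outputs_seen / blocks_seen.max(1)).max(1);
    (TARGET_OUTPUTS_PER_BATCH / outputs_per_block).clamp(MIN_BLOCKS, MAX_BLOCKS)
}

fn main() {
    // Sparse, post-sandblasting usage: few outputs per block, so large ranges.
    println!("{}", next_batch_size(500, 1_000)); // -> 10_000 blocks
    // Sandblasting-era density: hundreds of outputs per block, so small ranges.
    println!("{}", next_batch_size(500_000, 1_000)); // -> 20 blocks
}
```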

2 Likes

That’s what YWallet does. If the user makes a payment while synchronization is in progress, it will pick the latest anchors it has processed so far.
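For illustration only (this is not YWallet’s actual code), the best-effort anchor selection described here boils down to something like:

```rust
// Illustrative sketch; not YWallet's implementation.

struct Anchor {
    height: u32,
    root: [u8; 32],
}

/// Best effort: spend against the newest anchor whose tree state the wallet
/// has finished processing, even if sync has not yet reached the chain tip.
/// (A wallet could additionally bound how stale an anchor it will accept;
/// that policy knob is my own addition, not part of what was described above.)
fn latest_processed_anchor(processed: &[Anchor]) -> Option<&Anchor> {
    processed.iter().max_by_key(|a| a.height)
}

fn main() {
    let processed = vec![
        Anchor { height: 2_200_000, root: [0; 32] },
        Anchor { height: 2_200_050, root: [1; 32] },
    ];
    let anchor = latest_processed_anchor(&processed).unwrap();
    println!("spending against anchor at height {}", anchor.height);
}
```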

2 Likes

I think this is really important. The scanning/detection problem is at least a source of friction for any imaginable use case, and it’s been an outright blocker for projects (like Zbay) that try to “build something on top of” Zcash. If Brave is going to use Zcash as a messaging layer, this will be a blocker on that roadmap.

Scanning makes the wallets feel clunky, even if it’s possible to accomplish everything you want to do in a reasonable amount of time. The need for a lengthy scanning process also forces the Zcash libraries to take on a shape that’s harder to get started with and harder to use than something you’d expect from e.g. the PayPal API.

I agree with @nuttycom that we shouldn’t just assume this is one of the biggest adoption-blockers, though. I think it is, but other things, like not having a stablecoin, might be even more important. This sounds like a good item with which to test our process for figuring that sort of thing out.

I also don’t think that we can necessarily predict or know which use cases will turn out to be important to Zcash. So I think it’s important to make something developers can have fun building on, so they can come and build and experiment with their own ideas, hopefully some of which are killer use cases for Zcash. The UX of our libraries is a huge factor there, and the “shape” of how scanning works has a big impact on that. If it’s frustrating to get started building on Zcash, not many people are going to come and experiment.

Technical Opinions

Detection keys would make privacy a bit worse than the current model. In the current model, wallets tell the server which transaction they’re receiving, but the server doesn’t log that information. So if an attacker breaks in, they only see transaction fetches that happen while the server is compromised. With detection keys, the attacker can break in once, steal the detection keys, and then they forever have the ability to detect transactions going to those wallets. (This should be mitigated by making it easy and automatic to change detection keys. Note that we could also store the detection keys in a TEE, which although imperfect, would still make it a bit harder to steal them.)

I also don’t think that “it doesn’t make privacy worse than it currently is” is really all that good of a justification in the first place. The current level of privacy is probably not good enough for some kinds of use cases, so assuming it’s okay and entrenching the information leakage further might not be a great thing to do.

In terms of privacy cost/UX benefit though, I think detection keys are a fine solution in the short term. It would be nice to see some research into longer-term solutions. Ultimately, detection keys still don’t scale; they just move the computation to faster, always-online processors. Aztec’s RFP for this problem will be interesting to watch.

3 Likes

Back in the days of the spam, I had written an implementation of the note decryption algorithm in CUDA. An RTX 2070 could process 4 million notes per second. Crunching through the spam took 20 seconds without the spam filter, and less than 1 second with it.
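A back-of-envelope check of those figures (my arithmetic, not anything measured by @hanh):

```rust
// Rough sense of scale implied by the numbers above.
fn main() {
    let notes_per_second: u64 = 4_000_000; // reported RTX 2070 throughput
    let seconds_without_filter: u64 = 20;
    // 20 s at 4M notes/s implies roughly this many outputs were scanned:
    let total = notes_per_second * seconds_without_filter;
    println!("~{} million outputs", total / 1_000_000); // ~80 million
}
```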

However, the users I polled were reluctant to upload their FVK.

Also, there is the matter of maintaining a cloud server that has a GPU. They are not cheap.

10 Likes

Let me preface this by stating that I (currently) believe that, in the long term, Zcash will migrate to a succinct blockchain (à la Mina) or similar ~~zkrollup-based~~* architecture. However, that’s a long way off (and a subject for a different topic). In the medium term, detection keys seem to be an option that’s worth exploring. It’s a topic that comes up during Arborist calls every so often (e.g. December 2022, September 2023).

Personally, I’d like to establish a cross-ecosystem team, with engineers from ECC and ZF (plus others, if they’re interested in contributing), to draft a strawman design proposal for deploying detection keys. A more specific/concrete objective would be to figure out what changes would need to be made to the transaction format to support detection keys, with a view to including those changes in the V6 transaction format. That way we could lay the groundwork so that we could roll out detection keys without needing to do another network upgrade.

﹡ I’ve crossed out “zkrollup-based” to avoid people jumping to the conclusion that I’m in favour of Zcash becoming a L2 token on another project’s blockchain. For the record, I’m not. I believe that Zcash should remain on its own L1, and I expect that the L1 architecture Zcash uses in the future will leverage ZKPs to become more scalable (versus today’s UTXO/note-based architecture).

5 Likes

I would love to see such a proposal see the light of day. It’s all just hand waving until a succinct proposal is put in place for people to strawman. The privacy pros and cons would be a must have within the proposal.

1 Like

The real question is whether the cost of processing detection keys would be equivalent to that of decrypting notes with a viewing key. @nathan-at-least any idea of the CPU impact difference?

So, summing up the issues:

  1. “Old anchors demonstrate that the source of funds from the transaction is no more recent than the chosen anchor.” That seems hardly a concern if we use “best effort”. Plus, there should likely be a HUGE anonymity set prior to the most recently synced anchor, considering how long the chain currently is and that a wallet at least tries to get recent anchors.
  2. “This reveals which wallet is being used.” Not if it gets standardized as best practice; YWallet already does this.
  3. “One can infer the time at which the user last synced their wallet.” But what exactly does that de-anonymize? Seems pretty nit-picky.

All of the above, to me, falls into the “let the perfect be the enemy of the good” trap that cryptographers have struggled with for decades. The fear of slight imperfection causes basically non-adoption, so we just get no privacy adoption at all. And don’t tell me that privacy only for those who care or “really need it” is good enough, because that’s what gets us into the bull’s-eye of regulators.

2 Likes

I’ve added Detection Keys to the agenda for today’s Arborist call.

I also want to share this doc, to help illustrate what role detection keys could play in the future evolution of Zcash:

PDF version with working links (57.7 KB)
Original GDoc

7 Likes

@Dodger Are we intending to allow for opt-in compliance? If so, how do these viewing or detection keys fit in with that possibility? For example, it might be that compliance is needed in order to mint USDz: the transactions remain private to the outside world, but at the wallet level the user activates compliance so that the USDz is usable locally (or internationally) between wallets. Is this how you are thinking about it? If not, can you explain?

Opt-in compliance is a different topic. As the title of this topic implies, the principal motivation for implementing detection keys is improving the user experience of shielded ZEC.

The problem we have with the current Zcash light client protocol is that it’s not particularly lightweight. It requires that the “light” wallet trial-decrypts every shielded transaction (for a given shielded pool) in order to detect shielded (or shielding) transactions. While the use of compact blocks (together with not storing and validating the entire chain) means that the bandwidth and resource requirements are lower than running a full node, it’s still a significant amount of effort. To couch it in terms of boxing weight class terminology, it’s probably closer to middleweight than lightweight. It’s one of the reasons that mobile wallets struggled during the Sandblast phase (and I think we can all agree that was a sub-optimal user experience for most, if not all, Zcash mobile wallet users), and, in the long term, it’s probably not scalable as Zcash achieves mass adoption.
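To spell out what “not particularly lightweight” means in code, here is a sketch of the work the current protocol imposes on every wallet. The types are simplified stand-ins for the CompactBlock/CompactOutput data served by lightwalletd, and the decryption stub is a placeholder.

```rust
// Sketch of the per-wallet scanning burden under the current light client
// protocol; types are simplified stand-ins, not the real protobufs.

struct CompactOutput {
    ephemeral_key: [u8; 32],
    cmu: [u8; 32],
    enc_ciphertext_prefix: [u8; 52],
}

struct CompactBlock {
    height: u32,
    outputs: Vec<CompactOutput>,
}

/// Stand-in for note trial decryption with the wallet's incoming viewing key;
/// the real thing is a Diffie-Hellman agreement plus an AEAD open attempt.
fn try_decrypt(_ivk: &[u8; 32], _out: &CompactOutput) -> Option<u64> {
    None // almost every output on chain is *not* ours
}

/// The "middleweight" part: every wallet runs this over every shielded output
/// in every block, just to find the handful of notes addressed to it.
fn scan(ivk: &[u8; 32], blocks: &[CompactBlock]) -> Vec<(u32, u64)> {
    let mut found = Vec::new();
    for block in blocks {
        for out in &block.outputs {
            if let Some(value) = try_decrypt(ivk, out) {
                found.push((block.height, value));
            }
        }
    }
    found
}

fn main() {
    let ivk = [0u8; 32]; // stand-in incoming viewing key
    let blocks = vec![CompactBlock { height: 2_300_000, outputs: Vec::new() }];
    println!("found {} notes", scan(&ivk, &blocks).len());
}
```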

(Note that this is absolutely not intended as a criticism of the Zcash light client protocol or its designers. It was a major and highly successful step forward for Zcash that made it possible to use shielded ZEC on a mobile device while enjoying a very high degree of privacy. The ECC team deserves ongoing respect and kudos for conceiving of, and rolling out a solution that moved us from “You need to run a full node” to “Just install a wallet app on your phone…”.)

In an ideal world, a Zcash light wallet would only need to decrypt its own transactions. Two ways of achieving that are out-of-band note transmission (where the transaction information is transmitted directly to the recipient’s wallet instead of being posted to the blockchain for them to find themselves) or outsourcing the job of payment detection (through trial decryption) to someone else.

Out-of-band note transmission requires some form of messaging solution, which is non-trivial, and I would guess that the implementation/deployment timeline would be significantly longer than that of detection keys. However, as is hopefully clear from the table I posted above, I think it’s something we should definitely work towards in the medium- to long-term.

Outsourcing payment detection involves privacy trade-offs. Right now, you could (in theory) give someone else your incoming viewing key(s) so they can trial decrypt every shielded transaction and tell you which transactions are yours. The privacy trade-off is that they get to see the details of your incoming transactions. That might be acceptable if you trust the person you’re outsourcing to (or if you’re actually “outsourcing” to a service/node that you control) but there’s a better option.

Detection keys would change the privacy trade-off of outsourcing payment detection to a third party from “they get to see the details of my incoming transactions” to “they get to find out which transactions are my incoming transactions” (but don’t learn anything more - i.e. your payment address, the amount being sent to you, and the contents of the shielded memo field remain private). Future cryptographic improvements (e.g. OMR) might even improve that trade-off further.
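A purely hypothetical interface sketch of that outsourcing split; no such protocol exists yet, and the eventual design (transaction format changes, OMR-style schemes, etc.) may look nothing like this. It only illustrates who learns what.

```rust
// Hypothetical sketch of outsourced payment detection with detection keys.

struct TxId([u8; 32]);

/// What the wallet hands to the detection service: enough to *detect* its
/// incoming transactions, and nothing more.
struct DetectionKey([u8; 32]);

/// Runs on the always-online service. It learns *which* transactions are the
/// wallet's, but not the receiving address, the value, or the memo.
fn detect(_dk: &DetectionKey, chain_txids: &[TxId]) -> Vec<TxId> {
    // Placeholder: in a real design this would check some per-output
    // detection tag committed to in a future (e.g. V6) transaction format.
    let _ = chain_txids;
    Vec::new()
}

/// Runs on the wallet, over only the handful of flagged transactions. Full
/// decryption still requires the incoming viewing key, which never leaves
/// the wallet (and the fetch could happen over Tor).
fn fetch_and_decrypt(_ivk: &[u8; 32], flagged: &[TxId]) -> usize {
    flagged.len() // stand-in for "download and decrypt each flagged tx"
}

fn main() {
    let dk = DetectionKey([0; 32]);
    let chain: Vec<TxId> = Vec::new(); // stand-in for the chain's transactions
    let mine = detect(&dk, &chain);
    println!("decrypted {} incoming txs", fetch_and_decrypt(&[0; 32], &mine));
}
```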

External Client model (detection keys)

Additionally, the reduced bandwidth requirements would make it more practicable for the light wallet to interact with the full node over Tor, such that the full node doesn’t even know the light wallet’s IP address.

(Aside: I think that detection keys will also reduce the bandwidth requirements such that the current light client protocol’s privacy trade-offs could be preserved by allowing the light client to trial decrypt using a single transaction field, instead of the whole transaction. Whilst I don’t think that’s where the true win lies, I could imagine it being implemented in lightwalletd as an interim step.)

Anyway, ZF is ready to lean in and support an effort to roll out detection keys by (a) helping design the necessary protocol-level changes (e.g. to the transaction format), and implement them in Zebra, and (b) implementing blockchain scanning using detection keys in Zebra.

6 Likes

It’s all good, but mostly theoretical, because the usage of Zcash isn’t there. A wallet can remain synchronized with very little battery usage. For example, I have kept my wallet up and synced for more than 8 days on a single charge with a combination of warpsync & background sync.

5 Likes

I’m not sure about this; detection keys could require a circuit change and a network upgrade, address and wallet changes, and the implementation of the detection server and the detection server protocol. It’s very difficult to say confidently that this would be less work than integrating a messaging protocol, given that there’s a pretty large corpus of messaging protocol research to draw upon - Zcash would almost certainly not want to try to create its own such protocol from scratch, but beyond adopting a messaging protocol, the other wallet-level changes seem relatively straightforward.

Both are a substantial amount of work, without doubt. I agree with @hanh that at present the usage is not presenting as significant a problem as it was during sandblasting, so the operative question is how highly to prioritize this effort, based upon how much reason we have to believe that improving UX in this fashion will lead directly to greater adoption.

3 Likes