Is the security of the core primitive used for transaction privacy (zkSNARKs) based on a traditional cryptographic assumption like factoring, or does it use a so-called “non-falsifiable” assumption like other recent SNARK constructions? If it is the latter, does this affect how confident I can be in the real-world privacy provided by Zcash?
Zcash transaction privacy rests on standard assumptions, so your transactions will remain private even if the non-falsifiable assumptions underpinning the zkSNARKs used in Zcash fail[0]. In fact, your transactions will be private even if the multi-party setup is compromised.
However, if the non-falsifiable assumptions fail to hold, someone could forge transactions.
Even though the assumptions we are using have been around for a while (we didn’t just make them up ourselves), this is less than ideal. However, there’s a large gap between a theoretical attack that breaks the scheme and one that practically lets you forge transactions. Thankfully, we ought to get some warning if the assumptions are on shaky ground well before someone refines them to a practical break.
[0] At its core, Zerocash doesn’t rely on any assumptions for the privacy of transactions: the SNARKs it uses are statistically zero-knowledge even against an attacker with infinite computational power. The information about your transaction simply isn’t in the proof for an attacker to get. However, in Zcash you notify recipients of a payment by posting an encrypted message to the blockchain. That’s just standard encryption, and if an attacker can break it (e.g. we invent practical quantum computers) they can extract information from there. So if you are really paranoid and take a few extra steps, your transactions are private forever. See section 8.1 of our paper http://zerocash-project.org/media/pdf/zerocash-extended-20140518.pdf
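To give a feel for the kind of unconditional, information-theoretic hiding described above, here is a toy sketch of my own (not Zcash’s actual construction) using a Pedersen commitment in a tiny group. A Pedersen commitment is “perfectly hiding”: for any commitment value, every possible message is explained by exactly one choice of randomness, so the message simply isn’t in the commitment for an attacker to recover, no matter how much computational power they have. The parameters below are illustrative toy values.

```python
# Toy Pedersen commitment: C = G^m * H^r mod P, in the order-11
# subgroup of Z_23^*. Parameters are illustrative, not real-world.
P = 23   # small prime modulus
Q = 11   # order of the subgroup generated by G and H
G = 4    # generator of the order-11 subgroup mod 23
H = 9    # second generator (= 4^8 mod 23); log_G(H) should be unknown

def commit(m: int, r: int) -> int:
    """Pedersen commitment to message m with randomness r."""
    return (pow(G, m, P) * pow(H, r, P)) % P

# Perfect hiding: for ANY commitment C and ANY candidate message m,
# exactly one randomness r "explains" C -- so C carries no information
# about which m was committed, even against unbounded attackers.
C = commit(m=3, r=7)
for m in range(Q):
    explanations = [r for r in range(Q) if commit(m, r) == C]
    assert len(explanations) == 1
print("every message is equally consistent with the commitment")
```

The same information-theoretic flavor is what “statistically zero-knowledge” gives the proofs: there is nothing in the transcript to decrypt later, which is why only the separate note encryption is at risk from, say, future quantum computers.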
Thanks for your reply! Two follow-ups:
- How likely is it that we’ll get “some warning if the assumptions are on shaky ground” for the specific assumption used in Zcash? What will this warning look like, especially given that the assumption is not falsifiable? Like, if somebody found a method to make the number field sieve pseudo-polynomial time or something, we’d know factoring is probably dead. For a non-falsifiable assumption, will there be some indication like this?
- Does the Zcash team have a contingency plan for a cryptanalytic advance against some part of the protocol? Probably it’d just be swapped out for a different primitive, I guess.

The most likely ways in which the assumptions underlying the Zcash protocol could fail (for example, an advance in the elliptic-curve or extension-field discrete-logarithm problems, or a protocol design mistake) are indeed subject to public explanation or demonstration on smaller instances. Also, the technical property of non-falsifiability does not in practice mean it’s impossible to make arguments casting suspicion on those constructions if there is a related cryptanalytic attack (or just a better thing to replace them with).

Yes, we’d swap out that part of the protocol in a hard fork. How complex this would be and how much assurance it would give that there had been no attack prior to the fork would very much depend on the details of the vulnerability.
In cryptography, you always assume your methods are insecure. The only difference is the degree to which you think they will be hard to crack. Bitcoin’s assumptions are not falsifiable either; we just do not know how to crack it, at least without brute force or not-yet-existent quantum computers (which might actually come to exist). All physics theories are falsifiable only by experiment, not by axioms and logic. Likewise in cryptography: experience is an indication of a method’s future success. If you’re the first to try something complicated (like zkSNARKs and Equihash?), experience has shown it will fail. It’s all probably a result of Gödel’s incompleteness theorem: some expressions in a finite system of logic can’t be falsified or proven true. Your proofs of security are limited to your axioms. Cracks are not limited to your axioms.
“Non-falsifiability” in reference to cryptographic assumptions has a precise technical meaning: “an assumption is falsifiable if [and only if] it can be modeled as an interactive game between an adversary and an efficient challenger that can efficiently decide if the adversary won the game”. Most standard-model assumptions used in cryptography are falsifiable in that sense. OTOH, models such as the random oracle, ideal cipher, generic group, or algebraic group models, which are not equivalent to assuming any falsifiable assumption, are also in common use.
@zawy, I think you’re misunderstanding this term: it has very little to do with incompleteness. Nor is falsifiability by itself particularly useful as a standard for cryptographic assumptions to be held to; for example, a trivially false assumption is falsifiable.
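To make the quoted definition concrete, here is a toy sketch (parameters of my own choosing, far too small for real security) of a falsifiable assumption: the discrete-log problem phrased as a game in which the challenger can efficiently decide whether the adversary won. At this toy size a brute-force adversary wins, which is exactly the sense in which such an assumption can be refuted by demonstration on small instances.

```python
# The discrete-log assumption modeled as a challenger/adversary game.
# Toy parameters for illustration; real groups are astronomically larger.
import random

P = 2039   # small prime; 7 generates the full multiplicative group mod P
G = 7

def dlog_game(adversary) -> bool:
    """Challenger samples x, sends G^x; adversary tries to return x.
    The challenger can *efficiently decide* whether the adversary won,
    which is what makes this assumption falsifiable in the technical
    sense quoted above."""
    x = random.randrange(1, P - 1)
    x_guess = adversary(pow(G, x, P))
    return pow(G, x_guess, P) == pow(G, x, P)  # efficient win check

# At this size, exhaustive search wins the game, "falsifying" the
# assumption for these toy parameters.
def brute_force(y: int) -> int:
    return next(x for x in range(1, P) if pow(G, x, P) == y)

assert dlog_game(brute_force)
print("toy discrete-log instance broken; win was efficiently verified")
```

The non-falsifiable knowledge assumptions behind SNARKs don’t fit this template: there is no efficient referee that can decide, from a transcript alone, whether an adversary has “won” against a knowledge extractor.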
- “Interactive game” = a physicist or mathematician attacking a theory with an observation or proof sequence.
- “Adversary” = the physical or mathematical theory.
- “Challenger” = the physicist or mathematician.
- “Efficient challenger” = Gödel’s “effective procedure”, which can be a computer or a person writing on paper, as long as it does not run on to infinity.
I can’t see how this definition of falsifiability in crypto is not subject to Gödel’s theorem.
From the Wikipedia article on Gödel’s first incompleteness theorem:
Completeness = “A set of axioms is complete if, for any statement in the axioms’ language, that statement or its negation is provable from the axioms”. Gödel’s theorem says such a system does not exist if the axioms are finite and consistent.
Take “Zcash is protected against crack X”, where X has been expressed in Zcash’s finite, consistent axioms. Gödel’s theorem says there exists some X for which the assertion can’t be proven.
Therefore, by Gödel’s theorem, there is no crypto made up of mathematically consistent axioms for which it can be proven that all cracks are impossible. You would have to know all possible cracks and then work backwards to get your crypto axioms, which would not be consistent. All possible cracks can’t be known. But for a finite range of cracks, the axioms could be consistent.
[ edit: I guess that’s the difference: crypto works backwards from the known cracks, so its “axioms” do not need to be mathematically consistent, or, for a limited range of cracks, the axioms could be consistent. Gödel’s second theorem may apply better. That is, in order to defend against a wide variety of attacks, two or more inconsistent axioms have to be used. I’m thinking about husband-wife teams and the principle of “complementarity” in physics: the very nature of trying to disprove a wide variety of statements (making true and consistent observations) creates inconsistent axioms if they are to be few in number. ]
Gödel’s incompleteness theorems rule out completeness of (sufficiently powerful) consistent logical systems. But here we don’t need completeness! That’s an assertion about the ability to prove or disprove all possible statements in the system. Security doesn’t depend on that.
Specifically, “a particular cryptosystem is protected against all attacks on its specified security properties in some model” is one particular statement; proving it absolutely does not require deciding all possible statements about the cryptosystem.