Sarah Jamie Lewis announces her candidacy for the Major Grants Review Committee

Probably worth starting with a note that I have a history of tweeting out critical vulnerabilities in the Zcash ecosystem - that is where we are today: critical vulnerabilities being discovered hours after a release to users. It is not a good place to be for a coin whose main differentiator is privacy and security.

I want to get beyond simply discovering and disclosing vulnerabilities and into the realm of a high standard of security engineering across the ecosystem. In an ideal world, I want grant rewards to be tied to designs and implementations that are accompanied by thorough security assessments prior to shipping and advertising features to users.

Where projects lack the expertise to do that, I want to be able to provide them with options to access or acquire it.

Of course we are not going to get there overnight, so I think it is worth exploring the establishment of a far-reaching disclosure reward program to flush out the rest of the low- and medium-hanging fruit in the ecosystem. The cryptocurrency ecosystem is a hostile one: mistakes have financial incentives tied to their exploitation, so we need to tie financial incentives to disclosing bugs to those who are best placed to fix them, and maintain that standard across the ecosystem. Projects that decide to take on the responsibility of building infrastructure for the Zcash ecosystem need to understand and accept that responsibility.

Specific to the MGRC role, if the MGRC is involved in technical audits (which sounds good to me), how would disclosure of vulnerabilities be handled?

In an ideal setup the MGRC would act only as the sponsor of these kinds of programs and wouldn’t be directly involved in disclosing these vulnerabilities, e.g. a bug found in a Zcash node would be reported directly to the ECC or ZF teams, and a bug found in Zecwallet would be reported there.

Generally what’s the right approach to such a situation?
Given the actual counterfeiting vuln mitigation that occurred what did you learn? How would you apply that Zcash-specific learning to the next vuln?

The counterfeiting bug was a catastrophic bug, and there is no right way to respond to one. As I mentioned at the time, I think the Zcash team handled it well, but I think the precedent they set - masking an ongoing recovery as operational incompetence - was a pretty far-reaching one. I’d have liked to have seen a deeper postmortem regarding the operational aspects of the recovery and the impact of those kinds of decisions on community perception and future bugs.

That being said, the decisions that have to be made once critical or catastrophic bugs are found are never going to be ideal or free from consequence or compromise. The priority should be robust up-front analysis and design to minimize the impact of such discoveries in the first place - energy invested there has far greater returns than in a disclosure process.

Personally, I believe that the biggest security-related risk to Zcash no longer lies in catastrophic full-node vulnerabilities but in the quickly expanding ecosystem. Regardless, it is clear to me that the MGRC is well placed to provide an avenue for addressing the concern more generally through the kinds of initiatives I’ve described above.


Are you aware of Zecwallet Lite?

Is there a sense in which it is not ready for exchange use?

I’ve been using it on mobile, and it’s been pretty slick! It’s the kind of system whose security I worry about, and I would love to have more assurances of security à la the audits @sarahjamielewis proposes.

In a slight tangent, I’d like to get @sarahjamielewis 's response to the diasporization of Rust from Mozilla (yup, I did just make that word up).

Is there something the MGRC could/should do with respect to the nascent Rust Foundation and/or the potential hireable engineers from Mozilla? Any ideas?

I’m excited. I’ve come to love Rust as a language and as a community. I also understand some of the vast amount of work that setting up an independent organization entails. There is a great team working on it and, as I said, I’m excited about what they come up with.

ZIP1014 mostly precludes the MGRC from taking action outside of the Zcash ecosystem directly:

Major Grants SHOULD be restricted to furthering the Zcash cryptocurrency and its ecosystem

While there is some flexibility there, I don’t think it is within the intent of the MGRC to take action. Given the reliance of Zcash on the Rust ecosystem, with both ECC and ZF making heavy use of it, it likely makes more sense for those organizations to consider sponsoring the nascent foundation.

That being said, I do think it is within the remit of the MGRC to put forward “Requests for Proposals (RfPs)” that would appeal to the influx of talent coming from Mozilla (although I imagine most have already found new work, so we would have to be quick about it), e.g. RfPs for reviews of the Zcash Rust ecosystem (I note that Mozilla let go a significant portion of their security & systems engineers, so there is a lot of talent out there), or RfPs for integration projects or far-out experiments (what if we funded a privacy-focused web browser with Zcash payments built in?).


I don’t believe any wallet has reached version 1.0 yet. They are still being worked on, and the Ledger Z-address wallet won’t be ready for roughly another few months.


Hi Sarah,

I gave the ECC a hard time over how they handled the print-free-ZEC bug - a bug which prompted the ECC to hire a very capable security team to handle and coordinate disclosure in the future.

Could you please explain what code of ethics you were adhering to in these tweets? (Which you provided a link to in your above post.)

I have worked in vulnerability disclosure for 10+ years, and I am at a loss to understand this (using full-disclosure tactics without first giving notification) - although I understand Twitter’s character limit is not the best for explaining nuance. Would you please elaborate here, specifically because you are promoting yourself as the choice to ensure ethics in the MGRC?



The short version is: this bug was critical, and so obviously exploitable, that it was essential that any users of zecwallet-lite stop using it immediately - that is why I posted my tweets and notified the project simultaneously.

In the cryptocurrency world, obvious bugs get exploited in the minutes and hours after they are found. I saw this bug while idling through a changeset, so it was immediately obvious that anyone willing to exploit it was already doing so by the time I had found it.

The longer version is: this also wasn’t the first cryptographic bug I had found in Zecwallet software, and it was abundantly clear long before this point that the entire project needed a thorough security review - and yet ECC were advertising Zecwallet projects on Twitter without explicit security warnings.

As I stated to someone at ECC at the time:

It was more fundamental that people stopped using the product immediately and re-evaluated their decision to use it than it was that a patch was [nicely] rolled out.

More background: I don’t make disclosure guarantees about bugs that I find outside of dedicated contracts anymore - especially when it comes to privacy software - because I don’t think such guarantees make users of software safer or increase the safety of the ecosystem.

Personally, driven by harsh experience and a variety of past legal threats, I have become more convinced that, absent any prior contractual relationship, full & immediate public disclosure of vulnerabilities should be the (expected) norm.

Privacy and security projects demand a certain level of responsibility; you don’t get to do a bad job when people’s lives, freedom and (yes) money are on the line. The fact that a project has rushed out code that jeopardizes the safety of its users is information that I think all users should benefit from immediately, especially when it is very likely that an adversary may already be exploiting it.


I have to say, as a user of Zec Wallet, I do not feel I was well served by Sarah’s vulnerability disclosure tweets. I can understand wanting to warn users ASAP, but I don’t see the need to go into details publicly before the bug is patched. This put users at risk unnecessarily.

The tone of the tweets is cavalier (“My reasons for ethically disclosing this via tweet and not a private email is that I’m very high”), and this is not how I would want vulnerabilities in Zcash software to be treated in the future.


Hi Sarah,

Thanks for the clarification. It is very hard to explain this stuff over twitter. I appreciate the extra information.

I know where you are coming from. Personally I disagree with your method for trying to get it done. But I do understand the nuance in the situation. I guess I am somewhat relieved the “Because I was high” was just a facetious statement made in frustration. I am pretty sure it had no bearing on the situation, still you can understand why I had to question it. As other people have expressed it is not a good look.

Yeah, I have been there too. It really sucks when you follow responsible disclosure just to get legal threats. Thankfully the industry has moved on a lot from there, through work by companies like TippingPoint and iDefense, and now things like Hacker1/Project Zero.

I have never been a fan of 0-day or 1-day disclosure. And I feel that in this instance, it would have been better to give the bug, poc and fix to the maintainers and let them disclose it. It aids integrity.

Normally naming and shaming is reserved for when a vendor does not react or does not patch in time, which you have explained above, you did it to kick start the security testing.

Can you assure the CAP that you would not do this to other privacy projects or MGRC recipients whilst serving on the MGRC, and follow responsible disclosure? I would be happy with a 30-day version of Google’s Project Zero, or even just to use Project Zero for this. (I have stated this on the forums before.)

I would strongly advocate for a similar system. I would be pretty chuffed if the zfnd could create their own version of project zero for privacy coins.

For those who are not in the security field, every security person has their own idea of responsible disclosure and method for working - it is as unique as the people themselves. This is not a call out post, but a genuine discussion between professionals and seeking assurance about future conduct in so many words.

Would you be willing to change this stance if you were to get elected to the MGRC?


Let’s be clear that the thing that put users at risk in this instance was a complete lack of certificate verification caused by a rushed rollout of Zecwallet. This wasn’t the first cryptographic vulnerability in Zecwallet. The vulnerability was very obvious from a cursory glance at the code. Me tweeting it out didn’t impact the overall risk environment in any way other than to warn people not to use a completely vulnerable wallet.

I regret to inform you that the tone in which I tweet about vulnerabilities in my free time has no bearing on how impactful that vulnerability may be to you.

If you’d like me to cease tweeting about Zcash ecosystem vulnerabilities on Twitter, then arguably the best way is to vote me onto the MGRC so that I can put forward proposals to fund ecosystem projects’ access to security expertise and reviews, to avoid critical issues like this being found on release day.


I strongly disagree with this based on personal and professional experience. While certain parts of industry have improved, there is a long tail of projects and services that have not - this is doubly true when it comes to public infrastructure.

When it comes to privacy tools there is a never-ending parade of projects advertising “fully private” or “fully anonymous” software and doing very little to actually back up those claims. Not calling out those projects undermines my own integrity. To quote Taleb: “If you see a fraud and don’t say fraud, you are a fraud.”

I’m not going to alter my long standing ethics to serve contemporary politics. I have stated the programmes and policies I would support on the MGRC that I believe would greatly improve the security of the ecosystem in this thread. Adoption of some or all of those would hopefully take any disclosure decisions out of my hands entirely.

When it comes to MGRC recipients that would clearly fall under the “absent any prior contractual relationship” clause in my stated worldview. I make no promises regarding the disclosure policies of vulnerabilities I come across for non-MGRC recipients or unrelated privacy projects.


ethics != morals.

Which is why I asked what ethics you were following, but I get where you are coming from.

Edit: Sorry this wasn’t meant to be semantic.

Correct but, to be clear, you asked me to commit to altering my conduct (and hence to alter a code of ethics I’ve built up and refined for well over a decade), not my morals.


This thread on disclosure is fascinating. I officially welcome review from @sarahjamielewis of my project Zbay in any form!

absent any prior contractual relationship, full & immediate public disclosure of vulnerabilities should be the (expected) norm.

Setting aside the question of legal risks, I’m super interested in the ethics of this. Do you have any writing online about this question? (update: nevermind! watching the talk!) Does the statement above really apply to all bugs? Even subtle ones? Even in products that don’t present themselves as privacy tools?

(It sounds like part of what you’re saying is that what justified this particular disclosure was the obviousness of the bug and the fact that it was in a privacy-oriented product, but then the statement above sounds more universal than that?)


If I am not mistaken, network level security has always been of grave concern to z2z transactions everywhere. Not sure why the ZecWallet finding warranted seemingly vehement disclosure. Nevertheless, MGRC-funded security reviews and disclosure policies sound like good things. Thank you, SJL et al, for your contributions


Teams need someone uncompromising & demanding - they make damn sure important stuff is never ignored, and they’re not satisfied with ‘good enough’ when extra effort is called for. It makes the whole thing a challenge - it’s essential.


Because there is a difference between “a sophisticated adversary with a passive posture can infer correlations regarding network activity / with an active posture can eclipse a full node and corrupt consensus” and “literally anyone with any kind of network posture (e.g. a coffee shop barista) can arbitrarily corrupt consensus (or worse) because your software isn’t checking certificates.”
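For readers outside the security field, the gap between those two threat models comes down to whether the client verifies the server’s TLS certificate at all. A minimal sketch using Python’s standard `ssl` module (purely illustrative - the function name is made up, and this is not Zecwallet’s actual code) shows how a single rushed configuration choice produces the dangerous state:

```python
import ssl

def make_client_context(verify: bool) -> ssl.SSLContext:
    """Build a TLS client context; verify=False models the vulnerable case."""
    # create_default_context() enables certificate verification and
    # hostname checking by default for client-side connections.
    ctx = ssl.create_default_context()
    if not verify:
        # The dangerous configuration: any on-path attacker (hostile
        # Wi-Fi gateway, Tor exit node, etc.) can present a self-signed
        # certificate and the client will happily accept it.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

safe = make_client_context(verify=True)
unsafe = make_client_context(verify=False)
print(safe.verify_mode == ssl.CERT_REQUIRED)   # True
print(unsafe.verify_mode == ssl.CERT_NONE)     # True
```

With `CERT_NONE`, anyone who can intercept the connection can terminate the TLS session themselves and feed the client arbitrary data - which is exactly the “anyone at the coffee shop” scenario described above.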


As @sarahjamielewis said, there’s a big difference. The current light wallet protocol leaks some information about you (including, alarmingly, in some wallets, which payment is yours, because of a perplexing insistence on fetching memos). Not having cert validation on that exposes the info to anyone performing a MITM attack.

In contrast, leaking that IP X made a z2z transaction (ironically hidden by the light wallet protocol) is a lower risk - albeit one that poses problems if 1) you move money in and out of the shielded pool frequently or 2) you want to hide your IP from the recipient of your payment.
Oh, for added hilarity: if you did want to hide your IP from the recipient AND used Tor as recommended, you’d be subject to hostile exit nodes that do MITM.


Yeah, that video is a good overview of some recent examples - the Swiss/Australian e-voting catastrophic cryptographic issues in particular. In those cases, Election Authorities in Switzerland and Australia were in a position to lie about the results of an election while being able to produce “proofs” that the election had been run honestly. The proofs would be fraudulent but would satisfy the auditing software provided.

As I say in the lecture - which I gave in Switzerland some months after the system had been withdrawn and deemed insufficiently lawful - if I could redo that disclosure again I would have fought harder to disclose earlier, because, unbeknownst to us, and explicitly denied in several communications, the software was also being used in New South Wales and was actively being used to run an election. In that case I stand by the assessment that the public has an absolute right to know about those kinds of vulnerabilities. Vendors be damned.

Does the statement above really apply to all bugs? Even subtle ones? Even in products that don’t present themselves as privacy tools?

I don’t tend to review non-privacy oriented tools - that’s just the nature of the work I find myself doing. The subtlety of the bug matters less than the risk profile of the bug - who can exploit it, can it be detected, what is the worst thing that could happen etc. Some of those e-voting bugs were very subtle.

In the last few years I’ve done everything from full, immediate public disclosure (like this zecwallet-lite bug) to waiting nearly a year to disclose after eliminating every other option (see our work trying to stop Vancouver hospitals from broadcasting patient medical records in plaintext across the entire city).

The entire point of publicly stating “I don’t promise anything” is to make it explicit. I don’t like having secret knowledge of vulnerabilities especially regarding critical systems, and in many cases I feel the most good is served by releasing that knowledge out into the world so everyone is on the same playing field.


Sure, the debate is as old as computer security. Full disclosure was a needed tool for getting companies to react and do something about vulns. Over time this was phased out in favour of responsible disclosure. It is a really interesting debate with an even more interesting backstory. Both sides of the debate have different ethical standards.

here is a schneier article on it - Debating Full Disclosure - Schneier on Security

What Sarah is/was doing is trying to get privacy-focused companies to wake up and start pre-emptively acting on security issues by testing, using a proven/established technique in the security culture. I disagree with its use in this instance, but I make money from responsible disclosure, so I don’t think I am too impartial. Saying that, though, I wouldn’t have tried to sell a bug like this.

For more reading: responsible disclosure morphed into “no more free bugs”, then into things like Hacker1 and Project Zero.


So, you talk a lot about how projects that aim to provide privacy for their users should have extensive security and privacy reviews. As far as I know, security reviews are more commonly done than privacy reviews.

What are resources teams can use in order to cover most of their bases while building such projects?

If you were elected to the MGRC, would you be willing to spend some time writing a guide that can be used for such projects?

I think this kind of information would have great value in general.