I’m not certain I understand the “outreach” part of this proposal based on the descriptions above.
For transparency, in October, what this looked like specifically was:
- Setup and catching up on the forums (1 day)
- Reviewing all wallets for a potential mempool-related DoS bug suggested by @zancas (1 day)
- Zwallet audit readout / remediation support (1 day)
- Setting up zecsec.com (0.5 days)
- Reviewing ZIP-317 for security and privacy (0.5 days)
- Auditing Zwallet (13 days)
In November so far, for the outreach component I've spent half a day researching and making suggestions on what to do long-term about the scanning issues, and half a day getting the maintainer of the Arch Linux zcashd package to update it to 5.3.0 (it was previously on 5.0.0).
I’ll include a breakdown like this in my blog posts going forward.
Also, there is a bit of concern around the feedback loop: unless the audit is published as of its date of completion, no one really knows the outcome, and people will assume the worst case once the 90 days have passed.
There is no perfect way to do this. If the project I'm auditing is announced ahead of time and the report is published immediately once bugs are fixed, then yes, some information about the existence of bugs can be gleaned from how long the audit is taking to be published. In my mind the most important thing is protecting users, which means helping to get any bugs fixed ASAP. There is no formal 90-day timeline for this project; that is just the industry-standard window for disclosing bugs without consent in the case of negligent projects.
What is done during a manual source code review on the mobile audits? This process is not clear in the original proposal. Is this following the internal ECC mobile audit? Are there any runtime tests being conducted, or is it manual/compile-time analysis only?
Runtime tests are sometimes useful, e.g. to compare one implementation of a cryptographic primitive with another, or to use darksidewalletd to test reorg edge cases. I do that as needed when it's valuable. The process for manual code review is, in brief:
- (a) Brainstorm everything that could go wrong in the target application (what are the likely mistakes, and what does an attacker want?).
- (b) Create a basic threat model if one doesn't already exist.
- (c) Read the code and note any plausible bugs.
- (d) Refine the list of plausible bugs into confirmed issues and write the report.
It's a creative process where you're always trying to make the best use of your time to find the worst kinds of bugs.
Who is we and where can the public view this policy?
We = me. Usually projects state their disclosure policy in their README or somewhere on their website (and if not, that's a recommendation I make). Sometimes they don't, and I just have to ask what they're comfortable with.
From your recent audit was there anything noted in your report that would have helped speed up the process?
I won't disclose recommendations from this audit yet, but in general this is the kind of recommendation I make. For example, if a piece of code could be written differently to save auditing time in the future, I'll recommend that. Anything that makes audits more efficient and more effective is a huge win.
Do we get to request feedback on the overall process of the audit party before the report is disclosed?
I'm not sure I understand what you mean by this. Feedback about what, from whom?