Zcash Community Grants Meeting Minutes 6/9/25

Grant Dashboard

Zcash Community Grants Committee Google Meet Meeting: June 9, 2025

[Minutes taken by FPF]

Attendance:

  • Artkor
  • Brian
  • GGuy
  • Jason
  • Zerodartz
  • Alex (FPF resource, notetaker)

Key Takeaways:

Open Grant Proposals

  • Ledger Live Integration
    • Zondax: This project’s main objective is to integrate the new Zcash Ledger Shielded application with Ledger Live to enable transparent transaction support. While they intend to explore shielded transaction integration in a subsequent phase, their initial focus is on delivering transparent transaction support within a reasonable timeframe to ensure a successful and timely release. The requestor is asking for $95,000.
      • The committee is continuing deliberations related to this grant. An update will be provided shortly.
  • ZSAs in NU7 (H2 2025)
    • This grant proposal from QEDIT continues the work initiated in Grant #6 to integrate Zcash Shielded Assets (ZSAs) into Network Upgrade 7 (NU7). As ZSA development has evolved alongside other core components like Zebra and Orchard, integration now requires focused engineering to avoid misalignment and inefficiencies. A dedicated team of eight engineers and architects will implement protocol updates across the stack, ensure compatibility with other NU7 efforts, collaborate with ECC on OrchardZSA reviews, and maintain tools such as zcash_tx_tool. By concentrating efforts on resolving the “moving target” challenges, this grant ensures timely and coordinated ZSA support throughout the NU7 release cycle. They are requesting $708,000.
      • Waiting on additional information from QEDIT.
  • Zaino Respecification
    • Zingo Labs: Unanticipated architectural challenges have emerged in the Zaino project due to inaccuracies in the published zcashd RPC specifications. To address this without delaying the overall Zaino timeline, they are reallocating additional development resources to rewrite key components of the architecture. This grant will support the effort to define, implement, and test a reliable and authoritative spec that aligns with the updated Zaino implementation. They are requesting $44,000.
      • Waiting on additional information from Zingo.
  • Ongoing ZeWIF and Interop Support
    • This grant supports continued development and community adoption of the Zcash extensible Wallet Interchange Format (ZeWIF) over the next year. Following initial delivery of ZeWIF and related tools in early 2025, ongoing collaboration with ECC and Zingo Labs has led to critical revisions and testing. Funding will sustain this technical support as more wallet developers adopt ZeWIF and will also enable broader knowledge-sharing and interoperability efforts across the Zcash ecosystem. They are requesting $60,000.
      • After receiving feedback from multiple parties, ZCG has unanimously voted to reject this grant.
  • Research on Algebraic Anomalies in ECDSA Signatures
    • This project will adapt a peer-reviewed vulnerability detection method to conduct the first large-scale audit of algebraic anomalies in Zcash’s ECDSA signatures. By scanning all transparent transactions, it will identify structurally weak signatures that could enable key recovery under specific conditions. Funding will support the adaptation of the detection pipeline, full-network analysis, and open publication of results, helping confirm Zcash’s cryptographic resilience or reveal rare edge-case risks. They are asking for $19,500.
      • After receiving feedback from multiple parties, ZCG has unanimously voted to reject this grant.
  • Zingo: Z|ECC Summit
    • Applicant is requesting $6,200 to cover the costs associated with attending the Z|ECC summit in Prague.
      • Brian: I support it.
      • GGuy: I’m happy to support this.
      • Zerodartz: I approve.
      • Artkor: I approve this too.
      • Jason: I approve. Zingo is a core team and should be at the summit.
      • Unanimous approval.

Brainstorm Session Follow-Ups

  • Maya Protocol
    • ZCG and FPF have discussed becoming a liquidity provider for the Maya service. With ZCG now under FPF, the committee is able to consider the pros and cons of these types of engagements. Conversations are ongoing, and next steps should be agreed upon and implemented before the next ZCG meeting.
  • Least Authority
    • Least Authority and ZCG reviewed the request from @Milton to audit the Zebra Launcher. After reviewing the Least Authority budget and the estimated timeline and cost for this audit, ZCG has decided not to pursue this engagement at this time in favor of higher priorities, but is open to auditing the project at a later date.
  • BTC Pay Server
    • Milestone 2, and with it this grant, is nearing completion. Currently, testing of interaction with the existing Zcash infrastructure is underway. Some issues have been identified and successfully resolved. Details can be found in the latest update in the grant thread.
    • After the plugin is officially completed, ZCG recommends conducting an external audit of its code. This is not critically necessary, because the plugin does not require the entry of any private keys for the receiving address and only interacts with the viewing key. However, ZCG believes that mentioning such an external audit in the plugin repository, as well as in marketing materials, could serve as additional motivation and promote the use of ZEC as a means of accepting payments, especially for large merchants.
  • Iron Fish
    • Iron Fish has discussed with ZCG several ideas for engagement between their org and the Zcash community. They will be submitting a grant application shortly.
  • LLMs
    • ZCG is reviewing the pros and cons of LLMs being used by grant applicants and where it can be a benefit and detriment to the ecosystem.

My position on this would be neither encouraging the use of LLMs nor disallowing them; honesty and transparency around the use of LLMs should be encouraged, and grant applicants should demonstrate awareness of the potential cons and explain what actions they will take to counteract them if they decide to make substantial use of LLMs. An AI/LLM section could be included in the application form. Any sort of “vibe coding” where a grant applicant does not have full understanding of, and is not able to take full responsibility for, each line generated or written should not be allowed.

Minimal use of LLMs (e.g. occasional VSCode autocompletion, or consulting ChatGPT for basic tasks and manually copying or reviewing every line of generated code) probably shouldn’t need to be declared, provided that the author takes full responsibility for that code as if they had written it from scratch in vim. LLM use should be disclosed whenever the LLM generates or substantially affects content.

Using LLMs for code generation drastically increases the need for an audit of a codebase, given the likelihood of introducing subtle security vulnerabilities; declaring such use for security-critical code should be of utmost importance.

I believe that AI-generated text should be forbidden, with the exception of automated translation, for community interaction (i.e. forum communication), application forms, and any area where there is an incentive for, or likelihood of, a spam problem caused by misuse by spammers and malicious trolls. Here are a couple of templates for reference for LLM policies: AI generated text is forbidden with the exception of automated translation - GrapheneOS Discussion Forum and https://www.telosjournals.com.br/ojs/index.php/jts/llm

LLMs are an impending crisis for any kind of user generated content websites or platforms with financial incentives such as bounty and grant programs: The I in LLM stands for intelligence | daniel.haxx.se


Good points here, but I disagree with this. Vibe coding is the future; as humans, we are moving forward in life, not backwards. LLMs have helped me a lot: I’m in my sophomore year pursuing a bachelor’s degree in computer science, and I have a 4.0 right now. With vibe coding, if a submission is trash, it’s because the user did not go over their work. In the times we are in now, having tools such as LLMs helps so much.

For example, if I don’t understand a line of code, the LLM gives me a clear explanation, with no emotion and without leaving out info. AI was created for this, so why take it away? As with anything, past and present, know who you are dealing with, and it will show in their work. We shouldn’t exclude newcomers or penalize those who are still learning to understand concepts line by line. Ultimately, what matters most is understanding and progress. I also understand the concerns in the two links that you posted, but Zcash shouldn’t have these types of problems, because most people applying for grants know each other, or have at least been in the public eye.

If you’re using an LLM as a tool to speed up your workflow, as an autocomplete, sure. But if you’re relying on it, it will lead you down some nasty rabbit holes. Subtle hallucinations can trip up experienced developers. As an example, just recently I ran into an issue where ChatGPT was referencing real GitHub issues and leading me to believe that a Rust library (tonic) doesn’t support non-TLS connections; I lost several hours down this rabbit hole, and it turned out to be nonsense: the issue was with Docker networking. I have 8 years of experience as a developer, and I’m at the point where it often feels faster and less hassle to just write code from scratch. LLMs have zero intellect and will lead you towards implementing things from scratch instead of using reputable libraries, or they’ll hallucinate the names of libraries, or recommend insecure ones. They will happily generate a lot of code and do not understand good architecture, which is something you can only learn with years of experience. I’m not advocating for a full ban on LLMs, but for transparency and caution.

I would be very careful about this. LLMs effectively psychologically manipulate you into believing what they tell you makes sense and is the truth.

“Every line of code is liability, and when you have something that produces a lot of lines of code, you have produced liability. You have not produced a strategic moat.”

I feel that Primeagen and Theo have sensible positions on LLMs.

Oh yes, the subtle hallucinations are top tier; I have fallen down those nasty rabbit holes, especially with only one year of experience as a coder so far. Having an LLM edit code is a headache, because it will rewrite code, change function outputs, and alter other things just to fix one error, which leads to more errors. Bad recommendations have forced me to delete whole projects. I completely understand full transparency and caution, and I love it. When using LLMs, multiple inspections and rounds of testing should take place after producing a lot of code. Identifying the inaccuracies will not be obvious to those without advanced skills, and the security vulnerabilities in the final implementation put the project at risk.
