I would like to better understand the argument behind the following (@acityinohio @tromer, tagging for attention, but of course anyone is welcome to reply):
New Section 7: Initially the ZF SHALL appoint the members of the Major Grant Review Committee and the ZF SHALL have authority to change or modify the Committee’s membership. To align with the Future Community Governance timeline (more on that below), the terms and election structure for members of the Major Grant Review Committee SHALL be decided in a new ZIP and ratified by the ZF Community Panel (or successor mechanism) no later than the end of 2021.
Why not simply use the Advisory Panel + Forum poll for identifying the MG review committee members in open elections right before the new dev fund is created? 20+ months for putting together a simple ZIP for that (plus a couple of competing proposals, if any) and having a public discussion on their relative merits feels a bit excessive, no?
That’s more time than it took the community to get from the initial new dev fund forum thread (January 2019, if I remember correctly) to where it is today.
There is no technical reason for the “20+ months”. Don’t forget, the majority of the community could also have voted for option B, in which case the Major Grant Review Committee would have had to be implemented before NU4 activation.
Aragon has dissolved their Flock grants program. Co-founder Luis Cuende explains the rationale here.
Summary of reasons for shutting down the program: checklist-based approach (pre-defining large chunks of work instead of iterating and adapting based on ongoing user feedback), lack of public feedback loops, upfront funding (grants instead of prizes), no-strings-attached funding, high coordination costs.
Different project/context but perhaps something to consider and learn from as the Zcash community designs/operates its own grants system.
Thanks for sharing this, it’s a very important post-mortem. Here is an excerpt that I find very interesting:
Feedback was kept private, creating information silos that kept the community in the dark. Being able to assess the quality of the work requires a huge amount of context, so only other grantee teams could provide such feedback. That’s the reason that when that feedback became public, it was perceived as harsh and created inter-personal issues.
Just an idea, but maybe a way to mitigate that is to let the engineering forces of the various Zcash “stewards” self-organize into whatever team/project they are interested in working on. Valve famously implemented that kind of flat-hierarchy system, with some success I believe.
It’s clear that ECC and ZFND already talk a lot. My (perhaps naive) idea is to take it a notch further if we are to welcome new Zcash stewards: explicitly break down the entity-specific allegiances and let people self-organize into cross-entity task forces. That way information flows, work has to happen in the open (facilitating on-boarding and community contributions), and we minimize the risk of redundant, compartmentalized, and competing development.
Maybe that’s already the way it works?
Disclaimer: I’m a junior wannabe-engineer with zero management experience
For anyone interested in learning more, here is an overview of the Aragon community proposal/funding/voting system that is now being dismantled. It includes sections on the general process, voter turnout, funds spent, the relationship to ANT price, and what to conclude from it all.
Food for thought on using grants, prizes, or a combination of both to fund research and innovation.
Alongside private investment, grants and prizes are two of the most common ways to fund R&D and innovation.
Grants are used in areas where results are highly uncertain and require long-term effort, or where success criteria are difficult to predefine. This includes fundamental research and engineering projects with considerable upfront costs, as well as coming up with new and promising topics, problems, or areas of experimentation.
Prizes are used to incentivize finding solutions to an already known problem, especially where success criteria can be clearly specified. A prize does not require predefining a solution or technology, only a way to measure success. Prizes incentivize multiple simultaneous attempts to solve a particular problem, and usually reward only the best outcomes.
The choice between grants and prizes is not necessarily binary. Grant programs can have elements of strictly results-based funding, and prize competitions can provide some upfront financial or in-kind support to all eligible participants. Both can be tailored to fit a given objective or task at hand, weighing the following pros and cons:
Grants
Pros:
Provide stable working conditions for potential innovators.
Once rewarded, recipients have more flexibility in organizing their work.
Usually contingent on proof of expertise and past performance. Grants can thus be seen as implicit prizes for prior work, even though today’s grants can’t be awarded based on future results.
More suitable in areas where results are highly uncertain, require expensive long-term effort, and success metrics are difficult to specify.
Upfront funding comes with lower financial risk for participants than prizes. In return, grant programs can require that all outputs remain in the public domain.
Work best when efforts by recipients are easy and cheap to monitor and assess. Based on ongoing assessment, funding can be doled out as “prizes” over time. This also allows for regularly reviewing rules and reporting criteria.
In the context of product development, results-based grant funding can be made contingent on proven efforts to iterate according to ongoing user feedback.
Cons:
High risk for the funding allocator.
High coordination and reporting overhead.
Funding tends to be available for professionals only.
Grant programs rely heavily on the trustworthiness and expertise of their administrators.
May encourage relationships between grant allocators and recipients that bias past contributors over new and possibly more capable entrants.
Reward inputs and promised effort, not outputs and actual results. Grants often provide a steady flow of upfront funding with few strings attached, which reduces accountability. In some cases, this can be mitigated by doling out funds over time based on predefined deliverables.
Depending on the metrics used, recurring grants may lead innovators to only partially report their progress to benefit from similar grants in the future.
Principal-agent problem: funding allocator (principal) has limited options to observe and assess the efforts and abilities of the funding recipient (agent).
Prizes
Pros:
Encourage competition and community building.
Offer greater prestige and status benefits than grants.
Strongly contingent on actual performance and results.
Depending on design, may have lower administrative overhead compared to grants.
Can be awarded for best contribution, not only for solving a problem completely.
Winner-take-all prizes provide a particularly strong incentive for potential innovators.
Can be divided into smaller bits that reward incremental progress.
Can be used as a “pre-screening” tool for grants. In other words, innovators eligible for grant funding can be identified through small-scale, low-cost prize competitions.
Funding is available to anyone who can solve a problem, not just experts. This attracts new entrants and encourages a more diverse set of approaches and experiments.
Encouraging a diversity of approaches helps demonstrate the viability of alternatives. As a result, prizes tend to source more work per $ spent than grants.
Cons:
High risk for potential innovators.
Less suitable for projects where desired results are difficult to specify or measure.
Don’t provide stable working conditions for potential innovators. This is particularly problematic in areas where results are highly uncertain and require expensive long-term effort.
Introduce strong barriers to entry in areas with relatively high upfront costs, which can greatly limit the number of potential applicants.
Concrete success metrics may end up biasing certain technical choices over others, including potentially more innovative ones.
Encouraging multiple teams to work on the same task may result in duplication of efforts and inefficient use of real resources. If the best team is easy to identify, it may be more appropriate to award a single well-targeted grant instead of forcing multiple teams to invest time and resources into producing outputs that are not necessarily additive.
Prizes can be less flexible than grants when it comes to rule modifications and adaptations, because initial requirements determine early investments by participants.
To incentivize broader participation, prize administrators may be forced to waive the requirement that outputs remain in the public domain. This gives participants more options to privately commercialize technology or intellectual property resulting from the investments they must make to participate in the competition.
Depending on the metrics used, recurring prizes may lead innovators to delay publicizing certain advancements and thereby benefit from similar prizes in the future. In other words, if metrics allow it, participants may “hold back” results to maximize financial gain.
That’s different from the financial “implementation” acts of receipt, custody/investment, and disbursement of the MG Dev Fund slice. The ZIP 1014 framework explicitly has ZF performing these. Consequently, MG disbursement is explicitly subject to ZF’s legal constraints and cannot violate its declared nonprofit purpose.
A bounty is just a special type of prize, no? Depending on how the task is defined, finding new problems may itself be the “solution” (e.g. identifying bugs or yet unknown security vulnerabilities). In general, as far as I know, bounty rewards are offered for completing clearly defined tasks (e.g. Gitcoin).
I remember editing out either “generally” or “usually”: there are definitely edge cases, including bounty programs that don’t specify the task as clearly as most innovation prizes do.
Can you edit it back in please? And maybe add a line, something like: innovative solutions to unknown problems would also qualify for retrospective grants at the discretion of the MG.
Actually, this might need a bit more work; I see why you edited it out. Hurm. I really do feel that there needs to be something similar in there. If we are trusting these people to value work before it is done, why not trust them to value work after it is done?
Thanks! I added back the word “usually” so people realize that it’s not some iron law of innovation prizes but merely a characterization of the more typical case.
By the way, the list above is in no way prescriptive, and I’ve intentionally avoided references to MG or MGRC. It is only a summary of typical comparisons between the two types of funding. I’m sure there can be mixed forms of funding where it’s not exactly clear whether it should be called a grant, a prize, or a bounty. I did write that grants are usually contingent on proof of expertise and past performance, and that grants can be doled out in a similar way to prizes. This could certainly include “retrospective grants” for solving problems that the grant-making body itself wasn’t aware of or able to specify.