That was my thought, too. We need a better baseline for the contest.
What you asked the Zcash company for sounds very reasonable, and even modest from your project’s perspective (given that it’s unlikely the max bounty would ever be claimed), but I can see how it’s difficult for Zcash in terms of logistics (they’d need to commit significant funds for a period that’s pretty long for a startup). Maybe you could ask for, say, 20 BTC as payment instead, and offer this smaller bounty cap for Cuckoo Cycle?
Nitpick: 100 BTC would “only” be enough for up to 2^9 (not 2^10) per your formula, given that you might have to pay all of the smaller bounties first.
Zcash might not be able to commit 100 BTC for 4 years, but I imagined one of their many investors could.
100 BTC is enough for 2^10, because claims will be handled serially, and each payout establishes a new baseline, perhaps even for one of the other two categories.
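The arithmetic both posts are relying on can be sketched in a few lines. This assumes, hypothetically, that the formula being referenced pays B·2^i for a 2^i-fold speedup (the formula itself isn’t restated in this thread), with B as an abstract base unit:

```python
# Hypothetical model of the doubling bounty schedule discussed above:
# a 2**i-fold speedup pays B * 2**i, for an abstract base bounty B.
B = 1  # one abstract bounty unit (hypothetical, not from the thread)

# Worst case for serial claims: every level 2**0 .. 2**9 gets claimed in turn.
serial_total = sum(B * 2**i for i in range(10))
print(serial_total)  # 1023 units: all ten smaller levels combined

# A single direct claim at the 2**10 level:
top_claim = B * 2**10
print(top_claim)     # 1024 units: barely more than the serial total
```

Whether a 100 BTC cap covers the 2^10 level thus turns on whether the smaller bounties are all paid in addition to a top-level claim (as the nitpick assumes), or whether serial payouts against a moving baseline keep the cumulative total bounded (as the reply argues).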
So basically you’re asking for $60K instead of $30K (for reasons that seem unrelated to the business of Zcash?). Why not! But I think there are plenty of people who will enter for $30K.
If such large performance improvements are possible, it makes it even more obvious to us that some people WILL enter the contest. Their code might not be as performant as yours, but probably close!
As @Austin-Williams pointed out, you are missing an opportunity here… unless you find a private investor who is unaware of the contest (unlikely).
Just watching ‘top’ on a moderate PC with 1 thread (0.05 S/s): RAM typically stays at the max of ~700 MB for 15 seconds, then drops to ~100–200 MB for about 15 seconds; hence roughly 300 MB on average. Sometimes it’s at 700 MB for 30 seconds, and sometimes it’s at ~100–200 MB for 2 minutes.
Before n=200, k=9, it did not vary like this, but stayed near the max RAM all the time.
Why BTC?
Perhaps one possible solution would be (x) ZEC in bounties (and perhaps (x) BTC as well)?
Hopefully you believe strongly enough in Zcash (especially since you spend so much time developing a miner) that this would be a solution for you. (Excuse my bad English; it is not my native language.)
Here’s hoping for a good open-source CPU/GPU miner for us small miners!
Also, advertising a 100ZEC bounty doesn’t have the same impact as
advertising a 10BTC bounty, because everybody knows BTC and how
valuable it is.
My interest in Zcash stems mostly from its use of a memory hard PoW.
Like Greg Slepak, I have my doubts about the long term viability of a cryptocurrency that can be compromised so undetectably. It’s an interesting technological experiment nevertheless…
a) how many more blocks (%-wise) would your miner be able to produce vs. the zcashd miner?
b) does the 100 BTC (~$60,000 USD) you are asking for include a GPU miner release, or is it solely improving on the CPU miner?
Block production is proportional to solution rate, so I’m not disclosing that.
The 100 BTC maximum coverage of bounty insurance I’m asking for is
for the CPU Equihash solver and the half-finished GPU Equihash solver
(still working on it).
I think your best bet to get the total $60K USD you are looking for is to take up the CPU miner bounty and pre-sell the GPU miner for 1 BTC per executable (through escrow or similar, sent to participants at or before launch). I’m confident enough, looking at the people who invested 450 BTC into Zeropond, that you will get way more than you might expect from just a bounty. IMHO, of course.
I am willing to pay handsomely for an OpenCL (AMD) miner. I run a large Ethereum mine, but I’m more hardware-oriented, so writing our own miner has been slow going, as my partner is the same.
I would be willing to partner on the purchase price for an AMD miner. My mine is smaller than yours, but still 2 GH/s+ (Ethereum hash rate measured). PM me if you get some concrete info.
Hey, @solardiz – maybe I’m the only person who finds neither the Zcash spec nor the Equihash paper perfectly clear, but could you explain how Zcash uses BLAKE2b hashes differently than the reference implementation? A second question: how are you measuring the percentage of solutions that an implementation finds? Thanks!
You’re not alone; the spec and the paper are in fact unclear, and the paper is deliberately non-specific (it refers to “hashes”, but not specifically to BLAKE2b). The reference implementation masks portions of BLAKE2b output, so for n=200 (where a hash is 25 bytes) it ends up consuming 40 bytes of BLAKE2b output (and in fact its parameters in pow.h need to be patched for this to even be possible). I think Zcash uses all of the BLAKE2b output, without masking, up to the required length.

To your second question: I only did such a measurement for the (patched) reference implementation (which is optimally tuned to find most but not all solutions), by running it for 500 different seeds in a script and then comparing the average number of solutions found (as reported by debugging output I patched in) against what’s expected per a formula (which was unfortunately only shared with some folks privately - someone intends to publish a paper - but you can re-derive it on your own and publish). The Equihash paper says there should be about 2 solutions on average for the suggested N, and this is correct - we just need more precision for some tests.
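For concreteness, here is a hedged sketch of the unmasked scheme I believe Zcash uses, based on my own reading of the protocol rather than anything stated in this thread - the personalization string, the 512//n packing, and the function name are all assumptions:

```python
import hashlib
import struct

def zcash_equihash_hash(prefix: bytes, index: int, n: int = 200, k: int = 9) -> bytes:
    """Sketch (assumed, not confirmed by this thread) of deriving the n-bit
    hash for one Equihash index the way Zcash appears to: full, unmasked
    BLAKE2b output with a personalization string, packing 512//n index-hashes
    into each BLAKE2b call."""
    indices_per_call = 512 // n               # 2 for n=200
    digest_size = indices_per_call * n // 8   # 50 bytes for n=200
    person = b"ZcashPoW" + struct.pack("<II", n, k)  # 16-byte personalization
    h = hashlib.blake2b(digest_size=digest_size, person=person)
    h.update(prefix + struct.pack("<I", index // indices_per_call))
    full = h.digest()
    # Each index takes a contiguous n//8-byte slice; no bit masking anywhere.
    off = (index % indices_per_call) * (n // 8)
    return full[off:off + n // 8]             # 25 bytes = 200 bits for n=200
```

Indices 0 and 1 share one BLAKE2b invocation and take its first and second 25-byte halves, which is the main structural contrast with the reference implementation’s masked 40-bytes-per-hash layout described above.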
We have much more precision for the expected number of solutions now; see below.
For both Zcash’s and my solver, which appear to discard less than 1% of all solutions, the rate is 1.88 solutions per run.
I verified this experimentally with 100000 runs giving 187981 solutions (expanding on an earlier 10000 runs).
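As a sanity check on those numbers - and assuming per-run solution counts are roughly Poisson-distributed, which is my assumption rather than something claimed above - the standard error works out as:

```python
import math

runs, solutions = 100_000, 187_981
mean = solutions / runs                  # 1.87981 solutions per run
# Under the (assumed) Poisson model, the variance of a single run's count
# is about equal to the mean, so the standard error of the mean is:
stderr = math.sqrt(mean / runs)          # roughly 0.0043
print(f"{mean:.5f} +/- {stderr:.4f}")
```

The observed 1.87981 is consistent with the 1.88 figure to well within one standard error.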
Do Zcash’s solver and yours disregard any solutions at all, and if so, why? Your “100000 runs giving 187981 solutions” suggests this might not be the case. Do you know otherwise - e.g., a specific implementation detail where a solution is expected to be disregarded in some rare special case?