Equihash currently implemented?

This may have been answered somewhere else, but I couldn’t find it: does the current version of Zcash use a version of the Equihash PoW when “mining”? Or are the test coins we mine simply normal “Bitcoin” (SHA-256) hashes which are verified on the Zcash test blockchain?

The reason I ask is that I wanted to play around with computer setups with/without decent graphics cards for mining, to see the effect on the number of blocks mined, but this would be pointless if the PoW weren’t Equihash yet.

As I understand it, Equihash was introduced in the previous release (z2, for want of a better term; the current release is z3…), and at this point there is no GPU implementation for Zcash yet.

Edit: it seems that you are correct; some of the issues on GitHub are about its lack of optimization.

And as to the GPU aspect: I recall reading in the Equihash paper that it was tested CPU vs. GPU, and the algorithm did have the effect of leveling the playing field between them, though not making it exactly level. So my hope is to try a CPU + onboard (crummy) graphics card vs. a CPU + PCIe card (better) and observe whether there is a difference (even if small).

But that’s not to say (as you mentioned) that the zcashd miner is even set up to “see” a GPU and utilize it…

The Equihash parameters must be set pretty low at the moment. When I mine I max out my CPU usage, but use less than 1% of my 8 GB of RAM.

@Austin-Williams When you’ve been mining, have you gotten any block rewards? I ask because the software inherited a bug that is causing a single core to operate at 100% even without mining.

No, I’ve never mined any coins. I figured it was because the difficulty was too high (even on testnet) and that my machines don’t have enough power/memory bandwidth to find any blocks.

Is there a solution to this?

If I mine on a 2-core machine my CPU usage maxes out at ~200%, and my memory usage stays around 0.6% of my 8 GB of RAM.

I’ve been meaning to ask… Those that are successfully mining, what CPUs are you using? And is that the only factor?

Yes, on testnet the parameters are set to the lowest specified in the paper. This was because the parameters we are thinking of using resulted in nearly 10GB of peak memory usage with the unoptimised solver. The memory optimisations in #921 bring the peak memory usage down much closer to the theoretical peak, so an upcoming release will bump up the parameters. The only fly in the ointment is that the runtime of the higher-memory parameters is still often longer than the block interval, but I’m working on improving that next :slight_smile:

Is that block interval problem true for the most current generation of CPUs and memory interfaces? Also, is the solver multithreaded / is it practical to attempt a multithreaded implementation?

Thanks for all your work! Very impressive stuff.

To add to @Voluntary’s questions: have you guys settled on a target amount of memory per CPU/core?

It is an implementation problem. The Equihash authors stated they have an implementation that only takes 30 seconds for a proof with the parameters we are considering, but it is unclear from the paper what particular configuration they are using there (they appear to only trim the top bit in that case, which means higher memory usage).

The solver can certainly be multithreaded - the only part of the algorithm that has memory contention is sorting (which is why the authors say memory bandwidth is the limiting factor). However, the actual implementation details are tricky, because if a single Equihash solver uses several cores, there is less scope for running multiple mining threads to use additional memory (plus doing threading inside a thread is… less fun). My current tendency is to lean towards making the solver multithreaded at the expense of limiting the miner to a single mining thread, because of the block interval issue.
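To make the trade-off concrete, here is a minimal sketch of what parallelising the collision/sorting stage inside a single solver could look like, assuming entries have already been partitioned into buckets by their leading collision bits so each bucket can be processed independently. The `Entry` layout and the `collide` callable are illustrative placeholders, not zcashd’s actual solver API.

```cpp
// Sketch: fan a per-bucket collision step out over a small thread pool.
// Assumes entries were already partitioned into buckets by their leading
// collision bits, so each bucket can be processed independently; the Entry
// layout and the `collide` callable are placeholders, not zcashd's API.
#include <atomic>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

struct Entry {
    std::vector<uint8_t> hash;      // remaining (truncated) hash bits
    std::vector<uint32_t> indices;  // indices accumulated so far
};
using Bucket = std::vector<Entry>;

void ForEachBucketParallel(std::vector<Bucket>& buckets,
                           const std::function<void(Bucket&)>& collide,
                           unsigned nThreads)
{
    std::atomic<size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nThreads; ++t) {
        workers.emplace_back([&] {
            // Each worker claims the next unprocessed bucket until none remain.
            for (size_t i = next++; i < buckets.size(); i = next++)
                collide(buckets[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```

The appeal of this shape is that the only shared state is the bucket counter, so the memory cost stays at a single solver’s working set instead of one working set per mining thread.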

We were hoping to target about 1 GB per Equihash solver, but the possible parameters don’t actually allow for that precisely. Theoretical targets in that range are 500MB (trim to 4 bits) or 700MB (trim to 8 bits) with n = 144, k = 5, or 1.2GB (trim to 4 bits) or 2.2GB (trim to 8 bits) with n = 192, k = 7; obviously trimming indices to 4 bits greatly increases the number of partial solutions found and thus the runtime of the solver. I’ll be looking at different strategies (e.g. running the partial->full solution stage in parallel, at least to a point without exceeding the peak memory usage of the first step).
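For a rough sense of where figures like these come from, here is a back-of-the-envelope estimate that counts only index storage going into the final round, assuming the list length N = 2^(n/(k+1)+1) from the paper and 2^(k-1) stored indices per entry; truncated hashes and solver bookkeeping are ignored, so the results should be read as loose lower bounds rather than the actual figures above.

```cpp
// Back-of-the-envelope index-storage estimate for Equihash parameters.
// Assumptions (mine, for illustration): the working list has
// N = 2^(n/(k+1)+1) entries, and going into the final round each entry
// carries 2^(k-1) indices stored in `trimBits` bits each. Truncated hashes
// and other bookkeeping are ignored, so these are lower bounds on peak memory.
#include <cstdint>
#include <cstdio>

double IndexStorageGiB(unsigned n, unsigned k, unsigned trimBits)
{
    double entries = static_cast<double>(1ULL << (n / (k + 1) + 1));
    double indicesPerEntry = static_cast<double>(1ULL << (k - 1));
    double bytes = entries * indicesPerEntry * trimBits / 8.0;
    return bytes / (1024.0 * 1024.0 * 1024.0);
}

int main()
{
    // The candidate parameters mentioned above; 32 "trim" bits means full,
    // untrimmed indices.
    struct { unsigned n, k, trim; } cases[] = {
        {144, 5, 4}, {144, 5, 8}, {144, 5, 32},
        {192, 7, 4}, {192, 7, 8}, {192, 7, 32},
    };
    for (const auto& c : cases)
        std::printf("n=%u k=%u trim=%u -> ~%.2f GiB of index storage\n",
                    c.n, c.k, c.trim, IndexStorageGiB(c.n, c.k, c.trim));
    return 0;
}
```

The trimmed cases come out below the 500MB/700MB and 1.2GB/2.2GB figures quoted above, as you would expect from ignoring the hash bytes, while the untrimmed 32-bit case for n = 192, k = 7 lands around 8GiB, in the same ballpark as the ~10GB unoptimised figure; pairing those two is my own assumption.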

Thank you for the explanations; this is a very cool project, and even for a newcomer (to crypto-coins) like me it is very interesting to watch it develop! :sunglasses:
It seems to me like the sweet spot will be around the 1-2GB mark if you also want to keep open the possibility of running on a phone, since most phones (except the very newest) only have 1-2GB of RAM. At that ratio of GB per thread/core it would hopefully be enough to keep out the ASICs and still have as many machines as possible contributing.

That is our thinking too :slight_smile: However, remember that you actually need to look at the spare memory on a phone, not its total memory. My Nexus 6P has 3GB of RAM but generally only has between 700MB and 900MB free (although strangely its average free memory over the last day is 1.1GB, higher than usual…)
