Equihash doesn't actually work at all?

The paper’s stated goal seems to imply that it solves the problem of GPUs having an advantage over CPUs:

the requirement for fast verification so far made it an easy prey for GPU-, ASIC-, and botnet-equipped users. The attempts to rely on memory-intensive computations in order to remedy the disparity between architectures have resulted in slow or broken schemes

However, the paper goes on to say:

Thus the total advantage of GPU over CPU is about the factor of 4

So the paper is basically refuting itself, admitting that it doesn’t achieve the goal stated in its own abstract. I don’t get this at all.

Also, the study where GPUs have a 4x advantage over CPUs is from 2011, but GPU performance has steadily pulled ahead of CPU performance in the years since (GPUs benefit directly from Moore’s law because the number of GPU cores roughly doubles every few years, whereas the number of cores per CPU has grown at a much slower rate). So the 2011 study is largely irrelevant in 2016, and the GPU-vs-CPU efficiency factor at the same price point might be more like 10x or 15x now.

Am I missing something, or does this totally not work?

Update: OK, so it seems that without factoring in electricity, the cost efficiency of GPUs over CPUs is 4x to 10x. Now what I’m puzzling over is how Zcash can have only a 4x to 10x GPU-over-CPU efficiency gap, whereas Ethereum shows roughly a 100x gain for dedicated GPU hardware over typical user devices. Theoretically, if typical laptops are in fact memory-bound when running the Equihash/Ethash mining algorithms, then the gain should be only 4x to 10x.

However, I hypothesize that the actual bottleneck for typical laptops and user devices is in fact not memory but the number of cores (e.g., for a laptop with 32 GB of RAM but only 2 cores, running more threads per core yields only a limited gain in mining efficiency, so you’re effectively using only 2 to 4 GB of that RAM for mining).

For example, for Ethereum I’ve seen figures of about 30 MH/s for a GPU costing around $300. A MacBook Pro Retina mining on its GPU manages about 3 MH/s, and CPU mining on the same machine gets about 0.3 MH/s, roughly 100x less than the dedicated GPU. So my guess is that the optimal number of threads per core is just 1 or 2, so far less memory and memory bandwidth actually gets used. Thoughts?
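
To make those numbers concrete, here’s a quick back-of-the-envelope comparison in Python. The hash rates are the anecdotal figures above, and the MacBook price is a placeholder assumption, not a spec:

```python
# Rough cost-efficiency comparison using the Ethereum numbers quoted above.
# The MacBook price is assumed, and the hash rates are anecdotal, not benchmarks.

rigs = {
    "dedicated GPU (~$300)": {"hashrate_mhs": 30.0, "price_usd": 300.0},
    "MacBook Pro GPU":       {"hashrate_mhs": 3.0,  "price_usd": 2000.0},  # assumed price
    "MacBook Pro CPU":       {"hashrate_mhs": 0.3,  "price_usd": 2000.0},  # assumed price
}

for name, rig in rigs.items():
    print(f"{name:22s} {rig['hashrate_mhs'] / rig['price_usd']:.4f} MH/s per dollar")
```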

Memory bandwidth is the most important factor in Equihash performance. Both graphics-card memory and system memory technologies have improved since 2011 and will continue to improve. No doubt GPGPU tools are getting better as well, but for now the more traditional programming environments present a lower barrier to getting the software developed.

OK, but memory bandwidth for GPUs is up to 10 times as high as for CPUs at the same price point. So GPUs will be up to 10 times more efficient than CPUs at mining. It doesn’t seem like Equihash works at all?

Don’t forget the other end of the spectrum: all the cheap bandwidth offered by Raspberry Pis and the like. Also, that 10x figure for GDDR over DDR performance is going to come from a specific type of benchmark, one that probably takes advantage of sequential memory accesses, which GDDR is designed for. But Equihash doesn’t use memory in such an orderly way.

It’s also important to note that there is a hard-coded memory requirement of about 700 MB per mining thread running the Equihash solver. Even a Titan X GPU with thousands of cores has only 12 GB of RAM to work with.
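
To put that limit in concrete terms, a small sketch using the ~700 MB figure above (the memory sizes are just examples):

```python
# How many ~700 MB Equihash solver instances fit in a given amount of memory.
SOLVER_MB_PER_THREAD = 700  # approximate per-thread requirement mentioned above

def max_solver_threads(memory_gb):
    return int(memory_gb * 1024 // SOLVER_MB_PER_THREAD)

print(max_solver_threads(12))  # Titan X class card with 12 GB: ~17 concurrent solvers
print(max_solver_threads(32))  # desktop with 32 GB of system RAM: ~46 concurrent solvers
```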

The paper had to draw on two other papers to conclude that the GPU advantage was about 4x when its memory bandwidth was 8x greater. Newer GPUs have not increased this advantage any faster than DDR4 has improved over DDR3. The electricity needed by the GPU they used was about 2.5x that of the CPU, so the electricity-adjusted GPU advantage may be only about 60%. It might be 4x more electricity when comparing new GPUs to new laptop CPUs (if bandwidth is the limiting factor), so GPUs might be only about as good as CPUs at Equihash’s sorting, and exploiting them might require the special parallel algorithm the paper describes, which no one is likely to make public (although the large amounts of VRAM on new GPUs and Zcash’s Equihash parameter selection appear not to require parallel programming on the GPU). As GPU bandwidth has increased, GPU power consumption has not decreased the way it has in CPUs.
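
Here is that electricity-adjusted reasoning as a sketch, using only the ratios quoted above (4x observed speedup, 2.5x power draw); these are the figures from the posts, not new measurements:

```python
# If Equihash is memory-bandwidth-bound, the raw GPU speedup is capped by the
# bandwidth ratio, and the electricity-adjusted advantage shrinks further.
observed_speedup = 4.0  # GPU-over-CPU speedup cited from the 2011-era benchmarks
power_ratio = 2.5       # GPU draws ~2.5x the power of the CPU (figure above)

print(f"GPU advantage per watt: {observed_speedup / power_ratio:.2f}x")  # ~1.6x, i.e. ~60% better
```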

The bigger problem is that there is probably a glut of GPU mining equipment looking for new coins.

Memory hardness is the defining factor here. If you assume a 4x raw compute advantage for the GPU, you still get close to parity running a 32-core Xeon setup with at least 24 GB of RAM, and at lower power.

Theoretically at least.

I don’t know if it’s correct to say that the 700MB is “hard coded,” as the amount of memory used can be changed. However, Equihash is designed such that attempting to use less memory has a considerable performance penalty. (Conversely, it is worth noting that using more memory doesn’t provide substantial performance gains.)

It can certainly be changed by the developers before and after launch; I simply meant that there will always be a correlation between the number of mining threads and a specific amount of RAM.

On a side note, how has your GPU implementation been going? @aniemerg

It works for lower parameters, and I have been rewriting it to work with the current parameters.

The 4x advantage quoted in the whitepaper is actually small, and it does not make CPU mining obsolete.

What I’m not understanding is the ‘botnet resistant’ part, which simply says that heavy hardware usage can trigger the user’s suspicion. That’s not convincing at all, since every CPU mining algorithm consumes hardware resources heavily, and average PC users are oblivious enough that they won’t notice anything strange on their machines.

In addition, Equihash is designed to favor low-end hardware. A dual-core CPU does not produce twice the hash rate of a single-core CPU. This means a botnet, made up mostly of low-end PCs, seems to be favored. Is there anything wrong with my logic?

I agree that CPU usage is easy for botnets to hide from average PC users (by only using spare CPU, so it doesn’t impact their performance). Significant RAM usage is much harder to hide, especially when it can’t easily be reduced without a significant impact on mining performance.

Two threads using 4 GB of RAM are not noticeable on my 2010 PC.

OK, so then what were their benchmarks? For a 10x advantage in sequential memory access, what is the equivalent performance multiplier for Equihash? If they got this number, did they test on only a few devices?

Well, the same can be said about Ethereum’s mining algorithm, which has been overtaken by GPUs. The equation for efficiency is basically memory bandwidth * amount of memory / cost. At the same price point, GPUs are still vastly more efficient per unit cost by this metric.

I’m wondering whether the authors of the paper did any economic calculations at all regarding this metric, which is the supposed bottleneck, and furthermore whether they even studied which GPUs and CPUs are the most cost-efficient by this metric, along with a comparative analysis.
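
For what it’s worth, here is that metric written out as a tiny sketch; every number below is a made-up placeholder for illustration, not a spec-sheet figure:

```python
# The cost-efficiency metric proposed above: memory bandwidth * memory size / price.
def mining_efficiency(bandwidth_gb_s, memory_gb, price_usd):
    return bandwidth_gb_s * memory_gb / price_usd

# Placeholder numbers purely to illustrate the comparison.
gpu = mining_efficiency(bandwidth_gb_s=320.0, memory_gb=8.0,  price_usd=400.0)
cpu = mining_efficiency(bandwidth_gb_s=40.0,  memory_gb=24.0, price_usd=1000.0)
print(f"GPU/CPU ratio under this metric: {gpu / cpu:.1f}x")
```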

Electricity cost is irrelevant for Bitcoin mining. The largest Bitcoin farms, in China and in countries with cheap renewable energy, pay essentially nothing for electricity, so this isn’t accurate.

There was a program, I think in 2014, profiling one of the biggest miners. Their electricity cost was $80,000 per month per facility, while each facility brought in something like $600k. I would not be surprised if Zcash is about 100x more expensive in terms of electricity per hash per miner on the network. I’ve shown in a previous post that the electricity per Zcash coin will be about $1.50 if there are 50,000 miners and nothing else changes. That’s $3 to $5 per coin for GPU miners.
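
As a hedged back-of-the-envelope check on that $1.50 figure, here is one way the arithmetic could go; every input below (per-miner wattage, electricity price, coin issuance rate) is an assumption for illustration only:

```python
# Rough network-wide electricity cost per coin, under stated assumptions.
def electricity_cost_per_coin(num_miners,
                              watts_per_miner=100.0,  # assumed average draw per mining rig
                              usd_per_kwh=0.10,       # assumed electricity price
                              coins_per_day=7200.0):  # assumed network issuance rate
    kwh_per_day = num_miners * watts_per_miner * 24 / 1000.0
    return kwh_per_day * usd_per_kwh / coins_per_day

print(f"${electricity_cost_per_coin(50_000):.2f} per coin")  # ~$1.67 with these assumptions
```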

This is anecdotal and I can’t comment on the specifics of what hardware is being used, but one early effort to produce a GPU mining implementation has reported a 3x speed increase over the standard CPU miner. It’s expected that further optimisation could double that.

Do you mean memory hardness * memory bandwidth as the defining factor? For the 32-core Xeon, the cost efficiency with respect to hash rate is basically 24 GB * cpu_memory_bandwidth / $1000. So, assuming a GPU with 8x the memory bandwidth, if there exists a GPU with X GB of RAM at cost Y such that X * 8 * cpu_memory_bandwidth / Y > 24 GB * cpu_memory_bandwidth / $1000, then GPUs are more cost-efficient than CPUs. Correct?
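
In code, the inequality I mean (the 8x bandwidth ratio and the $1000 / 24 GB Xeon baseline are from the posts above; the example GPU specs are placeholders):

```python
# Does a hypothetical GPU beat the 24 GB / $1000 Xeon baseline under the
# bandwidth * memory / cost metric, assuming the GPU has 8x the CPU's bandwidth?
CPU_BW = 1.0            # normalize CPU memory bandwidth to 1
GPU_BW = 8.0 * CPU_BW   # the 8x figure quoted above

def gpu_beats_cpu(gpu_memory_gb, gpu_price_usd,
                  cpu_memory_gb=24.0, cpu_price_usd=1000.0):
    return GPU_BW * gpu_memory_gb / gpu_price_usd > CPU_BW * cpu_memory_gb / cpu_price_usd

print(gpu_beats_cpu(gpu_memory_gb=8.0, gpu_price_usd=500.0))  # True with these placeholders
```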

I’m not sure why you’re saying a 4x advantage is small. Take your salary. Now divide it by 4. Would you be happy?

An advantage of 4x would be huge, given that most mining for these coins is already done with GPUs. It would basically mean that all mining ends up on GPUs, with farms buying additional GPUs just to mine Zcash.

My problem with the paper is that it implies it is solving the GPU problem, but it isn’t. And that’s totally fine, because mining centralization for PoW may not even be a solvable problem. But it should acknowledge that and just say it is doing the best it can by using memory bandwidth as the metric, as previous coins have done, instead of implying that it solves the ‘easy prey for GPUs’ problem.