Zcash (Equihash) FPGA implementation

Hi all,

Can’t believe no one else has mentioned this yet, but has anyone considered implementing Equihash on an FPGA board?
While it’s true that Equihash is a memory-hard algorithm, there are many ‘shared-memory’ FPGA implementations, where more than one FPGA has concurrent access to a segment of shared memory.

Where a GPU falls down is on calls to global memory, each of which costs a time-consuming fetch (GPUs make fast memory accesses, but only to very small segments).
Additionally, FPGAs (once scaled out) achieve a modest power saving over a GPU.

I took a good look at this with Decred, but the crowd support turned out to be quite lazy and apprehensive at times.

Sufficient interest here?



Because the quickest way to centralize mining of a coin is for it to go the FPGA/ASIC route. There is a reason Ethereum, Decred and ZCash didn’t make it easier for FPGA/ASIC implementation: they don’t want it.

In at least two cases, the teams have stated they’d change the algorithm if it seemed like FPGAs/ASICs, and the centralized mining that comes with them, were becoming a thing.

I hear what you’re saying… I think you’re confusing centralization with the concept of simply having larger shareholders.

Anyone can and will always be able to stand up a Zcash node (decentralization), allowing P2P broadcast events to flow throughout the network, as well as verifying transactions.

We’re already seeing large, farm-size GPU mining setups aimed at Zcash, as well as Stratum implementations being sold off (the point being that they’re not free). Having an FPGA implementation is simply the next evolution, especially where efficiency is concerned.

I know of a Decred FPGA solution that has been around for nearly a year now; I’m sure there are quite a few by now.

Having a large network presence is beneficial for the actual users of Zcash, because it means transactions will be confirmed quickly and with a high level of security as the difficulty increases.


I’m not confusing anything. I’ve been around this space for a long time.

There is a steeper scaling curve for GPU coins than for ASIC ones, which of course is why, if you go to whattomine.com, you’ll notice GPU-only coins are more profitable to mine than Scrypt or SHA-256 coins.

While there are large GPU farms, there are also plenty of one- and two-card hobbyist miners to offset them and keep mining as decentralized as possible.

Once things move to the FPGA/ASIC realm that ends.

ZCash, like Ethereum, will have a large network presence, and it will be decentralized just like Ethereum.

It also seems the ZCash team put mining fairness out there from the start with the Equihash algorithm. In other words, they don’t want FPGAs/ASICs, and the centralized mining they bring, entering the picture.


When the dev team is openly anti-FPGA/anti-ASIC and pro-GPU-farm (and threatens to change the PoW algorithm to favor certain forms of mining infrastructure), it obviously has a chilling effect on FPGA/ASIC development work. If such development is possible at all, it happens very quietly, until suddenly one day the hash rate starts to increase beyond what’s profitable for GPUs even with very cheap power.

Fair enough; I shall leave this topic be.

BTW, I think the devs wanted Zcash to remain CPU-profitable as well, and it’ll be interesting to see if they force that issue by changing the Equihash parameters. At this point it would create such a massive amount of whining and drama from GPU owners that I suspect it won’t happen, but you never really know.

One could build an FPGA system that kills GPU performance, but it won’t be cheap to do so; cost will probably be the deciding factor. FPGAs can achieve extremely high throughput when latency isn’t an issue. They also allow for extremely flexible memory designs that can be tailored to fit an algorithm, and Equihash provides many areas where memory could be saved. In the end, though, it would still require extremely expensive FPGAs, likely coupled with extremely costly static RAM, to get speeds that are meaningful.

ASICs are typically even faster, but the up-front costs are prohibitive for the amount of money involved here. The resulting systems would also cost on the order of thousands of USD because of the RAM requirements. It doesn’t make sense to spend so much money on something that is still limited by memory bandwidth.

I think that SoC FPGAs could provide somewhat better performance than GPUs with cheaper GPU-style RAM. Typically there is a very fast interface in the FPGA fabric to extend the processor functions, and the processor could make it easier to pack the memory so that accesses would be more efficient. Still, the costs are going to be higher than a GPU’s, though power usage would probably be lower. The biggest problem is that one would probably have to design their own board.

So in the end I think the only real thing blocking FPGAs is economics. Maybe five years down the road, when FPGAs with HBM are reasonably priced, things will be different.

With GPUs, companies like NVIDIA are working on ways to reduce power consumption, and that is the real cost. For example, with a 1050 Ti I’m getting around 140 Sol/s, but the laptop and card are pulling close to 80 W. Since my power is close to $0.25/kWh, power is half of my costs.
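As a back-of-the-envelope sketch, those figures (140 Sol/s, 80 W at the wall, $0.25/kWh, all taken from the post above) work out like this:

```python
# Back-of-the-envelope power economics for the 1050 Ti figures
# quoted above: 140 Sol/s, 80 W wall draw, $0.25 per kWh.
sols_per_sec = 140
watts = 80
price_per_kwh = 0.25  # USD

kwh_per_day = watts * 24 / 1000                    # 1.92 kWh/day
power_cost_per_day = kwh_per_day * price_per_kwh   # $0.48/day
efficiency = sols_per_sec / watts                  # 1.75 Sol/s per watt

print(f"{kwh_per_day:.2f} kWh/day -> ${power_cost_per_day:.2f}/day")
print(f"{efficiency:.2f} Sol/s per watt")
```

So at roughly $0.48/day in electricity, the "power is half my costs" observation implies the rig earns on the order of a dollar a day at those rates.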

The fact that the dev team would probably change the algorithm is another factor: that 100% kills ASICs, and currently FPGAs wouldn’t be a minor investment either because of the memory requirements. If N were 100, then some FPGAs would probably have enough RAM blocks, but N is 200, and that means it takes about 1000x more memory.

You’re right that (100,9) requires about 1000x less memory than (200,9).
But it’s not just N that determines the memory requirement; rather, it’s N/(K+1).
So (100,3), for instance, would require about 2^5 = 32 times more memory than the current (200,9).
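That scaling argument can be sketched numerically. This is a rough model only (it assumes the Equihash working set grows as ~2^(N/(K+1)) entries and ignores constant factors and per-entry sizes):

```python
# Rough Equihash memory scaling: the list size grows as roughly
# 2^(N/(K+1)) entries, so the relative memory between two parameter
# sets is a power of two (constant factors ignored).
def rel_memory(n1, k1, n2, k2):
    """Approximate memory of (n1, k1) relative to (n2, k2)."""
    return 2 ** (n1 / (k1 + 1) - n2 / (k2 + 1))

print(rel_memory(100, 9, 200, 9))  # 2^-10: about 1000x less memory
print(rel_memory(100, 3, 200, 9))  # 2^5 = 32x more memory
```

This reproduces both numbers from the exchange above: (100,9) needs ~2^10 ≈ 1000x less memory than (200,9), while (100,3) needs 2^5 = 32x more.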

I was simply assuming K would remain at 9. I should have indicated that.

I laugh at all this looking back.

People aren’t in favour of ASIC/FPGA solutions because they ‘dominate the market’, apparently.
But people can buy thousands of GPUs and it doesn’t bother anyone.

get fucked.

Anyone can buy a GPU because there are many different manufacturers and many websites that sell them. In the ASIC and FPGA world there is drastically less competition, not to mention the hardware is harder to come by.


Oh yes there are so many various GPU vendors…

  1. AMD
  2. nVidia

ASICs do only two things: mine, and then hold papers down.
GPUs can be made to calculate many different things in many different devices.
Even AGP-slot cards can still have a use.


> Having a large network presence is beneficial for the actual users of Zcash, because it means transactions will be confirmed quickly and with a high level of security as the difficulty increases.

You have the concepts mixed up, I think. Network hashrate does not affect transaction confirmation time one bit, since confirmation time is a function of block time, which is constant. Block time stays constant because difficulty rises to counter the increased hashrate. Perhaps you’re thinking of a different aspect; not sure.

Nor does security increase if the hashrate goes up through a leap from CPUs/GPUs to ASICs. Security is a function of how well distributed the hashrate is, nothing else. ASICs/FPGAs always concentrate hashrate in the hands of a few, which increases the likelihood of collusion, 51% attacks, etc. Your argument is founded on a misunderstanding: that higher hashrate is good in and of itself. That is not what the science behind proof of work tells us.
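A toy illustration of the block-time point (an idealized single-step retargeting, not Zcash’s actual averaging algorithm; the 150-second figure is Zcash’s original target spacing):

```python
# Toy difficulty retargeting: expected block time is
# difficulty / hashrate, and difficulty is re-adjusted toward a
# fixed target, so extra hashrate does not speed up confirmations.
TARGET_BLOCK_TIME = 150  # seconds (Zcash's original target spacing)

def retarget(difficulty, hashrate):
    """One idealized adjustment step: scale difficulty so the
    expected block time returns to the target."""
    expected_time = difficulty / hashrate
    return difficulty * TARGET_BLOCK_TIME / expected_time

difficulty = 1_000_000
hashrate = difficulty / TARGET_BLOCK_TIME  # network in balance
hashrate *= 10                             # hashrate jumps 10x
difficulty = retarget(difficulty, hashrate)
print(difficulty / hashrate)               # back to ~150 s per block
```

After the adjustment, blocks arrive every ~150 seconds again regardless of how much hashrate joined, which is exactly why users see no faster confirmations from a hashrate leap.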

Dear Community,
Although I’m quite fresh to cryptomining, I think an FPGA implementation would not pose a risk of getting things centralized.
In the research I’ve done, I found that an entry-level setup needn’t cost more than $50, which is way lower than GPU-driven sets.
Sure, there are FPGA modules that can do more, faster, and so on… same with GPUs and CPUs; prices scale up more or less the same way.

One very good reason to look at FPGA modules is the hash/watt rate.
I’ve read reports of 50 MH/s drawing 18 W from the wall, for a setup costing around $50…

For now the biggest issue with mining is the power needed to drive the process…

(I would love to run a few sets from an old solar panel and battery… )

Definitely possible, although by now I think the sheer volume of GPUs and the level of optimization done on Equihash might outweigh the benefits…
I looked into doing this with an Artix-7 for fun (I created this topic way back).

Any good with c/asm?

Not really; I’m more the conceptual thinker, analytics etc., than a programmer… unfortunately for this… :wink:
I’ve been looking at the Cyclone IV FPGA (https://www.altera.com/en_US/pdfs/literature/hb/cyclone-iv/cyiv-51001.pdf).
Boards are available for less than $25, and a Raspberry Pi for controlling one is about the same…