Recent Coin Launch & GPU Impact

Has anyone else been following the recent launch of the new LBRY coin (LBC)?

I have been keeping track of it on Bittrex just out of curiosity, and it serves as a prime example of why GPU mining really needs to be seriously looked at before the Zcash launch.

Neither the LBRY platform itself nor its price is important for this discussion; what is notable is that the developers claimed from the beginning that they wanted to be CPU-friendly and only launched with a CPU miner. They even implemented a "mining slow start" for fairness. Then, as you can see, it only took about 4.5 days for a GPU miner to be developed by the public, effectively shutting down anyone who was still mining with a CPU. I believe the LBC coin is based on a variant of SHA-256, so this result is not surprising.

For a coin that advertised CPU-friendliness and a "mining slow start" for fairness, this is a complete failure to mitigate the impact of GPUs.

I don't think this will be as extreme a case at the Zcash launch, because @zooko and @str4d and team are using Equihash, but even the Equihash whitepaper (section 6-b) mentions a GPU advantage of a factor of 4:

b) Parallel sorting in practice: The observable speedup on multi-core CPU and GPU is not that big. The fastest GPU sortings we are aware of have been reported in [33], where radix sort was implemented and tested on a number of recent GPUs. The best performance was achieved on GTX480, where 2^30 32-bit keys were sorted in 1 second. The same keys on the 3.2 GHz Core-i7 were sorted at a rate of 2^28 keys per second [40], i.e. only 4 times as slow. Thus the total advantage of GPU over CPU is about a factor of 4, which is even smaller than the bandwidth ratio (134 GB/s in GTX480 vs 17 GB/s for DDR3).
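The quoted figures can be sanity-checked with quick arithmetic (the numbers below are taken straight from the excerpt above; this is not a benchmark):

```python
# Sort rates reported in the benchmarks the Equihash paper cites.
gpu_keys_per_sec = 2**30   # GTX480 radix sort: 2^30 32-bit keys per second
cpu_keys_per_sec = 2**28   # 3.2 GHz Core i7: 2^28 keys per second

speedup = gpu_keys_per_sec / cpu_keys_per_sec
print(speedup)  # 4.0

# Compare with the raw memory-bandwidth ratio the paper mentions:
bandwidth_ratio = 134 / 17  # GTX480 GB/s vs DDR3 GB/s
print(round(bandwidth_ratio, 1))  # ~7.9, so the sort speedup trails the bandwidth gap
```

So the paper's point is that the GPU's practical sorting advantage (4x) is roughly half its raw bandwidth advantage (~8x).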

There are many large GPU farms out there that are very adept at developing GPU software and would love to point their machines at Zcash.

I just wanted to post this as food for thought. I know there are a few open GitHub issues that reference this topic, so the Zcash team is aware of it, but nonetheless it is still a very real threat to the goal of Zcash as a truly decentralized system.


Having a publicly available GPU miner is not going to do anything to deter people with large amounts of money to spend on acquiring a significant portion of the total hashrate of the Zcash network.

In my opinion, as far as a truly decentralised network is concerned, pool mining is similar to having an individual control a portion of the network: the hashing resource, whether controlled by a pool operator or an individual, has a single point of failure.

However, pool mining is a proven way for individuals to benefit from contributing to a network. Given a big enough pool, those miners can still get a reward even if the competition has the advantage of proprietary software that enables the use of more efficient hardware, i.e. GPUs.

So, again in my opinion, even if everyone were using the same class of mining hardware, pool mining would do more to level the field between small miners and big miners - it's just not that great for decentralisation.

Either way, who is willing to pay for the software they want?


I agree with you that having a GPU miner available will not do much to mitigate the GPU effects; either way it's just a matter of time. But right now the algorithm is still being developed, so there may still be opportunities to tweak the code to level the playing field somewhat, if the team sees it as important (which is the entire point of my post). From my understanding (I may be incorrect), there has been no GPU testing up to this point to see what we are dealing with.

I think mining pools are also inevitable, so having a working miner that we can use to pool is also very important. It's a race, and pooling horsepower will of course ensure more shares for those in the pool. As you mentioned, this is bad for decentralization, but what other choice do smaller miners have?

A four-fold increase in speed is an advantage of approximately the same order as having cheaper electricity. Given the modest benefit, it seems rational for the team to leave GPU miner development to the community. Furthermore, given that the existing CPU miner is probably not mining at its theoretical peak, development resources are probably better spent optimizing the CPU code.
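To make the "same order as cheaper electricity" comparison concrete, here is a toy cost model. The power draws and electricity rates are made-up round numbers, not measurements; the point is only the equivalence of the two advantages:

```python
# Illustrative only: wattages and electricity rates are assumed round numbers.
def cost_per_solution(solutions_per_sec, watts, usd_per_kwh):
    """Electricity cost (USD) to produce one Equihash solution."""
    joules = watts / solutions_per_sec   # energy per solution
    kwh = joules / 3.6e6                 # 1 kWh = 3.6e6 joules
    return kwh * usd_per_kwh

baseline  = cost_per_solution(1.0, 100, 0.10)    # CPU at $0.10/kWh
fast_gpu  = cost_per_solution(4.0, 100, 0.10)    # 4x the speed, same draw (simplified)
cheap_cpu = cost_per_solution(1.0, 100, 0.025)   # same CPU, 4x cheaper power

# A 4x speedup and 4x cheaper electricity give the same cost per solution.
print(f"{fast_gpu:.3e} vs {cheap_cpu:.3e} USD/solution")
```

Under this simplified model, a 4x solver speedup is worth exactly as much as a 4x electricity discount, which is why the advantage is modest rather than decisive.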


Slightly off-topic, but: pools don't have to lead to mining centralization. P2Pool and getblocktemplate are both designed to allow individuals to participate in a pool without giving up their ability to choose which transactions end up in the block they're working on.

Unfortunately, neither of those options has gained much traction in the Bitcoin space. Currently, nearly all the hashing power is pointed at pools that single-handedly decide which transactions end up in the blocks the miners work on.


Concerning GPU versus multicore CPU in the Equihash paper: the GPU was not able to utilize its full bandwidth, which was 8x more. [edit: the rest of this post is wrong] You have to pay 8x more for the GTX480 in order to get the 8x increase in bandwidth. You could buy 8x more DDR3 RAM for the same price and get twice as much done. Seems like CPU wins.

What is the minimum CPU needed to fully utilize a large and fast memory? Can an old cheap board fully utilize memory that is 4x bigger and 4x faster?

@zawy: That's not the right way to look at it. Adding memory doesn't increase bandwidth. The amount of memory used by the algorithm is basically "fixed" for a certain choice of parameters, and so additional memory does not improve performance.

As a related point, it's also not straightforward to compare CPU memory bandwidth and GPU memory bandwidth in terms of performance on Equihash. Maximizing the use of GPU bandwidth requires writing code that accesses memory in a particular way called "coalesced access". Otherwise, memory bandwidth utilization (and performance) suffers considerably.
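To give a feel for why access pattern matters, here is a toy model of coalescing (the transaction size and warp size are typical assumed values, not GTX480 specifics): when a 32-thread warp reads scattered addresses, the hardware may issue one full memory transaction per thread instead of one per warp, so most of each fetched transaction is wasted.

```python
# Toy model of GPU memory coalescing; all figures are illustrative assumptions
# except the 134 GB/s peak, which comes from the Equihash paper's GTX480 numbers.
PEAK_BANDWIDTH_GBS = 134   # GTX480 peak bandwidth
TRANSACTION_BYTES = 128    # assumed size of one memory transaction
WARP_SIZE = 32             # threads per warp
WORD_BYTES = 4             # each thread reads one 32-bit word

def effective_bandwidth(transactions_per_warp):
    """Useful bandwidth after discounting wasted bytes in each transaction."""
    useful = WARP_SIZE * WORD_BYTES                      # bytes the warp needs
    fetched = transactions_per_warp * TRANSACTION_BYTES  # bytes actually moved
    return PEAK_BANDWIDTH_GBS * useful / fetched

print(effective_bandwidth(1))   # 134.0   (coalesced: one transaction serves the warp)
print(effective_bandwidth(32))  # 4.1875  (scattered: one transaction per thread)
```

In this model an uncoalesced access pattern leaves the GPU with less usable bandwidth than the 17 GB/s DDR3 figure quoted for the CPU, which is the point being made above.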


First let me apologize ahead of time for any ignorance that the following might show.

As you say, memory does not increase bandwidth, but 4 cores running on 4 GB should be 4x faster without parallel coding, because Zcash has currently set n,k to need only 1 GB of memory per thread. 4 cores on 4 GB should be faster than trying to parallel-code 4 cores against 1 GB. The paper mentioning this possibility implies there is more than enough bandwidth for a single thread; it appears the CPU is a lot busier than the memory bus. So if there are two threads running on a core, maybe they will not collide when accessing memory.

The blog mentions running 8 threads (2 per core) with 8 GB of RAM because the n,k values have been selected for 1 GB per thread. This indicates to me that the code is running efficiently without parallel coding, and maybe it could run efficiently enough with 32 threads and 32 GB (if the CPU supports 8 threads in each of 4 cores). If this is not yet near the bandwidth limitation, then maybe it is as good as an equally expensive GTX480 GPU (which has 1 GB & 32 threads) that needs not-yet-existing parallel coding.
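The scaling arithmetic in the two paragraphs above can be written out explicitly. The 1 GB per solver thread figure is from the discussion; whether throughput actually scales linearly like this depends on not hitting the bandwidth ceiling, as the next reply points out:

```python
# How many 1 GB solver instances fit on a box (figure from the discussion above).
GB_PER_THREAD = 1

def max_solver_threads(total_ram_gb, hw_threads):
    """Instances are limited both by RAM and by available hardware threads."""
    return min(total_ram_gb // GB_PER_THREAD, hw_threads)

print(max_solver_threads(4, 4))    # 4  -> the 4-core / 4 GB case
print(max_solver_threads(8, 8))    # 8  -> the blog's 8-thread / 8 GB setup
print(max_solver_threads(32, 32))  # 32 -> the hypothetical 32-thread / 32 GB box
print(max_solver_threads(4, 8))    # 4  -> RAM, not threads, is the limit here
```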


The current implementation is still being optimized. So even if it's true now that the CPU is busier than the memory bus, I wouldn't count on that for the release version of Zcash. If the implementation is successfully optimized, then the limiting factor will be the memory bandwidth and running multiple threads of the solver will reduce performance. So I would be careful in making hardware choices given existing benchmarks of the implementation.


In looking at this a little more, it appears the GPU and CPU costs are closer than I originally thought. I agree: it looks like GPUs will be at least 4x cheaper. Achieving that requires the most advanced algorithm, which will probably take a while to get implemented for Equihash.

Below are the two papers from which Equihash got its numbers. The first references the second, just like the Equihash paper does.

In the second paper, the radix sort (the preferred method on CPUs and GPUs) performs about the same on the GTX280 and a first-generation i7 (the paper appears to be from 2011). The faster methods substantially tweak the radix sort.

See the second paper, which shows a regular radix sort is not much better on a GPU versus a CPU. They got 2x better, and the first paper above got 4x better.

If the flat distribution of keys in Equihash can be made to take on a pre-sorted distribution, CPUs running just C++'s built-in introsort can catch up with a parallel quicksort on GPUs (first link below) and with a GPU radix-merge sort. If the pre-sort is possible, it would help Zcash become evenly distributed in hashing and achieve a fair initial distribution of the coins. With a more traditional CPU algorithm that expects a pre-sorted distribution, it should mean faster hashing. I realize this might conflict deeply with how it all is supposed to work.

On the sorted distribution ... The CPU reference becomes faster than GPUSort and radix-merge on the high-end graphics processor and is actually the fastest when compared to the algorithms run on the low-end graphics processor.

A pre-sorted distribution also penalized a GPU bitonic-merge sort (Warpsort) by a factor of 4.

The GTX480 and the Intel i7 were both 2011-era devices. The GPU used 250 W (a 600 W power supply was the peak rating requirement), whereas the unspecified i7 drew about 100 W (assuming first or second generation). Isn't Equihash more energy-intensive than Bitcoin's SHA-256? This might make CPUs more competitive. So instead of 4 times more cost-effective in terms of energy, GPUs might be only 60% more efficient.
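Checking that 60% figure from the rough wattages above (the power numbers are the post's estimates, not measured values):

```python
# Reproducing the post's efficiency estimate; wattages are rough figures
# from the discussion, and the 4x speedup is the Equihash paper's number.
gpu_speedup = 4.0     # GPU throughput relative to the CPU
gpu_watts = 250.0     # GTX480 typical draw
cpu_watts = 100.0     # rough figure for a 2011-era i7

# Solutions per joule, relative to the CPU baseline.
gpu_efficiency = gpu_speedup / gpu_watts
cpu_efficiency = 1.0 / cpu_watts
advantage = gpu_efficiency / cpu_efficiency

print(round(advantage, 2))  # 1.6 -> the GPU is ~60% more energy-efficient, not 4x
```

So the 4x raw-speed advantage shrinks to 1.6x once the 2.5x higher power draw is accounted for, which is where the "only 60% more efficient" claim comes from.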


So is this the state-of-the-art discussion, or has any progress been made on exactly which direction one should go? I have a test rig and am just waiting to see whether it makes sense to buy GPUs or memory or what. Kinda scary that this hasn't been nailed down when it seems like we're really close to a release in the not-too-distant future. It takes time to order gear, build it out, and test. I hope we get that time before they start flipping on production.


Last I heard (as of last week), the Zcash team is still auditing their code for security. I also believe the audits include benchmarking the code on various platforms (CPU, GPU, ARM); until they get the results back, the code is still likely to change, which will affect hardware choices.

That's why Zooko recently announced that they pushed the release date back.


@cosmicv totally with you on the desire to know the BEST SPECS for mining

I see that the date is pushed to late September, and I am curious to know roughly when the hardware discussion will be firmed up.

From my perspective, CPU-based mining, regardless of electricity cost, would be optimal, as the investment cost of buying m (mobo, CPU, RAM) is much less than buying g (m + 6 GPUs), and GPU prices are so high.

With that said, as Ethereum will be going to PoS (not piece of sh!t, proof of stake) after Chinese New Year... I'll have a lot of powered-down GPUs.


What kind of GPUs/rigs are you using to mine Ethereum?

That does not look like an announcement from Zooko. Besides, any public comment he makes should go first to the blog and email list. Traditional stock-market rules should apply internally because the laws have not caught up. The headline of the article is decidedly flamboyant, but the content is weakly waffling.

Zooko, what's the probability of launch in 2016?

What evidentiary facts do you have that such rules apply here?