AMD > Nvidia
Power Consumption
Higher AMD game benchmarks mean higher hashrates (most of the time, if the miner is optimised).
So @trolloniex, you're saying that for Zcash AMD is the better choice over Nvidia? One GPU review website claimed that the GTX 1060 6GB is better than the RX 480 despite having half as many CUDA cores, because its base and boost clocks are much higher than other GPUs' (around 1700~2000). It has a 192-bit memory interface, though, compared to the 480's 256-bit.
Which GPU would you recommend for mining Zcash, keeping in mind future optimizations in mining software? I would appreciate your expert opinion.
AMD will always beat Nvidia in price/performance (hashrate wise).
I have Titan Xs and they don’t even get near what a $200 AMD GPU does on every single algorithm (to date).
Most bigger farms go for 470s or 480s because of the low cost, power consumption and heat output but high hashrates.
The fastest AMD cards are the Fury Xs but they are expensive and draw a lot more power.
I personally have various cards, but I'm thinking of getting more RX 480s.
But according to a GPU ranking website, the GTX 1060 is much better than the 480, with a memory clock at 1700; yet from your experience it's the other way round.
Nvidia is better for gaming but … check link above
To add to this, someone has a Linux miner doing 120 H/s on an RX 470 and 175 H/s on an R9 Nano; the highest I have heard of for Nvidia is 134 H/s on a $600 GPU (private miner).
Some people have already tested.
All this mining we are referencing is on a pool, I believe?
Always referencing accepted shares on a pool, yes. Otherwise it's fake.
Further study of duplicates shows that most result from identical subtrees near the top, which yields a totally trivial duplicate test, implemented in my latest commit.
This gives me about a 4% speedup…
I’ll port this to CUDA tonight…
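The shape of that trivial test can be sketched as follows. This is a minimal illustration under assumed names (`TreeRef`, `likelyDuplicate` are hypothetical), not the actual commit: since most duplicate solutions come from two identical subtrees near the top of the solution tree, comparing the two child references catches them cheaply, without expanding the whole index tree.

```cpp
#include <cstdint>

// Hypothetical slot reference: a (bucket, slot) pair packed into 32 bits.
// In Equihash, a solution tree at round r pairs two trees from round r-1.
struct TreeRef {
    uint32_t packed;  // e.g. bucket in the high bits, slot in the low bits
    bool operator==(const TreeRef& o) const { return packed == o.packed; }
};

// Trivial near-the-top duplicate test (sketch): if the two subtree
// references coincide, the candidate necessarily repeats indices and can
// be rejected immediately. A fuller test would also compare grandchildren.
bool likelyDuplicate(TreeRef left, TreeRef right) {
    return left == right;  // identical subtrees => duplicate indices
}
```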
I added Cantor-pairing coded slots to my solver, like xenoncat and morpav use, which allows me to use 2^10 buckets instead of 2^12. This turns out to give a nice speedup, and is the new default in equi_miner:
- equi1: 4.9 Sol/s
- equix41: 6.2 Sol/s
- eqasm1: 6.9 Sol/s
- 8 x equi1: 22.2 Sol/s
- 8 x equix41: 27.1 Sol/s
- 8 x eqasm1: 27.2 Sol/s
- eqcuda: 27.2 Sol/s
As before, the speed difference between equix41 (using intrinsics) and eqasm1 (using assembly) just about disappears when running 8 instances,
and the latter now exactly matches the CUDA GPU speed.
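The pairing trick itself can be sketched like this (a minimal illustration of the general idea, not equi_miner's actual encoding; function names are hypothetical). An unordered pair of slot indices s0 <= s1 from a bucket of N slots needs only about log2(N*(N+1)/2) bits when stored as a triangular-number code, rather than 2*log2(N) bits for the two indices separately, which is where the saved bucket bits come from:

```cpp
#include <cmath>
#include <cstdint>

// Encode an unordered pair of slot indices (requires s0 <= s1) as a
// single triangular-number code: code = s1*(s1+1)/2 + s0.
uint32_t pairEncode(uint32_t s0, uint32_t s1) {
    return s1 * (s1 + 1) / 2 + s0;
}

// Invert the encoding: recover s1 as the largest triangular root of the
// code, then s0 as the remainder.
void pairDecode(uint32_t code, uint32_t& s0, uint32_t& s1) {
    s1 = (uint32_t)((std::sqrt(8.0 * code + 1.0) - 1.0) / 2.0);
    s0 = code - s1 * (s1 + 1) / 2;
}
```

For example, with N = 1024 slots, two raw indices cost 20 bits while the pair code fits in 20 bits minus one, since the code ranges only up to N*(N+1)/2 - 1; the saving compounds across the data structure.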
Nice. Is your CPU miner faster than xenoncat’s now?
For the contest, how do they select the best GPU submission? CUDA/OpenCL? There is nothing written in the rules about that…
Even if it's not faster than xenoncat's, this is interesting for non-AVX CPUs, I guess.
No; mine is still a few % slower than xenoncat’s.
But morpav’s is quite a bit faster than xenoncat’s, mostly thanks to compiling with clang using profiling data.
How can I use morpav’s to mine?
persuade someone like nicehashdev to integrate it into their miner…
Did you have a chance to try and optimize my Cuckoo Cycle solvers yet, @nobody?
Hi @tromp,
I benchmarked your CPU and GPU implementations to get baseline measurements, and I’ve already started a rewrite of the GPU code in assembly. But I’m having a difficult time visualizing where a significant performance improvement will come from. Nevertheless, I am very hopeful I will be claiming some portion of your bounties; just waiting for that epiphany moment.
Is that PTX assembly for NVidia, or GCN assembly for AMD?
Indeed; the edge trimming that dominates Cuckoo Cycle runtime appears to be largely optimization free.
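One round of that edge trimming can be sketched as follows (a minimal illustration with hypothetical names, not the actual solver): an edge can belong to a cycle only if both of its endpoints have degree at least 2, so each round counts node degrees and discards edges touching a degree-1 node. The work is almost entirely memory-bound counting, which is why it leaves so little room for further optimization.

```cpp
#include <cstdint>
#include <vector>

// An edge of the bipartite Cuckoo graph: u indexes one side, v the other.
struct Edge { uint32_t u, v; };

// One trimming round (sketch): keep only edges whose both endpoints
// still have degree >= 2; edges on degree-1 nodes cannot lie on a cycle.
std::vector<Edge> trimRound(const std::vector<Edge>& edges, uint32_t nNodes) {
    std::vector<uint32_t> degU(nNodes, 0), degV(nNodes, 0);
    for (const Edge& e : edges) { degU[e.u]++; degV[e.v]++; }
    std::vector<Edge> kept;
    for (const Edge& e : edges)
        if (degU[e.u] > 1 && degV[e.v] > 1)
            kept.push_back(e);
    return kept;
}
```

In practice the round is repeated many times, pruning the vast majority of edges before any cycle finding begins.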
CUDA/PTX for NVidia, but the final code may be optimized using this open-source assembler for Maxwell:
MaxAs - Introduction · NervanaSystems/maxas Wiki · GitHub
Also, I’ve begun to study the intricacies of this open source GCN compiler:
CLRadeonExtender - ClrxToc – CLRadeonExtender
So there’s a good chance that I might port the code over to AMD’s (GCN) platform.
For the sake of the bounty, would it be a fair assessment to say that an R9 290 or an RX 480 is comparable to a GTX 980?
http://gpuboss.com/gpus/Radeon-R9-290-vs-GeForce-GTX-980
http://gpuboss.com/gpus/Radeon-RX-480-vs-GeForce-GTX-980
Considering their prices, I’d say they shouldn’t be superior to a GTX 980…