Z8 Speeds & Feeds

i5-5200U, DDR3-1600


Average: 57.3721404
Minimum: 17.123964
Maximum: 109.58966

17, 34, 70, 109 - that seems very volatile… Did you use your machine while benching?

I have finally built an alpha miner/node and ran a benchmark test.

This is for a Fitlet A10 barebone (specs can be found at this link), sans RAM and HDD.

It has 8 GB of DDR3-1333 memory and an ADATA SSD, and the power setting is at 25000 (maxed). The AMD A10 Micro-6700T SoC (quad core) should have been running at the full 2.2 GHz throughout the benchmark. I ran the test on Ubuntu GNOME 16 with gdm3 disabled, in console mode, to minimise jitter in the results. BTW, stock Ubuntu 16 seems to be the best OS for this hardware in terms of stability, even in summer with all the clocks set to the maximum possible.

The output:

[ { "runningtime" : 194.31770300 }, { "runningtime" : 74.88017000 }, { "runningtime" : 196.60711700 }, { "runningtime" : 96.71220500 }, { "runningtime" : 98.54244900 }, { "runningtime" : 193.12346400 }, { "runningtime" : 147.57440900 }, { "runningtime" : 97.74850500 }, { "runningtime" : 243.47394200 }, { "runningtime" : 290.78759700 } ]

The sum is 1633.767561, so the average is 163.377 seconds.

Smaller is better, right? Seconds per benchmark pass? I'm not sure what causes the variation, but the process is stochastic, so I would think it would naturally flutter.
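For what it's worth, the summary statistics can be recomputed directly from the runtimes above (a plain-Python sketch; the values are copied from the benchmark output):

```python
# Recompute summary statistics from the benchmark runtimes above.
runtimes = [
    194.317703, 74.880170, 196.607117, 96.712205, 98.542449,
    193.123464, 147.574409, 97.748505, 243.473942, 290.787597,
]

total = sum(runtimes)
average = total / len(runtimes)

print(f"sum     = {total:.6f} s")   # ~1633.77 s
print(f"average = {average:.6f} s")  # ~163.38 s
print(f"min = {min(runtimes):.2f} s, max = {max(runtimes):.2f} s")
```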

Does zcashd require a routable inbound connection to work? I currently don't have a network to put it on with any kind of NAT (although maybe this would be a good time for me to root my Sony M2 Aqua so I can).

Oh, I don’t need root. I found a non-root app that sets up NAT mappings for me, although I have to tweak my network settings to make them persist. Now it shows that it can be reached on the network; I just need to set a static IP on the connection.

On a typical desktop the time is less than 50 seconds; my 2010 PC does it in 70 seconds. Also, I see the Fitlet might run hot, and Zcash mining uses 100% CPU 100% of the time. It uses an AMD quad-core Micro-6700T, which is about 5 times slower than a new $400 desktop, and your slow numbers show it.

I ran 250 runs on 3 PCs today to get a look at the distribution. The solver results are grouped closely into 3-second bins, with wide gaps of 10 to 15 seconds containing no results between them.
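One way to see that clustering is to bin the runtimes into fixed-width buckets. A minimal sketch (the sample values here are made up for illustration, clustered near multiples of ~15 s):

```python
from collections import Counter

def bin_runtimes(runtimes, bin_width=3.0):
    """Group solver runtimes (seconds) into fixed-width bins."""
    bins = Counter()
    for t in runtimes:
        lo = int(t // bin_width) * bin_width  # lower edge of the bin
        bins[lo] += 1
    return dict(sorted(bins.items()))

# Made-up sample runtimes showing the kind of clustering described above:
sample = [15.2, 15.9, 16.4, 30.1, 31.0, 45.5, 46.2, 46.8]
for lo, n in bin_runtimes(sample).items():
    print(f"{lo:5.1f}-{lo + 3.0:5.1f} s : {'#' * n}")
```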

Yeah, it runs hot, but it always runs hot, and that doesn’t seem to bother it stability-wise. I tried several OSes on it and Ubuntu was the only one that was really stable (well, Arch was OK too, actually). The Windows driver for the GPU bluescreened all the time, to the point that I deemed it a waste of time.

I am just gearing up at present; obviously this is not a serious production setup for mining, but I can build and run it now, and it will work just the same on a more capable machine. When the network goes live I will be jumping on the fastest dual-channel quad-core rigs I can get.

Incidentally, when it’s running, CPU utilisation only averages about 50%, so the memory-hard character of the algorithm is evidently capping the speed with the single-DIMM DDR3-1333 memory. This would probably be the same for any setup with the same memory type. One of the objectives of the algorithm’s design was to limit the advantage of faster processors, and the performance on this thing shows that the limit really is memory. So the big grunty machine doesn’t have much edge per price, and only video cards will give you the kind of super-fast memory that gains an advantage. But against price, you might still find the cost/performance ratio favours smaller machines with faster memory.
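To put rough numbers on the memory-bottleneck point: theoretical peak bandwidth for a DDR channel is transfer rate times the standard 64-bit (8-byte) bus width. A quick sketch (peak figures only; sustained bandwidth is lower):

```python
# Rough theoretical peak memory bandwidth, assuming the standard
# 64-bit (8-byte) DDR data bus per channel.
def peak_bandwidth_gbs(transfer_rate_mts, channels):
    """Peak bandwidth in GB/s from MT/s rate and channel count."""
    return transfer_rate_mts * 8 * channels / 1000

fitlet_single_1333 = peak_bandwidth_gbs(1333, 1)  # single-channel DDR3-1333
desktop_dual_1600 = peak_bandwidth_gbs(1600, 2)   # dual-channel DDR3-1600

print(f"single-channel DDR3-1333: {fitlet_single_1333:.1f} GB/s")  # ~10.7
print(f"dual-channel DDR3-1600:   {desktop_dual_1600:.1f} GB/s")   # ~25.6
```

So a typical dual-channel desktop has roughly 2.4x the peak memory bandwidth of the Fitlet, which lines up with the solve-time gap better than the raw CPU clocks do.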

This is for mining (solving equihash) right?

Hm, that’s worse than the results in https://speed.z.cash/timeline/?exe=1&base=1%2B9&ben=time+solveequihash&env=1&revs=50&equid=off&quarts=on&extr=on. If you click on the most recent data point on there you get https://speed.z.cash/changes/?tre=10&rev=a8270035c0d3596a59af35bfdccfc7d807e2aed7&exe=1&env=1.

So the average on there is 30s, min 15s, max 63s.

Thanks for the report.

I did not see any increase in the average equihash solve speed for 3 different machines from z7 to z8. The variation jumped up a lot.

The figures I’d like to collect here are benchmark results and some system specifics to go with them.

Note that with the z8 release, the solver times have become much more discrete. You’ll notice that all the times above are approximately integer multiples of the minimum time (about 15s or so). Basically, with n = 200, k = 9 the solver takes 15s to find all partial solutions, then another 15s per partial to find the corresponding full solutions. Since the miner now checks each solution as soon as it is found, that means the average full runtime for the solver should be around 45s (since 2 solutions on average), but the average time to first solution is 30s (and if no solutions found, only 15s). I’ll be playing around with various ideas for reducing the variance - the simplest is to work on all partials at once, which (assuming enough available cores) should reduce the average full runtime to around 30s.
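The arithmetic in that explanation can be sketched directly (all three inputs, the 15 s baseline, 15 s per partial, and 2 partials on average, are the approximate figures from the post above):

```python
# Sketch of the z8 solver timing model described above.
BASELINE = 15.0      # time to find all partial solutions (s)
PER_PARTIAL = 15.0   # extra time to expand one partial into full solutions (s)
MEAN_PARTIALS = 2.0  # average number of partials (roughly solutions) per run

# Serial processing: baseline plus one pass per partial.
avg_full_runtime = BASELINE + PER_PARTIAL * MEAN_PARTIALS  # 45 s

# Working on all partials at once (given enough cores):
# baseline plus a single per-partial pass.
avg_parallel_runtime = BASELINE + PER_PARTIAL              # 30 s

# A run that finds no partials stops after the baseline.
no_solution_runtime = BASELINE                             # 15 s

print(avg_full_runtime, avg_parallel_runtime, no_solution_runtime)
```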


The 15-second runs are when no solutions were found? It sounds like those 15-second “solutions” should be added in when calculating the average time, but then not counted in the divisor. I’m getting 16% of runs at 15 seconds. Instead of a 45-second average for 2 solves, this adjustment gives me 51 seconds, so the average solve is 25.5 seconds.

Does a higher difficulty slow the solve time, or does it reduce the probability that any particular solve is the correct one for getting a block?


The initial 15s is the baseline time required for all solver runs. The analogue of this for the previous parameters (n = 96, k = 3) was a baseline time of about 47s, and then the time to eliminate partials was anywhere from 3-10s. So the z8 parameters have a much lower “sunk cost”, and while the time to find the average number of solutions is only slightly less, the time to find the first solution is significantly faster.

The latter, the same as for Bitcoin.

So, would it be best to go with a quad-channel DDR4 setup with a quad-core X99 CPU?

EDIT: Something like this or is it overkill?

That will get you about 3 to 5 times more blocks per month than a moderate 2013 desktop, at 3 times the electricity bill ($18/month). Zcash maxes out the wattage, 4x more than idle. It depends on how much of the memory bandwidth ends up saturated.
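A monthly figure like that falls out of a simple watts-times-hours calculation. A sketch (both the 200 W draw and the $0.12/kWh rate are assumed values for illustration, not measurements from the thread):

```python
# Back-of-the-envelope electricity cost for 24/7 mining.
watts = 200           # assumed average draw under full load (not measured)
rate_per_kwh = 0.12   # assumed electricity price in $/kWh

hours_per_month = 24 * 30
monthly_cost = watts / 1000 * hours_per_month * rate_per_kwh

print(f"~${monthly_cost:.2f} / month")  # ~$17.28 / month at these assumptions
```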

I do not think it is too much overkill. It depends on the changes they make. The main thing is that you can’t do better on the RAM speed. It will not be slowed down by the CPU.

That makes my toaster, space heater, oven (etc) look pretty shabby… They can’t even manage a single FLOP between them - let alone many GigaFLOPs… Although, I’m hoping my Tesla can manage a few. :smiley:

Note that the benchmark doesn’t take into account the number of solutions produced on each run (which is most likely to be 2 but may be other values), and the fact that solutions are checked by the difficulty filter as they are produced. In particular, runs that produce more potential “partial solutions” will take longer, but will also produce more valid solutions before the difficulty check. (Almost all partial solutions that pass the IsProbablyDuplicate test, which is quite fast, are valid solutions.) I think we’re going to write another throughput-oriented benchmark which would give a better indication of mining performance.

See also Implement a work-queue algorithm to process Equihash partial solutions in parallel · Issue #1239 · zcash/zcash · GitHub , which should further improve latency when there are multiple threads.

I will benchmark on a 16-core (32-thread) Xeon with 128 GB of matching RAM in a few days, assuming I can get it mining.

My theory is that the number of cores/threads will be more of a factor than CPU clock speed.

Yes, the number of cores is the biggest factor, if you have 800 MB of RAM per core plus operating system requirements. You’re going to get about 3 blocks per hour on the ~30-CPU testnet.
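Applying that ~800 MB-per-core rule of thumb to the 128 GB Xeon mentioned above, a quick sketch (the 4 GB OS reserve is an assumed figure for illustration):

```python
# How many solver threads the RAM alone could feed, using the
# ~800 MB-per-thread rule of thumb from the post above.
def max_mining_threads(total_ram_mb, os_reserve_mb=4096, ram_per_thread_mb=800):
    """Threads supportable by RAM after an assumed OS reserve."""
    return (total_ram_mb - os_reserve_mb) // ram_per_thread_mb

# The 128 GB Xeon: RAM would feed far more threads than its 32
# hardware threads, so the core count is the binding limit there.
print(max_mining_threads(128 * 1024))  # 158
```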