Yes, the zcash.conf file is in the .zcash folder. It’s the same file you set up for mining.
You will notice more threads in use while mining if you look at your system monitor. For benchmarking, however, the solveequihash command currently runs on only one thread, so you are essentially measuring single-thread performance.
As I mentioned in another thread, they are still working on parallelizing the code so hopefully we will be able to get faster solve times with more cores.
76 seconds is roughly 10x slower than the times reported in earlier comments, although someone did mention it might increase like this.
I had to include gen=1 in zcash.conf to get multiple threads going. gen=1 is in A-W’s guide, but not in the official Alpha guide.
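For reference, a minimal config along those lines might look like the sketch below. gen=1 is the setting mentioned above; the genproclimit line is an assumption carried over from Bitcoin Core’s config options and may not apply to this alpha.

```
# ~/.zcash/zcash.conf
gen=1            # enable mining
genproclimit=4   # number of mining threads (assumed option, inherited from Bitcoin Core)
```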
I had to run as root to get CPU core usage up from 1% to 90%, and memory usage from 1% to 40% (4 threads) or 80% (8 threads). Each thread running as root used 10% of RAM (0.4 GB), which I believe someone else also mentioned. Subsequently, trying to run as superuser again required rebuilding the block database.
Running the benchmark as root did not speed it up.
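As a sanity check, those memory figures are self-consistent; the numbers below are taken from the post above, and the total-RAM value is just what they imply:

```python
# Figures quoted above: each thread used 10% of RAM, stated as 0.4 GB.
ram_per_thread_gb = 0.4
ram_fraction_per_thread = 0.10

# Implied total RAM on the machine.
total_ram_gb = ram_per_thread_gb / ram_fraction_per_thread
print(total_ram_gb)  # 4.0

# Expected memory usage for 4 and 8 threads, matching the 40% / 80% figures.
print(4 * ram_fraction_per_thread)  # 0.4
print(8 * ram_fraction_per_thread)  # 0.8
```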
On earlier releases I was testing Zcash on a few different computers from a bootable Ubuntu USB stick, which is another option that avoids creating a dedicated Linux partition. My point being that your system might do better without the VM and/or host OS overhead.
I tried to run ./zcash-cli zcbenchmark solveequihash 10 and got no output, just a blinking cursor; eventually mining stopped on 7 of 8 cores and it basically hung. I restarted and ran it again, but the same thing happened. I am new to a lot of this, including Ubuntu, so maybe I am missing something obvious.
Note that in the next release (z8), there will be an extra parameter to the “time solveequihash” benchmark to specify the number of threads. Thanks to @Shawn for pointing out this omission.
That’s not how I understand what’s happening with the newly multi-threaded benchmark. When additional cores work cooperatively on a single data set, averaging the per-thread times is not appropriate.
An average would make sense for interpreting a benchmark result if each core were working independently on its own data set, but I don’t think that’s what’s happening here.
Imagine shuffling a deck of cards with one hand, vs shuffling a deck with two hands, vs each hand shuffling its own deck.
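To illustrate the distinction with made-up numbers (a 60 s single-core solve time and ideal scaling are pure assumptions):

```python
# Assumed single-core solve time, purely for illustration.
single_core_time_s = 60.0
cores = 4

# Cooperative case: all cores attack ONE Equihash instance, so the
# per-solve wall-clock time shrinks (ideal scaling assumed here).
cooperative_solve_time_s = single_core_time_s / cores
print(cooperative_solve_time_s)  # 15.0

# Independent case: each core solves its OWN instance. Per-solve time
# stays 60 s; what improves is throughput. Averaging the per-core times
# here would just give back 60 s and hide the 4x throughput gain.
throughput_solves_per_s = cores / single_core_time_s
print(round(throughput_solves_per_s, 4))  # 0.0667
```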
I assumed it was different data sets because 4x threads = 4x RAM requirements.
I had read a discussion on GitHub about the benchmark still not being representative: some threads finish early and sit idle, waiting for the slowest thread before the next data set(s) begin.
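That straggler effect can be sketched with hypothetical per-thread times (all numbers below are invented for illustration):

```python
# Hypothetical per-thread solve times (seconds) for one batch of data sets.
times_s = [52.0, 58.0, 61.0, 74.0]

# If threads synchronize between batches, the batch takes as long as the
# slowest thread; faster threads sit idle once they finish.
batch_time_s = max(times_s)
print(batch_time_s)  # 74.0

# Fraction of total core-time actually spent solving; the remainder is
# idle time waiting on the straggler.
utilization = sum(times_s) / (len(times_s) * batch_time_s)
print(round(utilization, 3))  # 0.828
```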