Hi, I’m new to Zcash and want to set up a Linux machine for learning about mining Zcash. Is there an advantage to setting up a dual-boot (separate) Linux machine vs. just running Linux as a virtual machine in Windows? Will one do the work faster?
I have a few computers I would like to get up to speed, and I’m also a bit of a Linux newb, so any information would be helpful. Thank you!
All I can say for sure is that I had trouble trying to compile the source on a computer with only 6GB.
But I will speculate that once you have the program compiled, running it may not require as much memory as compiling it does. So, if you can’t compile in your virtual machine, do a real install (perhaps on a spare hard drive?) for the compile and then copy the compiled program and proving-keys folders back to your virtual setup, if that’s what you prefer.
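If it helps, here’s roughly what that copy step could look like as a small Python sketch. All three paths and the binary names are assumptions of mine (repo at ~/zcash, keys in ~/.zcash-params, a folder the VM can see), so adjust them to your own layout:

    # Sketch: copy the compiled binaries and the proving keys from the
    # bare-metal build into a folder the virtual machine can read.
    # The paths and binary names below are assumptions - adjust to your setup.
    import os
    import shutil

    SRC_REPO = os.path.expanduser("~/zcash")            # where the build ran
    PARAMS_DIR = os.path.expanduser("~/.zcash-params")  # proving-keys folder
    DEST = "/mnt/shared/zcash-prebuilt"                 # folder shared with the VM

    os.makedirs(DEST, exist_ok=True)
    for binary in ("zcashd", "zcash-cli"):
        shutil.copy2(os.path.join(SRC_REPO, "src", binary), DEST)
    shutil.copytree(PARAMS_DIR, os.path.join(DEST, ".zcash-params"),
                    dirs_exist_ok=True)
    print("copied binaries and proving keys to", DEST)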
I’m beating a dead horse here, but not having the option to download an archive of a pre-compiled program folder sets the bar quite high for those of us willing to experiment on the testnet…
Thanks for the information. It sounds like I will need all the power I can get, so a dual boot will be the better solution.
On a side note, I was planning on going with Linux Mint, but since Zooko is using Ubuntu I will go that route.
I don’t think the amount of memory is crucial for compiling. As an experiment (to check whether my compilation problems were related to the version of Debian), I installed the latest Debian in a virtual machine (running on Windows) to which I allocated 2 CPUs and 2 GB of RAM. I then followed the instructions in the alpha guide, compiled and started zcash without any problems, and it mines just fine.
But be aware that the future memory requirements for mining will certainly go up. The current estimate is around 1GB required per mining process.
As far as speed is concerned, there will be some reduction inside a virtual machine. It should not be an order of magnitude; depending on the details of the setup, I’d guess it to be on the order of 5 to 20%, if that. Unfortunately the mining daemon has no way of reporting the hash rate (that reporting was removed from bitcoin some time ago), so there’s no simple way of comparing speeds.
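If you want a rough comparison anyway, one option is to time a fixed chunk of memory-heavy work yourself on both setups. This is a crude stand-in benchmark of my own, not anything zcashd reports:

    # Crude memory-bandwidth probe: run it natively and inside the VM and
    # compare the numbers. A stand-in of my own, not a zcashd feature.
    import time

    SIZE = 512 * 1024 * 1024      # 512 MB working set
    src = bytearray(SIZE)

    start = time.perf_counter()
    dst = bytes(src)              # one full read+write pass over the buffer
    elapsed = time.perf_counter() - start
    print(f"~{SIZE / elapsed / 1e9:.2f} GB/s effective copy bandwidth")

If the VM lands within 5 to 20% of the native figure, that would match my guess above.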
Since we’re on the subject, has there been any indication that 1 CPU core per 1 GB of RAM is the optimal setup?
I’m curious if there would be any performance difference if, say, you had a 4-core chip with 4 GB of RAM vs. a system that had a 4-core chip and 16 or 32 GB of RAM. Perhaps the alpha algorithm is just not set up to take advantage of the additional RAM? Or maybe that will come with 1.0?
I’m still ploughing through the theory behind the algorithm (described in the paper “Equihash: Asymmetric Proof-of-Work Based on the Generalized Birthday Problem”), but the main idea is that there are parameters you can set to determine the memory requirement for the algorithm. Here’s an excerpt from the paper:
“Our solution is practical and ready to deploy: a reference implementation of a proof-of-work requiring 700 MB of RAM runs in 30 seconds on a 1.8 GHz CPU, increases the computations by the factor of 1000 if memory is halved, and presents a proof of just 120 bytes long.”
In contrast, the paper’s figures make it apparent that doubling the memory reduces the speed only negligibly. The final parameters for v1.0 still need to be chosen to set the limiting memory value, but the described behaviour will not change: it is built into the algorithm, and the whole point of choosing this algorithm is to get this behaviour, with as little chance as possible of circumventing it.
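To make that asymmetry concrete, here is a back-of-the-envelope calculation. I am extrapolating the factor-1000 penalty to repeated halvings, which is my own assumption; the excerpt only quotes a single halving:

    # Back-of-the-envelope: what the quoted tradeoff implies if the
    # factor-1000 penalty applied per halving (my extrapolation, not the
    # paper's exact model). 30 s at 700 MB comes from the excerpt above.
    base_time_s = 30.0
    penalty_per_halving = 1000

    for halvings in range(4):
        mem_mb = 700 / 2 ** halvings
        time_s = base_time_s * penalty_per_halving ** halvings
        print(f"{mem_mb:7.1f} MB -> ~{time_s:.0e} s per proof")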
Increasing the number of threads from 1 to 8 also has surprisingly little effect - it merely halves the running time. Which brings me to my question.
The speed of solution is memory-bandwidth limited. Should one then assume that it is the number of computers, not the number of cores, that is the limiting factor? I.e., are 8 computers, each with 1 core and 1 GB of memory, on the order of 5 times faster than one computer with 8 cores and 8 GB, assuming the same memory and CPU speeds?
Or to put it differently: assuming one has enough memory, does genproclimit=N versus genproclimit=1 mean that each of the N processes will use 1 GB and the total speed will be N times faster? Or is this the multi-threading mentioned in section IV.B, so that the memory-bandwidth limitation applies in this case as well (as I think it does)?
I am just trying to wrap my head around that paper, most of it is above my head.
It seems as though the algorithm uses parallel sorting to max out the memory bandwidth of whatever is processing it, CPU or GPU. So the only factors to consider are the speed of the processor and the speed of the memory; the amount of memory and the number of cores are negated by the bottleneck of the processor trying to access the memory.
This seems to level the playing field for GPU vs CPU since even the best ASIC miner has to access its DRAM at a limited speed.
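Here is a toy illustration of the sorting idea, in Python. To be clear, this is not Equihash itself, just a minimal single-round, sort-and-scan collision search in the spirit of the generalized birthday problem, to show why the work is mostly memory traffic rather than arithmetic:

    # Toy birthday-collision search: hash a list of indices, sort by hash
    # value, then scan adjacent entries for collisions. The sort and the
    # scan are dominated by memory traffic, which is why memory bandwidth,
    # not core count, sets the pace. Not the actual Equihash algorithm.
    import hashlib

    N_ITEMS = 1 << 16        # toy table; the real thing uses hundreds of MB
    MASK = (1 << 20) - 1     # look for collisions on the low 20 bits

    def h(i: int) -> int:
        d = hashlib.blake2b(i.to_bytes(4, "little"), digest_size=8).digest()
        return int.from_bytes(d, "little") & MASK

    table = sorted((h(i), i) for i in range(N_ITEMS))
    pairs = [(a[1], b[1]) for a, b in zip(table, table[1:]) if a[0] == b[0]]
    print(f"found {len(pairs)} colliding pairs among {N_ITEMS} items")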
I think that your memory-bandwidth assumption is correct. Therefore 8 computers, each with the hard-coded (1 GB or whatever) amount of memory, will solve several times faster than a single computer with 8 cores and an infinite amount of memory.
That’s correct. genproclimit controls the number of mining threads (i.e. simultaneous attempts to find a solution), not the parallelism within each mining thread. Currently the latter is not parallelized.
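In other words, the mental model is something like the sketch below. This is hypothetical on my part, not zcashd internals; solve_attempt and the sizes are made up for illustration:

    # Hypothetical model of genproclimit=N: N independent solver processes,
    # each owning its own working memory. Total RAM scales with N, but on
    # one machine the processes still share the same memory bus, so the
    # speed may scale by less than N.
    from multiprocessing import Process

    MEM_PER_INSTANCE = 64 * 1024 * 1024  # stand-in for the ~1 GB working set

    def solve_attempt(worker_id: int) -> None:
        table = bytearray(MEM_PER_INSTANCE)
        for i in range(0, len(table), 4096):  # touch every page
            table[i] = worker_id & 0xFF
        print(f"worker {worker_id}: working set allocated")

    if __name__ == "__main__":
        N = 4  # the genproclimit analogue
        procs = [Process(target=solve_attempt, args=(i,)) for i in range(N)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()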
Since this directly affects the hardware used to mine, will it be possible to tweak the genproclimit value on the miner’s end depending on their hardware, or will we simply have to build the hardware around the final fixed value?
genproclimit is only a parameter of the current implementation. The algorithm doesn’t determine how many instances can be run in parallel (other than as a consequence of memory limitations).
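For what it’s worth, in the current bitcoin-derived code you can already set this per machine in zcash.conf (option names as of the alpha; they may well change by 1.0):

    # zcash.conf - tune to the machine's core count and RAM
    gen=1              # mine whenever the daemon is running
    genproclimit=4     # four solver instances; budget roughly 1 GB each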