hello, when I mine, only 2 or 3 of my 4 cores run at 100% as I watch the system monitor, and they switch: at first cores 2 and 4 run at 100%, then after a bit core 2 will drop to 0% and core 3 will go up to 100%. Shouldn't they all be running near 100%? Also, why is it only using about 2-2.5GB of memory if the requirements are supposed to be higher? Thanks!
Not sure about the core loads.
Memory-wise, it only needs up to 8GB to compile, and to ‘protect’ your coins (send to z-address). The actual mining, and transparent address operations (as far as I understand), only need ~700MB per core.
Yeah, I’m not sure we can use the H/s formula for a huge Cloud instance.
I actually meant more generally. Not sure people are actually dividing by the number of cores. Maybe we should put an actual formula there. What was the formula you were expecting:
h = (2/H)/c = 2/(H·c)
where h is the hashrate per core, H is the average time for an Equihash run, and c is the number of cores? Some people are just doing h = 2/H, I think.
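To make the two interpretations concrete, here is a small Python sketch (the function names are mine, not from the wiki) comparing the per-core formula above with the simpler h = 2/H that some people use:

```python
# Sketch of the two hashrate formulas being discussed.
# H = average Equihash solve time in seconds, c = number of cores.

def per_core_divided(H, c):
    """h = (2/H)/c = 2/(H*c): run rate split across cores."""
    return 2.0 / (H * c)

def per_core_simple(H):
    """h = 2/H: treats each core's own run as its hashrate."""
    return 2.0 / H

# With a 53.6 s average solve on a 4-core machine:
print(per_core_divided(53.6, 4))  # ~0.0093 H/s
print(per_core_simple(53.6))      # ~0.0373 H/s
```

The two formulas differ by a factor of c, which is exactly the ambiguity being debated in the thread.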
Maybe we need a separate chart for clouds: since you are using so many separate processors, it's probably not possible to accurately measure single-core performance, nor to know the precise memory configuration. So it would only need the average-time column, number of CPUs, memory amount, and maybe a cost?
Yeah, it’s probably OK as it is. If there’s a lot more contributions, I’ll split it out into another table. I’m not totally clear on what 2/(H·c) adds. The more important number, in a way, is H, since it’s not like adding more cores will linearly add more speed, nor is there much you could do about the memory bandwidth. H seems like a more useful measure of the performance of the whole system?
so have we figured out yet: with 4 cores and 8 threads, is it 1GB needed per thread? Or just 2GB needed per core? Or possibly 4GB needed per core, which works out to 2GB per thread?! I am still testing myself
also, I notice that when I have 2 different memory slots full, compared to 1, my results go up about 20%. Is this because more filled memory slots (on the motherboard) give me more memory bandwidth, since it's pulling from different slots? thanks
The reason the hash/core column was added in the first place was to try and develop a baseline hash metric for average performance of a single processor.
This actually started in the Zeropond mining discussion because they needed to find a way to measure performance to compare a GPU to a “modern CPU” and up until that point all we had were average Equihash solve times to use. And as far as coin mining goes, everyone always asks “What’s my Hash power with XX computer?”
And as you mentioned, the hashing performance improvement from adding cores is not linear because of bandwidth saturation.
The best approximation I could think of was:
(number of cores on your PC) × 2 / (avg. seconds to solve)
For example: 4 cores × 2 / 53.6 s ≈ 0.149 Hash/sec total, or about 0.037 Hash/sec per core.
(The ×2 is because each run of the solver produces approximately 2 solutions.)
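As a sanity check, the arithmetic above can be sketched in Python (numbers taken from the example in this post):

```python
# Baseline hashrate estimate: cores * 2 solutions per run / avg solve time.
cores = 4
avg_solve_seconds = 53.6

total_hashrate = cores * 2 / avg_solve_seconds   # whole-system Hash/s
per_core = total_hashrate / cores                # the per-core baseline

print(f"total: {total_hashrate:.4f} H/s, per core: {per_core:.4f} H/s")
```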
BUT, even this is not super accurate because each core you add for mining uses up more RAM bandwidth. So the actual effect of adding 4 cores may only result in a 3x or 2.5x increase in Hash/sec depending on the systems memory bus/channels. So I felt that just having a single core metric would be the most useful.
At the end of the day, all the values in the wiki are only approximations: there are so many CPU/motherboard/memory-channel configurations, and whatever background processes (downloads, web browsing, etc.) were eating system resources while the user was benchmarking is also impossible to take into account.
issue resolved, sorry for waste of post
We should definitely keep this in mind for machines that go much farther beyond four cores (or wherever the memory bandwidth begins to become disproportionate to the cores).
For me, if genproclimit=8 on an 8-core machine w/ 16GB RAM, all eight cores are being used to solve the Equihash. So when we run time ~/zcash/src/zcash-cli zcbenchmark solveequihash 100, the values being returned are solves for all of the cores working together, and show a much lower efficiency than genproclimit=4 on the same machine. The falloff is huge.
so more than 4 cores is essentially useless because of memory bandwidth limitations?! Is that what that means?
It depends on your memory bandwidth. I have run multiple tests comparing 2, 4, 8, and 16 cores, and 2-4 have been the most efficient, 8 inefficient, and 16 the least efficient in all of my tests. I allocated at least 2GB of RAM for each core.
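To illustrate the diminishing per-core returns described here, a hedged Python sketch. The solve times below are placeholders I made up to match the qualitative ordering reported (2-4 cores most efficient, 16 least), not real benchmark numbers; substitute your own zcbenchmark averages:

```python
# Placeholder average solve times (seconds) per core count -- NOT measurements.
placeholder_solve_times = {2: 50.0, 4: 53.6, 8: 70.0, 16: 110.0}

def system_hashrate(cores, avg_solve_s):
    """Total Hash/s, assuming ~2 solutions per solver run."""
    return cores * 2.0 / avg_solve_s

for cores, t in sorted(placeholder_solve_times.items()):
    total = system_hashrate(cores, t)
    print(f"{cores:2d} cores: {total:.3f} H/s total, {total / cores:.4f} H/s per core")
```

With numbers shaped like these, total hashrate still rises with core count, but the per-core figure drops steadily, which is the memory-bandwidth effect being discussed.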
nice, good info, thanks. have you tested only Intel CPUs, or have you tried the 2-4 core AMD CPUs also? I am in the process now
Intel 64 only. I am excited to hear how your AMD tests go!
so, weird results so far. My i5 CPU with 4 cores has mined 6 blocks in 2 days with 16GB DDR4 2133 RAM. Both of my AMD machines with 16GB DDR3 1600 memory have not mined a single block yet; they are 6- and 8-core CPUs. When I run the tests, the AMD machines are 2-3x as slow as my Intel tests. Do you think this is because the AMD CPUs have way more cores, so it actually slows mining speed rather than increasing it? I will try the test with 1, 2, and 4 cores next on AMD
m4.large average for the last 115 hours for me is 2.5 blocks per day
c4.large average for the last 115 hours for me is 3.5 blocks per day
I get about the same results as the one guy who tested AMD on the benchmark charts. But for some reason my 3rd test computer won't connect to the network or download any blocks?
this is the error I get when starting up the server:
wallet/wallet.cpp:680: void CWallet::DecrementNoteWitnesses(): Assertion nWitnessCacheSize > 0 failed.
any idea why? I installed this system the same as the 2 before and had no problems.
after a couple minutes I retry and the server connects, so I can run the test. However, it still is not downloading any blocks, updating, or connecting to any peers
and after the test it gives me the same error again
I have only been running a couple of days with AMD, but seem to be getting about 1 block per day from both computers. I did add genproclimit=4 (8 for the 8-core machine) to zcash.conf, but not sure if this made any difference. How much does the memory speed have to do with the average solve time? I noticed on the benchmark charts the only systems that were in the 30 range had DDR4 2133 or 2400Mhz RAM.
I'm still trying to figure out if adding more cores helps at all; I cannot really tell from my tests. I also noticed the higher-MHz RAM is definitely leading, but those systems all have high-end i7 CPUs too, so I'm not sure which is the bigger difference-maker. My i5 with 16GB 2133MHz RAM has gotten me 6 blocks in 2 days; my AMDs have gotten me 0 blocks in 1 day with DDR3 1600
what do your AMDs test out to for avg. Equihash solve time? Mine sits around 55-65 it seems; 6-core processor, 16GB DDR3 1600
what's weird is my i5, when I run the tests, doesn't get much better results, usually around 50-55, but I have gotten a couple in the 40 range with the i5
Mine is also running close to 0% cpu. I did not have the get/gen error in my conf file. Anyone else getting this?
In case I screwed something up:
testnet=1
addnode=betatestnet.z.cash
rpcuser=username
rpcpassword=password
gen=1
genproclimit=4
getinfo yields:
{
  "version" : 1000000,
  "protocolversion" : 170002,
  "walletversion" : 60000,
  "balance" : 0.00000000,
  "blocks" : 45,
  "timeoffset" : 0,
  "connections" : 0,
  "proxy" : "",
  "difficulty" : 1.21944725,
  "testnet" : true,
  "keypoololdest" : 1474605784,
  "keypoolsize" : 101,
  "paytxfee" : 0.00000000,
  "relayfee" : 0.00005000,
  "errors" : "ALERT: This is just a test of the alert system on testnet3. Alert 2 on 22 Sep 2016."
}
you're on block 45, which means you're connected and downloading the blockchain. After it finishes, it will start mining… I get the same but stuck on block 1 with 0 connections. Why wouldn't my client connect when the other 2 systems I have running have no problems?
How are you compiling? I had issues when I compiled on a machine with <8GB RAM (even with a big swap file in place). Looked like it had compiled successfully, but never made connection. The full-test-suite also failed in that instance.
Thanks! I thought it might be something like that, so I left it running over night to see if it would eventually start and it has not. Another machine with the same setup is working fine.