Z9 difficulty = 1


Is anyone else seeing this at 11:30ish pm on Saturday the 27th of August?

I was seeing believable difficulty figures earlier, just after compiling, but I've stopped and started zcashd a few times since then.


I have been mining all day. I know of at least 4 times when I ran getinfo and saw a difficulty of 1.0; I also saw it as high as 370.


It does that a lot. I have a long thread on the problems with the difficulty algorithm and how it does not do what str4d says it does, but the devs do not seem interested. The rules it follows are these:

1) If the sum of the previous block's and current block's solve times is taking longer than 5 minutes, then next diff = 1. Unless the current diff = 1, in which case next diff = 1 only if the current block alone is taking more than 5 minutes.

2) The difficulty you see in getinfo is the previous block's difficulty.

3) (This is the second programming error.) The difficulty is never actually set to 1; it only looks that way. It is actually set to the previous difficulty. That's how I was able to get it stuck on diff = 1 for up to an hour on 3 different occasions (by making the hash rate spike, turning on all my CPUs at a low point).

4) As a statistical consequence of the first deviation from str4d's description: the difficulty will vary by 100% when there is no change in the hash rate, will increase by 300% when there is a 30% increase in the hash rate, and will increase by 500% when the two effects combine for between 30 minutes and an hour.

This is from z8, and I do not think it has been fixed.
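To make the rule I'm describing concrete, here is a small sketch of the behavior as I observed it. The function and variable names are mine, not zcashd's, and this is an illustration of my observation, not the actual code:

```cpp
// Sketch of the observed testnet min-difficulty rule (my naming, not zcashd's).
// Times are solve times in seconds; 5 minutes = 300 s.
bool nextDiffIsOne(long prevBlockTime, long currBlockTime, bool currDiffIsOne) {
    const long fiveMinutes = 300;
    if (currDiffIsOne)
        return currBlockTime > fiveMinutes;              // only the current block counts
    return prevBlockTime + currBlockTime > fiveMinutes;  // the SUM of two blocks counts
}
```

So a 200 s block followed by a 150 s block already triggers "diff = 1" even though neither block alone took 5 minutes.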

The following is from z8, when I was making it swing up and down by doubling the network hash rate (10 PCs) at specific times during a weekend night. The gaps in the blue trend are when it pretends to be at diff=1. The other color is what my proposed fix would have done in response to the doubling of the network hash rate. Note that with the current algorithm it will increase 900% when the network hash rate increases 100% for 45 minutes, if the increase begins at one of the statistical lows that occur about once every 2.5 hours, in predictable patterns.


This is a known behavior that you will only see on the testnet. @str4d mentioned it in a previous post:

Note that this is open to gaming, since it is circumventing the difficulty adjustment algorithm to only rely on the untrusted time in the block header, so miners could fake minimum-difficulty blocks. But they have no incentive to do so, since this would only work on the zero-value testnet, and it is more valuable there for miners to test their configurations against the same proof-of-work as for the main blockchain (ie. not triggering this special rule if at all possible).


I saw that. I believe he is only talking about allowing people to fake headers, not that they are or were. I was not faking the headers to get diff=1. He said it relied on the untrusted time, not that the untrusted time was wrong or different from what will be implemented.


Here's the last 10 hours of the difficulty. You can see it ain't no good, but you can't tell from str4d's comments whether this is expected or different from what they plan to allow in 1.0.


The issue with the difficulty being reported as 1.0 is https://github.com/zcash/zcash/issues/1181

Remember that the only thing that the difficulty adjustment algorithm is trying to control is the inter-block timing. There indeed may be a stability problem with the current algorithm, but it's the effect on block timing that the algorithm should be judged on, not the absolute difficulty.


Staying on time is needed, but sudden drops to 1 for 22% of all blocks on mainnet would invite zero-sum gaming between smart and regular miners. It is a bias towards concentrated mining. If the drops were removed, it would stay within 2% of the correct 576 blocks per day and prevent gaming. The drops are like asking a Poisson process not to be a Poisson process. I would let users wait 10 minutes once a day as an unavoidable Poisson problem instead of inviting gaming by trying to prevent it. I would at least increase the threshold from 5 minutes to 7.5 minutes, which would trigger on 5% of blocks instead of 14%. That's assuming the code error I describe below is fixed; the error is why 22% of all blocks are currently displaying 1.
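As a sanity check on those percentages (my arithmetic, not anything from the code): with Poisson block arrivals at the 2.5-minute target spacing, the chance that a single block takes longer than t minutes is exp(-t / 2.5):

```cpp
#include <cmath>

// Probability a block's solve time exceeds `minutes`, assuming Poisson
// arrivals with a 2.5-minute (150 s) average spacing: P(T > t) = exp(-t/2.5).
double probSolveTimeExceeds(double minutes) {
    const double targetMinutes = 2.5;
    return std::exp(-minutes / targetMinutes);
}
// probSolveTimeExceeds(5.0) is exp(-2), about 13.5% -- the ~14% figure
// probSolveTimeExceeds(7.5) is exp(-3), about 5%   -- the 5% figure
```

Those match the 14% and 5% figures for single blocks; the observed 22% is higher because the current code triggers on the sum of two blocks.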

It is going to diff=1 if the SUM of the previous 2 block times was > 5 minutes. This is causing it to display 1 a lot more often than intended. The best I can decipher is that one of the two sections of code below is not working as intended. I know this because if the past difficulty was 1 and the block took > 5 minutes, then it is 1 again, as intended. But if the previous difficulty was not 1, then it is 1 if the SUM of the previous TWO block times was > 5 minutes. My fear is that it is the first section, which I believe is supposed to carry over to mainnet. The nature of the problem tells me it's not my confusion, and I was disappointed to see that hiding variables and creating variables were the solution.

// pow.cpp: permit a minimum-difficulty block when the new block's timestamp
// is more than 2x the target spacing after the previous block
if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
    return nProofOfWorkLimit;


// RPC side: when reporting network difficulty, walk back past
// min-difficulty blocks to find the last "real" difficulty
if (networkDifficulty && Params().GetConsensus().fPowAllowMinDifficultyBlocks) {
    auto window = Params().GetConsensus().nPowTargetSpacing*2;
    while (blockindex->pprev && blockindex->nBits == powLimit &&
           blockindex->GetBlockTime() > blockindex->pprev->GetBlockTime() + window) {
        blockindex = blockindex->pprev;
    }
}
On another subject, possibly related to the wide swings: I can't see how the following can be correct, because it makes such a huge change to ActualTimespan instead of an adjustment. It is basically saying "let only an average of 12.5% of ActualTimespan count".

nActualTimespan = params.AveragingWindowTimespan() + (nActualTimespan - 

If the last difficulty instead of the average of the past 17 difficulties is the basis for adjustment, then that should be changed to prevent the wild swings. My first graph above is:
next diff = avg past 17 diff x TargetTimespan / nActualTimespan


The first graph is the average of the past 17 block times with the current algorithm. Note that it does not look like the good rolling average it is supposed to be. The predictable changes are what I see as a problem, since they are indicative of predictable difficulty. The 2nd graph is what it would have been under what I think was the code's intention: the equation at the end of my last post. The second one looks more like a normal network, with fewer forced oscillations. Due to the way I had to estimate the 2nd one, maybe even those oscillations would not be present.


Concerning the difficulty adjustment algorithm:


They chose Digishield v3 which is why they are having the oscillations. The problem is that v3 does not take an average (or sum) of the difficulty along with an average (or sum) of the block times in order to keep it on time without oscillation. The correct math that requires no limitation on ups or downs and no diff=1 is:

next diff = avg past 17 diff x TargetTimespan / nActualTimespan

Digishield v3 had to give the up and down swings different limits in order to reduce the oscillations it created by not taking an average of the difficulty. Dark Gravity has a problem from not taking the average of the past difficulties and the time spans over the same range. If the above is not responsive enough to multipools, 17 can be replaced by a lower number, maybe as low as 6.
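A minimal sketch of that rule (my own naming; the window N is whatever the vectors hold, 17 or lower):

```cpp
#include <numeric>
#include <vector>

// next difficulty = avg(last N difficulties) * target spacing / avg(last N solve times)
// Both averages are taken over the SAME window, so there is no positive feedback
// and no need for up/down ramp limits or a min-difficulty escape hatch.
double nextDifficulty(const std::vector<double>& pastDiffs,
                      const std::vector<double>& pastSolveTimes,
                      double targetSpacing = 150.0 /* Zcash: 2.5 min */) {
    double avgDiff = std::accumulate(pastDiffs.begin(), pastDiffs.end(), 0.0)
                     / pastDiffs.size();
    double avgTime = std::accumulate(pastSolveTimes.begin(), pastSolveTimes.end(), 0.0)
                     / pastSolveTimes.size();
    return avgDiff * targetSpacing / avgTime;
}
```

If the hash rate doubles, solve times in the window drift toward 75 s and the rule doubles the difficulty; if nothing changes, it returns the current average unchanged.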


The "minimum difficulty block" behaviour only occurs on testnet (this is the same as for Bitcoin).

I agree that predictable oscillations are undesirable; I can't give a commitment to fixing this but I'll raise it at today's engineering meeting, and argue for analysing it in more detail.


It shows 1 but it isn't what the difficulty is actually being set at. By "only occurs on testnet" do you mean it will stop pretending to go to "1" and actually go to "1", or that it will not show 1 and not go to 1?

For the meeting, there are two points I want to suggest.

1) It goes into oscillations because v3 sets the next difficulty based on only the previous difficulty, while the rolling average (or sum) of block times is taken over 17 blocks. This causes positive feedback on itself without regard to the network: the difficulty just keeps getting ramped up. They had to put a limit on how fast it goes up as a result. They said on reddit this caused terrible oscillations, so they made it ramp up more slowly than it ramps down to reduce the oscillations, which Zcash has inherited, but it still oscillates.

2) The most agnostic and mathematically correct way to adjust it is to use the previous difficulties and times over the same number of block periods, as in the equation above. There is an unavoidable tradeoff between the longest wait time per day and the protection against network attacks. v3 and Dark Gravity do things based on presumptions about the attacks, which is to say they "got religion", and that causes problems a different kind of attack can profit from.

Cold agnosticism, without sympathy for wait times or beliefs about attackers, is needed. It takes the data as it is and reacts accordingly, without expectations about the future. Since v3 used positive feedback and n=17, maybe 17 blocks in a non-feedback form like I'm suggesting (the equation and first graph above) will not ramp fast enough. That graph was when I tried to get it to oscillate on the network with 43% of the total hash rate, and you can see my equation reacted more slowly than v3. Below is a graph with the same equation for 8 instead of 17. You can see there is a 10 minute wait time twice a day; this may be an overestimate. But you can see it reacts as fast as v3 without overshooting. It's not pretty and smooth, which means attackers will have to take more risk in timing when to jump in. It will stay on time without regard to attacks or random variation because it is mathematically correct and simple. No limit is set on up or down ramping because it does not argue with the observed data.

The green is the time to solve, in seconds, for each block. Blue is v3. "Red" is the equation I'm suggesting, the same as above with 17 replaced by 8, which can be restated as:
D= (avg past 8 D) x (150 seconds) / (avg past 8 block times)


I mean that the minimum-difficulty-block behaviour does not occur at all on mainnet.

There was not time to consider this in the engineering meeting, but I'll raise it again.