Mining z8, should difficulty ever be exactly 1.0?

I’ve been querying zcashd from time to time for the last few hours and these are the kind of difficulty figures I was getting until now:

171.78503850
196.29819348
264.01794180
172.92174971

Edit: I just stopped and restarted and the difficulty looks believable now.

On the testnet, if a block hasn’t been mined in over double the block interval, a minimum-difficulty block is permitted. Whenever one is mined, getmininginfo will show 1.0 for the current difficulty (since it just shows the difficulty of the last block). However, the block after the minimum-difficulty block is required to satisfy the last non-minimum difficulty (unless it too takes longer than 5 minutes).


From my log, 70% of the jumps down to 1 happened even though the previous block took less than 5 minutes. I could have a rounding error I can’t see, but 25% of those took less than 4 minutes. Overall it jumps down to “1” 21% of the time. I am looking back 2 blocks because the displayed difficulty is the previous block’s (from what str4d just said), so the “1” would have been chosen based on the block before that.

Ah, you misunderstand my comment. It’s not that a min-difficulty block is allowed if the previous block took longer than 5 minutes, but that a min-difficulty block is allowed if the current block will be mined more than five minutes since the previous block.

I’m confused, how do verifiers know that 5 minutes passed? (Since they may be verifying the chain years later!)

Each block header contains the difficulty it was mined at (nBits), and full nodes only accept a received block if (among other things) nBits matches what the node calculates should have been the difficulty for that block. The special testnet difficulty check is pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2, which can be verified at any future date since it relies on data in the block being checked and the previous block.
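
For context, the surrounding logic looks roughly like this (paraphrased from the Bitcoin-derived code that zcashd inherits; this is a sketch, not the exact zcashd source, and details may differ):

```cpp
// Rough paraphrase of the Bitcoin-style GetNextWorkRequired() logic the
// testnet rule lives in; not the exact zcashd source.
if (params.fPowAllowMinDifficultyBlocks)
{
    // Special testnet rule: if this block's timestamp is more than twice the
    // target spacing (2 x 2.5 min = 5 min) after the previous block,
    // allow a minimum-difficulty block.
    if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
        return nProofOfWorkLimit;

    // Otherwise require the last non-minimum difficulty, walking back past
    // any min-difficulty blocks.
    const CBlockIndex* pindex = pindexLast;
    while (pindex->pprev && pindex->nBits == nProofOfWorkLimit)
        pindex = pindex->pprev;
    return pindex->nBits;
}
```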

Note that this is open to gaming, since it is circumventing the difficulty adjustment algorithm to only rely on the untrusted time in the block header, so miners could fake minimum-difficulty blocks. But they have no incentive to do so, since this would only work on the zero-value testnet, and it is more valuable there for miners to test their configurations against the same proof-of-work as for the main blockchain (i.e. not triggering this special rule if at all possible).


If by “min-difficulty block is allowed if the current block will be mined more than five minutes since the previous block” you mean “the next block difficulty will be set to 1 if the current block is taking more than 5 minutes to complete” then I understood. I said I had to look back 2 blocks (from my gathered data) to see what the trigger for “1” was because you said getinfo shows the previous block’s difficulty.

From the data, the rule is apparently “if the current and previous block together take longer than 5 minutes, set the next difficulty to 1, unless the previous difficulty was 1, in which case make it 1 again only if the current interval alone takes more than 5 minutes”. I can see this in the data, but I can’t reconcile your wording with the data, so I am wondering if the code is correct, especially since the difficulty seems to be jumping around too much; it would not jump around as much if it followed the rule your words describe.

It is selecting D=1 22% of the time, and a Poisson calculation indicates that if nothing other than statistical variation were occurring, it should be selecting D=1 under this rule 23% of the time. So the network was probably stable during the night, and the difficulty is reacting to expected statistical variation in an attempt to avoid long delays between successive blocks. I got 23% from it going to 1 after an average of L=2.88 intervals, using the Poisson equation P(k) = L^k / k! / e^L with k=2, since 2 blocks were found in an average of 2.88 intervals. This assumes the difficulty was already set correctly for the network, so that the network averages L=1 block per 2.5 minutes. If you switch to the rule “go to 1 in the next block if the current block is taking more than 5 minutes”, it will go to difficulty=1 only 13.5% of the time (k=0, L=2) if the network hash rate is as stable as it appears.
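
Writing out the arithmetic (the same numbers as above; L is the expected number of blocks in the window):

```
P(k) = \frac{L^{k} e^{-L}}{k!}, \qquad
P(2)\,\big|_{L=2.88} = \frac{2.88^{2}\, e^{-2.88}}{2!} \approx 0.23, \qquad
P(0)\,\big|_{L=2} = e^{-2} \approx 0.135
```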

Here’s the difficulty data I have for a 7.3-hour period. The first column is the block time in units of the 2.5-minute target interval (1.0 = 2.5 minutes), at the difficulty shown in the second column, assuming getinfo shows the difficulty of the previous block. Notice D=1 appears twice in a row at times, in a way that is expected. Because of my sampling method, 1.9 may sometimes actually be 2.0.

I’ll try to work on an algorithm that accounts for the Poisson process, where you just plug in how often 10-minute wait times are acceptable. 5 minutes seems too short, as waits that long will occur 14% of the time purely from the Poisson statistics, although 14% isn’t too bad. If the difficulty changes too much simply due to statistics, it invites smart mining that can mine at lower electricity or EC2 cost at the expense of others (while still keeping the 2.5-minute average). On the other hand, paying smart miners to be smart at the expense of other miners would reduce the statistical wait times, for the benefit of fewer long waits.

0.8	154.9
0.9	148.6
1.9	142.2
0.6	1
1.4	131
0.5	120.7
1.7	113.7
1.3	1
1.9	107.4
1.0	1
0.5	100.3
2.0	94.31
1.0	1
1.6	88.48
1.4	1
0.2	82.05
2.1	76.81
1.3	1
0.4	71.48
1.3	67.48
0.5	63.69
0.4	60.39
0.7	57.42
0.3	53.48
0.4	50.32
0.5	47.63
0.7	45.68
0.2	43.37
0.4	41.97
0.6	40.37
0.6	39.79
1.3	39.67
0.3	40.51
0.6	41.62
0.6	42.74
0.7	45.17
0.7	47.81
0.5	51.47
0.2	55.94
0.3	60.4
1.0	65.65
1.8	71.36
1.1	77.56
1.0	1
0.9	84.31
1.0	91.64
0.4	99.61
1.4	108.2
0.3	117.6
1.3	127.1
0.7	136.4
0.4	145.6
0.7	153.8
0.9	162.4
0.6	170.8
1.3	178.5
1.7	187.2
1.5	196.2
0.5	1
0.3	205
1.5	214.3
1.5	222.5
0.4	228.6
1.8	230.7
2.1	1
0.9	1
1.4	237.1
0.3	1
1.6	240
0.3	244.7
1.9	244
0.9	1
0.9	241.8
1.3	236.2
1.3	1
1.3	233.9
1.9	226.8
1.9	1
1.9	218
1.9	1
0.3	209.6
0.9	204
0.8	195.8
0.9	183.4
0.3	170.5
1.9	157.8
0.6	1
1.6	146.4
2.2	1
2.0	1
1.7	1
1.9	139.5
2.0	1
0.5	1
1.9	132.5
1.9	1
2.2	1
0.4	1
1.5	120
0.3	107.4
0.3	98.62
1.0	90.23
0.3	82.59
0.3	75.35
0.6	68.77
1.0	62.07
0.4	56.19
0.9	51.36
0.8	46.62
1.1	43.25
0.5	40.03
0.6	37.93
0.9	36.4
0.5	35.72
0.9	35.31
1.1	35.86
0.6	37.13
0.4	37.96
0.4	39.72
0.6	42.1
0.5	45.76
0.2	49.22
0.7	53.21
0.7	57.56
0.3	62.2
0.8	67.43
1.4	73.3
0.6	1
0.9	79.2
0.2	86.09
2.1	93.54
1.9	1
0.3	101.6
0.3	109.3
0.3	118.9
1.8	128.2
0.5	1
2.0	136.6
2.1	1
0.6	1
1.6	146.7
0.4	1
0.2	154.2
1.4	161.6
1.5	165.3
0.7	1
1.1	166.1
1.9	163.9
0.9	1
1.7	161.6
2.0	1
0.6	1
1.5	160.7
0.7	156.9
0.5	1
0.7	152.4
2.0	147.6
1.3	1
0.3	139.3
0.4	134.5
2.1	126.8
1.9	1
0.9	1
1.7	121.4
0.7	117.8
1.0	114.1
0.7	110.3
1.1	104.7
0.8	99.47
0.5	93.98
0.1	89.12
0.5	85.63
0.5	82.12
0.3	79.89
0.9	78.33
0.3	76.38

If the currently displayed difficulty refers to the previous block, then difficulty=1 blocks take twice as long to solve as blocks with difficulty between 50 and 60. The average time it took for difficulty=1 blocks to be solved was about equal to the time it took to solve the much higher difficulties that were supposedly triggering the “1”. Neither the time slots before and after the “1”, nor any average I take of pairs, shows that the difficulty was actually reduced. It’s as if the variable was never actually set to 1 but kept its previous value. Is this another programming error?

Huh, what’s the motivation for adding what appears to be a security bug to the testnet difficulty check? Shouldn’t the testnet rules be as close as possible to the mainnet rules with any differences having to be clearly motivated by usability for testing?

Oh I see now, there may be far fewer miners on the testnet, and it’s useful to keep finding blocks regardless of that.


Yep; it’s more important for the purposes of the testnet that it continue to find blocks.

The difficulty was just stuck at 1 for most of the past hour because the algorithm made a wide swing upwards to 340 after an influx of computers. Only 12 blocks were solved in the past hour, even though this is the peak hash rate for the day and the difficulty says “1”. The difficulty isn’t really “1”; it’s just displaying that.

Also, the wide swings (500% in response to a 30% change in hash rate) might be because it’s checking whether the sum of the past 2 block times is > 5 minutes instead of just the previous block’s time.

The following is the best I could find:

next D = (avg of past 24 D) × sqrt(150 / (avg of past 24 solve times, in seconds))

It stays on track at 24 blocks/hour, swings difficulty only about 50% in response to 30% hash rate changes, allows only about one 10-minute block delay per day (no need to jump down to “1”), and I would not know how to cheat it.
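
As a rough sketch of what I mean (illustrative names and window size, nothing from zcashd):

```cpp
#include <cmath>
#include <vector>

// Proposed rule: next D = avg(last 24 difficulties) * sqrt(150 / avg(last 24 solve times)).
// Sketch only; variable names and the 24-block window are assumptions.
double NextDifficulty(const std::vector<double>& lastDifficulties,
                      const std::vector<double>& lastSolveTimesSecs,
                      double targetSpacingSecs = 150.0)
{
    double sumD = 0.0, sumT = 0.0;
    for (double d : lastDifficulties) sumD += d;
    for (double t : lastSolveTimesSecs) sumT += t;

    double avgD = sumD / lastDifficulties.size();
    double avgT = sumT / lastSolveTimesSecs.size();

    // The square root damps the correction, so a 30% hash-rate change only
    // moves the difficulty about 50% instead of several hundred percent.
    return avgD * std::sqrt(targetSpacingSecs / avgT);
}
```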

Just now, as I was typing this, it went back down to 60, an hour after it was at 350. Of course that is when EC2 servers would kick in if the algorithm is not changed or fixed. My percentage of blocks won stays the same, so I know the hash rate is not changing.

When the real coin starts, you will not see such big swings, because people cheating others will even them out.

Are the difficulty problems going to be fixed in z9? I was able to make it swing by 900% twice today when there were only about 50 computers on the network and got it to issue 44 blocks in 30 minutes.

How exactly were you able to “make it swing by 900%”?


By increasing the network hash rate by 30% for 30 minutes. [edit: actually, there were probably only the equivalent of 20 good PCs on the network, and I added the equivalent of 8 more, so that’s like 40% “more” but only 30% of the “post” network.] If other people join in to simulate a mining pool joining and then exiting, it might get stuck on D=500, possibly indefinitely (until the network gets that big), and issue only 5 blocks per hour. I’m fine-tuning when I join and exit to see what it is most sensitive to. The idea is to test whether I can make it resonate and concentrate hash energy when the difficulty is low, making others pay the high hash-energy cost when it is high. It has a PID-controller characteristic (a momentum based on a belief about how the network is supposed to behave, i.e. a derivative factor) which makes it open to manipulation. The equation I suggested above is a PI controller that will not resonate. But I now think the following is better, and I have posted it to GitHub:

next difficulty = (avg of past 30 difficulties) × 150 seconds / (avg of past 30 blocks’ solve times, in seconds)
or, to save some computation, replace both averages with sums, since both are divided by 30.
[edit: now I think a window of 18 is better because it is more responsive to my type of “attack”; see below]
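
In sketch form (again, illustrative only; the window size is whatever you pick, 30 or 18 most recent blocks):

```cpp
#include <vector>

// Proposed rule: next D = sum(last N difficulties) * 150 / sum(last N solve times in seconds),
// which equals avg(D) * 150 / avg(T) since both sums are divided by the same N.
double NextDifficultyAveraged(const std::vector<double>& lastDifficulties,
                              const std::vector<double>& lastSolveTimesSecs,
                              double targetSpacingSecs = 150.0)
{
    double sumD = 0.0, sumT = 0.0;
    for (double d : lastDifficulties) sumD += d;
    for (double t : lastSolveTimesSecs) sumT += t;
    return sumD * targetSpacingSecs / sumT;
}
```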

By overshooting and undershooting “100%” based on random variation, the current algorithm is inviting concentrated mining to take advantage of others. It’s like it was written by someone who wanted to exploit coins that use it. It will be harder to see once it begins because the exploitations even out the swings.

The number of computers on the network based on the current algorithm is about 1/4 the time-weighted average of the difficulty, 1/8 the peaks, and about 1/2 the valleys.

Here’s the difficulty per block with the current algorithm (blue spikes) while I was trying to get it to spike up, compared with what it would have been under the equation:

next D = (sum of past 18 D’s) × 150 / (sum of past 18 blocks’ solve times, in seconds)