I looked at the distribution of inter-block times for the past 635 blocks as they were received at my node, so this analysis does not rely on block timestamps.

median: 90 to 100 seconds (my data is no more accurate than this because I sampled once every 10 seconds)

mean: 144 seconds

less than 10 seconds: 7.3%

less than 150 seconds: 65%

more than 10 minutes: 1.75%
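For comparison, here are the theoretical values under an exponential model with a 150-second mean (the 150 is taken from the simulation formula below; this is just a sketch of the CDF arithmetic, not part of the original measurement):

```python
import math

MEAN = 150.0  # assumed target block interval in seconds

def cdf(t):
    """Exponential CDF: P(interval < t) = 1 - exp(-t / MEAN)."""
    return 1.0 - math.exp(-t / MEAN)

median = MEAN * math.log(2)      # ~104 s, consistent with the 90-100 s sample median
p_under_10 = cdf(10)             # ~6.4%  (observed: 7.3%)
p_under_150 = cdf(150)           # ~63.2% (observed: 65%)
p_over_600 = 1.0 - cdf(600)      # ~1.8%  (observed: 1.75%)

print(median, p_under_10, p_under_150, p_over_600)
```

All four observed numbers sit within ordinary sampling noise of these theoretical values for a 635-block sample.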

Everything checks out against theory and against simulations I did in LibreOffice Calc with `=-LN(RAND())*150`.

The simulations were 635 rows of this formula; I computed the statistics above, then forced a recalculation of all the RANDs to get another sample of the statistics and see how much they vary between runs. You could just use 6000 rows, but then the median will not be correct and you can't judge the run-to-run variation in the < 10 second and > 10 minute categories.
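The same Monte Carlo can be reproduced outside Calc. A minimal sketch in Python, assuming the 150-second mean and the 635-sample size from the post (each call to `run_stats` plays the role of one spreadsheet recalculation):

```python
import random
import statistics

MEAN = 150.0   # assumed mean block interval, seconds
N = 635        # sample size matching the post

def run_stats(rng):
    # Equivalent of 635 rows of =-LN(RAND())*150: exponential with mean 150 s
    s = [rng.expovariate(1.0 / MEAN) for _ in range(N)]
    return (statistics.median(s),
            sum(x < 10 for x in s) / N,
            sum(x < 150 for x in s) / N,
            sum(x > 600 for x in s) / N)

rng = random.Random(1)
for _ in range(3):  # three "recalculations" to see how the statistics vary
    med, p10, p150, p600 = run_stats(rng)
    print(f"median={med:6.1f}  <10s={p10:.3f}  <150s={p150:.3f}  >600s={p600:.3f}")
```

Running this a few times shows the median wandering by roughly ±10 seconds and the tail percentages by a point or so, which is why the observed 7.3% / 65% / 1.75% are unremarkable.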

At first I thought there was a big problem because the peak of the histogram below is not near 150 seconds. If you average 10 consecutive block times and bin those averages into 10-second bins, the result follows an Erlang distribution with shape k=10, whose peak is close to 150. Looking at each block individually is the k=1 case, a plain exponential, which puts the peak near zero.
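The averaging effect is easy to demonstrate numerically. A sketch (assuming the same exponential model with a 150 s mean; the bin width and sample size are arbitrary choices for illustration):

```python
import random
from collections import Counter

MEAN, N = 150.0, 100_000   # large sample so the histogram peaks are unambiguous
rng = random.Random(0)
samples = [rng.expovariate(1.0 / MEAN) for _ in range(N)]

def mode_bin(values, width=10):
    # Bin into width-second buckets; return the start of the fullest bucket
    counts = Counter(int(v // width) * width for v in values)
    return counts.most_common(1)[0][0]

# Single intervals: exponential density is highest right at zero
print("single-block peak bin:", mode_bin(samples))

# Averages of 10 consecutive intervals: Erlang(k=10), peak near the mean
avgs = [sum(samples[i:i + 10]) / 10 for i in range(0, N, 10)]
print("10-block-average peak bin:", mode_bin(avgs))
```

For an Erlang distribution with shape k and mean 150, the mode is at 150·(k−1)/k, so the 10-block averages peak around 135 seconds, close to 150, while single intervals peak at zero.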