Just added mean and variance tracking on one of my test rigs: 5x GTX 1080 Ti FE GPUs, Ubuntu 16.04, EWBF 0.3.4b miner. I am consistently seeing higher variance on the less stable GPU. GPU0 is the most unstable card on this rig and runs a significantly lower overclock than the rest.
The original idea was to find the fastest way to detect a dying GPU for an auto-restart (I currently use the average hashrate). However, the variance will flag a dying GPU within a single 30 sec cycle with a very high signal-to-noise ratio, while the average takes several 30 sec cycles and is prone to false positives if the threshold is set too tight.
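For anyone who wants to try the same thing, here is a minimal sketch of the kind of per-GPU running mean/variance tracking I mean, using Welford's online algorithm. The `get_hashrates()` function, the `VARIANCE_LIMIT` value, and the 30 s cycle are all placeholders/assumptions, not EWBF's actual API; swap in however you poll your miner's stats.

```python
# Sketch: per-GPU running mean/variance (Welford's online algorithm).
# get_hashrates() is a hypothetical stand-in for polling the miner's stats
# API once per 30 s cycle; VARIANCE_LIMIT is illustrative only.

import random
import time

CYCLE_SECONDS = 30
VARIANCE_LIMIT = 50.0  # Sol/s^2, tune per rig


class RunningStats:
    """Track mean and variance of hashrate samples without storing history."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0


def get_hashrates(num_gpus):
    """Placeholder: replace with a real poll of the miner's stats API."""
    return [random.gauss(500.0, 5.0) for _ in range(num_gpus)]


def monitor(num_gpus=5):
    stats = [RunningStats() for _ in range(num_gpus)]
    while True:
        for gpu_id, sols in enumerate(get_hashrates(num_gpus)):
            stats[gpu_id].update(sols)
            if stats[gpu_id].variance > VARIANCE_LIMIT:
                print(f"GPU{gpu_id} variance {stats[gpu_id].variance:.1f} "
                      f"exceeds limit -- candidate for restart")
        time.sleep(CYCLE_SECONDS)


if __name__ == "__main__":
    monitor()
```

In practice you would reset the stats after a restart so one bad run does not poison the long-term variance.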
I am now also running this on a 7-GPU main rig, and it likewise shows higher variance on the less stable GPUs. I can't draw any concrete conclusions without seeing the EWBF source code, but I plan to explore using this in my automated overclock software to see if I can find the optimum overclock settings significantly faster than by using the average (the iterative routine based on averages is painfully slow and prone to hysteresis).
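The overclock-tuning idea would look roughly like the sketch below: step the core offset up and stop at the last setting whose hashrate variance stays under a stability bound, instead of iterating on averages. `set_core_offset()`, `measure_variance()`, the offset range, and `STABILITY_BOUND` are all hypothetical hooks and numbers, not anything from EWBF or my existing tool.

```python
# Sketch: variance-guided overclock search under assumed hooks.
# set_core_offset() and measure_variance() are placeholders for whatever
# nvidia-settings/NVML and stats-polling calls the real tuner uses.

import random

STABILITY_BOUND = 25.0                 # Sol/s^2, illustrative
OFFSET_CANDIDATES = range(0, 225, 25)  # MHz core offsets to try


def set_core_offset(gpu_id, offset):
    """Placeholder for an nvidia-settings / NVML call in the real tool."""
    print(f"GPU{gpu_id}: applying core offset +{offset} MHz")


def measure_variance(gpu_id, cycles=4):
    """Placeholder: collect `cycles` 30 s samples and return their variance."""
    samples = [random.gauss(500.0, 3.0 + 0.1 * gpu_id) for _ in range(cycles)]
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)


def tune_gpu(gpu_id):
    best = 0
    for offset in OFFSET_CANDIDATES:
        set_core_offset(gpu_id, offset)
        if measure_variance(gpu_id) > STABILITY_BOUND:
            break  # variance blew up -- the previous offset was the limit
        best = offset
    set_core_offset(gpu_id, best)
    return best
```

If the variance really is a fast, reliable instability signal, each candidate offset only needs a few 30 sec cycles instead of the long averaging windows I use now.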