GTX 1080 Ti mining statistics and auto-overclock

Just added mean and variance tracking on one of my test rigs: 5x GTX 1080 Ti FE GPUs, Ubuntu 16.04, EWBF 0.3.4b miner. I am consistently seeing higher variance on the less stable GPUs. GPU0 is the most unstable GPU on this rig and runs a significantly lower overclock than the rest.

The original idea was to find the fastest way to detect a dying GPU for auto restart (I currently use the average). However, the variance will show a GPU dying within one 30-second cycle with a very high signal-to-noise ratio, while the average takes several 30-second cycles and is prone to false positives if the threshold is set too tight.
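For anyone who wants to try the same thing, here is a rough sketch of the check (not my exact code). read_sols() is a placeholder for however you pull per-GPU sol/s out of the miner's stats, and the window size and variance threshold are just example values, not tuned numbers.

```python
import time

def watch_variance(read_sols, n_gpus, window=20, var_limit=400.0, poll=30):
    """Flag GPUs whose sol/s variance over the last `window` samples exceeds var_limit.

    read_sols() is a placeholder that returns one current sol/s reading per GPU.
    """
    samples = [[] for _ in range(n_gpus)]
    while True:
        for gpu, sols in enumerate(read_sols()):
            samples[gpu] = (samples[gpu] + [sols])[-window:]   # rolling window per GPU
            if len(samples[gpu]) >= 2:
                mean = sum(samples[gpu]) / len(samples[gpu])
                var = sum((x - mean) ** 2 for x in samples[gpu]) / (len(samples[gpu]) - 1)
                if var > var_limit:
                    # a dying GPU shows up here within a single 30-sec cycle
                    print(f"GPU{gpu}: mean={mean:.0f} sol/s, var={var:.0f} -> restart candidate")
        time.sleep(poll)
```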

I am now also running this on a 7-GPU main rig, and it likewise shows higher variance on the less stable GPUs. I can't draw any concrete conclusions without seeing the EWBF source code, but I plan to explore using this in my automated overclock software to see if I can find the optimum overclock settings significantly faster than by using the average (the iterative routine using averages is painfully slow and prone to hysteresis). There is a rough sketch of that idea after the screenshot below.

[Screenshot: Var3]
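Roughly what I have in mind for the variance-guided search, sketched with placeholder helpers: set_core_offset() stands in for whatever applies the core clock offset, measure_variance() for collecting one 30-second cycle of stats, and the step and threshold numbers are examples only.

```python
def find_overclock(set_core_offset, measure_variance,
                   start=0, step=25, max_offset=200, var_limit=400.0):
    """Walk the core offset up and keep the last setting with acceptable variance."""
    best = start
    for offset in range(start, max_offset + step, step):
        set_core_offset(offset)          # placeholder: apply the offset in MHz
        if measure_variance() > var_limit:
            break                        # instability shows up in one cycle, so back off
        best = offset
    set_core_offset(best)
    return best
```

The point versus the average-based routine is that one bad cycle is enough to reject a setting, so there is no need to wait out several cycles per step.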

Your FE can do 744 sol/s? Which brand is that? My Gigabyte 1080 Ti Extreme can only do 700-730 sol/s at 100% power.

FEs are all the same no matter who you buy them from (Nvidia stock card; my cards are MSI). These are my crappy cards; I get 770-780 sol/s on good 1080 Ti FE cards, with an occasional 790 (depending on the overclock).

The cards are at 100% power. I run them lower when the temp goes over 65 C, usually at night (my rigs auto-adjust power settings based on temp).
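The auto-adjust is nothing fancy; a minimal sketch of the idea using nvidia-smi is below (the 65 C threshold and the 200 W / 250 W limits are just example values, and setting the power limit with -pl needs root).

```python
import subprocess

def gpu_temp(idx):
    """Read one GPU's core temperature in C via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "-i", str(idx),
         "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        text=True)
    return int(out.strip())

def adjust_power(idx, limit_c=65, hot_watts=200, cool_watts=250):
    """Cap the power limit when the card runs hot, restore it when it cools."""
    watts = hot_watts if gpu_temp(idx) > limit_c else cool_watts
    subprocess.run(["nvidia-smi", "-i", str(idx), "-pl", str(watts)], check=True)
```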