# Mining efficiency question

I frequently see conversations in mining forums that extol the virtues of reducing GPU voltage to increase efficiency. People proudly post that they're getting 4+ sol/W, etc.

My question is this: Why?

Example: If I’m getting 400 sol/s at 80% and I crank up my voltage to 100% and I get 460 sol/s, why wouldn’t I do that as long as the additional coin I’m mining is worth more than the additional electricity I’m using? Sure, your efficiency (sols/watt) goes down because of diminishing returns, but who cares as long as each additional watt increases your profit?
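A rough way to frame the trade-off in my example. The sol/s figures are the ones above; the wattages, the revenue per sol, and the electricity rate are assumptions purely for illustration:

```python
def daily_profit(sols, watts, usd_per_sol_day=0.0035, usd_per_kwh=0.10):
    """Mining revenue minus electricity cost, per day (assumed rates)."""
    revenue = sols * usd_per_sol_day
    electricity = watts / 1000 * 24 * usd_per_kwh
    return revenue - electricity

# 80% power limit vs. 100% (wattages assumed for illustration)
low = daily_profit(400, 120)
high = daily_profit(460, 150)

# Cranking the voltage is worth it whenever the extra coin beats the extra watts:
print(high > low)  # True with these assumed numbers
```

The comparison flips only when the electricity rate or coin price makes the marginal watts cost more than the marginal sols earn.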

What other factor am I missing here? Does this practice reduce the useful life of the card?

I think you save much more … because in a practical situation, a GTX 1060 3GB at 120W does 280-290 sols, and the same card at 68W does 265-270 sols. You save 52W and lose only 15-20 sols … that's less than 10%.

You dramatically reduce heat production … so you also save energy on cooling. If you have 20 cards, you save over 1kW, plus the cooling.

All electronics are sensitive to excessive heat … especially capacitors, and there are problems with heatsink paste drying out. This must reduce the life of the card; the question is by how much … but if you have a 3-year warranty, you don't have to worry about it.
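A quick sketch of the trade-off above, using the GTX 1060 3GB numbers from this post (the electricity price is my assumption):

```python
usd_per_kwh = 0.10  # assumed electricity price

# Midpoints of the ranges quoted above
stock_sols, stock_watts = 285, 120   # 280-290 sols at 120W
uv_sols, uv_watts = 267, 68          # 265-270 sols at 68W

sols_lost_pct = (stock_sols - uv_sols) / stock_sols * 100
watts_saved = stock_watts - uv_watts
monthly_saving = watts_saved / 1000 * 24 * 30 * usd_per_kwh

print(f"{sols_lost_pct:.1f}% hash rate lost, {watts_saved}W saved, "
      f"${monthly_saving:.2f}/month per card")
```

With 20 cards that's roughly $75/month in electricity alone, before counting cooling.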


Good thoughts. My numbers, using the cryptocompare calculator on a GTX 1070 FE:

410 sol/s @ 110W = \$38.70 profit per month
460 sol/s @ 150W = \$41.50 profit per month

So the question then becomes: does the extra heat generated cost me more than \$2.80 per card per month to either move the hot air outside or cool it with my AC?

Currently using a box fan to just blow the hot air out the window. It consumes 58W for a cost of \$4.18 per month. Not hard to see it’s worth it to generate the extra heat as long as I have more than 2 cards (I have 16).
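A sanity check on that break-even, using the numbers from my posts (the \$0.10/kWh electricity rate is an assumption):

```python
import math

extra_profit_per_card = 41.50 - 38.70    # $/month gained by running at 100%
fan_watts, usd_per_kwh = 58, 0.10        # box fan; assumed electricity price
fan_cost = fan_watts / 1000 * 24 * 30 * usd_per_kwh  # ~= $4.18/month

# How many cards must benefit before the fan pays for itself?
cards_needed = math.ceil(fan_cost / extra_profit_per_card)
print(cards_needed)  # 2
```

With 16 cards, the fan's cost is spread so thin that running the cards hot stays clearly ahead.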

Sure, my central AC will work a little harder with the window open and the hot rigs in the house, but at first blush it seems I'm still coming out on top with the higher power consumption. Still, I feel like I'm missing something, because I frequently see people bragging about their sols/W.

The power draw of each card increases faster than the hashing power it generates. As an example, the first 10 sol/s increase might cost 10W, the next 10 sol/s increase might cost 20W, and so on.

At some point the electricity costs will overtake the incremental value of the extra coins you are making. When that breaking point appears depends on your electricity cost, whether you need the heat, or if you have to use extra electricity to transport the heat away. It’s all about finding the sweet spot, I guess.
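The sweet-spot idea can be sketched with made-up numbers. The power/hash pairs, revenue per sol, and electricity rate below are all illustrative assumptions, not measurements:

```python
usd_per_kwh, usd_per_sol_day = 0.10, 0.0035  # assumed rates

# Hypothetical (power limit W, hash rate sol/s) pairs with diminishing returns
points = [(90, 340), (110, 400), (130, 440), (150, 460), (170, 470)]

def monthly_profit(watts, sols):
    return sols * usd_per_sol_day * 30 - watts / 1000 * 24 * 30 * usd_per_kwh

most_profitable = max(points, key=lambda p: monthly_profit(*p))
most_efficient = max(points, key=lambda p: p[1] / p[0])

# The most profitable setting need not be the most efficient one:
print(most_profitable, most_efficient)  # (150, 460) (90, 340)
```

With these numbers the best sol/W sits at the lowest power limit, while the best profit sits near the top of the curve; a higher electricity price pushes the two points together.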

The chart below shows my desktop GTX 980 overclocked with different settings and power consumption. There is a correlation between profit per year (Y-axis) and sol/W (X-axis), but there are also some instances where a lower sol/W gives me a better profit.

Thank you! I made a similar scatter chart graphing my profit (X-axis) and watts (Y-axis). I guess I'm fortunate enough to live in a place where my electricity cost is so low (10 cents per kWh) that I hit the power limit of my cards before the diminishing returns reduce my profit.

I tend to just aim for about 3.6 sol/W on my cards. If I'm in that ballpark, I'm happy. I use 1070s and 1080 Tis. My electricity is included in my rent (to a point); I don't want to add 20% in electricity costs to get 5% more sols, as the landlord might choose to adjust my rent. Remember, even if you have a warranty, you will have downtime for broken components. Higher heat leads to more broken components, and if it takes 2-3 weeks of turnaround to get a card back, was it worth running at the higher output?

Thanks for the feedback! Lucky you to have free-ish electricity!

I have always been under the impression that fluctuating heat is a bigger problem for electronic components than high(er) heat, and that if you keep the heat on the card steady (not rising and falling) it doesn’t matter as long as you stay under the “overheat” threshold.

I think it would be very hard to quantify to which degree heat below the manufacturer’s max specifications actually decreases longevity. Do we actually have any trustworthy data on that? For the newest cards, such data doesn’t even exist. And it would be extremely hard, if not impossible, to accurately predict where the tipping point is in regards to sol/W vs. hardware longevity vs. Watts vs. future price change etc.

I think that's a question for AMD/NVIDIA staff, and the answer is probably internal information under NDA … we can only speculate about it.

Tesla P40
Mean time between failures (MTBF), uncontrolled environment: 703379.3 hours at 35 °C
Controlled environment: 9913208.1 hours at 35 °C

That's a big difference … I'd like to know what "controlled" means - constant humidity, etc.?

PS: even the first value means a life of over 80 years.
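Those MTBF figures convert to years like this - just unit arithmetic on the numbers quoted above:

```python
hours_per_year = 24 * 365
uncontrolled_years = 703379.3 / hours_per_year   # ~80 years
controlled_years = 9913208.1 / hours_per_year    # ~1132 years
print(f"{uncontrolled_years:.0f} vs {controlled_years:.0f} years")
```

Of course MTBF is a fleet-average failure rate, not a promise about any single card.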

In general: dust, contaminants, humidity, and UV.


Difficulty jumped from 6M to 11M within a few minutes … nice decentralized network, with 5M of hash power controlled by a single entity …

Thank nicehash. It is distributed on a grand scale.

I know this thread is a bit old but still.

The idea behind undervolting, which results in lower current draw, is to achieve better overclocking. Heat is the main impediment to overclocking, and current is its main contributor. It is not about longevity!

You might achieve lower consumption at the same clocks, or simply higher clocks at the same power consumption.

The downside is stability. The 5V level for logic "true" has more historical than practical justification: the idea is that you need enough voltage to keep noise from flipping a bit's state, yet low enough that the silicon dissipates minimal heat. Both factors contribute to stability, and a golden mean is achievable.