ASUS 1080ti Strix and choice of risers

I’m new to mining, and I’ve tried to read up on what I can. I am in the process of building myself a rig of 6 1080ti GPUs, and I have bought 2x EVGA 1300W G2 PSUs to pull this off.

First, I ordered some SATA risers (v007)…and I have just realised that this is a no-no: a PCIe slot is allowed to draw up to 75W, which exceeds what the SATA connector is specified for. So instead I ordered some v006 risers with a 6-pin connector.

The problem for me now is how to connect everything. There are 6 VGA connectors on each PSU, and each card needs to be powered through 2x 8-pin connectors plus a riser. Just to cover the two 8-pin connectors of three GPUs, I need all 6 connectors on the PSU. How should I then go about powering the risers safely?

I have thought about some solutions, but I’m a bit confused. Hopefully some of you could help me sort out if the following 2 suggestions would work and be safe:

  1. Would it be safe to use 2x SATA cables with a Y-adapter combining both of them into a 6-pin connector for each riser? The 4 SATA outputs on the PSU would then cover 2 risers in total. I could then use one of the peripheral Molex connectors to power the last of the three risers.

  2. The PSU ships with several GPU cables, and some of these have both an 8-pin and a 6-pin connector on the same cable. Will I be able to connect the 8-pin to my GPU and have the 6-pin power the riser at the same time? A separate 8-pin cable would of course power the second connector on the GPU itself. This way, all 6 VGA connectors on the PSU would be used to power 3 GPUs and 3 risers.

  1. In short, your plan should work; another way is simply to get 6-pin → Molex cables and purchase 006C risers (or, I believe, 009). You are correct about the maximum specification for the PCIe port, but keep in mind that cards very rarely draw that much through the slot. In your case specifically, I’m seeing around 280-300W at 100% TDP. Not sure if you’re planning on opening up the power limit, but each 8-pin connection to the card is rated for 150W, so your cards will likely draw the vast majority of their power through the 2x 8-pin connections.

  2. Again, I believe this should be fine. It’s definitely recommended that, however you divide the loads, each load is fully powered from one PSU. Though voltages are tightly controlled on these units, there will be small, asynchronous variations that could accelerate component wear over time.

Don’t skimp on anything. You’re looking at ROIs of over a year now on a new 1080 Ti…so you’re going to need them to run a long time in order to get your money back.
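As a quick sanity check on the power budget being discussed here, a short sketch: the ~300W per card comes from this thread, while the ~150W rig overhead and the 80% safe-load target are my own assumptions, so treat the numbers as illustrative.

```python
# Rough power-budget check for a 6x 1080 Ti rig on 2x 1300W PSUs.
# Per-card draw is the thread's worst-case estimate; overhead and
# the 80% load target are assumptions, not measurements.

GPU_WATTS = 300        # worst-case per-card draw at 100% TDP (thread estimate)
RIG_OVERHEAD = 150     # CPU, motherboard, SSD, fans (assumed)
PSU_WATTS = 1300
PSU_COUNT = 2
SAFE_LOAD = 0.80       # keep PSUs at or below ~80% load for longevity

total_draw = 6 * GPU_WATTS + RIG_OVERHEAD
capacity = PSU_WATTS * PSU_COUNT * SAFE_LOAD

print(f"Estimated draw: {total_draw} W")    # 1950 W
print(f"Safe capacity:  {capacity:.0f} W")  # 2080 W
print("OK" if total_draw <= capacity else "Over budget")
```

Even with worst-case draw on every card, the two 1300W units stay under an 80% load target, which supports the "should work" conclusion above.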

Thanks for replying! I just came to think about a possible problem with suggestion #2, though.

The max wattage through an 8-pin connector is 150W, right? If I then connect one 8-pin to my GPU and one 6-pin (running on the same cable) to the riser, this adds up to a theoretical maximum of 150W + 75W = 225W…all on the same cable, directly from the PSU. Are the cable itself and the PSU connection suitable for this?

Yeah, 8-pin to GPU and 6-pin to riser should be fine.

The PSU cables are usually 18AWG and less than 3 ft, which is rated to handle 10A DC. The maximum current per conductor (there is a total of 8 conductors in your PCIe cable) at 225W and 12VDC is 6.25A.



The worst-case current goes through the 6 out of 8 conductors that are shared by the 8-pin and the 6-pin. For a conservative result, let’s ignore the other 2 conductors and only consider the 6 shared by both. The 6-pin has 3 +12VDC loops, and the 225W is the sum over all loops, so in the worst case each loop is only carrying 225W / 3 = 75W, i.e. 75W / 12V = 6.25A.

If you want to include an ampacity derating in the calculation, a factor of 1.25 is still very conservative for an open-air cable.
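Putting that arithmetic in one place, a quick sketch; the 225W load, 10A conductor rating, and 1.25 derating factor are the figures used in this thread.

```python
# Per-conductor current check for the shared 8-pin -> 6-pin cable.
# Worst case 225W (150W to the GPU + 75W to the riser), split across
# the three +12V loops of the 6-pin connector.

WATTS = 225.0          # 150W (8-pin to GPU) + 75W (6-pin to riser)
VOLTS = 12.0
LOOPS = 3              # +12V conductors shared in the worst case
RATING_18AWG = 10.0    # amps, rough rating for a short 18AWG run
DERATE = 1.25          # conservative ampacity factor

amps_per_loop = WATTS / LOOPS / VOLTS
derated = amps_per_loop * DERATE

print(f"Per conductor: {amps_per_loop:.2f} A")          # 6.25 A
print(f"With derating: {derated:.2f} A vs {RATING_18AWG} A rating")
```

Even after derating, 7.81A sits comfortably under the 10A rating, which is the margin the answer above is relying on.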


I’m running 6 1080 Ti cards with v7 SATA risers and a single EVGA T2 1600W PSU, no issues. Showing about 1450 watts from the wall.

2 Asus Strix
2 Gigabyte Aorus
2 Gigabyte Gaming OC

All the cards require 2x 8-pin connectors except the Gigabyte Gaming OC cards, which use 1x 8-pin and 1x 6-pin.

1x 8-pin to 6+2 & 6-pin cables are designed to handle up to 225W (150W via the 6+2-pin connection and 75W through the 6-pin). Downstream cabling on the PSU side is fine to handle this loading. As an example, I power each of my 1080 Tis with a single 8-pin cable that branches into a 6+2 & 6-pin on the load side.

Thanks to all three of you, such great help.

@MinerBob: It might run OK with only a SATA connection to the risers, but since I don’t have to, I’ll go the safest route. These cards will be clocked to the max, and I want to play it safe when it comes to power. I’ve settled on connecting things like this:

For PSU#1:
-For each of the three 1080 Tis there will be 1x 8+6-pin cord split between the card itself and a v006c riser.
-One double Molex cord will power the two extra connectors by the PCI-E slots on the motherboard (Asrock H110 Pro BTC+)
-One SATA cable to a small SSD
-CPU and ATX cable as usual

PSU #2 will power the 4th, 5th and 6th GPU/riser. I plan on linking it to PSU #1 with an add2psu.

According to your replies and what I’ve read so far, I should be good to go!

@Sprucemoose, what does your overclock look like for the 1080 Ti Strix? I have 4 Strix and it doesn’t look good at all!!

Sols/W is about 2.86.

Thanks for creating the topic too! I need to change my setup!

@lbnguyn I’ve done some tweaking, and I’ve settled on the following settings for my two 1080Ti Strix (similar settings for both GPUs):

Power limit 85%, Clock +115, mem +300

One of the cards was able to run at +120 clock speed, but that made it unstable from time to time. All GPUs are a bit different, though, so in order to find the exact sweet spot, you have to do a lot of tedious testing. They are also able to run mem +700, but that didn’t do anything for the hashrate, so I lowered it.

The best option for increasing sol/W would be to lower the power limit; turning it down even further makes for even better efficiency.

Any recommended technique for doing the tedious testing?

I tried setting some random settings and seeing if the programs would run; if not, I would close them, or if the computer froze, I would restart.

I’m thinking of running a benchmark on this mining rig as well, but it might not even be worth trying, right?

In order to do testing, you have to do each GPU by itself. Let all cards run at stock except the one you want to clock. Start with my clock speeds from the post above and make sure it runs stable for some hours. Then increase the core/mem incrementally and do another couple of hours to check for stability…if stable, increase more before running another test. This takes a lot of time.

The other way to do it (what I did) was to start at a “high” overclock that had a good chance of being unstable, then start decreasing until you reach stability. It’s a bit faster this way, as a really unstable GPU will lock up almost instantly (doing it from the bottom up, you have to test each setting for hours).

When you’ve found the settings for that specific card, you’ve got to do it all over again for the next card, and so on.
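The "start high, walk down" approach described above can be sketched as a small loop. Note that `set_core_offset` and `runs_stable_for` are hypothetical placeholders: the real calls depend on your OS and tooling (e.g. MSI Afterburner on Windows, or `nvidia-settings` with Coolbits enabled on Linux), so this is a sketch of the procedure, not a ready-made tool.

```python
# Sketch of the walk-down overclock search: start at a likely-unstable
# core offset and step down until the miner survives a soak test.
# set_core_offset() and runs_stable_for() are hypothetical hooks you
# would wire up to your own overclocking tool and miner watchdog.

def find_stable_offset(set_core_offset, runs_stable_for,
                       start=150, step=5, hours=2):
    """Walk the core clock offset down from `start` in `step` MHz
    decrements until the rig soaks cleanly for `hours`, then return
    that offset (0 means fall back to stock)."""
    offset = start
    while offset > 0:
        set_core_offset(offset)
        if runs_stable_for(hours):
            return offset      # first offset that survives the soak test
        offset -= step         # crashed or froze: back off and retry
    return 0                   # nothing held: run at stock
```

In practice a badly unstable offset locks up within minutes, which is why this walk-down is much faster than stepping up from stock and soaking each level for hours.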

Also, reducing the power limit increases sol/W, but the total sol/s will be lower. If you don’t pay for electricity, you’d be better off maxing sol/s and not worrying about sol/W. Some like to push sol/W as high as it can go; it’s all up to you.