[GPU Problems] PCIe 1x slots don't recognize GPUs, while the 16x slot (even via risers) does

I’ve been mining crypto for a few months without a problem. A one-GPU system isn’t really worth the inefficiency, so I decided to upgrade. I got a good price on two more RX470s and a GTX1070. Here is my current functional setup:

MB: ASRock H81 Pro BTC R2.0
CPU: Intel Pentium G3260
GPU: Sapphire Radeon RX470
RAM: 1 x 8GB stick of some kind
PSU: Corsair AX1200i
HD: Intel 180GB SSD
Risers: Powered – 6 x eBay (SATA connection), 2 x local vendor (Molex connection)

UPDATE: New USB 3.0 cables make no difference. New risers arriving tomorrow, and a new MB the day after. Will update if this solves it. If not, it must be a PSU problem.

I have tested each of the risers and GPUs in the 16x PCIe slot and they all work. I have an HDMI cable plugged into the GPU in the 16x slot (this works whether or not a riser is in between). This leads me to believe the problem has something to do with the MB and not the risers themselves. Here is what I’ve tried:

• Removing the MB battery
• Resetting CMOS (many, many times)
• Disconnecting SSD
• Using both, and then only one, of the MB’s 4-pin Molex power connectors
• Both Linux and Windows 10 to rule out OS problem
• In BIOS:
o Changing the PCIe link speed from Auto to Gen1
o Choosing PCIe as the default display output instead of onboard
o Disabling the iGPU
• Dummy HDMI plugs in the GPUs attached to 1x risers
• Double-checking that everything is plugged into the proper spot on the PSU
• The same steps with the GTX1070
• Installing several different versions of AMD drivers including the mining edition

• Is it possible that different risers or USB 3.0 cables could fix my problem?
• Do I need to jump the 1x slots?
• Anything else I haven’t tried?

The fans on the GPUs that aren’t recognized don’t spin, but the cards heat up.
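Since Linux was one of the test OSes, a quick way to tell an enumeration problem (board/riser/slot) apart from a driver problem is to check what the kernel actually sees on the PCI bus. This is just a sketch, and it assumes `pciutils` is installed:

```shell
# List every display device the kernel enumerated on the PCI bus.
# A riser-mounted GPU missing from this list was never detected by
# the board, so no driver install or OS setting can fix it.
if command -v lspci >/dev/null 2>&1; then
  lspci -nn | grep -Ei 'vga|3d|display' \
    || echo "no display devices enumerated"
else
  echo "install pciutils first (e.g. sudo apt install pciutils)"
fi
```

If a card shows up in this list but its fans don’t spin, power delivery is the usual suspect; if it doesn’t show up at all, the slot, the riser, or the BIOS PCIe settings are.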

Tl;dr: my risers all work fine in the 16x PCIe slot but don’t work in the 1x slots despite trying the bulleted suggestions, so I cannot get more than one GPU recognized on my rig.

Please excuse the rat’s nest of cords.

Thanks for the help

[img]//cdck-file-uploads-global.s3.dualstack.us-west-2.amazonaws.com/zcash/original/2X/1/10453b7cc0c19b976f4468d765772df2261350cd.jpg[/img] [img]//cdck-file-uploads-global.s3.dualstack.us-west-2.amazonaws.com/zcash/original/2X/2/2f0df611e968c39a170fb5951820e6fe779d42f5.jpg[/img] [img]//cdck-file-uploads-global.s3.dualstack.us-west-2.amazonaws.com/zcash/original/2X/4/43e310763c90f0670c20eb8e24f09cb131877407.jpg[/img]

#1: Yes, I had almost the same problem. It ran stable for about 2 months, then after a reboot it just did nothing. After a few hours of testing I came to the conclusion that one PCIe riser was bad.
#2: Well, no. You don’t need the 1x slot; you can stick it in any kind of PCIe slot (some older motherboards do have different kinds of slots; look it up in the manual). And you don’t need to use that 1x riser if you have enough PCIe lanes in your CPU.
Most CPUs only have 16 lanes, and most of the time this means you can only use 2 GPUs with it.

Interesting. To clarify: my MB has one 16x PCIe slot and five 1x slots. All GPUs function properly in the 16x slot. I am trying to use the 1x slots with risers because that is the only way to utilize the rest of my GPUs.

My motherboard had a problem after 2 months. I have 5 GPUs and an M.2 SSD. Yesterday it froze, and after a reset the M.2 slot and the last two PCIe slots stopped working. I cleared CMOS and then two more PCIe slots died. I put the motherboard in the box and sent it to MSI :frowning:

It all started 3 weeks ago with strange drops in sol/s on one of the cards.

Maybe the PCIe slots are dead.

Sorry to hear that. Although that is potentially good news for my situation. How many MBs have you gone through?