Installing Debian 8 in an SSD RAID 1 environment fails at the GRUB install step, yet the same environment, same hardware, using HDDs completes successfully.

How can this be?

ECS H110-C3D MB
Startech Marvell 88SE92XX PCIe controller

Have tried both SanDisk and ADATA SSDs; both fail at the GRUB install.
Using WD traditional (spindle) HDDs works successfully.
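For reference, when the installer's generic "grub install failed" dialog comes up, the failing step can be re-run by hand from the debian-installer shell (Ctrl+Alt+F2) to capture the real error; a hedged sketch, assuming the target system is mounted at /target and the BIOS boots /dev/sda:

```shell
# Sketch only: re-run the GRUB step manually to see the underlying error.
# Assumptions (not from the original post): we are inside the
# debian-installer environment, the new system is mounted at /target,
# and the boot disk is /dev/sda.
if [ -d /target ]; then
    chroot /target grub-install --recheck /dev/sda
    chroot /target update-grub
    result="ran grub-install against /target"
else
    result="skipped: not inside the debian-installer environment"
fi
echo "$result"
```

The messages grub-install prints here are usually far more specific than the installer's dialog, which helps narrow down whether the failure is in probing the controller's devices or in writing the boot sector.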

Yes, 8.5. No, I didn't file a bug report; honestly, I'm not sure what's at fault.

Is it the motherboard and its BIOS? It is flashed to the latest version.

Does the problem lie with the controller? It is a cheap controller that uses a Marvell chipset. Perhaps an Adaptec RAID controller would do the trick?

What I don't understand is the success with traditional hard drives. Same RAID 1 configuration, same hardware. Somewhere in the communication chain (bus) the SSDs are being seen differently than a traditional HDD.

I'm not convinced it is an OS problem. Trying to install Windows 7 on the SSD RAID 1 config with the same hardware yields a BSOD. But is it the controller or the MB?

That's not a good sign if you BSOD on the other OS too. You've flashed to the latest, so something's up. Try one last time with the old tech trick: one stick of RAM, and see what happens. Sadly, I don't think it's the controller yet, but clues are sparse and others have done RAID with SSDs.

Let me broach the old "RAID 1 didn't save us" topic. In two decades I have only seen pain with RAID 1. For some reason folks get the idea that if they run RAID 1 they don't need a full backup. Odd thinking there. And many RAID 1 setups don't work as we used to expect them to; that is, you unplug one drive and it keeps working.

The BSOD only happens with the SSD RAID 1, not the HDD RAID 1.
This dedicated server is for a client, and they are only paying for 2 GB of DDR3, so there is only one stick. That stick has passed testing. I thought of that too.

We have had tremendous luck with RAID 1 and RAID 5 over the years. It is an option we offer for our dedicated hosting service, along with daily full backups with retention.

I'm new to SSDs, and we have only just started implementing them in production servers. I am not fully aware of how they function logically or how they communicate with a controller versus how spindle HDDs communicate. I hate being in this situation where I know something is wrong but lack the expertise to identify the cause.

We have implemented SSD RAID 1 configurations previously, but didn't have this issue. The difference was in the MBs and controllers. I think this is the first situation where we are using a PCIe controller and/or a controller that is not 100% hardware and backed by an authoritative company such as LSI or Adaptec.

Unfortunately, we only have a couple of PCIe RAID controllers on hand, and they are all cheap. The other prominent controllers we have are PCI, and this MB does NOT have PCI slots.

This is why I think you should report it as a bug. As you may or may not know, some code may have race conditions, and you may have uncovered such a bug.

OR it's just something in the controller's code. To test that out, swap the motherboard for another model.

On the far outside chance: did you check whether all the SSDs have current firmware? I don't see this detail in your post.
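For what it's worth, the firmware revision is easy to read with smartctl from the smartmontools package; a sketch (the device names /dev/sda and /dev/sdb are assumptions, not taken from your post — adjust to wherever the SSDs appear):

```shell
# Sketch: print each drive's firmware revision via smartctl (smartmontools).
# The device names below are assumptions; adjust them to your setup.
report=""
for dev in /dev/sda /dev/sdb; do
    if command -v smartctl >/dev/null 2>&1 && [ -b "$dev" ]; then
        fw=$(smartctl -i "$dev" | grep -i '^firmware')
        report="$report$dev: ${fw:-unknown}
"
    else
        report="$report$dev: skipped (smartctl missing or device absent)
"
    fi
done
printf '%s' "$report"
```

Note that behind some RAID controllers the physical drives are hidden from the OS, and smartctl needs its -d device-type option to reach them.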

I did not... But again, I used... er, attempted RAID 1 using two sets of SSDs, two SanDisks and two ADATAs, and both give the same result.
Seems unlikely to me that this could be anything other than the controller not communicating properly with the MB and SSDs, for whatever reason.

I am really feeling like the bus communication is different for SSDs versus HDDs.

I didn't ask earlier because many vendors won't reveal details, so I was hoping to see whether there was any new firmware for any of the parts involved. Remember, while I've seen something like this before, it doesn't really apply: that was years ago, and your gear is from a quite different generation.

Looks like you just get to file a bug report. Remember the words "race condition".
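If you do file it, debian-installer failures go against the installation-reports pseudo-package; a sketch using reportbug, run from any working Debian machine (attaching the installer's /var/log/syslog from the failed run helps the maintainers):

```shell
# Sketch: open a Debian bug for a failed install. reportbug is interactive,
# so only launch it when it exists and we actually have a terminal.
if command -v reportbug >/dev/null 2>&1 && [ -t 0 ]; then
    reportbug installation-reports
    status="reportbug launched"
else
    status="skipped: reportbug missing or no interactive terminal"
fi
echo "$status"
```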