They definitely made more than a little money from this. For example, my ex–mother-in-law kept paying for AOL dial-up after she was already paying for AT&T DSL, thinking that was the only way she could keep using AOL. And yes, she would still log in through the AOL browser.
I did something similar: I patched a Gigabyte Z77-DS3H BIOS with the SAMSUNG_M2_DXE module to get a Samsung SM951 AHCI M.2 SSD working. It's not SATA and not quite NVMe; PCIe AHCI existed for a short while before NVMe became ubiquitous.
This was to enable a screamer of a Hackintosh based on Mavericks, which didn't have native NVMe support at the time.
…AHCI is the SATA controller standard; how and why did they put in extra effort to make it not work?!? (I'm not questioning that they did in fact break it; it's in line with other dumb things HW vendors do… just… ugh!)
Well, M.2 SATA would be using one lane as SATA, with an AHCI controller elsewhere (probably embedded in the chipset or CPU).
PCIe AHCI has the controller on the M.2 device itself, so maybe better-than-SATA speeds? But I think to boot from an unexpected AHCI controller you might need a boot ROM? And why would you put a boot ROM on a device that's all about storage?
> But I think to boot from an unexpected AHCI controller you might need a boot ROM?
The whole point of AHCI was that you wouldn't need a boot ROM. You already have firmware support for talking to an AHCI controller to find drives and locate an operating system, because that's how the system boots off the built-in SATA ports. The firmware modules required to do the same with an AHCI add-in card are the firmware modules you're already using, so the add-in card doesn't need to bring them along in an option ROM. Same for drivers in the OS.
Most likely, the motherboards in question simply never implemented code to probe for any other AHCI devices beyond the built-in one, and would have been equally unable to boot from an AHCI controller card providing more SATA ports.
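For what it's worth, "probing for other AHCI devices" just means walking PCI config space looking for the AHCI class code (base class 01h, subclass 06h, prog-if 01h) rather than hardcoding the chipset's built-in controller. A minimal sketch of that enumeration, using Linux sysfs as a stand-in for what firmware does over config space (the function name and the use of sysfs are illustrative, not how any particular firmware is written):

```python
from pathlib import Path

# AHCI controllers advertise a well-known PCI class code:
# base class 01h (mass storage), subclass 06h (SATA), prog-if 01h (AHCI).
AHCI_CLASS = 0x010601

def find_ahci_controllers(pci_root="/sys/bus/pci/devices"):
    """Return the PCI addresses of every device advertising the AHCI
    class code -- built-in or add-in, no option ROM required to spot it."""
    found = []
    for dev in sorted(Path(pci_root).iterdir()):
        class_code = int((dev / "class").read_text(), 16)
        if class_code == AHCI_CLASS:
            found.append(dev.name)
    return found
```

A firmware that only ever drives the one controller it knows about at a fixed address, instead of doing a sweep like this, would explain the behaviour described above.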
Definitely was faster than standard SATA but nowhere near NVMe potential. It still came with all the AHCI warts: AHCI was optimized for spinning disks, with a single queue of 32 commands per port, while NVMe supports up to 64K queues of up to 64K commands each, and so on and so forth. The major limiting factor was running SATA-era command structures over PCIe.
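To put rough numbers on that queue-depth gap (back-of-the-envelope only; the NVMe figures are the spec's upper bounds, not what any given drive actually exposes):

```python
# AHCI: each port has one command list of 32 slots, tracked as a 32-bit
# bitmap (the PxCI register), so at most 32 commands in flight per port.
ahci_inflight = 1 * 32

# NVMe: the spec allows up to 65535 I/O submission queues, each up to
# 65536 entries deep, so the theoretical ceiling is enormously higher.
nvme_inflight = 65535 * 65536

print(f"AHCI max commands in flight:  {ahci_inflight}")
print(f"NVMe theoretical ceiling:     {nvme_inflight:,}")
```

In practice drives and OSes use far fewer queues than the ceiling, but even a handful of per-CPU queues removes the lock contention and 32-slot bottleneck that AHCI imposes.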
My guess is that at the time it was a familiar protocol stack, which made it easier to retrofit onto legacy systems before NVMe matured. AHCI was already well supported across the major OSes (Windows, Linux, macOS), but UEFI firmware still needed to understand this abomination.