Dell 13th gen. servers and NVMe/Bifurcation

 In case anyone is wondering, the Dell 13th generation servers can do PCIe Bifurcation.  

So, first question for those not in the know: what the heck is PCIe Bifurcation?  In the simplest terms, it takes a PCIe slot and sub-divides it into multiple slots.  I.e. a PCIe x16 slot can be divided into two x8 slots or four x4 slots.
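
As a rough illustration of what that looks like from the operating system's side: once an x16 slot is split four ways, each NVMe drive behind it shows up as its own PCI device with an x4 link.  This is only a minimal sketch, assuming a Linux host and nothing beyond the standard sysfs paths (nothing Dell-specific):

```python
#!/usr/bin/env python3
"""Sketch: list NVMe-class PCI devices and their negotiated link width.

Assumes a Linux host; reads only standard sysfs attributes.  On a
bifurcated x16 slot with a passive multi-M.2 card, each drive should
appear as its own PCI device with an x4 link.
"""
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")

for dev in sorted(PCI_ROOT.iterdir()):
    try:
        pci_class = (dev / "class").read_text().strip()
    except OSError:
        continue
    # 0x0108xx = mass-storage controller, NVMe subclass
    if not pci_class.startswith("0x0108"):
        continue
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        width, speed = "?", "?"
    print(f"{dev.name}: NVMe controller, link x{width} @ {speed}")
```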

Ok, great, but why would one want to do this?  Well, many people have found the wonders of NVMe M.2 drives and run them on an NVMe M.2 PCIe adapter card.  Sometimes one wants to run multiple M.2 drives, but there simply aren't enough slots available.

A quick search on one's favorite shopping place for IT gear will show a number of cards that allow one to put two to four M.2 NVMe drives on a single card.  However, in order to use them, the PC must support bifurcation, and without it the computer will only see the first drive.  For the record, there are cards that can run more than one M.2 NVMe drive even if the PC doesn't have bifurcation support; some have a RAID controller chip on them, and some have what is basically a PCIe switch on them.  They are not very commonplace and are pricey.
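
A companion check is simply counting how many NVMe controllers the kernel registered: with one of the passive dual/quad M.2 cards in a non-bifurcated slot, typically only the first drive will be listed.  Again a hedged sketch, assuming a Linux host and the standard /sys/class/nvme layout:

```python
#!/usr/bin/env python3
"""Sketch: count the NVMe drives the kernel actually sees.

If a passive dual/quad M.2 card is installed but bifurcation is off,
usually only the drive in the first M.2 slot will be listed here.
Uses standard Linux sysfs paths; nothing vendor-specific is assumed.
"""
from pathlib import Path

nvme_root = Path("/sys/class/nvme")
controllers = sorted(nvme_root.glob("nvme*")) if nvme_root.exists() else []

for ctrl in controllers:
    model = (ctrl / "model").read_text().strip()
    pci_addr = (ctrl / "address").read_text().strip()
    print(f"{ctrl.name}: {model} at {pci_addr}")

print(f"{len(controllers)} NVMe controller(s) detected")
```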

The case study here is a Dell PowerEdge r930 with a $20 dual M.2 card from Amazon.


Note: depending on the card, slot six might not be usable, as the card hits the screw for the heat sink.

The setting change.  Also, even though this machine has something like 10 PCIe slots, only six of them have the bifurcation setting.


Dell r720xd and VMware ESXi 8.0

-Even though the server and its CPUs are not on the HCL (Hardware Compatibility List), it does work, even with a Xeon E5-2600 v0 CPU (with the CPU bypass setting, typically the allowLegacyCPU=true installer boot option).  The Perc H710 mini mono RAID card is recognized.  The onboard NICs are recognized.

-Mellanox ConnectX-3 40Gbps network cards are no longer supported

-There are three different hardware revisions of the Perc H710; the firmware and drivers are all the same.  The third revision is actually PCIe Gen 3; however, by default it is set to Gen 2 mode.  The change is easily made in the RAID card settings.  Thanks to the Art of Server for pointing that out.

-The XD model has twelve 3.5" hard drive bays.  To make that work, there is no real estate left for an optical drive or the LCD screen normally seen on PowerEdge servers.  There is an optional "flex-bay" kit which adds two 2.5" bays at the rear of the machine.  Interestingly, adding the flex-bay disables the two onboard SATA connectors, even though the drives in the flex-bay are run by the HBA/RAID controller.









 

HP ProLiant DL80 Gen. 9

The Hewlett Packard ProLiant DL80 Generation 9 is an entry-level server with many corners cut to make it cheaper than typical enterprise hardware.

Here are a few notes:

-only one Power Supply

-no dedicated iLO port

-eight memory slots, and only four can be used in a single-CPU configuration

-the base model has no riser cards, and can only use three PCIe cards with a single CPU installed and five with two CPUs.

-there are three(?) options for adding riser card(s), one of which is a GPU kit that includes another fan

-converting to dual CPUs will need at least two more fans, three to make it "redundant".  It will operate without the extra fans, but it will complain about it at POST and run the existing fans at full speed.

-the built-in RAID controller is a B140i.  It has two SFF-8087 ("mini-SAS") connectors and two normal SATA connectors, and all of those connectors are controlled by the RAID card.  I.e. if one plugs a single drive into a normal SATA port, one needs to go through the HP Smart Storage Administrator and make a RAID0 in order to use it.  It only does SATA; SAS is NOT supported.  RAID5 is supported!  System RAM is used for cache.  Interestingly, Windows sees the virtual volume as intended; however, my Linux utility, Parted-Magic, as well as the VMware installer, sees all of the drives individually (see the sketch after these notes).  The card can have its personality switched to make it a standard SATA controller.

-The standard backplane is four-port, despite there being 8 drive bays.  One can just add a second four-port backplane; the physical retention is there, as is a second special power lead.  There also appears to be a single 8-bay option.  The standard backplane also claims to not be hot-swappable; I am not sure if that is because of the B140i controller or the backplane itself.  Speaking of drive bays, the 3.5" bays do not have any LED indicators.  The backplane is connected to the system board via an SFF-8087 connector (at the motherboard end) and a four-port SATA breakout cable running to the four ports of the backplane.

I ordered an HP 790487-001, which is said to work for the DL60, DL80, and DL120.  This four-port SAS/SATA backplane differs in that it has an SFF-8087 connector and little "fingers" to operate the LEDs on the drive sleds.

-VMware ESXi v7 does see the storage and the network cards.  However, whereas Windows recognizes the RAID logical volumes, ESXi does not.  In this case a four-drive RAID5 was set up, and VMware only sees four individual drives, presumably because the B140i is host-based (software) RAID, so the array is only visible where HPE's driver is loaded.
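
As a rough check from the Linux side (e.g. a Parted-Magic live session), one can list the block devices and the model strings they report: if the B140i logical volume were being presented, one would expect a single volume device rather than each physical SATA drive showing up on its own.  A minimal sketch, assuming a Linux environment and only standard sysfs paths (nothing HPE-specific):

```python
#!/usr/bin/env python3
"""Sketch: list block devices with their vendor/model strings and size.

Run from a Linux live environment.  Seeing each physical SATA drive
listed individually (rather than one logical volume) suggests the OS
is talking to the plain AHCI controller rather than the B140i array.
Standard sysfs paths only; nothing HPE-specific is assumed.
"""
from pathlib import Path

for disk in sorted(Path("/sys/block").iterdir()):
    dev = disk / "device"
    if not dev.exists():  # skips loop, ram, md, dm style devices
        continue
    model = (dev / "model").read_text().strip() if (dev / "model").exists() else "?"
    vendor = (dev / "vendor").read_text().strip() if (dev / "vendor").exists() else "?"
    size_blocks = int((disk / "size").read_text().strip())  # 512-byte sectors
    print(f"/dev/{disk.name}: {vendor} {model} ({size_blocks * 512 / 1e9:.1f} GB)")
```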





Original backplane on the left, upgraded one on the right

Notice the "fingers" that operate the drive sled LEDs

Both backplanes installed; note the SFF-8087 cable isn't installed yet.