SuperMicro BigTwin SSG-2029-DN2R24L...and VMware ESXi Part#1


This 2U server holds two compute nodes and has a shared backplane that gives both nodes paths to 24 dual-path U.2 NVMe drives.



Each compute node has an X11DSN-TS motherboard with an M.2 slot, twelve DDR4 memory slots, and dual LGA-3647 sockets (this one has Xeon Gold 6140 CPUs).  For networking, each node has two Intel 1Gb/10Gb RJ45 NICs plus two more Intel 10Gb NICs that are strictly for node-to-node communication.


This system poses some questions.  How will VMware react to having two hosts accessing the very same drives?  Is VM failover possible without vCenter?  What will vCenter think about the shared datastore?  How will Storage vMotion work?  And what about those internal NICs?

ESXi 7 was installed onto the onboard M.2 NVMe drive.  Virtual switches were set up: one for the add-in PCIe 100Gb NICs, one for the two onboard 1/10Gb NICs, and a third for the two internal NICs (the ones for node-to-node communication).  A single VM was created on each host and connected to the internal switch.  It was confirmed that a VM on host-A could communicate with a VM on host-B.
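For future reference, the same switches can also be built from the ESXi shell with esxcli; a minimal sketch, where vSwitch-internal and vmnic4 are placeholder names for the internal switch and one of the node-to-node NICs:

esxcli network vswitch standard add --vswitch-name=vSwitch-internal <--create the switch
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-internal --uplink-name=vmnic4 <--add the internal NIC as an uplink
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-internal --portgroup-name=Internal <--port group to connect the VMs to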

A datastore was created using one of the U.2 NVMe drives on host-A.  Host-B didn't pick up this datastore until after a reboot; there is presumably a way to make ESXi rescan for new/existing local datastores without rebooting.  Additional disks were then added to the datastore as extents on host-A.  Host-B automatically picked up the size increase, without having to reboot.
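For next time: ESXi can be told to rescan without a reboot, either through the host UI's "Rescan Storage" action or from the shell.  A hedged sketch, untested on this box:

esxcli storage core adapter rescan --all <--rescan all storage adapters for new devices
esxcli storage filesystem rescan <--look for new VMFS volumes on known devices
vmkfstools -V <--older way of triggering the same VMFS rescan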

A VM on host-A was stood up; host-B could not register that VM until it was shut down.  After registering the VM on host-B, host-A did not seem to care or know anything was different, other than still showing the VM as powered off.  Rebooting host-A shows those VMs as invalid; VMware's file locking is taking effect in the background.  After shutting down the VM on host-B, waiting a few minutes, and refreshing the screen, that VM still could not be powered on from host-A.  It appears the only way to bring the VM back to the original host is to unregister the VM and register it again.
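For reference, the register/unregister dance can be done from the shell with vim-cmd; a sketch, where the VM ID and the .vmx path are placeholders:

vim-cmd vmsvc/getallvms <--list the VMs registered on this host, with their IDs
vim-cmd vmsvc/unregister 1 <--unregister VM ID 1 from this host
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx <--register that .vmx on the other host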




HP Smart Array P420 RAID card notes

 A couple of quick notes...

The controller WILL support drives larger than 4TB.  I have used both Seagate ST6000NM0034 6TB drives and Seagate Exos 12TB SAS drives.

The ROM utility will misreport the drives' capacity, but do not worry.


iLO, however, will see the drives correctly.

HP Storage Administrator also reports the drives correctly.
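For checking what the controller itself sees, HPE's ssacli utility (where installed) can dump its view of the drives; slot=0 here is a placeholder for the controller's actual slot:

ssacli ctrl all show config <--summary of arrays, logical drives, and physical drives
ssacli ctrl slot=0 pd all show detail <--per-drive detail, including the reported size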


If for some reason one needs to re-order the drives, the RAID card will figure it out: it will notify one on boot that the drive order was changed and has been corrected.  In my case I was originally running 3.5" drives bolted into the case, not in the drive cage.  Through a bunch of trades/migration I was able to acquire a drive cage.  Somehow, despite taking notes of each drive's position on the SAS breakout cable and matching that to the drive cage, the order still wasn't correct.


 
Dealing with large drives can be a PAIN!  This is in no way unique to HP, but when one modifies the RAID structure, or has a drive failure, all the recalculation takes forever.  In my case, migrating a RAID1 (mirror) of two 12TB 7200rpm SAS drives to a three-drive RAID5 took over 24 hours, and that was just the first step.  That is correct, there are two steps: one must first recalculate the array, then run another recalculation for the logical drive, which also takes forever and a day.  So this is another reason why it might be better to buy more drives of a smaller size vs. fewer, larger drives.  Also keep in mind the type of RAID one is using: RAID-5 will take more processing than RAID-10 and RAID-1.  RAID-6 also deserves a good look, where available.
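For the record, a rough sketch of that two-step migration using ssacli; the slot number, array letter, and drive address (1I:1:3) are placeholders:

ssacli ctrl slot=0 array A add drives=1I:1:3 <--step 1: expand/transform the array onto the new drive
ssacli ctrl slot=0 ld 1 modify raid=5 <--step 2: migrate the logical drive to RAID-5
ssacli ctrl slot=0 ld 1 show <--check the transformation's percent-complete as it grinds along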




NVMe namespace notes:

 Just a bunch of notes for my future self, as I am sure I will forget, and for anyone else who may benefit. 

What is a namespace?  Think of it just like a partition on a normal hard drive, except the partitioning is done at the drive level, not at the operating system level.  Thus the operating system sees the namespaces as unique hard drives, i.e. if one had a 1TB NVMe drive and set up four 256GB namespaces, Windows would think there were four 256GB NVMe drives in the system.
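From Linux's point of view (a sketch; device names assumed), the controller and each namespace get their own device nodes:

/dev/nvme0 <--the controller itself (a character device)
/dev/nvme0n1 <--namespace 1, a block device that can be partitioned/formatted
/dev/nvme0n4 <--...through namespace 4, each looking like its own drive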

FROM LINUX (PartedMagic was used):

nvme list <--show what NVMe namespaces are present and the model of the drive

nvme id-ctrl /dev/nvme0 | grep mcap <--shows the tnvmcap (total NVM capacity) and unvmcap (unallocated NVM capacity) attributes

nvme id-ctrl /dev/nvme0 | grep cntlid <--show the controller ID(s)

 

DELETE NAMESPACES

nvme detach-ns /dev/nvme0 -n 1 --controllers=0 <--detaches the namespace "1" from the controller "0"

nvme delete-ns /dev/nvme0 -n 1 <--deletes the namespace "1" from the drive

nvme ns-rescan /dev/nvme0 <--rescans the drive "0"

 

CREATE NAMESPACES

nvme create-ns /dev/nvme0 -s 26214387 -c 26214387 -b 4096  <--creates a 100gb namespace, formatted to 4k
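For future me: -s and -c are block counts, not bytes, so the sizing math works out roughly as:

100gb = 107,374,182,400 bytes (as GiB)
107,374,182,400 / 4096 = 26,214,400 blocks (the value above is a few blocks shy of that)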

nvme ns-rescan /dev/nvme0 <--rescan
 

nvme attach-ns /dev/nvme0 -n 1 -c 0x1  <--attach the namespace to the controller (use the cntlid found earlier)
 

nvme ns-rescan /dev/nvme0  <--rescan


Ignore this stuff....just notes to me...

nvme create-ns /dev/nvme0 -s 7499000000 -c 7499000000 -b 512

11,995,709,440

3,840,755,982,336 <--cap

---- 4096 ----

1,500,295,305 <--cap/256
1,000,196,870 <--cap/384 == too big
960,188,995 <--cap/400 == too big
936,769,751 <--cap/410 == 3.84
923,258,649 <--cap/416 == 3.78
857,311,603 <--cap/448 == 3.51tb
750,147,652 <--cap/512 == 3.08tb
93,768,456 <--cap/4096 == 384gb
26,214,387 <--example == 100gb

---- 512 ----

93,768,456 <--cap/4096 == 48gb
948,334,810 <--cap/410 == 485gb
1,500,295,305 <--cap/256 == 768gb
3,800,000,000 <--1950gb
5,000,000,000 <--2560gb
6,500,000,000 <--3330gb
7,500,000,000 <--3840gb


 



NVMe U.2 drives in machines without U.2 backplanes

 I just discovered these and figured they were worth sharing.

NVMe drives in the 2.5" hard drive form factor (versus the more familiar M.2 "gum stick" variety) look just like a normal SAS hard drive.  However, the pinout is slightly different, and U.2 needs a backplane and cabling that connect to the PCIe bus rather than to a SAS controller, i.e. one cannot take a server physically configured for SAS/SATA drives and simply use these U.2 drives.

This is where this adapter comes in.  Mount the drive to the PCIe card, plug the card into the computer, and one can now use that drive.  Another feature of this specific card is that one could use a normal 2.5" SATA hard drive on it as well; one just needs to feed a SATA cable to the card.  Both are very handy options for, say, adding a single 2.5" drive to a machine that doesn't have any extra drive bays, or whose bays are configured for 3.5".
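Once the card is in, the drive should be visible like any other NVMe device; a quick check from Linux:

lspci | grep -i non-volatile <--the drive shows up as an NVMe (Non-Volatile memory) controller on the PCIe bus
nvme list <--and as a drive, once the NVMe driver binds to it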