This 2U server holds two compute nodes and has a shared backplane connecting both nodes to 24 dual-path U.2 NVMe drives.
Each compute node has an X11DSN-TS motherboard with an M.2 slot, twelve DDR4 memory slots, and dual LGA-3647 sockets (this one has Xeon Gold 6140 CPUs). For networking, each node has two Intel 1GbE/10GbE RJ45 NICs plus two more Intel 10GbE NICs used strictly for node-to-node communication.
A datastore was created on host-A using one of the U.2 NVMe drives. Host-B didn't pick up this datastore until after a reboot; there should be some way to make ESXi rescan for new or existing local datastores without rebooting. Additional disks were later added to the datastore as extents on host-A, and host-B picked up the size increase automatically, no reboot required.
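ESXi can be told to rescan storage from the command line instead of rebooting. A sketch of the usual sequence, run in an SSH session on the host that hasn't seen the new datastore yet (host-B here); this assumes the standard esxcli and vmkfstools tools on a recent ESXi build:

```shell
# Rescan all storage adapters for new devices (the shared NVMe drives)
esxcli storage core adapter rescan --all

# Rescan the discovered devices for new VMFS volumes
esxcli storage filesystem rescan

# vmkfstools -V triggers the same VMFS volume refresh
vmkfstools -V

# Confirm the datastore is now visible and mounted
esxcli storage filesystem list
```

The same rescan can be done from the UI (Storage > Adapters > Rescan), but the CLI is handy when scripting or when the host client is slow to refresh.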
A VM was stood up on host-A; host-B could not register that VM until it was shut down. After registering the VM on host-B, host-A did not seem to notice anything was different, other than still showing the VM as powered off. After a reboot, host-A shows those VMs as invalid. VMware's VMFS file locking is taking effect in the background: even after shutting the VM down, waiting a few minutes, and refreshing the screen, the VM could not be powered on from host-A. It appears the only way to bring the VM back to the original host is to unregister it and register it again.
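The unregister/re-register cycle can be done with vim-cmd from an SSH session on the original host. A sketch, where the VM ID (12) and the .vmx path are placeholders for this lab's actual values:

```shell
# List VMs registered on this host; the first column is the VM ID
vim-cmd vmsvc/getallvms

# Unregister the stale/invalid VM entry (12 is an example VM ID)
vim-cmd vmsvc/unregister 12

# Re-register the VM from its .vmx file (example path on the shared datastore)
vim-cmd solo/registervm /vmfs/volumes/shared-nvme-ds/testvm/testvm.vmx
```

After re-registering, the VM should power on from this host again, since registration recreates the host's inventory entry and lets it take the VMFS locks cleanly.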