SuperMicro IPMIView

IPMI (Intelligent Platform Management Interface) is a pseudo-standard for managing, accessing, and configuring servers. Companies like SuperMicro, Quanta, and Celestica are pretty good about sticking to those standards, and even HP and Dell allow limited use of them. One might also see/hear the term BMC, which is really just the hardware that runs the IPMI interface.

When dealing with older hardware it can be a real PITA to administer machines. Things like getting the remote KVM/console to work with modern-day browsers can be a real headache. Each manufacturer, and often each model, has its own idiosyncrasies.

IPMItool is a software package one can download, install, and use to issue commands to a remote machine. Think of it like "PowerCLI" for the BMC. Now enter SuperMicro's IPMIView, which takes IPMItool one step further. It is a GUI that allows one to add many systems into a control panel, flip through those systems, and do remote access, management, and configuration. IMHO the biggest feature is having an easy way to get at the remote KVM/console of the machine. No more making SSL and security exceptions for each machine one administers. As the name suggests, this tool is made for SuperMicro systems; however, since it operates on industry standards it also works with many other manufacturers.
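For a taste of the command-line side, here is a rough sketch of what talking to a BMC with IPMItool looks like (the IP address, user name, and password below are placeholders for whatever the BMC is actually configured with):

-ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> chassis status

-ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> sensor list

-ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> power cycle

-ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P <password> sol activate   (serial-over-LAN console)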

https://www.supermicro.com/en/solutions/management-software/ipmi-utilities

IPMIView does require Java to run, and it may require some initial security settings, but that beats making changes for every single machine. Each system is a bit different, so some things work and some don't (e.g., hardware monitoring).






Firmware updates of Dell PowerConnect 6224/6248 switches

Who fools around with switches that are over 15 years old? Well, I do! They still work, and when basic 1Gb connectivity is all that is needed, why not? Besides, new 24-port managed switches aren't exactly free.

Often the firmware on these switches is so old and insecure that, even if the web GUI is enabled, it's a struggle to get it to function properly. Thus it is necessary to do the operation from either an SSH session or, preferably, the local console/serial port. The newest firmware, v3.3.18, is from 2019, so it isn't great in terms of being up to date, but it's better than the original 2007 code!

Here are the basic steps to update the firmware on them.

1. Obtain and unpack the firmware from Dell.  Make note of the location of the files. https://www.dell.com/support/home/en-us/product-support/product/powerconnect-6224/drivers

2. Obtain and install terminal software (e.g., PuTTY or Tera Term). Connect to the switch.

3. Obtain and install TFTP server software (e.g., Tftpd64), and point the server to the location of the unpacked firmware files.

4. From the SSH or serial session, enter these commands:

-en

-copy tftp://address-of-tftp-server/firmware-file-name.stk image

-show ver (make note of which image has the newer firmware)

-boot system image1  (or image2 depending on your system)

-copy running-config startup-config

-reload

5. IF GOING FROM FIRMWARE v2.X TO v3.X: during the boot process, choose option #2 to get the alternate boot menu, then choose option #7 "update the boot code", then do a normal boot. Failure to do this will cause the system to boot loop.

Dell PowerEdge r520 dual vs single CPU

If one ever decides to add a 2nd CPU to a Dell PowerEdge r520, besides the CPU, heatsink, and extra system fan, one also needs a different PCIe riser card.

What?

Turns out the riser cards are different! If one puts a 2nd CPU into a system without replacing the riser, the system works; however, the system LCD will flash amber and give an error: "HWC2005 system board riser cable interconnect failure". The machine still functions just fine, it just has the annoying alerts.




Dell PowerEdge: unable to Web into iDRAC after upgrades

 
After upgrading Dell x40 PowerEdge servers, the web interface for iDRAC may refuse the connection when accessed by hostname. Fortunately one can still get at it using the IP address.

The issue is that the iDRAC webserver compares the hostname in the HTTP request header against what is programmed into the iDRAC. One can either disable this check (which is what I chose, as my servers get moved and re-purposed quite often) or set the value to match the FQDN.
1: SSH into the host
2: racadm set idrac.webserver.HostHeaderCheck 0
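To confirm the change took, the same attribute can be read back (a quick sanity check using racadm's standard get syntax):

racadm get idrac.webserver.HostHeaderCheck   (should come back as 0 after the change)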




VMware iSER iSCSI targets showing up as "unconsumed"

I was swapping out an ESXi host and couldn't get our iSCSI target to mount. The paths and targets would show up; however, it would say the volume was "unconsumed". Which is odd, as the datastores in question are in use by other servers and many VMs live on them. I do have one storage server that has issues with its identifier due to a signature mismatch, and I have to forcefully mount it, but this time that same behavior wasn't present.

Normally from the ESXi shell I issue:

esxcfg-volume -l   (this spits out all of the volumes visible to ESXi and some basic details; normally I would see the volume in question, however this time no volumes were present)

Turns out the issue was MTU! The switch ports, the ESXi VMkernel ports, and the storage server were all set to MTU 9000. However, I missed the virtual switch! It was still at 1500; I changed it to MTU 9000, rescanned the storage adapter, and all was good.
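For anyone chasing a similar issue, here is a rough sketch of the checks from the ESXi shell (vSwitch1, vmk1, and the storage IP below are placeholders for whatever the environment actually uses):

esxcfg-vswitch -l   (lists the vSwitches and their MTU)

esxcfg-vmknic -l   (lists the VMkernel ports and their MTU)

esxcfg-vswitch -m 9000 vSwitch1   (bumps a standard vSwitch to MTU 9000)

vmkping -I vmk1 -d -s 8972 <storage-ip>   (jumbo-sized ping with don't-fragment set; if this fails, something in the path is still at 1500)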

iSER & RDMA don't fragment their packets, they just drop them!  



Windows 2019 Software RAID

Just a quick test... This server has U.2 Samsung MZWLL1T6HEHP NVMe drives and a fresh install of Windows Server 2019. For the first test I took the three NVMe drives and created a dynamic volume (software RAID5). Then I repeated the test with a single drive.

Three drives, software RAID5

Single drive


SuperMicro servers featuring PCIe 5.0 and 200gbps NICs

 Hot off of the factory floor!  SuperMicro ASG-1115S-NE316R

AMD Epyc 9004 series CPU




E3.S form factor drives. For the record, I dislike this form factor because the "fingers" of the drive interface extend beyond the body of the drive. I can see some of these getting broken by sloppy handling and mistreatment.



SuperMicro AI GPU Server

 

SuperMicro ARS-111GL-NHR (G1SMH-G motherboard)

Nvidia A02 72-core GH Grace Hopper 3.4GHz CPU

16896 GPU cores

https://www.supermicro.com/en/products/motherboard/g1smh-g

https://www.supermicro.com/en/products/system/GPU/1U/ARS-111GL-NHR



A little concerned about these human-hair-thick wires just run across the front.






The server is physically very long; it took some cheating to get the server rack door to close with the 200Gb DAC cables attached.

RAID5 vs RAID10

Quick and dirty test. The guinea pig is a Lenovo ThinkSystem SR655 with an Avago RAID 930-24i card with 4GB of cache and four Kioxia 1TB 12Gbps SAS SSDs. VMware ESXi v8 is installed on the host. The four drives were set up in a RAID5 configuration with read-ahead enabled, drive cache enabled, and the cache policy set to Write Back. The RAID virtual drive was presented to VMware, and a Microsoft Developer Windows 11 VM was imported to the datastore using thick provisioning. ATTO disk benchmark software was run with both 4GB and 8GB tests. Then the VM, datastore, and virtual drive were torn down, rebuilt as RAID10, and retested.





I found the results somewhat surprising. We often hear about the "RAID 5 write penalty", or that "RAID 10 is just faster", etc. Well, this test shows the opposite to be true. The write speed on RAID5 is actually better! One theory is that three drives are doing the work of writing in RAID5, whereas in RAID10 only two drives are effectively writing (the other two are just mirroring).
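Some rough back-of-the-envelope math that supports that theory (my assumption: large sequential writes, with the controller's write-back cache hiding the parity work): with four drives, a RAID5 stripe spreads each write across three data drives plus one parity drive, so peak sequential writes land at roughly 3x a single drive. RAID10 writes every block to both members of a mirror, so two mirrored pairs top out at roughly 2x a single drive. The classic RAID5 write penalty would be expected to show up mostly on small random writes, where the read-modify-write of parity cannot be hidden as easily.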


Burnt up Network Card?

I haven't seen this in decades. This Mellanox 100Gb card was causing the server to be unstable. Now, I don't know when the card went bad. The server was repurposed and the 100Gb NIC was added, so I don't know if my co-worker put the NIC in already burnt, or if it burnt up in this Dell PowerEdge R620.

Notice the discoloration in the lower right corner, and that capacitor is also a different color.




Enterprise SAS SSD vs Consumer SSD

Some time was allocated to do a quick test comparing consumer-grade SATA SSD drives to enterprise-grade SAS drives. The SAS drives are 12Gbps, and the SAS card is also 12Gbps. The tests were done on the same Windows 10 desktop with the same LSI SAS card, except where noted that a PowerEdge r730 was used.

Baseline... Samsung Evo 840 1TB SATA SSD on 12Gb SAS controller, Windows 10

Samsung MZL1ls960 960GB SAS on 12Gb SAS controller, Windows 10

Samsung MZL1ls960 960GB SAS on 12Gb SAS controller, on Dell r730 PowerEdge with Perc H730 RAID card w/ 1GB cache, test set to 2GB, Windows Server 2019


Dell MZ-1LT3T8c 3.8TB SAS on 12Gb SAS controller, Windows 10


Impressive results! 12Gb SAS SSD drives are very near NVMe performance, and nearly double that of the SATA drives. Is the performance due to having more cache? Or is it because the SAS drives are 12Gbps vs 6Gbps on the SATA drives? It should be noted that during testing the SAS drives consumed roughly 2 more watts at idle and 5 more watts under load. The SAS drives were also MUCH warmer to the touch, whereas the SATA drives stayed at ambient temperature.

Dell x30 PowerEdge Servers running NVMe

I always assumed that generation 13 PowerEdge servers (r430, 530, 630, 730, 930, etc.) were not able to run NVMe natively. Many people run the M.2 "gum-stick" form factor drives on PCIe-to-NVMe adapter cards.

As it turns out, one can! The parts are not cheap, but it is basically a special PCIe controller card (Dell P31H2 16-port PCIe x16 extender SSD NVMe controller), SAS cables, and a special backplane. The PCIe card runs four cables out to "special-ish" ports on the backplane, and four of the 2.5" drive bays can then run U.2 NVMe drives.















Windows 11 & Hardware

Windows 11 has several hardware requirements that, if they are not met, prevent one from installing the operating system. Or do they?

  
Windows 11 has the following minimum requirements:

-TPM 2.0 module (Trusted Platform Module, used for encryption)

-1GHz 64-bit CPU

-64GB of storage

-4GB RAM

-Graphics card compatible w/ DirectX 12 and WDDM

-720p or better display

Seems more than reasonable, except the real bugger for most of this is the TPM part. That rules out basically any machine older than an 8th-generation Intel Core CPU. So let's say one has a computer with an Intel i7-7700? Or a Dell PowerEdge r640 server running VMware ESXi v7 with Intel Xeon 414r CPUs? Well, those machines are more than fast enough, but they don't pass the TPM check, so Microsoft says "go pound sand, and then go buy a new PC." And what if one is doing virtualization?

Well, one can bypass this hardware check. There is no downside to this; the machine will run Windows 11 without problems. It isn't going to BlueScreen, it isn't going to be slow, it isn't going to blow up; well, at least not because of pseudo hardware requirements. I have done it probably a dozen times and have yet to run into any issues related to the hardware requirements. One machine has even been running since early 2022.

After booting to the Win11 ISO, at the very first screen where it asks about language, time/currency, and keyboard, hit "Shift" + "F10". This will bring up a Command Prompt.

In that Command Prompt window, type: "regedit"

In RegEdit, navigate to: HKEY_LOCAL_MACHINE\SYSTEM\Setup

Create a new key named: "LabConfig"

In LabConfig, create the following DWORD values, each set to "1":

-BypassCPUCheck

-BypassRAMCheck

-BypassSecureBootCheck

-BypassTPMCheck

Close the Regedit and Command Prompt windows and install Windows as normal.  
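If one prefers typing over clicking, the same values can be created straight from that Command Prompt. This is just a sketch of the equivalent reg.exe commands (reg add creates the LabConfig key if it doesn't already exist):

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassCPUCheck /t REG_DWORD /d 1 /f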

https://www.tomshardware.com/how-to/bypass-windows-11-tpm-requirement


Bonus:  If one is using a USB key to install Windows, make the changes, and those settings will stick on the install media, as it is "writeable".  So one only has to do these steps once.

Bonus #2: If one doesn't want to deal with associating the machine to an online Microsoft account, simply don't connect the machine to the internet until setup is complete.

ProxMox notes from a NOOB

Here are a few notes and things that stuck out to me while kicking the tires of ProxMox. Keep in mind these are coming from a VMware admin with very weak Linux knowledge. I will keep adding stuff as I learn.

Uses QEMU/KVM for virtualization

LXC for containers

Corosync Cluster Engine for server communications

Proxmox Cluster File System for cluster configuration

-If the ProxMox v8 installer crashes, see if v7 installs; if it does, then do an in-place upgrade

-Neither v7.4 nor v8.1 seems to recognize Mellanox CX-3 40GbE/InfiniBand network cards

-The vCenter equivalent is just built into the webUI of each host and works on a distributed/cluster model, i.e. there is no appliance to install, no software keys, no dedicated IP. Imagine if the ESXi WebGUI had basic vCenter functions built in, i.e. joining and managing multiple hosts in one interface, vMotion, and replication.

-There are oddities about moving VMs back and forth between LVM and ZFS. A VM built on a ZFS volume cannot live migrate or cold migrate to an LVM volume, and a template that lives on an LVM volume cannot be spawned to an LVM volume, if there is a replication job attached.

-By default a network "bridge" is created. Network cards can be added/subtracted as necessary; it is very much like the "virtual switch" in VMware/ESXi


-The default install will have several "nag screens" about wanting one to have a paid subscription. No judgment here: "gotta pay the bills". The default update repository is from the paid tier; one must disable it and point updates to the "no-subscription" tier to get updates and lessen the nags (see the sketch just below).
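For reference, a minimal sketch of that repository change on a v8 (Debian bookworm) node, done from the shell. The file names and release name assume a stock install; on v7 the release would be bullseye instead of bookworm:

nano /etc/apt/sources.list.d/pve-enterprise.list   (comment out the enterprise "deb ..." line with a leading #)

echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

apt update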

-The ProxMox virtual machine tools, actually the QEMU guest agent and VirtIO drivers (the VMware Tools equivalent), are a separate download. They must be installed for any "thin provisioning" of VM memory; i.e. a Windows VM set to 8GB of RAM running w/o the tools will consume 8GB of RAM on the host. With the tools it will take some amount less.

-That same ISO (proxmox virtio-win-xxxxxx.iso) will most likely be needed for installing Windows. Things like the hard disk controller (depending on which one was chosen at VM creation) will not be seen and will require drivers to be installed; see the note below.
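Based on the layout of the virtio-win ISO (treat the exact folder names as an assumption, since they vary by ISO version and Windows release): at the Windows installer's disk screen, click "Load driver" and browse the mounted ISO to something like vioscsi\w10\amd64 (VirtIO SCSI) or viostor\w10\amd64 (VirtIO block); after install, the NetKVM folder covers the network card and the Balloon/guest-agent pieces cover memory reporting.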

-Replication jobs! If one fails and one wants to delete the job, it may seem to not go away through the GUI. Go to the shell and type "pvesr list" to show the jobs, then "pvesr delete JobID --force"

-A template cannot migrate to a different host unless the name of the storage on both servers is the same.  

-A template cannot be cloned (image spawned from) to a host that doesn't have the same storage name.  If the template exists on local storage it cannot be cloned to another host, and the dropdown box for what storage to use is blank. One has to clone the machine on the same server where the template lives, then migrate the cloned VM.

-One downfall of a cluster: if more than 50% of the hosts are offline, one cannot start a VM. So say, for instance, there is a hardware/power failure, something that brings down half of the hosts, and a VM that is off needs to be powered on. If the cluster doesn't have quorum, the VMs won't start!

-Configuration files for the VMs live here: /etc/pve/qemu-server; if things go very wrong, one can move the config file to another host with:

mv /etc/pve/nodes/<old-node>/qemu-server/<vmid>.conf /etc/pve/nodes/<new-node>/qemu-server

-virtual disks can be accessed here: /dev/<disk name>

Rename VM LVM Storage Name via SSH

cd /dev/pve

lvrename /dev/pve/vm-100-disk-0 /dev/pve/vm-297-disk-0

-There does not seem to be an easy way via the GUI to rename storage.
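A follow-up assumption on my part, based on how the config files reference disks by name: after an lvrename like the one above, the matching disk line in /etc/pve/qemu-server/<vmid>.conf will likely need to be edited so it points at the new volume name, e.g. a line along the lines of:

scsi0: local-lvm:vm-297-disk-0,size=32G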

The 1st icon is a running VM, the 2nd a VM that is being migrated, and the 3rd a template.


v7.4 Cluster
v8.1 Cluster



Things I really like about ProxMox:

-ability to migrate running VMs from one host to another, even without shared storage
-ability to back up VMs
-ability to replicate VMs
-no need for a dedicated management VM/appliance

Challenges:

-I had an unexpected host failure; the VM tried to migrate to a different node. At the time there was only local storage, not even Ceph, and the local node volumes were not named the same. After the node came back online, it had the VM's virtual drive (since the drive couldn't migrate), while another server in the cluster had the config file. Getting things back in line was a real chore; simply moving either the virtual hard drive or the config file was not working. Sure, one could blame it on me for not setting up shared storage, or for not having the datastores named the same. However, why is HA not doing checks before attempting a migration?

-Another unexpected host failure; two nodes are disconnected from the cluster. Nodes 1, 2, 3, and 6 are all up and joined together and report nodes 4 and 5 as offline. Nodes 4 and 5 believe they are online and that the other four nodes are offline. Removing and re-adding the nodes to a cluster is not straightforward and not doable via the GUI.

------------------------------------

-Abbreviated instructions to upgrade from v7 to v8:

    -From the shell of a given node, type:

    -pve7to8

    -apt update

    -apt dist-upgrade

    -pveversion

    -sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

    -apt update

    -apt dist-upgrade
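    One assumption worth double-checking before that final dist-upgrade: any Proxmox repository files under /etc/apt/sources.list.d/ (pve-enterprise.list and/or pve-no-subscription.list) most likely need the same bullseye-to-bookworm edit, along the lines of:

    -sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-no-subscription.list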

------------------------------------

-To remove a damaged node from a cluster:
    From the damaged node:
    -stop the pve cluster service:     systemctl stop pve-cluster
    -stop the corosync service:        systemctl stop corosync
    -restart in single-host mode:      pmxcfs -l
    -delete the corosync config:       rm /etc/pve/corosync.conf
    -delete the corosync folder:       rm -r /etc/corosync/*
    -delete references to other nodes: rm -r /etc/pve/nodes/*
    ***VMs living on the damaged server will be lost... the virtual hard drives will still be there
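On top of that (an assumption based on standard pvecm usage, worth verifying against the ProxMox docs for your version): from one of the healthy nodes, the dead node can then be dropped from the cluster membership with:

pvecm delnode <node-name>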

------------------------------------

qm list (shows VMs)

qm start/reboot/reset/stop vmID (starts / safely shuts down and restarts / hard resets / hard powers off a VM)

------------------------------------

To change a server name:
edit the following files:
nano /etc/hosts
nano /etc/hostname
nano /etc/postfix/main.cf
reboot now

------------------------------------
To restart all of the ProxMox services on a node:
service pve-cluster stop
service corosync stop
service pvestatd stop
service pveproxy stop
service pvedaemon stop
and then
service pve-cluster start
service corosync start
service pvestatd start
service pveproxy start
service pvedaemon start

--------------------------
If one has a cluster and some of the nodes are offline for a long time (like if one is attempting to be energy-usage conscious, or similar), when such a node comes up the cluster database will be out of sync, and the recently powered-up node will think it has a master copy of the cluster configuration. Be patient and wait. The nodes will sort themselves out, and the copy of the cluster database with the highest revision number will be replicated around.

------------------------
Clusters & Quorum!
If one has a cluster with 50% or more of the server nodes down, VMs cannot be changed or powered up. The cluster database needs to have half of the servers plus one up and online. In my case I had six nodes; all of the VMs were consolidated onto three nodes, and the vacated hosts were shut down to save energy. This caused problems until a 4th host was powered back on. This could make for very interesting disaster recovery scenarios.
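A possible escape hatch I have read about but not leaned on myself (so treat it as an assumption to verify first): "pvecm expected <votes>" temporarily lowers the number of votes the cluster expects, e.g. "pvecm expected 3" on a six-node cluster with only three nodes up, which should let the surviving nodes regain quorum long enough to start VMs.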


SAS cable connectors

 Just some general information that might be useful.

This is a "mini-SAS" SFF-8087.

This is also a "mini-SAS", SFF-8643, typically seen in SAS 12g applications.

This is an OCuLink PCI-e SAS SFF-8611 4i; as seen on a PowerEdge r640.

The cable on the left is a "double-wide" SAS cable, used on SOME HP and Dell RAID cards, in this case an HP Smart Array P440.

The cable on the right is a PCI-e Slimline SAS SFF-8654 8i cable.