Windows 11 & Hardware

Windows 11 has several hardware requirements, and if they are not met, one cannot install the operating system.  Or can you?

  
Windows 11 has the following minimum requirements:

-TPM 2.0 (Trusted Platform Module)

-1 GHz 64-bit CPU

-64 GB of storage

-4 GB of RAM

-Graphics card compatible with DirectX 12 and WDDM

-720p or better display

Seems more than reasonable, except the real bugger for most of this is the TPM part.  That rules out basically any machine older than an 8th-generation Intel Core.  So let's say one has a computer with an Intel i7-7700? Or a Dell PowerEdge r640 server running VMware ESXi v7 with Intel Xeon 414r CPUs?  Those machines are more than fast enough, but they don't pass the TPM check, so Microsoft says: "Go pound sand, and then go buy a new PC."  And what if one is doing virtualization?
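As an aside, if one is not sure whether a given machine even has a TPM, both of these are built into Windows 10/11 (the first is the GUI console, which also shows the spec version; the second needs an elevated PowerShell prompt):

tpm.msc

Get-Tpm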

Well, one can bypass this hardware check. There is no downside to this; the machine will run Windows 11 without problems.  It isn't going to BlueScreen, it isn't going to be slow, it isn't going to blow up; well, at least not because of pseudo hardware requirements.  I have done it probably a dozen times and have yet to run into any issues related to the hardware requirements.  One machine has even been running since early 2022.

After booting to the Win11 ISO, at the very first screen where it asks about language, time/currency, and keyboard, press Shift+F10.  This will bring up a Command Prompt.

In that Command Prompt window, type: "regedit"

In Regedit, navigate to: HKEY_LOCAL_MACHINE\SYSTEM\Setup

Create a new key named: "LabConfig"

In LabConfig create the following DWORD values, each set to "1":

-BypassCPUCheck

-BypassRAMCheck

-BypassSecureBootCheck

-BypassTPMCheck

Close the Regedit and Command Prompt windows and install Windows as normal.  
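If one would rather skip Regedit, the same key and values can be created straight from that Command Prompt with reg.exe; a rough equivalent of the steps above:

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassCPUCheck /t REG_DWORD /d 1 /f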

https://www.tomshardware.com/how-to/bypass-windows-11-tpm-requirement


Bonus:  If one is using a USB key to install Windows, make the changes once and those settings will stick on the install media, since it is writeable.

Bonus #2: If one doesn't want to deal with associating the machine with an online Microsoft account, simply don't have the machine connected to the internet during setup.
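Related aside (not something I covered above, so take it as a pointer): if setup still demands a network connection, the commonly documented workaround is to press Shift+F10 at that screen and run the command below; the machine reboots and an "I don't have internet" option appears.

oobe\bypassnro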

ProxMox notes from a NOOB

Here are a few notes and things that stuck out to me while kicking the tires of ProxMox.  Keep in mind these are coming from a VMware admin with very weak Linux knowledge.  I will keep adding stuff as I learn.

Proxmox uses:

-QEMU/KVM for virtualization

-LXC for containers

-Corosync Cluster Engine for server communications

-Proxmox Cluster File System for cluster configuration

-If the ProxMox v8 installer crashes during install, see if v7 works; if it does, do an in-place upgrade to v8

-Neither v7.4 nor v8.1 seems to recognize Mellanox CX-3 40GbE/InfiniBand network cards

-The vCenter equivalent is just built in to the web UI of each host and works on a distributed/cluster model, i.e. there is no appliance to install, no software keys, no dedicated IP.  Imagine if the ESXi web GUI had basic vCenter functions built in: joining and managing multiple hosts in one interface, vMotion, and replication.

-There are oddities about moving VMs back and forth between LVM and ZFS.  A VM built on a ZFS volume cannot live-migrate or cold-migrate to an LVM volume, and a template that lives on an LVM volume cannot be spawned to an LVM volume, if there is a replication job attached.

-By default a network "bridge" is created.  Network cards can be added/subtracted as necessary; very much like the "virtual switch" for VMware/ESXi.
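For reference, that default bridge is just a handful of lines in /etc/network/interfaces; a minimal sketch (the NIC name eno1 and the addresses are placeholders):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0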


-The default install will have several "nag screens" about wanting one to buy a paid subscription.  No judgment here: "gotta pay the bills".  The default update repository is the paid (enterprise) tier; one must disable it and point updates at the "no-subscription" tier to get updates and lessen the nags.
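On v8 (Debian Bookworm) that boils down to two small files; a sketch (verify the exact paths and suite names against the Proxmox docs):

# /etc/apt/sources.list.d/pve-enterprise.list  (comment out the paid repo)
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription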

-The ProxMox virtual machine tools, actually the QEMU guest agent/VirtIO drivers (the VMware Tools equivalent), are a separate download.  They must be installed for any "thin provisioning" of VM memory, i.e. a Windows VM running without the tools and set to 8 GB of RAM will consume 8 GB of RAM on the host; with the tools it will take some amount less.

-That same ISO (proxmox virtio-win-xxxxxx.iso) will most likely be needed for installing Windows.  Things like the hard disk controller (depending on which one was chosen at VM creation) will not be seen and will require drivers to be installed.

-Replication jobs!  If one fails and one wants to delete the job, it may not go away through the GUI.  Go to the shell and type "pvesr list" to show the jobs, then "pvesr delete JobID --force".
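For example, assuming the stuck job shows up with an ID like 100-0 in the list output:

pvesr list

pvesr delete 100-0 --force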

-A template cannot migrate to a different host unless the name of the storage on both servers is the same.  

-A template cannot be cloned (have an image spawned from it) to a host that doesn't have the same storage name.  If the template exists on local storage it cannot be cloned to another host; the dropdown box for which storage to use is simply blank.  One has to clone the machine on the same server where the template lives, then migrate the cloned VM.

-Configuration files for the VMs live in /etc/pve/qemu-server; if things go way wrong, one can move a config file to another host with:

mv /etc/pve/nodes/<old-node>/qemu-server/<vmid>.conf /etc/pve/nodes/<new-node>/qemu-server

-virtual disks can be accessed here: /dev/<disk name>

Rename VM LVM Storage Name VIA SSH

cd /dev/pve

lvrename /dev/pve/vm-100-disk-0 /dev/pve/vm-297-disk-0

-There does not seem to be an easy way via the GUI to rename storage.
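After the rename, the disk reference inside the VM's config file has to match the new name too; a minimal sketch, assuming VM 297's config still points at the old vm-100-disk-0:

sed -i 's/vm-100-disk-0/vm-297-disk-0/g' /etc/pve/qemu-server/297.conf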

The 1st icon is a running VM, the 2nd a VM that is being migrated, the 3rd a template.


v7.4 Cluster
v8.1 Cluster



Things I really like about ProxMox:

-ability to migrate running VMs from one host to another, even without shared storage
-ability to back up VMs
-ability to replicate VMs

Challenges:

-I had an unexpected host failure; the VM tried to migrate to a different node.  At the time there was only local storage, not even Ceph, and the local node volumes were not named the same.  After the node came back online, it had the VM's virtual drive (since the migration couldn't complete), while another server in the cluster had the config file.  Getting things back in line was a real chore; simply moving either the virtual hard drive or the config file was not working.  Sure, one could blame me for not setting up shared storage, or for not having the datastores named the same.  However, why is HA not doing checks before attempting a migration?

-Another unexpected host failure; two nodes are disconnected from the cluster.  Nodes 1, 2, 3, and 6 are all up and joined together and report nodes 4 and 5 as offline.  Nodes 4 and 5 believe they are online and that the other four nodes are offline.  Removing and re-adding the nodes to a cluster is not straightforward and not doable via the GUI.
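For diagnosing which side of a split like that actually has quorum, these (run from the shell of each node) are the standard tools:

pvecm status

pvecm nodes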

------------------------------------

-Abbreviated instructions to upgrade from v7 to v8 (see the note after this list about the extra repo files):

    -from the shell of a given node type:

    -pve7to8

    -apt update

    -apt dist-upgrade

    -pveversion

    -sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

    -apt update

    -apt dist-upgrade
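One thing the list above glosses over (hence the note): the Proxmox repository files under /etc/apt/sources.list.d/ most likely still reference bullseye as well, so something like the line below is probably needed before the second apt update; the exact filename depends on which repo (enterprise or no-subscription) is configured:

    sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/pve-no-subscription.list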

------------------------------------

-To remove a damaged node from a cluster:
    From the damaged node:
    -stop the pve-cluster service:     systemctl stop pve-cluster
    -stop the corosync service:        systemctl stop corosync
    -restart in single host mode:      pmxcfs -l
    -delete the corosync config:       rm /etc/pve/corosync.conf
    -delete the corosync folder:       rm -r /etc/corosync/*
    -delete references to other nodes: rm -r /etc/pve/nodes/*
    ***VM configs living on the damaged server will be lost; the virtual hard drives will still be there (see the follow-up commands below)
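From what I can tell the official wiki finishes the procedure off with the steps below: bring the cluster filesystem back up on the damaged node, then drop the dead member from one of the healthy nodes (verify against the current docs before trusting my memory):
    -restart the cluster filesystem:   killall pmxcfs
                                       systemctl start pve-cluster
    From a healthy node:
    -drop the dead member:             pvecm delnode <nodename>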

------------------------------------

qm list (shows VMs)

qm start/shutdown/reboot/reset/stop vmID (power on / safe shutdown / safe reboot / hard reset / hard power off a VM)

------------------------------------

To change a server name:
edit the following files:
nano /etc/hosts
nano /etc/hostname
nano /etc/postfix/main.cf
reboot now
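A sketch of what actually changes, assuming the node is being renamed from pve-old to pve-new with a made-up IP (the postfix file only needs touching if it carries the old name, typically in a myhostname line):

# /etc/hostname  -- just the bare new name
pve-new

# /etc/hosts  -- point the node's IP at the new name
192.168.1.10 pve-new.localdomain pve-new

# /etc/postfix/main.cf
myhostname=pve-new.localdomain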


SAS cable connectors

 Just some general information that might be useful.

This is a "mini-SAS" SFF8087

This is also a "mini-SAS", SFF-8643, typically seen in SAS 12g applications.

This is an OCuLink PCI-e SAS SFF-8611 4i; as seen on a PowerEdge r640.

The cable on the left is a "double wide" SAS cable, used on SOME HP and Dell RAID cards.  As in this case an HP Smart Array P440.

The cable on the right is a PCI-e Slimiline SAS SFF-8654 8i cable


Dell 13th gen. servers and NVMe/Bifurcation

 In case anyone is wondering, the Dell 13th generation servers can do PCIe Bifurcation.  

So first, a question for those not in the know: what the heck is PCIe Bifurcation?  In the simplest terms, it takes a PCIe slot and sub-divides it into multiple slots, i.e. a PCIe x16 slot can be divided into two x8 or four x4 slots.

Ok, great, but why would one want to do this?  Well, many people have found the wonders of NVMe M.2 hard drives and run them on an NVMe M.2 adapter PCIe card.  Sometimes one wants to run multiple M.2 drives, but there simply aren't enough slots available.

A quick search on one's favorite shopping place for IT gear will show a number of cards that allow one to put two to four M.2 NVMe drives on a single card; however, in order to use them the PC must support Bifurcation, and without it the computer will only see the first drive.  For the record, there are cards that can run more than one M.2 NVMe drive even if the PC doesn't have Bifurcation support; some have a RAID controller chip on them, some have basically a PCIe switch on them.  They are not very commonplace and are pricey.

The case study here is a Dell PowerEdge r930, with a $20 dual M.2 card from Amazon.


Note: depending on the card, slot six might not be usable, as the card hits the screw for the heat sink.

The setting change.  Also, even though this machine has something like 10 PCIe slots, only six of them have the Bifurcation setting.


Dell r720xd and VMware ESXi 8.0

 -Even though the server and the CPUs are not on the HCL (Hardware Compatibility List), it does work, even with a first-generation Xeon E5-2600 ("v0") CPU (with the CPU bypass setting).  The Perc H710 mini mono RAID card is recognized.  The onboard NICs are recognized.
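The "CPU bypass setting" referenced above is, as far as I know, the community boot option rather than anything official: at the installer's boot prompt press Shift+O and append the flag below (treat it as unsupported):

allowLegacyCPU=true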

-Mellanox ConnectX-3 40Gb network cards are no longer supported

-There are three different versions of the Perc H710; the firmware and drivers are all the same.  The third revision is actually PCIe Gen 3; however, by default it is set to Gen 2 mode.  The change is easily done in the RAID card settings.  Thanks to the Art of Server for pointing that out.

-The XD model has twelve 3.5" hard drive bays.  In order to make that work there is no real estate for an optical drive or the LCD screen normally seen on PowerEdge servers.  There is an optional "flex-bay" kit which adds two 2.5" bays at the rear of the machine.  Interestingly, adding the flex-bay disables the two onboard SATA connectors, even though the drives in the flex-bay are operated by the HBA/RAID controller.









 

HP Proliant DL80 Gen. 9

 The Hewlett Packard ProLiant DL80 Generation 9 is an entry-level server with many corners cut to make the price cheaper than normal enterprise hardware.

Here are a few notes:

-only one Power Supply

-no dedicated ILO port

-eight memory slots, only four of which can be used in a single-CPU configuration

-the base model has no riser cards, and can only use three PCIe cards with a single CPU installed and five with two CPUs

-there are three(?) options for adding riser card(s), one of which is a GPU kit that includes another fan

-converting to dual CPUs will need at least two more fans, three to make it "redundant".  It will operate without the extra fans, but it will complain at POST and run the existing fans at full speed.

-the built-in RAID controller is a B140i.  It has two SFF-8087 ("mini-SAS") connections and two normal SATA connectors, all of which are controlled by the RAID card; i.e. if one plugs a single drive into a normal SATA port, one needs to go through the HP Smart Storage Administrator and make a RAID 0 in order to use it.  It only does SATA; SAS is NOT supported.  RAID 5 is supported!  System RAM will be used for cache.  Interestingly, Windows will see the virtual volume as intended; however, my Linux utility, Parted Magic, as well as the VMware installer, sees all of the drives individually.  The card can have its personality switched to be a standard SATA controller.

-The standard backplane is four-port, despite there being 8 drive bays.  One can just add a 2nd four-port backplane; the physical retention is there, as is a second special power lead.  There also appears to be a single 8-bay option.  The standard backplane also claims to be non-hot-swappable; I am not sure if that is because of the B140i controller or the backplane.  Speaking of drive bays, the 3.5" bays do not have any LED indicators.  The backplane is connected to the system board via an SFF-8087 connector (at the motherboard end) through a four-port SATA breakout cable to the four ports of the backplane.

I ordered an HP 790487-001, which is said to work for the DL60, DL80, and DL120.  This four-port SAS/SATA backplane differs in that it has an SFF-8087 connector and little "fingers" to operate the LEDs on the drive sleds.

-VMware ESXi v7 does see the storage and the network cards.  However, whereas Windows does recognize the RAID logical volumes, ESXi does not.  In this case a four-drive RAID 5 was set up, and VMware only sees four individual drives.





Original backplane on the left, upgraded one on the right

Notice the "fingers" to operate the drive sled LED's

Both backplanes installed, note the SFF-8087 isn't installed yet.


Windows 10 Activation & Licensing (late 2023)

Ever since 2015, Microsoft has allowed computer users to "upgrade" to Windows 10 using the license key from Windows 7 or Windows 8.  This allowed many older computers that came from the manufacturer with Windows 7 to have a new lease on life by legally using a newer operating system.  In the late fall of 2023, Microsoft closed this loophole.

Today, if one takes a Windows 7 computer and installs Windows 10, it will not activate.  Even if one types in the license key, it will not activate.  So where does that leave those of us with older machines hanging around?  Well...

YOU CAN CONTINUE TO USE A COMPUTER WITH AN UNACTIVATED COPY OF WINDOWS 10!

The computer will continue to operate just fine; it will not one day stop working because of the activation status.

There are some caveats though.  Here is a list of them:

-"Activate Windows" watermark shows up on the bottom right corner of the screen, and will be visible no mater what, even if one is watching a video, playing a game, or writing a novel.  

-premium features are locked out

-optional updates are not available, but critical updates are

-desktop personalization is restricted: wallpaper, taskbar settings, lock screen, themes

https://royalcdkeys.com/blogs/news/unactivated-windows-10-what-are-the-disadvantages

https://www.pcworld.com/article/2104344/its-official-upgrades-using-windows-7-and-8-keys-are-dead.html

https://softwarekeep.com/blog/what-happens-if-you-don-t-activate-windows-10


Dell PowerEdge r930 hardware guide

 ***Work in Progress, will continue to update until the project is finished****

The Dell PowerEdge r930 is a 4U, four-CPU system.  These are some notes from working on them.  Finding information on them was sparse, so hopefully someone else can benefit from my research.  The biggest help was this source from Dell.  This link as well.


When using four CPUs the minimum RAM configuration is eight memory modules: one stick of RAM in slot #1 of each of the eight memory riser cards.


It doesn't matter so much which hard drive controller is used, but it MUST be in the very first slot.  Remove the I/O shield, put it into slot one, and use the lever to secure it down.  In this case it is a single Perc H730.


To pull out the fan tray, just remove the outer four fans, then use the release to free up the handle.  In order to work on the CPUs or the backplane, all memory risers must be removed, plus the four fans and the fan tray.

The network card is the same "form factor" as found on other r630s/r730s: it has the proprietary connection.  There is a special riser card that takes the special NIC and converts it to PCIe.  That riser also has a special slot for installing the SD memory card modules.



This is the network riser card without the NIC daughter board

ONE HBA/RAID card in a 24-drive system: the SAS cables from port A go to port A on the SAS Expander, and the same for port B.  The top SAS port on the backplane goes to the furthest port on the SAS Expander; the bottom SAS port on the backplane goes to the closest port on the SAS Expander.  Use the wrong cables, or put them in the wrong spots, and the system will complain.





These machines come standard with the ability to run four hard drives.  They can be converted to run twenty four.  It is a PITA, but very doable.  The first one I did took a few hours, the 2nd was a breeze.

Parts needed:

-backplane

-SAS expander

-backplane SAS cables

-SAS cables from HBA/RAID to SAS expander

-front bezel 







Quantum Lattus Part 2

 The Lattus has two 1Gbps network cards, which is all fine, but it doesn't take too many mechanical drives to saturate a 1Gb NIC.  So I attempted to add a 10Gb NIC.

Turns out that on the S30/40/50, which use the SuperMicro system board, the placement of the PCIe slot does not allow for a standard-height low-profile card in either direction: going to the left side of the board (away from the CPU) there isn't enough room in height, and going to the right the card hits the CPU heatsink.

The S10/S20 does work in either direction.  If going away from the CPU, one must remove the 2.5" drive bay.  I didn't feel comfortable having the circuitry of the 10Gb NIC basically touching the system board, so I chose to run the NIC towards the CPU.  Doing so meant I had to remove the PCIe SATA card, meaning I could only drive six hard drives, or five data drives and one OS drive if running something like TrueNAS.  I found a super-small 6Gbps 2-port SATA card that fit on the other side, which brought my drive count back up to 8 drives.




Also, I found these SuperMicro 10Gb NICs that are based on the Intel X520 chip; they are physically smaller in nearly every dimension than the OEM Intel card.






Quantum Lattus

 I don't know much about the product; it was designed to be a "cheap and deep", massively scale-out storage system.  The nodes are "dumb" and are controlled and automatically deployed via PXE boot by a master node.  Western Digital did much of the early R&D; that intellectual property was sold to Quantum.  This particular line has reached its End Of Life.

These servers hold 12 SATA 2.5" drives in a 1U extra deep case 33~34" long.  

The Lattus S10 & S20's have an Asus CMB-a9sc2 motherboard, it has a dedicated IMPI port, dual 1gb NICs plus a dedicated IMPI.  8gb DDR3 ECC RAM.  Six onboard SATA ports, a PCI eight port SATA card. It also has a 2.5" hard drive tray.  Intel Xeon E3-1220L v2 CPU at 2.3ghz (2 core, 4 threads, 17 watts).

Motherboard datasheet

The Lattus S30, S40, & S50s have a SuperMicro X10SL7 motherboard with a dedicated IPMI port and dual 1Gb NICs.  It has eight onboard SAS/SATA ports.  The CPU is an Intel Xeon E3-1230L v3 at 1.8GHz (4 cores, 8 threads, 25W).

 The BIOS password is "Adm1n"; the IPMI credentials are ADMIN/ADMIN.

S10/S20 

S10/S20 rear

S10/S20 with the SATA controller card removed.

S30/S40/S50




One of the S40's running TrueNAS