Dell PowerEdge r930 hardware guide

***Work in Progress: will continue to update until the project is finished***

The Dell PowerEdge r930 is a 4U, four-CPU system. These are some notes from working on them. Information on these machines was sparse, so hopefully someone else can benefit from my research. The biggest help was this source from Dell; this link was useful as well, along with the factory manual.


When using four CPUs, the minimum RAM configuration is eight memory modules: one stick of RAM in bank #1 of each of the eight memory riser cards.


It doesn't matter so much which hard drive controller is used, but it MUST be in the very first slot. Remove the I/O shield, put the card into slot one, and use the lever to secure it down. In this case it is a single Perc H730.


To pull out the fan tray, just remove the outer four fans, then use the release to free up the handle. To work on the CPUs or the backplane, all memory risers, the four fans, and the fan tray must be removed.

The network card is the same "form factor" as found on other r630s/r730s; it has the proprietary connection. There is a special riser card that takes the proprietary NIC and adapts it to PCIe. That riser also has a slot for installing the SD memory card modules.



This is the network riser card without the NIC daughter board

ONE HBA/RAID card in a 24-drive system: the SAS cable from port A goes to port A on the SAS expander, and the same for port B. The top SAS port on the backplane goes to the furthest port on the SAS expander, and the bottom SAS port on the backplane goes to the closest port on the SAS expander. Use the wrong cables, or plug them into the wrong spots, and the system will complain.

The two backplane cables (p/n: 085WG0 and 09HT8M) are special: even though they have an industry-standard SAS connector, they are actually proprietary. Normal SAS cables will plug in but will not work. The cable from the SAS controller to the SAS expander is p/n: 0KN3YV.




These machines come standard with the ability to run four hard drives. They can be converted to run twenty-four. It is a PITA, but very doable. The first one I did took a few hours; the second was a breeze.

Parts needed:

-backplane

-SAS expander

-backplane SAS cables

-SAS cables from HBA/RAID to SAS expander

-front bezel 





MORE "FUN"!!!!

The generation 13 PowerEdges can run U.2 NVMe drives!  The backplane is different, the cabling is different, and a special controller card is needed.

The NVMe backplane is very similar to the normal SAS/SATA 24-bay backplane, with the obvious physical difference being the addition of eight extra SAS-style connectors. With the special PCIe cards installed, the machine can run eight NVMe drives and sixteen SAS/SATA drives. One can use the NVMe/SAS backplane without the special PCIe cards and cables; however, every time the machine powers on there is an error at POST that requires pressing F1 to continue.

The cabling of the NVMe-capable backplane with two PCIe controller cards.

The cabling of the normal SAS/SATA 24-bay backplane.

Picture of the NVMe-capable backplane with the extra connections circled in red.


Normal SAS/SATA 24-bay backplane; notice there are only two SAS-style connectors


In iDRAC the machine recognizes the NVMe backplane, even though no PCIe controller cards are installed.

The error message during POST, shown on every boot, which requires one to press F1 to continue.

The NVMe setup also reserves eight of the drive bays for NVMe only; SAS/SATA drives will not work in those designated bays.


Here is what it looks like with both NVMe cards and the cabling installed. It is very tight to run all the cables.
The NVMe cables are special; besides being molded to "fit" the r930 chassis, something about them is different. The system will not allow one to use generic SAS cables, nor to connect them out of order. The part numbers for the r930 cables are as follows:
cn-09k88-48570-42h-007m (dp/n: 09rk88) and cn-066fk9-48570-42j-004n  (dp/n: 066fk9)





Quantum Lattus Part 2

The Lattus has two 1gbps network cards, which is fine, but it doesn't take too many mechanical drives to saturate a 1gb NIC. So I attempted to add a 10gb NIC.
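To put a rough number on that claim, here is a quick back-of-the-envelope sketch in Python. The ~150 MB/s per-drive sequential rate and the ~10% protocol overhead are assumptions for illustration, not measurements taken on the Lattus.

# Rough estimate of how many mechanical drives it takes to saturate a 1gb link.
# The per-drive sequential rate and overhead factor are assumed typical values,
# not measurements from the Lattus nodes.
import math

LINK_GBPS = 1.0
USABLE_MB_S = LINK_GBPS * 1e9 / 8 / 1e6 * 0.9   # ~112 MB/s after protocol overhead
DRIVE_MB_S = 150.0                              # assumed 7200rpm sequential rate

drives_to_saturate = math.ceil(USABLE_MB_S / DRIVE_MB_S)
print(f"~{USABLE_MB_S:.0f} MB/s usable on a 1gb link; "
      f"{drives_to_saturate} drive(s) streaming sequentially can fill it")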

It turns out that on the S30/40/50, which use the SuperMicro system board, the placement of the PCIe slots does not allow for a standard-height low-profile card in either direction: going toward the left side of the board (away from the CPU) there isn't enough height clearance, and going to the right the card hits the CPU heatsink.

The S10/S20 works in either direction. If going away from the CPU, one must remove the 2.5" drive bay. I didn't feel comfortable having the circuitry of the 10gb NIC basically touching the system board, so I chose to run the NIC towards the CPU. Doing so meant I had to remove the PCIe SATA card, meaning I could only drive six hard drives, or five data drives and one OS drive if running something like TrueNAS. I found a super small 6gbps 2-port SATA card that fit on the other side, which brought me back up to eight drives.




Also, I found these SuperMicro 10gb NICs that are based on the Intel X520 chip; they are physically smaller in nearly every dimension than the OEM Intel card.






Quantum Lattus

I don't know much about the product; it was designed to be a "cheap and deep", massively scale-out storage system. The nodes are "dumb" and are controlled and automatically deployed via PXE boot by a master node. Western Digital did much of the early R&D; that intellectual property was sold to Quantum. This particular line has reached its End of Life.

These servers hold twelve 2.5" SATA drives in a 1U extra-deep case, roughly 33~34" long.

The Lattus S10 & S20s have an Asus CMB-a9sc2 motherboard with a dedicated IPMI port and dual 1gb NICs, 8gb of DDR3 ECC RAM, six onboard SATA ports, and a PCIe eight-port SATA card. It also has a 2.5" hard drive tray. The CPU is an Intel Xeon E3-1220L v2 at 2.3ghz (2 cores, 4 threads, 17 watts).

Motherboard datasheet

The Lattus S30, S40, & S50s have a SuperMicro X10SL7 motherboard with a dedicated IPMI port and dual 1gb NICs. It has eight onboard SAS/SATA ports. The CPU is an Intel Xeon E3-1230L v3 at 1.8ghz (4 cores, 8 threads, 25w).

The BIOS password is "Adm1n"; the IPMI credentials are ADMIN/ADMIN.

S10/S20 

S10/S20 rear

S10/S20 with the SATA controller card removed.

S30/S40/S50




One of the S40's running TrueNAS


Formula 1

Anyone find it odd that VMware AND Broadcom both sponsor Formula 1 cars, but on two different teams? BTW, for those of you not familiar, Broadcom is in the process of taking over VMware.



Here is a bit on how VMware is helping out McLaren.





10gb vs 25gb Ethernet: electrical usage

Controlling the amount of electricity used in computing has become increasingly important over the years, whether it is in the home lab, where the consumption directly affects one's own pocketbook, or in the datacenter/server room with heat or power limitations.

In this case I thought it might be interesting to look at what impact networking speed has on electrical use. One could conclude, simply from the number and size of the heatsinks on the cards, that there must be a difference. For this test my trusty Kill-A-Watt was pulled out of storage. A normal desktop was used, a Dell with a 3rd-gen i3; the specifics are irrelevant. Each time, the machine was booted off a USB thumb drive into "Parted Magic" (a Linux rescue utility); I chose that OS over Windows so other variables such as drive indexing or Windows Updates wouldn't skew the results. The machine was left to idle for 5 minutes to let things settle down before measurements were taken. Extra GBICs were removed, and the same DAC cable was used. The machine was plugged into a 10gb switch; no 25gb switch was available for testing, donations are welcome! :) The idle readings are listed below, with a quick back-of-the-envelope calculation after the list.

Bare machine (no additional hardware) = 32 watts

Machine w/ 10g Intel i520 NIC, no cable = 35 watts

Machine w/ 10g Intel i520 NIC, w/ cable = 37 watts

Machine w/ 25g Mellanox CX4 NIC, no cable = 40 watts

Machine w/ 25g Mellanox CX4 NIC, w/ cable = 42 watts

Machine w/ 25g Mellanox CX5 NIC, w/ cable = 41 watts

Machine w/ 25g Broadcom BCM51414, w/ cable = 41 watts

Machine w/ 25g Chelsio, w/ cable = 49 watts (very hot to the touch!)
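A quick calculation on the idle readings above: the sketch below subtracts the bare-machine baseline from each cabled reading and converts the delta into a rough yearly figure. The electricity rate is an assumed example value, not an actual bill.

# Idle power deltas from the Kill-A-Watt readings above (watts).
# The $/kWh rate is only an example value for the cost estimate.
BASELINE_W = 32
readings_w = {
    "Intel i520 10g (cabled)":   37,
    "Mellanox CX4 25g (cabled)": 42,
    "Mellanox CX5 25g (cabled)": 41,
    "Broadcom 25g (cabled)":     41,
    "Chelsio 25g (cabled)":      49,
}
RATE_PER_KWH = 0.15  # assumed example electricity rate, $/kWh

for name, watts in readings_w.items():
    delta_w = watts - BASELINE_W
    yearly_kwh = delta_w * 24 * 365 / 1000
    print(f"{name:27s} +{delta_w:2d} W idle, ~{yearly_kwh:5.1f} kWh/yr "
          f"(~${yearly_kwh * RATE_PER_KWH:.2f}/yr at the assumed rate)")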

Intel on top, Mellanox CX4 on bottom.

Chelsio on top
Broadcom in the middle
Mellanox CX5 on the bottom

Another future test would be to measure power draw under load.


Watchguard M270

More electronics recycling. This Watchguard M270 firewall, while still supported and able to run the latest code, was sent out for scrap! So let's take a closer look at it.

This device had customer data on it; I didn't know the password or the IP scheme. I attached to the console port and rebooted it into recovery mode, but didn't find a password reset option, so a factory reset had to be done.

1. Power off the Firebox.

2. Press and hold the Reset button on the back of the Firebox.

3. While you continue to hold the Reset button, power on the Firebox.

4. Continue to press the Reset button until the Attn indicator begins to flash.

5. Release the Reset button. ...

6. Wait for the reset process to complete.

Once done, open a web browser to https://10.0.1.1:8080 and log in with the credentials admin / readwrite.

Sadly, the feature keys for this product are now gone; I didn't realize the reset would wipe them. The box will not download its key from the Watchguard mothership; it complains about being expired. Now the box is pretty much useless as a Watchguard, as it will only allow one client through the network. I have seen some people run pfSense on this hardware, so that could be an option.

Interesting observations:

-there is a way to get into the BIOS via the console cable, but no one seems to know the password.  There is a way to re-flash it.

-CPU: Intel Atom C3558 @ 2.2ghz, 4 cores, 4 threads, 16 watts, benchmark score 2417

-Memory: 4gb DDR3 SODIMM RAM

-Hard drive: 16gb mSATA

-Marvell 2.5gbps ports?  I saw a mention of this during boot, but that might be an internal switch or something, as from the command line and the GUI all the ports report as 1gbps

-the power supply is a "wall wart"... the big plastic transformer/power supply is actually inside the case, and the barrel plug connects to the system board.  Interesting!  I guess it makes replacement simpler, and from an installation point of view I'd rather rack a full-width appliance than a smaller one and then have to deal with securing the external transformer/power supply somewhere else in the rack.








Dell PowerEdge r620 TrueNAS Build: part 2

 Finally getting around to this project again.   

TrueNAS Core 13.05 was installed onto the 64gb SSD. The eight Evo 840 1tb drives were set up into two four-drive RAIDz2 groups, then mirrored. The drives have a lot of hours on them, so I could have arranged them for better IOPS or more space, but I am going to err on the side of caution here and go for higher resiliency. The usable space is around 3.5tb.
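As a sanity check on the ~3.5tb figure, here is a rough capacity calculation. It is only a sketch: it assumes the two four-drive RAIDz2 groups end up as two vdevs striped into a single pool (which is what the ~3.5tb number lines up with) and uses a flat guess for ZFS overhead.

# Rough ZFS capacity estimate for the eight 1tb SSDs described above.
# Assumed layout: two 4-disk RAIDz2 vdevs in one pool; the overhead factor is a guess.
DISK_TB = 1.0          # advertised size of each SSD, in TB
DISKS_PER_VDEV = 4
VDEVS = 2
PARITY_PER_VDEV = 2    # RAIDz2 = two parity disks per vdev

data_disks = VDEVS * (DISKS_PER_VDEV - PARITY_PER_VDEV)  # 4 data disks total
raw_data_tb = data_disks * DISK_TB                       # 4.0 TB before overhead

tib_factor = 1e12 / 2**40                      # TB (10^12 bytes) -> TiB (2^40 bytes), ~0.909
usable_tib = raw_data_tb * tib_factor * 0.97   # minus ~3% for ZFS slop/metadata (rough)

print(f"data disks: {data_disks}, raw data capacity: {raw_data_tb:.1f} TB")
print(f"estimated usable space: {usable_tib:.2f} TiB")   # ~3.5, matching the post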

One of the 1gb ports was set up with a static IP address. The Mellanox 40gb card was set up on a different IP range. The rack this system will live in has a Dell Force10 switch with six 40gb ports (two of which are set up in a trunk for uplink) and forty-eight 10gb ports; there is also a separate 1gb switch.

Then the iSCSI service was started and bound to the 40gb interface. A zpool, portal, target, and extents were set up. For the meantime, a Dell r730 running VMware ESXi v7.x with a single 10gb interface was set up as the iSCSI initiator. The connection was made and the datastore presented. I Storage vMotioned a Windows 11 VM to the TrueNAS and ran ATTO Disk Benchmark.
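For anyone repeating this, a quick way to confirm the portal is reachable from another host before touching the ESXi side is a plain TCP check against the standard iSCSI port (3260). This is only a reachability sketch; the IP address below is a placeholder, not the address used in this build.

# Minimal reachability check for an iSCSI portal (standard TCP port 3260).
# This only verifies the TCP listener is up; it does not log in to the target.
# PORTAL_IP is a placeholder -- substitute the address bound to the 40gb interface.
import socket

PORTAL_IP = "192.168.40.10"   # hypothetical portal address
PORTAL_PORT = 3260            # standard iSCSI port

try:
    with socket.create_connection((PORTAL_IP, PORTAL_PORT), timeout=3):
        print(f"iSCSI portal {PORTAL_IP}:{PORTAL_PORT} is accepting connections")
except OSError as exc:
    print(f"Could not reach {PORTAL_IP}:{PORTAL_PORT}: {exc}")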





Using 10gb networking, this is faster than my twenty-drive SAS 7200rpm system.

Using 40gb networking, there are some gains to be had. It appears that the system is being held back by 10gb networking.
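To put those two results in perspective, here is a rough ceiling on what each link speed can move; the ~10% framing/protocol overhead figure is an assumption, not something measured here.

# Rough ceiling on iSCSI throughput imposed by the network link alone.
# The 10% overhead figure (Ethernet/IP/TCP/iSCSI framing) is a ballpark assumption.
def usable_throughput_mb_s(link_gbps: float, overhead: float = 0.10) -> float:
    bytes_per_sec = link_gbps * 1e9 / 8            # bits/s -> bytes/s
    return bytes_per_sec * (1 - overhead) / 1e6    # -> MB/s

for link in (1, 10, 25, 40):
    print(f"{link:>2} GbE ~ {usable_throughput_mb_s(link):,.0f} MB/s usable")
# 10 GbE works out to roughly 1,100 MB/s, which a pool of eight SATA SSDs can
# exceed -- consistent with the 40gb runs pulling ahead.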








Dell PowerEdge r620 TrueNAS Build: part 1

I have eight Samsung Evo 840 1tb SSD drives that needed a deserving home, so I decided to build another TrueNAS to serve as a VMware shared storage appliance. A Dell PowerEdge r620 server was picked out to use, as it had eight 2.5" drive bays. The dual E5-2650 CPUs should be way more horsepower than needed for an iSCSI/NFS target. Eventually they will be upgraded to v2 CPUs, as they are stupidly cheap on eBay and give roughly 30% more performance with no extra electricity.

An Intel 64gb SSD was brought out of retirement and installed into a slim-CDROM-to-2.5"-HDD adapter; this is where the OS would be installed. First, however, I installed Windows Server on it just to test out the hardware and then update the firmware.

The first issue I ran into is that even though this server has eight 2.5" drive bays, the backplane only had four drive connectors! I have seen other r620s with only four drive bays, but on those the other side was a metal blank; this one had drive sleds in it! Interesting! As luck would have it, I found another r620 that had a backplane with all eight drive connectors but only two hard drives, so the backplanes got swapped.


This server has a Perc H710 Mini Monolithic RAID card with 512mb of battery-backed cache. TrueNAS works much better with an actual HBA, not a RAID card that obscures the disk details from the OS. It appears that the H710 does not support non-RAIDed disks. I attempted to pull out the RAID card to see if the machine would see the drives; it would not. On a previous build (a Dell r320) I removed the RAID card and moved the cable to a different mini-SAS 8087 36-pin connector, so the system ran the drives off the onboard SATA controller. This Perc H710 uses a SAS-8654 cable? I didn't have a one-into-two SAS 8087 cable to go from the single onboard connector to the two backplane connectors.

Well, time to gamble a bit and attempt to flash the Perc into "IT mode"... basically rewriting the firmware so it presents itself as some sort of LSI 2208-series SAS HBA. The H710 does have a faster processor than the H310, and with spinning disks one probably would not max out the controller, but with SSDs hitting that ceiling is a real possibility. It turns out there are not one but two different versions of the H710: one is PCIe 2.0 and the other is PCIe 3.0 "capable". "Art of the Server" is a great resource to search for on the web; he has lots of great information on these cards.
Following the steps on Jon Fohdeesha's site, the BIOS changes were made. The RAID battery was removed, the drives were removed, and the ROM on the Perc was erased via the FreeDOS utility, although I did erase it three times as it would error out on the second step. The new firmware was then written and the SAS identifier reprogrammed. All was successful! During POST, one now sees the LSI/Avago BIOS and menu screen. Also interesting is that the Dell "System Setup" also sees the HBA.