Ramblings of an IT Professional (Aaron's Computer Services)
Aaron Jongbloedt

RAID5 vs RAID10

Quick and dirty test. The guinea pig is a Lenovo ThinkSystem SR655 with an Avago RAID 930-24i card with 4GB of cache and four Kioxia 1TB 12Gb/s SAS SSDs. VMware ESXi v8 is installed on the host. The four drives were set up in a RAID5 configuration with read ahead enabled, drive cache enabled, and the cache policy set to Write Back. The RAID virtual drive was presented to VMware, and a Microsoft Developer Windows 11 VM was imported to the datastore using thick provisioning. ATTO Disk Benchmark was run with both 4GB and 8GB test sizes. Then the VM, datastore, and virtual drive were torn down, rebuilt as RAID10, and retested.
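If you'd rather script the virtual drive setup than click through the controller utility, here is a minimal sketch shelling out to Broadcom's storcli with the same cache settings as above. The controller index (/c0) and the enclosure:slot IDs (252:0-3) are assumptions, not values from this server; check the output of "storcli64 show" on your own hardware.

```python
import subprocess

# Minimal sketch: create a RAID5 virtual drive with read ahead,
# write-back cache, and drive cache enabled, matching the test setup.
# /c0 and 252:0-3 are assumed IDs -- verify with "storcli64 show".
cmd = [
    "storcli64", "/c0", "add", "vd", "type=raid5",
    "drives=252:0-3",  # the four Kioxia SAS SSDs
    "ra",              # read ahead enabled
    "wb",              # write-back cache policy
    "pdcache=on",      # drive (physical disk) cache enabled
]
subprocess.run(cmd, check=True)

# For the RAID10 pass, the equivalent would be type=raid10 with
# pdperarray=2 (two drives per mirrored span).
```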
I found the results somewhat surprising. We often hear about the "RAID 5 write penalty," or that "RAID 10 is just faster," and so on. Well, this test shows the opposite to be true: the write speed on RAID5 is actually better! One theory is that in a four-drive RAID5, three drives' worth of data is being written per stripe, whereas in RAID10 only two drives are doing unique writes (the other two just hold mirror copies). It probably also helps that ATTO's large sequential transfers let the controller do full-stripe writes with parity computed in its write-back cache, so the classic read-modify-write penalty mostly never kicks in.
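To put rough numbers on that theory, here's a back-of-the-envelope sketch. It assumes large full-stripe sequential writes (parity computed in controller cache, no read-modify-write), and the 1 GB/s per-drive write speed is made up for illustration, not a figure measured in this test.

```python
# Theoretical sequential-write ceilings for a 4-drive array,
# assuming full-stripe writes. The per-drive figure is illustrative.
drives = 4
per_drive_gbps = 1.0  # assumed per-drive write speed, GB/s

# RAID5: each stripe spans (drives - 1) data drives plus one parity drive.
raid5_ceiling = (drives - 1) * per_drive_gbps

# RAID10: two mirrored pairs, so only drives / 2 unique data streams.
raid10_ceiling = (drives // 2) * per_drive_gbps

print(f"RAID5  sequential write ceiling: {raid5_ceiling:.1f} GB/s")
print(f"RAID10 sequential write ceiling: {raid10_ceiling:.1f} GB/s")
```

That 3-vs-2 ratio at least points in the same direction as the benchmark; the real numbers will depend on the controller, cache, and transfer sizes.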
Burnt up Network Card?
I haven't seen anything like this in decades. This Mellanox 100Gb card was causing the server to be unstable. Now, I don't know when the card went bad. The server was repurposed, and the 100Gb NIC was added at that point, so I don't know whether my co-worker put the NIC in already burnt, or whether it burnt up in this Dell PowerEdge R620.
Notice the discoloration in the lower-right corner; the capacitor there is also a different color.