I/O Benchmarks, VMware, RAIDs, & SATA vs SAS

The need for more space and less power consumption in my home lab has arisen.  I pulled out a RAID5 array composed of four 15k 146GB SAS drives in favor of a RAID1 array composed of two Western Digital 2TB Red NAS drives.  This server now has a 256GB SSD, a RAID1 array of two WD 1TB Enterprise drives, and a RAID1 array of two WD 2TB Red drives.  The hope is that the slightly faster Enterprise SATA drives and the decent RAID card will be sufficient, and the NAS drives should be fine for data.

I ran some benchmarks, this time with a larger data set, since the 512MB cache on the RAID card throws off the results.  We need to test the drive arrays, not the RAID card cache.
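The post doesn't say which benchmark tool was used, so the following is only an illustration of the idea: run the test from inside a guest VM with a working set well past the 512MB controller cache.  The file path and sizes are made up.

    # Hypothetical fio run inside a Linux guest whose virtual disk lives on
    # the array under test.  A 4GB working set is roughly 8x the 512MB
    # controller cache, so most reads must come from the disks, not the cache.
    fio --name=array-test \
        --filename=/mnt/testvol/fio-testfile \
        --size=4G --direct=1 --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --runtime=60 --time_based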







[Benchmark results: four-drive RAID5, 146GB 15k SAS]

[Benchmark results: two-drive RAID1, 1TB WD Enterprise SATA]

[Benchmark results: two-drive RAID1, 2TB WD Red NAS SATA]

I am really surprised that the WD Red NAS drives outperformed the WD Enterprise (WD1003FBYZ) drives.  Both have 64MB of cache; the Reds run at 5400~7200 RPM (IntelliPower) and the Enterprise drives at 7200 RPM.

I/O Benchmarks, VMware, SSDs, & 6Gbps SATA

I have been a big fan of sticking an SSD into VMware hosts for a long time, using Host Cache (where the ESXi host uses the SSD for its swap location) and redirecting the VM swap files to the SSD as well.  In my home lab I upgraded from a Corsair Force 90GB SSD purchased back in 2012 to a Crucial M4 256GB.  A bit more room, and I figured a 3+ year newer drive might have some performance gains... turns out, not really.
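The post doesn't show the host-side configuration, so here is only a rough sketch of what the system-swap piece looks like from the ESXi shell.  The datastore name is made up, and the exact option names vary between builds, so verify them with "esxcli sched swap system set --help" before using.

    # Hypothetical sketch; "ssd_datastore" is a placeholder datastore name.
    # Allow the host's system swap to land on the SSD-backed datastore,
    # and allow host cache to be used for it as well.
    esxcli sched swap system set --datastore-enabled true --datastore-name ssd_datastore
    esxcli sched swap system set --hostcache-enabled true

    # Show the current system swap settings to confirm the change.
    esxcli sched swap system get

Redirecting the per-VM swap files (the .vswp files) to the SSD datastore is configured separately, for example under the host's Virtual Machine Swapfile Location setting in the vSphere Client.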






I also picked up a cheap SATA 6Gbps controller hoping that might pick up some more speed... turns out, meh, not really.  Also, most SATA controllers are not supported by ESXi v5.5 and newer; I had to take advantage of a community hack to get it to work:
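The post doesn't name the hack, so treat this as an assumption: a common community fix for unsupported AHCI SATA controllers on ESXi 5.5 was to lower the host's acceptance level and install a community-built mapping VIB (such as the SATA-XAHCI package from the V-Front depot) that ties the controller's PCI ID to the stock ahci driver.  The bundle path below is a placeholder, not the actual package used here.

    # Hypothetical sketch from the ESXi shell; substitute the real
    # community offline bundle you are installing.
    esxcli software acceptance set --level=CommunitySupported
    esxcli software vib install -d /vmfs/volumes/datastore1/sata-xahci-offline-bundle.zip
    # Reboot so the new PCI ID -> ahci driver mapping takes effect.
    reboot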