ESXi v5.5 and SSDs

Two great new features in v5.5:

VSAN: This is a dramatic shift in storage methodology. For the past 10 years or so we have been pushing people to shared storage (SAN & NAS) for many reasons: flexibility, speed, shared access, dynamic sizing, etc. Shared storage was also required if one wanted to take advantage of high-availability features like vMotion (where your virtual machine can float between VMware hosts).

No longer does one need shared storage for a highly available VMware cluster!  VSAN is essentially another software SAN that lives on ESXi.  I did say "another"; there have been similar products like LeftHand/HP VSA, RocketVault, even FreeNAS and OpenFiler, just to name a few.  Even VMware tried this once before: its VSA appliance was given a trial run and ultimately a death sentence.  The VSA's weak points were that it was limited to three hosts and that its licensing cost neared what one could buy an entry-level SAN for.

So how does VSAN differ from VSA or any other software storage product?  It uses 3 to 8 nodes to contribute to the storage pool (note that any number of hosts can attach to the pool).  Each contributing server needs direct-attached storage plus an SSD.  Lastly, this product is integrated into the hypervisor; it is not an appliance/VM running on top of the hypervisor.
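To make the pooling idea concrete, here is a rough sketch (my own illustration, not VMware code; the function name and the simple raw-capacity-divided-by-replicas math are my assumptions) of how a VSAN-style pool aggregates direct-attached disks from 3 to 8 contributing hosts, and how mirroring each object for fault tolerance cuts the usable capacity:

```python
# Toy model of a VSAN-style storage pool. Illustration only: real VSAN
# placement is per-object and policy-driven, not a flat division.

def usable_capacity_gb(host_disks_gb, failures_to_tolerate=1):
    """Raw pool capacity is the sum of every contributing host's local
    disks; each object is stored failures_to_tolerate + 1 times, which
    divides the usable space accordingly."""
    if not 3 <= len(host_disks_gb) <= 8:
        raise ValueError("a VSAN pool needs 3 to 8 contributing hosts")
    raw = sum(sum(disks) for disks in host_disks_gb)
    return raw / (failures_to_tolerate + 1)

# Three hosts, each contributing two 1 TB local disks:
pool = [[1000, 1000], [1000, 1000], [1000, 1000]]
print(usable_capacity_gb(pool))  # 6000 GB raw -> 3000.0 GB usable
```

The point of the sketch: the SAN "box" disappears, but mirroring across hosts means you buy roughly twice the raw disk for the same usable space.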

At the time of writing this blog, the product has been in beta for quite some time, since VMworld 2013.  The beta is free and open to test.  I don't like that one NEEDS to have SSDs; I understand why, since the SSD tier is what makes replication fast and delivers the higher IOPS, it is just that enterprise-level SSD is expensive.  It is also a pain that one must have at least three hosts; the reason is to prevent 'split-brain syndrome' (where there is a disconnect, and both parties think they are the 'master').  Lastly, official pricing/licensing hasn't been released, but more than likely it will be an advanced feature that won't be offered in the Essentials/Standard packages.  Those three things make it harder for the SMB to deploy this feature.  It is easier to justify spending dollars on a box one can point to and say "this is what we spent $20k on" than on a PDF file with license numbers on it.
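The three-host minimum boils down to majority voting. A minimal sketch (my own illustration; VSAN's actual witness logic is more involved than a bare vote count): in a two-node cluster a network split leaves each side with exactly half the votes, so neither side can safely claim to be master, while with three nodes at most one partition can hold a majority.

```python
# A partition may act as master only if it holds a strict majority of nodes.
def has_quorum(partition_size, cluster_size):
    return partition_size > cluster_size / 2

# Two-node cluster splits 1/1: neither half has a majority, so the whole
# cluster stalls (and a naive ">= half" rule would let BOTH act as master,
# the classic split brain).
print(has_quorum(1, 2))                    # False for both halves
# Three-node cluster splits 2/1: exactly one side keeps quorum and carries on.
print(has_quorum(2, 3), has_quorum(1, 3))  # True False
```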

Speaking with a peer who was testing with Intel 910 PCIe SSD cards, their results showed that VSAN was much faster than their NetApp.  That supports the theory that moving storage back to the local host, closer to where it is needed, can be faster.

FLASH READ CACHE:  In a nutshell, if there is an SSD in the system, one can set aside a chunk of that drive to serve as a cache for each VMDK.  It is read cache only, not write cache.  It is set at the individual VM level; there is no cluster/vApp-wide setting that can be applied quickly, so if one has dozens of VMs this becomes a pain.
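A toy read-through cache in the spirit of Flash Read Cache (my own sketch, not VMware's implementation; the class and block-level interface are assumptions for illustration): reads are served from the fast tier when possible, while writes go straight to backing storage and only invalidate the cached copy, since the feature caches reads and never buffers writes.

```python
from collections import OrderedDict

class ReadCache:
    """Read-only LRU cache in front of a slower backing store."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block -> data (the "SSD" tier)
        self.backing = {}                   # the "spinning disk" datastore
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # LRU bookkeeping
            return self.cache[block]
        self.misses += 1
        data = self.backing.get(block)
        self.cache[block] = data            # populate cache on read
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

    def write(self, block, data):
        self.backing[block] = data          # write-through: no write caching
        self.cache.pop(block, None)         # drop any stale cached copy

c = ReadCache(capacity_blocks=2)
c.write(0, "a"); c.write(1, "b")
c.read(0); c.read(0)                        # first read misses, second hits
print(c.hits, c.misses)                     # 1 1
```

Because writes bypass the cache entirely, losing the SSD loses performance but never data, which is exactly the trade-off a read-only cache buys you.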

In my lab, I was about to deploy this feature; however, like all new v5.5 features, it can only be managed in the Web Client, and it can only be configured for VMs at hardware level 10.  Again, flipping a VM to hardware level 10 means it can no longer be managed via the thick/C# client.  Also, the SSD drive must be blank before enabling the feature.  In my lab I already had the SSD in use by Host Cache and had moved all of the VMs' swap files to the SSD.  Eventually I will get around to undoing those settings: formatting the SSD drive, turning on Flash Read Cache, then turning Host Cache back on, and relocating the swap files to SSD.  Once claimed, this SSD drive is unavailable for anything else; it serves Flash Read Cache and Host Cache, but it is not a datastore, so one cannot relocate VM swap files there or use it for anything else.
