ESXi, SSDs, and older RAID hardware

I decided to add an SSD drive to one of my lab ESXi boxes so I can utilize the Host Cache feature and test out the VMware VSAN beta (which requires SSD).  I simply mounted an SSD drive into an HP caddy, booted into the HP Offline Array Configuration Utility, made a RAID 0 (with only one drive it is the only option), and booted into VMware.

ESXi did not recognize the drive as an SSD.  There is a way to force ESXi to categorize a drive as SSD, but do I really want to do that?  To me, if the system doesn't recognize it, there must be underlying issues.

http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc_50%2FGUID-99BB81AC-5342-45E5-BF67-8D43647FAD31.html
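Per the KB above, the forcing procedure amounts to adding a SATP claim rule with the enable_ssd option and then reclaiming the device.  A minimal sketch of those steps (the device ID is a placeholder; run the same esxcli commands from an SSH session, or on the host itself, which ships a Python interpreter):

```python
import subprocess

# Placeholder device ID -- substitute the real one from
# "esxcli storage core device list".
DEVICE = "naa.xxxxxxxxxxxxxxxx"

def esxcli(*args):
    """Run esxcli on the host and return its stdout."""
    out = subprocess.run(["esxcli", *args], capture_output=True,
                         text=True, check=True)
    return out.stdout

# Add a claim rule that tags this device as SSD...
esxcli("storage", "nmp", "satp", "rule", "add",
       "--satp", "VMW_SATP_LOCAL", "--device", DEVICE,
       "--option", "enable_ssd")

# ...and reclaim the device so the rule takes effect.
esxcli("storage", "core", "claiming", "reclaim", "--device", DEVICE)

# Verify -- the listing should now show "Is SSD: true".
print(esxcli("storage", "core", "device", "list", "--device", DEVICE))
```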

Turns out it is a bug in the Smart Array P400 firmware when used with VMware.  It was also discovered that this RAID card will only run SATA devices at 1.5Gbps, where SAS runs at 3Gbps.  Interesting; so will there be much of an improvement if the SSD drive is limited by the 1.5Gbps connection?  Also, the P400 does not have TRIM support.
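Quick math says the link really is a ceiling for sequential work.  SATA/SAS use 8b/10b encoding (10 bits on the wire per data byte), so 1.5Gbps nets roughly 150MB/s.  A back-of-the-envelope check (the SSD figure is a ballpark assumption, not a measurement):

```python
# SATA/SAS links use 8b/10b encoding: 10 wire bits per data byte.
def link_ceiling_mb_s(gbps):
    return gbps * 1e9 / 10 / 1e6

sata_1_5g = link_ceiling_mb_s(1.5)  # ~150 MB/s
sas_3g = link_ceiling_mb_s(3.0)     # ~300 MB/s
ssd_seq = 250                       # assumed MB/s for a SATA-II-era SSD

print(f"1.5Gbps ceiling ~{sata_1_5g:.0f} MB/s, 3Gbps ceiling ~{sas_3g:.0f} MB/s")
print(f"Sequential throughput left on the table: ~{ssd_seq - sata_1_5g:.0f} MB/s")

# Caveat: small random I/O is latency-bound rather than bandwidth-bound, so
# the SSD should still beat spinning disks handily even on the slow link.
```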

So options:
-Attach the SSD drive to the onboard SATA controller on the motherboard, and MacGyver a way to mount the drive inside of the case but not in the hot-swap bay.   Oh wait, there aren't any onboard SATA connectors!

-Buy a PCI/PCI-e SATA card, and MacGyver the drive mounting.

-Use a PCI-e SSD drive.  Will a consumer-grade card work with ESXi?  Paying for an enterprise card on the VMware HCL for my lab use is not an option.

-Upgrade the P400 to a P410.   Will the volume just be recognized?  Second-hand cards, with cables and battery, run over $150.

Increasing the size of ESXi 5.x datastores

The victim is an ESXi 5.1 host running on an HP DL380 G5 with a Smart Array P400 RAID card.  I migrated the six-drive RAID 5 to a seven-drive RAID 5.  Logging into Virtual Center (vCenter Appliance v5.5) using the VMware vSphere Client, the configuration showed that the new space was now available.  Normally one clicks on the "Increase" button and adds an extent.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017662

For whatever reason, despite seeing the extra space, I was not able to add the extent.
I then used the vSphere Client to log directly into the host; from there I was able to add the extent and expand the datastore.
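For reference, when the GUI refuses, the same growth can be done from the host's shell with partedUtil and vmkfstools (per VMware's KB on growing local datastores).  A rough sketch; the device path and sector numbers are placeholders that must come from the partition table output:

```python
import subprocess

DISK = "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"  # placeholder device path

def run(*cmd):
    """Run a host command and return its stdout."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout

# 1. Print the partition table; note partition 1's start sector and the
#    disk's new total sector count.
print(run("partedUtil", "getptbl", DISK))

# 2. Resize partition 1 to end at the last usable sector (values below are
#    placeholders -- compute them from the output above).
start_sector, new_end_sector = "2048", "9999999999"
run("partedUtil", "resize", DISK, "1", start_sector, new_end_sector)

# 3. Grow the VMFS volume into the enlarged partition.
run("vmkfstools", "--growfs", f"{DISK}:1", f"{DISK}:1")
```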

New Line of Western Digital Sentinels

-Windows Storage Server 2012 R2
-Xeon CPUs: DS5100 2.3ghz dual core w/ 8gb ram; DS6100 2.5ghz quad core w/ 16gb ram, comes w/ 2nd AC power supply
-Upgradeable to 32gb of ECC ram!!
-RAID 0, 1, 5, & 10 options
-Marvell RAID accelerator
-Does have VGA ports & USB 3.0
http://www.youtube.com/user/WDBusinessStorage
Another interesting thing is that there is a SEPARATE 2.5" hard drive for the operating system on the DS5100; the DS6100 gets two drives for the OS.  This leaves more usable space on the data drives, and drive rebuilds won't take quite as long.


VMware VM migration withOUT shared storage

When VMware released ESXi 5.1, it added the ability to migrate VMs from one host to another without having shared storage: Enhanced vMotion/Enhanced Storage vMotion.  So yes, with two ESXi hosts that have only local datastores, a VM can be bounced back and forth.  However, the VM must be powered down first.  Microsoft has VMware beat here; MS allows running machines to be migrated.  Also, one must have vCenter installed, configured, and running.

Don't have vCenter but got a project that requires moving a bunch of VMs?  No problem!  Download and configure the vCenter Server Appliance.  A free 60-day trial is included.  It is NOT necessary to join it to Active Directory.

Update: with the Web Client one can migrate VMs LIVE across local storage.  It cannot be done with the thick/C# client.   FWIW, it is much faster than Microsoft's Live Migration, at least thus far, without having done true back-to-back testing.
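Under the hood, a shared-nothing migration is one relocate call that changes host and datastore together.  For the curious, a rough sketch with VMware's pyVmomi SDK; the hostname, credentials, and inventory names are placeholders, and this assumes vCenter manages both hosts:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Look up a managed object in the inventory by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "testvm")
dest_host = find(vim.HostSystem, "esxi02.example.local")
dest_ds = find(vim.Datastore, "esxi02-local")

# Setting host AND datastore in one RelocateSpec is what makes this a
# shared-nothing ("Enhanced") vMotion.
spec = vim.vm.RelocateSpec(host=dest_host, datastore=dest_ds,
                           pool=dest_host.parent.resourcePool)
WaitForTask(vm.RelocateVM_Task(spec))
Disconnect(si)
```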

Windows 2012 NIC Teaming & Hyper-V

Recently I was repurposing some used servers.  Much to my dismay, I am putting Windows 2012 on them for use in a Hyper-V cluster.  The first issue we noticed is that Server 2012 R2 cannot be managed by the 2012 Virtual Machine Manager.  So instead of loading R2 on these boxes, we are loading "R0".
Why are we loading Hyper-V?  The answer is MONEY!   VMware has been jacking around their licensing lately, especially for those providers who offer hosting services.  They actually charge by the gb of ram allocated!  Microsoft, meanwhile, gives you four free VMs if you run their hypervisor.  So buy a copy of Windows Enterprise and get four free VMs; on VMware one would have to buy four copies of Server.  OUCH!  Anyways, I digress.
I really like the built-in network card teaming feature in 2012.  The only way I know how to get to it is through the Fisher-Price Server Manager.  To create a team, just click on NIC Teaming, add the NICs, give it a name, and choose the type of team one desires (FYI, Switch Independent seems to work without any configuration changes necessary on the switch).  A scripted version of the whole flow is sketched at the end of this post.

After that is complete, one may notice that the team's status shows a 2gbps connection.  One may also notice that the individual NICs now only have the "Multiplexor Protocol" enabled, and all the normal protocols such as TCP/IP have moved to the team.

After adding the Hyper-V role, the team gets stripped of most of its protocols, a new one called "Hyper-V Extensible Virtual Switch" is added, and a new NIC called vEthernet shows up which carries the protocols and IP configuration.

TIP!  If you are going to team NICs together for use in Hyper-V and don't have a separate NIC for mgmt., create the team first, then add the Hyper-V role.  I did it the other way around and had some very strange results.
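For what it's worth, the same flow can be scripted instead of clicking through Server Manager.  A sketch of that order of operations driving the relevant PowerShell cmdlets (team, NIC, and switch names are placeholders; run elevated):

```python
import subprocess

def ps(command):
    """Run one PowerShell command (requires an elevated prompt)."""
    out = subprocess.run(["powershell", "-NoProfile", "-Command", command],
                         capture_output=True, text=True, check=True)
    return out.stdout

# 1. Create the team FIRST.  Switch Independent mode needs no switch-side
#    configuration, matching the behavior noted above.
ps('New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" '
   '-TeamingMode SwitchIndependent -Confirm:$false')

# 2. THEN add the Hyper-V role (a reboot follows).
ps('Install-WindowsFeature Hyper-V -IncludeManagementTools')

# 3. After the reboot, bind the virtual switch to the team interface;
#    -AllowManagementOS leaves a vEthernet NIC for management.
ps('New-VMSwitch -Name "HVSwitch" -NetAdapterName "HVTeam" '
   '-AllowManagementOS $true')
```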

Video card/monitor woes with high screen resolution

My bedroom PC needed to be retired.  It was a Dell Precision 630: dual Xeon 2.8ghz CPUs w/ 4mb cache, 400mhz bus; a real screamer in its day.  Alas, blown capacitors on the motherboard started to make the PC slow, cause odd video issues, and occasionally fail to boot.  Besides, I suppose it is time I finally let go of my last Windows XP machine.  This machine has a 256mb ATI Radeon AGP video card.  That card drives a Dell 27" (2560*1440 resolution) and an IBM 21" (1600*1200 resolution).

Its replacement is a Dell Precision T5400: Xeon L5420 2.5ghz quad core w/ 12mb cache, 16gb of FB-DIMM (PC2-5300) ram.  It is upgradeable to a 2nd CPU, and these "L" series CPUs only require 50 watts of power vs. the normal 75~100 watts.  It currently has an Nvidia GeForce 210 PCI-E video card w/ 1gb of ram.

While setting up the T5400, it was on a different 21" monitor running at 1600*1200 resolution.  Once I moved this machine to its new home, I was greeted with this:

The machine booted up into Windows 7 and was just fine, until I moved the screen resolution to its native 2560*1440; it then went pink and fuzzy again.  Dropping it to 1920*1080, it was fine.  I knew the monitor was fine since it ran at full resolution on the old PC.  I swapped out the DVI cable, same result.  I swapped to the VGA connector (DB15); the picture was clear, but the maximum resolution was 2048*1152.  Went back to the DVI port on the video card with a DVI->VGA adaptor: 2048*1152 was the result.  Switched to HDMI: 1920*1080 resolution.

I tried five different video cards, with nearly the same results!  Interestingly, the ATI-based cards displayed properly at POST; only the Nvidia cards were fuzzy and pink at POST.  So I had ruled out the monitor, the DVI cable, and the video cards as possible issues.  It was starting to look like a flaky motherboard or something else strange.

A friend of mine mentioned that perhaps I needed a dual-link DVI cable.  Huh?  What is this dual-link you speak of?
http://www.tomshardware.com/reviews/tft-connection,931-8.html
http://en.wikipedia.org/wiki/Digital_Visual_Interface
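The arithmetic makes it obvious in hindsight: single-link DVI is capped at a 165MHz pixel clock, and 2560*1440@60Hz needs well beyond that.  A rough check (the ~12% blanking overhead is an approximation of reduced-blanking timings):

```python
SINGLE_LINK_MHZ = 165  # one TMDS link; dual link roughly doubles this

def pixel_clock_mhz(w, h, hz, blanking=1.12):
    """Approximate pixel clock including ~12% blanking overhead."""
    return w * h * hz * blanking / 1e6

for w, h in [(1920, 1080), (2048, 1152), (2560, 1440)]:
    clk = pixel_clock_mhz(w, h, 60)
    verdict = "single link OK" if clk <= SINGLE_LINK_MHZ else "needs dual link"
    print(f"{w}x{h}@60Hz: ~{clk:.0f} MHz -> {verdict}")

# 2560x1440@60 lands around 250 MHz -- far past single-link DVI, which is
# why native resolution degraded while 1920x1080 (and the 2048x1152 max on
# the analog paths) stayed clean.
```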
I acquired one of these Dual-Link DVI cables, and tadah!   No more pink fuzzies and full resolution!!!

hard drive benchmarks.....v1.1

More not-so-scientific benchmarking....All tests were on an HP ML350 G5: dual quad core, Smart Array P400 w/ 512mb cache, dual NICs in multipathing mode, running VMware ESXi 5.1.

Two 146gb drives in RAID 1

Six 600gb drives in RAID 5

These next three are all going to a Western Digital Sentinel RX4100; its two NICs are bonded using LACP.
Raw Disk Mapping

Normal Windows File Share

VMDK

There are subtle differences in the three WD Sentinel tests; to be fair, I only ran each test once, so the differences COULD be statistical noise.
This last one is running the benchmark directly on the Sentinel itself.  Notice how much performance is being lost over the network.  I would be curious to find out if this is a result of how the network is configured, the NICs in the Sentinel, or just what can be expected with network traffic.  By looking at these results, one could conclude that the 5400rpm drives in this machine are NOT the bottleneck.
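One likely culprit is plain wire speed: LACP balances traffic across links per stream, so a single benchmark stream still rides one 1Gbps link, and after protocol overhead that is only about 110MB/s.  A quick ceiling estimate (the overhead factor is a rough assumption):

```python
LINK_GBPS = 1.0   # one member of the LACP bond carries a given stream
OVERHEAD = 0.94   # rough allowance for Ethernet/IP/TCP/SMB framing

ceiling_mb_s = LINK_GBPS * 1e9 / 8 * OVERHEAD / 1e6
print(f"Practical single-stream ceiling: ~{ceiling_mb_s:.0f} MB/s")

# If the local-on-Sentinel numbers are far above this ceiling, the network
# (not the 5400rpm drives) is the bottleneck -- consistent with the results.
```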

Cisco UCS C240

 
Cisco's "new" hardware. 
It has two SD card slots; the machine's "BIOS" kind of sort of lives on the SD card, and there is one blank partition on it if one chooses to load an OS onto it.  The 2nd SD slot is for keeping a mirrored copy.
 
Nearly all of the machine is configured via a web page, including the RAID card settings.  I could not find the "silence alarm" button.
 
 
 
The machine has an onboard SATA/SAS controller, which is very limited.  There is an option to upgrade it to a "mezzanine" card.  The mezzanine card will only allow the use of 8 drives and is not supposed to be used with any other RAID cards.  This particular customer changed their mind on the purpose of the machine, to where it needs more drives, so a Cisco UCS-9266 (made by LSI Logic) was ordered.  For whatever reason, Cisco was rather specific that in a 2-CPU system the card goes in slot 4, and in a single-CPU system, slot 3.
 
This UCS-9266 card has two ports on it; the backplane has four.  It will not recognize any drives beyond 12.  There are other cables offered by Cisco: a UCSC-6, UCSC-4, and UCSC-2; I was unable to discover what the differences between the cables are, nor find a picture.  I am also unsure which one the machine came with.  Apparently Cisco ships these machines with one of two different backplanes for the hard drives, and to use the UCS-9266 we needed the other one.  Supposedly the UCS-9271 will work, as it has 8 ports.
 
Another point of interest: when ordering, keep in mind whether one is getting a 16-drive-capable system or a 24-drive system.  On the 16-drive system, the last 8 hard drive slots do not have SAS/SATA connectors, despite there being drive blanks there.  Cisco will not sell the parts to convert a 16-drive system into a 24-drive one.
 


Western Digital Sentinel RX4100

The big brother to the DX4000.  This one comes with 4gb of ram and has the same processor.  I didn't study it real closely, but it appears to have the same motherboard as the DX4000, except this one has an actual VGA port, which the DX4000 lacks.  It appears to have a proprietary power supply, and on both models the motherboard slot where the SATA connector board plugs in looks like a PCI-E slot.

Watchguard XTM22 issues & throughput speed

The patient is a Watchguard (WG) XTM 22; it is the middle of the bottom tier of firewalls offered.  Some speed issues were being noticed on a Comcast High Speed connection with their "Boost" package.


The first part of troubleshooting was to find out what the connection is capable of, by directly connecting a PC to the Motorola DOCSIS 3 modem.

After a bunch of pointless troubleshooting steps, the modem was moved from Eth0 to Eth2.  What is interesting about this is that on this family of WGs, Eth0 & Eth1 are 100mbps connections while the rest are 1gbps.  The cable modem also has a 1gbps connection, but since one's internet speed is nowhere near 100mbps, it should not matter.

Great improvement!  However, WG could not provide an answer why this helped, nor why we were only getting less than half of the speed available.  More troubleshooting!  First, all of the fancy proxy features were turned off: no IPS, no AntiVirus, no WebBlocker, just straight-up filtered rules with an any-outgoing rule.

The issue appears to be in the proxy engine of the firewall.  Next, a test with a filtered HTTP/HTTPS rule.  A different test PC was used, so there might be a slight skew.


WG offers AntiVirus scanning, called Gateway AntiVirus (GAV).  It was turned off for this test.

Not much of a difference w/ GAV turned off.  This time IPS was turned off.
IPS appears to be the biggest culprit.  It should be noted that this is a SOHO firewall; it is meant for roughly 5 computers.  Turning on all of the security features such as WebBlocker, GAV, IPS, etc. really takes a toll on the little Atom CPU contained in this box.

One thing that WG has done in their newer versions is add a "reputation defense" feature.  WG maintains a hosted database of sites they consider both safe and harmful.  If this feature is turned on, every time a computer goes to a website, a lookup is done.  If the requested site is on the "safe" list, it bypasses all of the security features.  Likewise, if the site is considered "harmful", the site is blocked and no further scanning by the WG appliance is done.  This frees up many CPU cycles on the WG firewall.
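The logic amounts to a cached reputation verdict checked before any expensive scanning.  A toy illustration of the idea (purely hypothetical code, not WatchGuard's implementation; the lookup table stands in for their hosted database):

```python
# Hypothetical stand-in for the hosted reputation database.
REPUTATION = {"known-good.example": "safe", "known-bad.example": "harmful"}

def handle_request(host, full_scan):
    verdict = REPUTATION.get(host)          # one cheap lookup per request
    if verdict == "safe":
        return "allow, scanning bypassed"   # saves GAV/IPS CPU cycles
    if verdict == "harmful":
        return "block, no scanning needed"
    return full_scan(host)                  # unknown sites get the full pipeline

print(handle_request("known-good.example", lambda h: "scanned then allowed"))
print(handle_request("unrated.example", lambda h: "scanned then allowed"))
```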

hard drive benchmarks v1.0

Dell PowerEdge R510 w/ H700 controller (1gb cache); six 2tb 7200rpm SATA drives in RAID 5 vs. 146gb 15000rpm SAS drives in RAID 1

i7 Quad core; Seagate 750gb Momentus XT hybrid drive vs. SanDisk 128gb SSD

Western Digital DX4000 Sentinel; RAID 5, 2tb 5400rpm *4


Notice how much faster the PowerEdge R510 is than everything, even the single SSD!

HP DL320 G5

This is an odd "entry level" server that HP released to the market.  This one has a quad core Xeon 2.1ghz on a 1066 bus.  It maxes out at 8gb of ram (FAIL!!!).  It has an onboard SATA RAID controller, a B110i, a pair of 1gbps NICs, and two hot-swap bays.  We decided we would use this machine as a loaner VMware server.

Except we ran into some issues.  First of all, we like to load VMware onto SD or USB drives, which gives freedom for changing the local storage.  This machine has a vertical USB slot, so the thumb drive sits vertically; not a problem in a 2U, 3U, or tower, but this is a 1U!  Apparently HP makes a 90-degree adapter; one could also most likely use a 1' USB extension cord.  I did some digging at Microcenter and found a short 8gb thumb drive that, after some carving, would clear the cover.

The next issue was that VMware ESXi 5.1 (even the HP version) doesn't recognize the onboard NICs, so we had to source a PCI-E dual-port Intel NIC.  Then the next issue was the onboard RAID: VMware didn't recognize the drives as an array; it saw both SATA drives as individual drives (presumably because the B110i is driver-assisted RAID, so without a driver the OS just sees plain SATA disks).  I had a Smart Array P200 from a previous upgrade; however, this requires a SAS->SATA cable, plus, again, being only 1U, there is only so much real estate to deal with.
.......pictures and revising coming.....
The left arrow is pointing towards the P200 RAID card, notice the lack of space left.
The right arrow is pointing towards the USB thumb drive.

New life to old servers....part#1

The purpose of this project is to reuse a retired server and give it a 2nd life, experiment with FreeNAS, and give my VMware cluster shared storage.

The patient is a Dell PowerEdge 600SC; it has a PIV 2.8ghz (512k cache & 533mhz bus); sadly the board is limited to Northwood CPUs (aka 512k cache and 533mhz bus are the maximums).  It has been upgraded to 4gb of PC2700 ram.  Two 64-bit PCI-X network cards were added so that down the road we can play with multipathing and VLAN-ing iSCSI traffic.  This box has the ability to run six PATA drives, but sadly they are only UDMA33.  So I first bought an el-cheapo 4-port SATA card to run four 1tb SATA drives.  This card would only do JBOD or RAID 0; no good.  I could use the built-in RAID levels that Linux/FreeNAS offers with the ZFS file system, but I feel better with hardware doing the RAID operations.  I ran across a PNY/Netcell Revolution 5-port RAID card w/ 64mb of ram; surely this would be superior.

Turns out this company went extinct years ago.  I did find a good support site, though.  So, with four 1tb drives installed, here were the options I was presented with:
-RAID 1, 3.8tb, four drives (the ONLY option offered when adding all four drives at once)
-RAID 1, 1tb, two drives (when I attempted to add a 2nd RAID 1 using the remaining two drives, it kept telling me the array was unrecognized)
-RAID 3, 1.8tb, three drives; this is more or less what the rest of the world calls RAID 5 (striping with parity)
There is newer firmware available, so we will give that a try; another two things that were mentioned were an undersized power supply and slowing the drives down from 3Gbps to 1.5Gbps.  There is management software written for 32-bit Windows XP & 2003; I hope it will work off of MiniXP from Hiren's boot CD.

FreeNAS comes in both 32- & 64-bit flavors.  This CPU is 32-bit, so we have no choice; however, I did find out that some of the PIV Northwood CPUs did have 64-bit instruction code in them.  So the PowerEdge got the latest BIOS applied to it (which includes more CPU microcode).  I found just such a CPU through my recycling efforts; unfortunately, the pins were bent beyond repair.

FreeNAS also does something interesting with the boot drive: the ENTIRE drive will be consumed by the operating system.  The OS actually only consumes about 2gb (1gb for the OS and a 2nd 1gb for a backup image, plus some for logging and swap), but the point remains the same: the extra space is unavailable for use.  So even if one uses a 1tb drive to boot from, that drive will ONLY be used for the FreeNAS OS.  This is the reason many people boot off of IDE->CompactFlash adaptors or USB.  Sadly, this PowerEdge is too old to support booting from USB; I do have an IDE->CF adaptor but no 2gb CF card.  Therefore, I will reuse an old 80gb PATA drive.

.......pictures and revising coming.....

Western Digital DX4000 NAS

A pretty interesting little box: it is a four-bay SATA NAS, powered by a dual-core Intel Atom 1.8ghz processor and 2gb of ram.  The OS is Windows Storage Server 2008 R2 Essentials.  We discovered that it runs better with more ram (duh!).  This machine comes with a single 2gb DDR3 sodimm module; I replaced it with a 4gb module.  Unfortunately, it has been reported by others that it will not see 8gb.  :S  BTW, there were no stickers or any other "Warranty Void if Seal Broken" items disturbed by removing the cover!
Here is a good review of the appliance.

Another interesting point is that this particular one is the 16tb raw model; it comes preconfigured for RAID 5, aka 12tb usable.  The drives used are HGST, which is actually a Hitachi brand; Hitachi's drive business was absorbed by Western Digital not that long ago.
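The capacity math checks out for four drives with single parity:

```python
# RAID 5 usable capacity = (number of drives - 1) * drive size
drives, size_tb = 4, 4
print(f"{drives} x {size_tb}tb in RAID 5 -> {(drives - 1) * size_tb}tb usable")
```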

Pay to use SAS on a new server?!?! WTF HP?

HP ProLiant G8 servers: the "entry level" ones have a fully functioning RAID card, but if one wants to use SAS drives, one must purchase a $100 license key.

In addition to needing a license key for the ability to use SAS drives on the RAID card, if one wants redundant power supplies, there is a different "cage" and yet another license key required to make it work.
 
What a bunch of cr@p!   Looks like another way HP wants to squeeze more money out of us.