AssuredSAN/Quantum QXS iSCSI/Fibre Channel SAN/NAS

I had the chance to re-purpose a Quantum QXS 1200 2U12 4th Generation storage array.  I am just learning here, so take everything I say with a grain of salt.  This post will continue to develop as I learn and test some more.

These are actually manufactured by Dot Hill Systems, which Seagate acquired.  Quantum ships them with some special firmware and targets them at its StorNext platform.

The one I am working on, as previously mentioned, is a QXS 2U12, which is equivalent to an AssuredSAN 4xxx series.  It has twelve 3TB 3.5" 7200rpm SAS drives in it (I have seen other models around with 10TB SATA drives).  It has two controllers, and each controller has a 6GB cache module with a supercapacitor.  Each controller also has four SFP ports, a SAS expander, a micro USB port for serial-based CLI configuration, and a 3.5mm headphone jack that is actually another serial connector that drops you into an ASCII-driven menu.

The Converged Network Adapters can operate in 10Gb iSCSI mode, 16Gb Fibre Channel mode, or both!  I find that flexibility a major plus; most vendors require a different controller altogether.  The ports need to be configured to accept the type of GBIC installed.  Even after a firmware upgrade I was unable to make the change via the web GUI, but I did find it in the ASCII menu.  The characteristics can also be changed from the CLI; in my case I set the array to hybrid mode, where the first two ports are Fibre Channel and the second two are 10Gb iSCSI.  To set all ports on both controllers to the iSCSI protocol: # "set host-port-mode iSCSI".  To set the first two ports to FC and the second two ports to iSCSI on both controllers: # "set host-port-mode FC-and-iSCSI".
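
For reference, a minimal CLI session for the hybrid change looks roughly like the following.  This is a sketch, not a transcript: "show ports" is a standard AssuredSAN/QXS CLI command for listing host-port configuration, but the exact output, and whether the firmware asks for a controller restart afterwards, may vary by release.

# show ports
(note the current mode and what each port reports for its installed GBIC)
# set host-port-mode FC-and-iSCSI
(hybrid mode: the first two ports on each controller become FC, the second two iSCSI)
# show ports
(confirm the new configuration; on some firmware the change only takes effect after the controllers restart)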

Firmware upgrades: the system must be error free beforehand.  I had FC GBICs inserted in the controller while it was configured as iSCSI.  The system complained about the configuration profile, and the attempted firmware update left the system in a quasi-failed state.  The primary controller took the upgrade but then appeared to be bricked.  Fortunately, removing the offending GBIC and rebooting via the secondary controller allowed the updates to apply successfully.
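
Given that experience, it seems worth running a quick health check from the CLI before pushing firmware.  A minimal pre-flight sketch, assuming the standard AssuredSAN CLI commands behave the same on the QXS firmware:

# show system
(overall health should report OK, with no degraded components)
# show events
(scan for recent errors or warnings that would block the update)
# show versions
(record the current bundle versions before you start)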

Another FYI: for some reason PuTTY would not work for serial-based communication, despite trying several baud rate settings.  However, Tera Term works flawlessly.  No idea why.
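
If it helps anyone else, the serial settings I understand this family of arrays to expect on the USB CLI port are below; treat them as an assumption to verify against the setup guide for your firmware rather than gospel.

Baud rate:    115200
Data bits:    8
Parity:       none
Stop bits:    1
Flow control: none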

ASCII menu on the left, via the "headphone jack"/Service-1 port; CLI via the USB port on the right.


Speed test against a Windows Server 2019 host connected via 8Gb QLogic Fibre Channel

Speed test against a Windows Server 2019 host connected via 16Gb QLogic Fibre Channel

Speed test against a Windows Server 2019 host connected via 10Gb iSCSI
Notice that the NIC utilization never even hits 6Gbps.  This is a single DAC cable going from SAN to server.

What Device Manager shows when both FC ports are connected.

Rear of the unit showing both controllers. 

Each controller node contains three "special" 2GB DDR3 RAM modules and a supercap for keeping that cache memory alive during a power outage.  The OS seems to be loaded onto CompactFlash cards.  I am told the supercap keeps the cache data alive long enough to write it out to the CF card, and the OS itself is stored in non-volatile RAM.

Intel SR0NX Celeron 725C, single-core CPU @ 1.3GHz

Default credentials are username: "manage" and password: "!manage"
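
Once a management IP is set, the same CLI should also be reachable over the network with those credentials; the management controllers on this family of arrays normally accept SSH (and telnet on older firmware).  The address below is just a placeholder.

# ssh manage@192.168.0.100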

TrueNAS revisited... iSCSI testing

As mentioned in previous posts, the new TrueNAS is a PowerEdge R320 with 48GB of ECC RAM, four HGST 10TB SATA 7200rpm hard drives, an NVMe drive, and a 10Gb NIC.  The primary purpose of this box is to be a backup target; however, it might get used as a shared iSCSI target from time to time, so it seemed worthwhile to do a bit of benchmarking.

The test client is a Windows Server 2012 R2 VM living on VMware ESXi v7, installed on an HP ML350 G8 connected via a 10Gb NIC.

For a baseline, here is the OS drive, which lives on three Seagate 6TB 7200rpm SAS drives in a RAID5 configuration, connected via an HP Smart Array P420 with a 2GB cache.


TrueNAS was set up for iSCSI: a LUN was created on the RAID10 SATA drives and connected to VMware, a datastore was created on it, an additional virtual disk (thick provisioned, eager zeroed) was added to the VM, and the same test was run.
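
For anyone recreating this, the ESXi side of the hookup boils down to a few esxcli commands plus creating the datastore in the host client.  This is a sketch, not my exact session; the adapter name (vmhba65) and the TrueNAS portal address (192.168.10.20) are placeholders you would swap for your own.

# esxcli iscsi software set --enabled=true
(enable the software iSCSI initiator if it is not already on)
# esxcli iscsi adapter list
(find the vmhba name the software initiator was given)
# esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.20:3260
(point dynamic discovery at the TrueNAS portal)
# esxcli storage core adapter rescan --adapter=vmhba65
(rescan so the new LUN shows up, then format it as a datastore from the UI)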


TrueNAS was again set up with a LUN, but this time on the single PCIe NVMe drive.


Just for comparison, this is the very same Crucial NVMe drive used as a local drive in a VMware server.

Here is a shot of what the 10Gb NIC utilization looks like during each of the tests.  Interestingly, the maximum transfer rate is roughly the same for each test, which would indicate we are reaching maximum network throughput; however, the benchmark results suggest there is still room for more speed.
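As a rough sanity check on that: 10Gbps is 1.25GB/s of raw line rate, and after TCP and iSCSI framing overhead a single 10Gb link realistically tops out somewhere around 1.1GB/s, so any benchmark number much above that is likely being served from cache (RAM on the TrueNAS box or on the client) rather than across the wire.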


FURTHER UPDATES:
Finally got around to swapping the E5-2450 v2 CPU for an E5-2418L v2; according to Intel's ARK pages the maximum CPU wattage is 95W vs 50W.  At idle, usage went from 100 watts down to 90 watts according to the Kill-A-Watt, and from 94 watts down to 70 watts according to the iDRAC.  Quick iSCSI and SMB benchmarking shows no slowdown in file transfers; not that I expected any, since file compression and dedupe are turned off, so the CPU doesn't have much to do.
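For what it's worth, the 10-watt drop at the wall works out to roughly 10W x 24h x 365 = about 88kWh per year on an always-on box, so the swap is a small but real saving.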