Repurposing/reformatting used NetApp Drives (520 -> 512 sectors)

TL;DR:

From a Linux System:

sg_scan -i  <--shows what drives are attached to the system

sg_readcap /dev/sg#  <--shows information on the drive in question

sg_format -v --format --size=512 /dev/sg# --six  <--reformats the drive in question to 512 sectors


NetApp and other manufacturers often format their drives with a non-standard sector size; 520 bytes is the most common.  Unfortunately, most computers and operating systems can't deal with them; they only speak 512-byte (or 4K) sectors.  In my case I have a bunch of Toshiba 900GB 10K RPM SAS drives with NetApp firmware on them.  Interestingly, the SAS controller doesn't even show them at POST.  Using my trusted PartedMagic bootable thumb drive, the normal tools ("Disk Health", "Partition Editor", and "Erase Disk") don't even see the disks, though they do show up in the hardware inventory.  The solution is to do a low-level format of the drives, resetting them to 512-byte sectors.

Open a command prompt and type "sg_scan -i"; this shows what drives are attached to the system.  The physical drives appear as "sg" followed by a number; in my case it shows sg0 and sg1.

Next, verify the drive's specifications with "sg_readcap /dev/sg0"; in my case it shows the drive is set to a sector size of 520.

Then type "sg_format -v --format --size=512 /dev/sg0".  In my case this returned an "illegal request" error.  Normally the command would work as-is, but something is peculiar about how NetApp firmware treats these drives.  To get around it, add the "--six" parameter.  This switch uses 6-byte MODE SENSE/SELECT commands to probe the disk instead of the 10-byte versions; I have no idea why that matters, but it works.

The process took me about 7 hours for a pair of 900GB 10K drives on a PCIe LSI SAS2008 Fusion controller.  After a reboot, the drives show up at POST and are usable by the operating system.
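The steps above can be sketched as a single script.  This is a hedged sketch: it assumes the sg3_utils package is installed, /dev/sg0 is only an example device name, and sg_format is destructive, so double-check the device before running it.

```shell
#!/bin/sh
# Sketch of the 520 -> 512 reformat workflow (sg3_utils).
# /dev/sg0 is an example; confirm the right device with sg_scan -i first.
DEV=/dev/sg0

sg_scan -i          # list the SCSI generic devices attached to the system
sg_readcap "$DEV"   # confirm the current logical block size (520 on these drives)

# Low-level format to 512-byte sectors. --six uses 6-byte MODE SENSE/SELECT
# commands to probe the disk, which NetApp-firmware drives seem to require.
sg_format -v --format --size=512 --six "$DEV"

sg_readcap "$DEV"   # verify the drive now reports a 512-byte block size
```

Expect the format itself to run for hours; the drive is unusable until it completes.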


Bonus Linux commands

type: 

"smartctl --all /dev/sg#" shows further details on the drive, including power-on hours, power cycles, etc.

"smartctl -H /dev/sg#" shows the overall health status.

"smartctl -t short /dev/sg#" (or "-t long") runs a SMART self-test; FYI, the short test takes about 5 minutes and the long test about 2 hours on a 900GB 10K SAS drive.
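The same checks, gathered into one hedged transcript (assumes smartmontools is installed; /dev/sg0 stands in for your drive):

```shell
# SMART checks with smartmontools; /dev/sg0 is an example device name.
smartctl --all /dev/sg0        # full detail: power-on hours, power cycles, etc.
smartctl -H /dev/sg0           # overall health assessment
smartctl -t short /dev/sg0     # start a short self-test (~5 minutes)
# smartctl -t long /dev/sg0    # or a long self-test (~2 hours on a 900GB 10K SAS)
smartctl -l selftest /dev/sg0  # read back the self-test results afterward
```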

Free Ad Blocking

Probably old news to most readers by now, but still deserving of a quick write-up.  

Pi-Hole is free (donation-ware) software that was initially designed to run on Raspberry Pi hardware.  Its popularity created enough demand that it was also developed for Linux and container environments.  Pi-Hole is a local DNS (and optionally DHCP) server that compares all DNS lookups against publicly maintained lists of ad servers.  When browsing the web, or even YouTube, many of the ads are replaced with a blank box.

Think of it like this: a user goes to a webpage, the computer does a DNS lookup, gets back the IP, and then displays the page.  On that page there are calls to other internet servers that host the ads to be displayed, each of which also requires a DNS query.  If a Pi-Hole is acting as the DNS server and that ad server is on its list, then instead of the user's computer getting the IP information back, it gets an "I can't find this server" response.

Why run a DNS filter?  Well, ads can be very annoying, so there is that.  Also, a sizeable chunk of malware comes through "side-jacking" or "ad-jacking", where the ads being served actually contain malicious code.  Then there is bandwidth: simply not downloading the ads can yield real savings.
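The blocked lookup is easy to see from any client.  A hedged example, assuming a Pi-Hole answering at 192.168.100.2 (a made-up address) and a domain that happens to be on its blocklist:

```shell
# Ask the Pi-Hole directly for a known ad domain (192.168.100.2 is hypothetical).
dig @192.168.100.2 doubleclick.net +short
# Depending on the blocking mode configured, a blocked domain comes back as
# 0.0.0.0, NXDOMAIN, or the Pi-Hole's own address -- never the real ad server.
```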

I don't have any Raspberry Pi hardware and didn't want to invest in that ecosystem, so I first tried a Windows 10 VM running Docker for Windows.  I didn't have any luck; something to do with networking.

I then installed it as an application on an Ubuntu VM, following a YouTube video from "Craft Computing".  I ignored all of the recursive-DNS material in favor of the more basic setup.

Installation Steps:

Install Ubuntu Server 20.04 (https://ubuntu.com/download/server); my VM has just 1 vCPU, 4GB RAM, and a 20GB disk

Install Pi-Hole - sudo curl -sSL https://install.pi-hole.net | bash

Set the Web Admin Password - pihole -a -p [password]
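A few pihole commands are handy for sanity-checking the install afterward (these are from the standard pihole CLI; exact output is version-dependent):

```shell
pihole status              # confirm the DNS service is up and blocking is enabled
pihole -q doubleclick.net  # check whether a domain appears on the loaded blocklists
pihole -up                 # update Pi-Hole itself when a new release ships
```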

My home lab (and maybe yours too) already has a local DNS server, and I'd rather not migrate everything to only a Pi-Hole DNS server.  Not a problem: just set up a "Conditional Forwarder".  Under Settings -> DNS -> Conditional Forwarder:

Local Network = your network address IE: 192.168.100.0/24

IP address of your DHCP server (router) = your DNS server (yes it says DHCP server and router...)

Local domain name = your.domain.local 
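For reference, the same three settings land in Pi-Hole's config on disk; on Pi-Hole v5 that is /etc/pihole/setupVars.conf.  The values below mirror the examples above, with a guessed address for the existing DNS server:

```shell
# /etc/pihole/setupVars.conf -- conditional forwarding (Pi-Hole v5).
REV_SERVER=true
REV_SERVER_CIDR=192.168.100.0/24    # Local Network
REV_SERVER_TARGET=192.168.100.1     # your existing DNS server (example address)
REV_SERVER_DOMAIN=your.domain.local # Local domain name
```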


After two months of usage, I have almost no issues to speak of.  Once in a while the first result of a Google search is a shopping ad for the very item being searched for, which may be exactly what one wants, and it gets blocked.  Not a big deal.  I have never had to log back into the Pi-Hole to restart the system or change settings.  It doesn't catch all ads, but the basic settings catch a large chunk; ad servers are, after all, another game of "whack-a-mole".


LoadBalancer.org

LoadBalancer.org is one of many layer 7 load balancers on the market today.  I got turned onto them because they have an alignment with Cloudian object storage devices.  Think of Cloudian as on-premises S3 storage buckets; they are great for "cheap and deep" storage.  Cloudian nodes are not load balanced natively, so during jobs such as backups where a node is the target, a single node can get overworked while the other nodes sit bored.  This is normal behavior when relying on round-robin DNS to distribute load.

The LoadBalancer.org product is significantly cheaper than some of its competitors.  Their support is based out of the UK, so it is a bit more difficult to get a support person on the phone from a different time zone.  They offer both virtual and physical appliances.

I did a proof of concept with the virtual appliance, running on a retired VMware ESXi host that had 10Gb networking.  From beginning to end, I had it functioning in roughly an hour.  Super easy and straightforward; instructions for many use cases are laid out on their support site.  The POC did its job, alleviating the "hot node" issue and shortening backups because multiple Cloudian nodes could do work at the same time.

The physical appliances are rebranded Dell PowerEdge servers.  I had several problems bonding the 10Gb NICs on our appliance.  Support was not much help, as they know their product really well but not so much the network switch side of things.  Our issue ended up being odd behavior out of the Cisco Nexus 9K.  Word of advice: when re-using ports on a Cisco 9K, run the "default interface" command on the port before configuring it for its new purpose.  Something was sticking in the configuration, and the NIC team was not cooperating until this was done for each port.
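For reference, wiping a reused port back to its defaults on NX-OS looks like this (the interface name is only an example):

```shell
# NX-OS: clear all leftover configuration from a port before repurposing it.
configure terminal
default interface Ethernet1/10   # example port; substitute the one being reused
```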

When using multiple VLANs, the appliance breaks out each VLAN as a separate interface.  Think of it, and manage it, just as if it were a separate physical NIC.  Also, when looking at performance graphs, keep in mind the difference between MBps and Mbps.
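The units difference is a factor of 8 (bytes vs bits), which is easy to sanity-check:

```shell
# MBps (megabytes per second) to Mbps (megabits per second): multiply by 8.
mbytes_per_sec=125
echo "$((mbytes_per_sec * 8)) Mbps"   # 125 MBps on a graph is a saturated 1GbE link
```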








Repurposed/Recycled Sophos Firewall

This Sophos XG210 came to me to be recycled.  After pulling the cover off, I noticed that it has DDR3 RAM, an SSD, and a VGA port.  I said to myself: "Hey, wait a minute, this looks like a normal PC".  I hooked up a VGA cable and a USB keyboard and powered it on; I was greeted with a very familiar American Megatrends BIOS, and then it booted into a specialized Linux OS.  Next I used my PartedMagic bootable USB drive to erase the drives and install Windows 10 as a proof of concept.  Windows saw all the hardware, including all the NICs!

Intel Celeron G1820 CPU at 2.7GHz

8GB DDR3 RAM

Intel 120GB SSD

six 1Gbps network ports

USB 3.0 ports

There is an internal PCIe slot that could be used, though it might require some creativity.