Backup Exec 2012 & VMware ESXi

Using Backup Exec 2012 (BES) to back up virtual machines (VMs) by talking directly to the VMware environment (aka agentless) works fairly well.  One can restore individual files without installing the agent on the VM, though the backup jobs continually warn about not being able to do granular restores without the agent.  Just to be clear: if one wants to restore the VMDK, click on the vCenter/ESXi host and hit restore; if one wants to restore files from the VM, click the server, then click restore.  Along those lines, when restoring at the VM level it looks like that is all one can restore; say one needs the VMX, one would have to do a complete VM restore.
Notice only the VMDK is there for restoring.
It also appears that one must back up the whole VM or nothing.  In this case I only want to back up the first VMDK but not the second.
There are checkboxes next to each VMDK, but one cannot uncheck them.
One can right click on a VMDK and choose the "include/exclude" option, which brings up a new window where it looks like a filter is created to exclude that VMDK; however, one cannot click the "OK" button.

Another side note: BES has no idea what to do with the vCenter Management Appliance.






OpenFiler...a Free NAS

OpenFiler (OF) is an open source file server.  One can use it for anything they would normally use a NAS for; it is much the same thing as FreeNAS.  This one supports CIFS, iSCSI (block level as well!), and NFS, and it can be joined to an Active Directory.  It doesn't do DLNA or any of the other media services that FreeNAS does.  It also doesn't do the ZFS file system.  Part of the reason why is that development of OF has stopped.  Version 2.99.2 is the most current release.

My victim is a Dell OptiPlex 755 desktop.  It has been upgraded to an Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz, 8GB of RAM, an Intel PCIe dual port NIC, and four hard drives: 250GB, 750GB, and a pair of 1TB drives.  The CPU and RAM may be overkill for this build, but I had it laying around, so this was a good way to put it to use.  The goal of this build is to be shared storage for another VMware lab.

One could install the OS on a 4GB USB thumb drive, but others advise against it.  Although if one did a manual install of the OS and put the swap partition on a different drive, I don't see why not.  That being said, OF was installed onto the 250GB SATA drive.  One of the two ports of the dual port NIC is used for CIFS and management; the other port and the onboard NIC are to be used on the iSCSI VLAN.
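With two ports on the add-in card plus the onboard NIC, it is easy to lose track of which eth device is which physical port.  Assuming ethtool is available on the OF console (usually true on a Linux box, but an assumption here) and that eth0 is just a placeholder name, blinking the port LED sorts it out quickly:
# blink the LED on the physical port the OS calls eth0 for 15 seconds
ethtool -p eth0 15
# repeat for eth1 and eth2, then label the ports accordingly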

I ran into a ton of issues; the software is really unpolished and is basically beta grade.  Rather than tell you all of my roadblocks, I will just tell you the fixes and what I learned.

-There are a bunch of updates to do via the GUI; reboot when done.
-When creating volumes, only 95% of the drive can be used.  Something is wrong with the code; some say they fixed it by actually editing the HTML behind the GUI, while others create their partitions either via the CLI or with GParted (see the partitioning sketch after this list).
-The final release is v2.99.2; however, the link on the SourceForge download page is for v2.99.1.  Here is a link for v2.99.2: http://sourceforge.net/projects/openfiler/files/
-One cannot create mirrored RAID sets in the GUI.  To fix the issue, run the following command from the console:
conary update mdadm=openfiler.rpath.org@rpl:devel/2.6.4-0.2-1
This installs the correct mdadm files.  Once that is installed, type in the next command:
ln -s /sbin/lvm /usr/sbin/lvm
This ensures that the GUI is populated correctly (see the RAID check after this list).
-Creating partitions can be frustrating because they never seem to take; the solution seems to be to either create them outside the GUI or have the starting cylinder be 80 higher than the suggested value (the fdisk sketch after this list shows where that comes in).
-There are more updates that are not available via the GUI; run these commands from the CLI:
conary update conary
conary updateall
conary update openfiler
-At one point, while trying to create a volume on the software RAID1, the page would never finish, and no matter what I did from that point I could not create a volume.  Either creating the partition with GParted or one of the updates took care of that issue.
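For reference, here is a rough sketch of creating a partition from the console instead of the GUI.  /dev/sdb is just a placeholder for whichever drive is giving you trouble, and the exact prompts vary a bit between fdisk versions:
fdisk /dev/sdb
# n  = new partition, p = primary, 1 = partition number
# at the "First cylinder" prompt, take the default (or add ~80 to it if the GUI refuses to see the partition)
# at the "Last cylinder" prompt, take the default to use the rest of the disk
# t  = change the partition type, fd = Linux raid autodetect (for the mirrored drives)
# w  = write the table and exit, then check the GUI again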
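After the mdadm fix above, it is also worth confirming from the console that the mirror actually exists and is healthy before trying to put a volume on it.  These are standard Linux commands; /dev/md0 is an assumption about what the array ends up being called:
# show all software RAID arrays and their sync status
cat /proc/mdstat
# detailed health of the mirror (swap in the real array name)
mdadm --detail /dev/md0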

I have it presented to a VMware lab as an iSCSI target; thus far the only thing it is doing is serving as a destination target for backups.  So far so good.  As near as I can tell there is almost no CPU usage during file transfers.  It lacks the ability to easily see network, CPU, and disk usage.
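Until that changes, the standard Linux tools on the OF console are good enough for a quick look.  Nothing OpenFiler-specific here, just a rough way to eyeball usage:
# live CPU and memory usage per process
top
# raw per-interface network byte/packet counters
cat /proc/net/dev
# raw per-disk I/O counters
cat /proc/diskstats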
......more to come....