HP releases StoreVirtual LeftHand OS 12.5

Just this morning I noticed that LeftHand OS 12.5 was now available for the HP StoreVirtual platform. I was surprised by this as I hadn't seen any announcement about it.

Anyway, taking a look at the release notes, this minor update includes the following new enhancements:

  • Two-node quorum
  • Support for iSCSI split network
  • Support for VSA on RHEL 6.6 and CentOS 6.6
  • MEM driver for vSphere 6.0
  • SCVMM 2012 R2 support

Additionally there are a number of bug fixes.

See here for the full release notes: http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04735172&lang=en-nz&cc=nz

The most interesting enhancement is the two-node quorum, which I'm hoping to test shortly. This is great news for people wanting to deploy the VSA in ROBO environments, and it's a timely release given the upcoming announcement of VSAN 6.1 at VMworld 2015, which is rumored to bring metro clustering (something StoreVirtual already supports).

Anyway, stay tuned for further updates on LeftHand OS 12.5!

HP StoreVirtual LeftHand OS 11.0

Just this morning, while browsing the StoreVirtual CMC updates FTP share, I noticed an announcement file detailing the new LeftHand OS 11.0.

Anyway, it is due for release in October according to the blurb and will come with the following new features:

  • Sub-volume auto-tiering with Adaptive Optimization (Exclusive feature for HP StoreVirtual VSA)
  • Support for Microsoft System Center Virtual Machine Manager
  • Smarter Centralized Management Console and Online Upgrades
  • Easier configuration and more granular control of application-managed snapshots with Application Aware Snapshot Manager for Microsoft and VMware environments

The first thing that caught my eye was the auto-tiering! This would be a great addition to the VSA product, and from what I have found on the internet it seems that the auto-tiering will be fully automated rather than relying on a schedule like the 3PAR gear. Sadly, at this stage the feature won't apply to the existing physical models, which is understandable. It would be nice to breathe some life into them though!

Also, the updates to the CMC will be greatly welcomed. Let's hope that it works better than the previous iterations, and fingers crossed that it brings in full support for internet proxy authentication!!!

Anyway, when it’s released i’ll test the upgrade and post about the experience šŸ™‚

HP LeftHand CMC 10.0 Changes

HP’s Lefthand / P4000/ StoreVirtual product has had a major version upgrade with it’s announcement of Lefthand OS 10.0. This release will be the first to drop the SAN/iQ moniker in favor of the company name that created the product before HP’s aquisition a few years ago.

The release of this software upgrade was slated for the 4th of December, if I'm not mistaken, but interestingly their FTP site has had the updated patches/upgrades available since the 26th of November.

I had the chance to download the new files (with some difficulty; I get the feeling their FTP site is taking a hammering at the moment!) and have since installed the new version of their Centralised Management Console (CMC).

Going into this upgrade I had high hopes for the new internet proxy support for downloading patches, something that has really let the product down in the past, in my opinion. In any case, the new version now allows you to specify a SOCKS proxy…yay!

Now, the bad news…

It does not allow you to specify any authentication for the proxy…argh!!!! In our environment this is a real pain from a security perspective and as such is not going to help. For now it will be back to downloading the media from an alternative location and copying it to the CMC. This in itself can prove to be tedious, particularly when the CMC decides that the downloaded media is corrupt and needs to re-download it! Oh well…baby steps eh 😛

CMC 10.0 Proxy Setting

On a more positive note, the new version now supports Active Directory integrated authentication. So far I can't see where this is configured, but I'm guessing you'll need to have your nodes upgraded to version 10 first…I'll post an update on this shortly.

Further to this, there is now an additional SAN status page panel showing all active tasks, which should prove extremely useful; this was something lacking previously, especially when more than one administrator is managing multiple clusters from a single CMC. Again, I'll post more on this when I see it in action. In the meantime here's a shot of the Active Tasks pane; not very exciting, but it gives you an idea.

CMC 10.0 Active Tasks

So that seems to be about it for now; I'd be keen to hear from anyone else who has found new features that I've missed. Once I've fully downloaded all of the new patches I'll upgrade one of my test VSA clusters and post about that; hopefully I'll then be able to integrate the cluster security into AD 🙂

Thanks for reading!

vSphere Home Lab: Part 1

After getting the VCAP5-DCD exam out of the way I started to work out what hardware I'd buy to create a new home lab for my DCA study. Until now I have used my main gaming rig as a home lab, running an 8-core AMD FX CPU and 32GB of RAM. While this has served me well, it isn't ideal and doesn't have the flexibility I'd like.

I started trawling through numerous blogs about other home labs and liked the idea of using the Supermicro uATX motherboards that support the E3 Xeons and IPMI. However, after a lot of looking, mostly on Amazon (the only place I could find boards here in NZ was going to cost me almost $400 NZD per board…), I gave up. It was going to be too risky ordering PC gear from overseas without the confidence that I'd get the right memory modules, etc. Don't get me wrong, I'd love to have some, in particular the MBD-X9SCM-iiF as it has two onboard 82574L LAN ports as well as a dedicated IPMI port. But for what I needed I could not justify almost doubling my budget, particularly as the E3 Xeons, such as the E3-1230, would set me back almost $400 NZD apiece too.
Instead I opted for more AMD-based gear 🙂
Here is the spec I came up with:

3 x AMD FX-6100 six-core CPUs (3.3GHz) – $180 NZD each

3 x Gigabyte GA-78LMT-USB3 – nice uATX form factor, supports FX CPUs, and can take up to 32GB of DDR3 with support for unbuffered ECC RAM – $115 NZD each

3 x Coolermaster 343 uATX cases (these are pretty cheap and are reasonably small) – $97 NZD each

6 x OCZ Vertex 2 120GB SSDs – I got these because they were on special for $114 NZD each 🙂

6 x 8GB DDR3 1333MHz non-ECC – these were about $65 NZD each. Couldn't afford to go with ECC and didn't feel I really needed it…when money permits I'll be upgrading each host to 32GB of RAM

3 x HP NC364T 4-port GbE NICs – I'm using some spare ones from work

2 x HP ProCurve 2910al-48G switches – another loaner from work 😛 We had these surplus and aren't planning on deploying them anywhere

3 x HP P4000 VSA licenses – yet another thing I was lucky to get from work; we had purchased three licenses a while back and ended up putting in physical P4300 SANs instead, so I figured these would be perfect in a home lab!

Here’s a few pics of the gear so far. Excuse the poor quality photos…my HTC Sensation’s camera is not working that well running a beta JB ROM šŸ™‚

HP Procurve 2910al-48G

HP switches – sweet!!!!

My three vSphere hosts

Cool, VCAP-DCA here I come!

The guts

Cheap and cheerful, no frills at its best! Notice I haven't installed the additional NIC card or the SSDs…where are my adapters!!!!!

All up I’ve spent close to $2500 NZD which isn’t too bad, but certainly not a cheap exercise…oh well, it’s going to be a great tool for learning so it’s worth it for that!

Bear in mind that most of these parts won’t be on the VMware HCL but this isn’t a production environment, and as such they don’t need to be.

So, I’ve got all the gear mostly built other than waiting on some 2.5″ to 3.5″ SSD drive adapters (the cases don’t have 2.5″ bays šŸ˜¦ ) and I screwed up with one of the cases. I accidentally purchased the wrong model (I initially purchased only one case as a test) and didn’t realise that the power supply included didn’t have a 4+4 12v molex plug for the cpu power…argh! I’ve got an adapter cable coming that will fix the problem though. I also have three 4gb USB sticks on order too for the hypervisor to boot from. This will mean I can allocate as much of the SSD storage as possible to the VSA’s.

At this stage I think I'll configure the VSA cluster volumes using Network RAID-5 (for those of you who haven't used the HP LeftHand gear, it supports various Network RAID levels when using multiple nodes) as this should give me close to 400GB of usable SSD storage. I'll enable thin provisioning on both the datastores and in the VSAs, so I should get a reasonable number of VMs on it.
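If you're wondering where the "close to 400GB" figure comes from, here's the rough back-of-the-envelope maths. The parity overhead and per-node usable percentages below are my own assumptions rather than HP-published numbers, so treat it purely as an estimate:

```python
# Rough usable-capacity estimate for a 3-node VSA cluster on Network RAID-5.
# Assumptions (mine, not HP's): each VSA presents roughly 85% of its local
# SSD capacity after overheads, and Network RAID-5 keeps about (N-1)/N of the
# cluster capacity as usable space.
nodes = 3
raw_per_node_gb = 2 * 120                       # two 120GB SSDs per host
presented_per_node_gb = raw_per_node_gb * 0.85  # ~204GB presented by each VSA
cluster_gb = nodes * presented_per_node_gb      # ~612GB across the cluster
usable_gb = cluster_gb * (nodes - 1) / nodes    # parity costs roughly 1/N
print(round(usable_gb))                         # ~408GB
```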

If you are wondering “but what about TRIM support?” I have thought about this. It seems that vSphere does not support the TRIM command but to be honest I don’t really care. I figure it will probably take me a while to kill them and they do have a three year warranty :-). At one stage I was going to build a FreeNAS server or similar with all the SSDs (which does support TRIM) but I thought I’d get more out of running the VSAs. Since I use P4300 SANs at work this would give me more opportunity to play around with the software and different configurations.

As for the network configuration, I haven't quite decided on my layout yet. I am probably going to trunk two NICs for Management, vMotion, FT and VM traffic, possibly leaving two NICs for iSCSI. I probably won't get the same benefit out of using two NICs per host for iSCSI as I would with a physical SAN, as the VSA only supports one virtual network adapter (I think…it's been a long time since I looked at it), but I will still be able to simulate uplink failure, etc.
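For the sake of illustration, here's a minimal sketch of what that layout could look like if it were scripted with pyVmomi rather than clicked through in the vSphere Client. The host name, credentials and vmnic numbering are placeholders, and it only covers the vSwitch/port group layer; the VMkernel adapters for vMotion, FT and iSCSI (and the existing management network on vSwitch0) would still need to be handled separately:

```python
# A rough pyVmomi sketch of the planned NIC layout: one vSwitch with two
# uplinks for Management/vMotion/FT/VM traffic and another with two uplinks
# for iSCSI. Host name, credentials and vmnic numbers are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab the (single) host object when connected directly to an ESXi host
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

def add_vswitch(name, uplinks):
    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)
    net_sys.AddVirtualSwitch(vswitchName=name, spec=spec)

def add_portgroup(vswitch, name, vlan=0):
    pg = vim.host.PortGroup.Specification()
    pg.name = name
    pg.vswitchName = vswitch
    pg.vlanId = vlan
    pg.policy = vim.host.NetworkPolicy()
    net_sys.AddPortGroup(portgrp=pg)

add_vswitch("vSwitch1", ["vmnic1", "vmnic2"])   # trunked: vMotion/FT/VM traffic
add_vswitch("vSwitch2", ["vmnic3", "vmnic4"])   # iSCSI to the VSA cluster
for pg_name in ["vMotion", "FT", "VM Network"]:
    add_portgroup("vSwitch1", pg_name)
add_portgroup("vSwitch2", "iSCSI")

view.DestroyView()
Disconnect(si)
```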

Anyway, I'd better get back to trying to configure these switches…I went to plug my rollover cable into them and realised my PC doesn't have a serial port…doh!
Stay tuned for Part 2: building the hosts and setting up the VSA 😉

VMware vSphere Auto Start and HA clusters

I’ve just been reading through various VMware documentation as part of my VCAP-DCD5 study and I was going over HA specific configuration. It was here that I found a particular note that in all honesty did not realise until now…Auto Start of VM’s is not supported with an HA cluster!

I guess I can see why this is the case but it never occurred to me as the option was always available to me even when HA was configured.

This prompted me to look for more information, and I came across a blog post written by Michael Webster: http://longwhiteclouds.com/2012/03/28/auto-start-breaks-in-5-0-update-1-not-just-free-vsphere-hypervisor-version/

It turns out that in the latest vSphere 5 update this is finally disabled in the GUI and cannot be configured when HA is enabled.

This is good to know as we have always had both configured, particularly for power outage scenarios where we don't want to rely on the manual startup of VMs. We have over 30 sites, and at particular times of the year full site outages are common. Also, as Michael has mentioned, this could cause problems when using a virtual SAN appliance such as the HP P4000 VSA. These use local VMs that present each host's local disk as iSCSI storage to a cluster. If these did not automatically start when a host was powered on, there would be no shared storage and as such no VMs to power on.
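If you want to check whether you're exposed to this in your own environment, something along the lines of the pyVmomi sketch below would flag hosts that still have VM auto-start configured while sitting in an HA-enabled cluster. The vCenter address and credentials are placeholders, and this is purely my own illustration, not anything from the VMware documentation:

```python
# Flag hosts where VM auto-start is configured even though the parent cluster
# has HA enabled. vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    ha_enabled = cluster.configurationEx.dasConfig.enabled
    for host in cluster.host:
        auto_start = host.configManager.autoStartManager.config.defaults.enabled
        if ha_enabled and auto_start:
            print(f"{host.name}: auto-start is enabled but cluster '{cluster.name}' has HA turned on")
view.DestroyView()
Disconnect(si)
```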

I suppose with good process and procedure these sorts of situations can be dealt with, but it is something to keep in mind, particularly for remote sites where support is limited.