vSphere Home Lab: Part 2 – There’s some junk in my trunk…

Well, I finally received the last few bits I needed to finish building my new vSphere hosts, in particular my nice little two-bay 2.5″ to 3.5″ drive adapters. These babies cost me about five bucks apiece and mean I can mount two SSDs in a single 3.5″ slot, cool.

Anyway, I started configuring my network around the fact that I wanted to “trunk” two ports per host so that I had the native VLAN1 untagged, as well as several other tagged VLANs for my vMotion, FT, etc. Please note that trunking VLAN1 to a production ESX host is not recommended, mostly due to the potential security risks around VLAN hopping. I’ve read several VMware documents advising against trunking VLAN1; some say “don’t do it” and others say “it won’t work”. I can tell you that it DOES work and could be used in a production environment where business limitations demand it. One such case would be where there is a large amount of existing network infrastructure using VLAN1 that would be too costly to change (yes, we have some instances of this :-P).

Anyway, back to my lab. I have some pretty crap home networking gear that these HP switches are plugged into, which doesn’t support VLANs, hence my use of the native VLAN1. I spent hours the other night trying to work out how to “trunk” several ports and really struggled. To give you some background here, I am not a network engineer and do not administer or configure switches in my day job; however, I understand enough to know how some things work :-). Most of our environment at work involves Cisco switches, and because of this I am used to Cisco terminology. This is where I came unstuck…

When we “trunk” ports on a Cisco to pass multiple VLANs, we aren’t actually creating an EtherChannel or LACP trunk; instead, we are assigning tagged VLANs to one or more ports. While trying to stumble my way through these HPs I had that same philosophy in mind and could not work out why my “trunks” weren’t working!

As it turns out, Cisco do things completely the opposite way to most other vendors, such as HP. On the HP switches, you create a VLAN and then assign ports to it, either tagged or untagged. I found this blog post which completely explains it all: http://networkingnerd.net/2011/02/02/when-is-a-trunk-not-a-trunk/
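To make the difference concrete, here’s a rough sketch of how the same two-port, three-VLAN setup looks on each platform. The port numbers and VLAN IDs are just examples, not my actual config, and I’m going from memory on the exact syntax:

```
! Cisco IOS: the *port* is configured as a trunk, then VLANs are allowed on it
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk native vlan 1
 switchport trunk allowed vlan 1,20,30

! HP ProCurve: the *VLAN* is configured, then ports are tagged/untagged in it
vlan 20
   name "vMotion"
   tagged 1-2
vlan 30
   name "FT"
   tagged 1-2
vlan 1
   untagged 1-2
```

Same result on the wire in both cases: frames for VLANs 20 and 30 leave the ports tagged, and VLAN1 traffic goes out untagged as the native VLAN.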

So, to cut a long LONG story short, I now consider myself OWNED by these switches, and I now understand the difference between Cisco-isms and the rest of the world when it comes to trunking, access ports and trunk ports!

Whew…now back to the cool stuff (just kidding, I actually enjoyed it!), playing with vSphere!

Oh, by the way, I’ve given up on running two switches, it’s too noisy and hot, and to be honest, I wouldn’t gain much for the purposes of my VCAP-DCA study, so I’m keeping it simple with one switch…

Thanks for reading, stay tuned for future posts, my next one will probably talk about building up the VSA’s.


vSphere Home Lab: Part 1

After getting the VCAP5-DCD exam out of the way, I started working out what hardware I’d buy to build a new home lab for my DCA study. Up till now I have used my main gaming rig as a home lab, running an 8-core AMD FX CPU and 32GB of RAM. While this has served me well, it isn’t ideal and doesn’t have the flexibility I’d like.

I started trawling through numerous blogs about other home labs and liked the idea of using the Supermicro uATX motherboards that support the E3 Xeons and IPMI. However, after a lot of looking, mostly on Amazon (the only place here in NZ I could find boards was going to cost me almost $400 NZD per board…), I gave up. It was going to be too risky ordering PC gear from overseas without the confidence I’d get the right memory modules, etc. Don’t get me wrong, I’d love to have some, in particular the MBD-X9SCM-iiF, as it has the two onboard 82574L LAN ports as well as the dedicated IPMI port. But for what I needed I could not justify almost doubling my budget, particularly as the E3 Xeons, such as the E3-1230, would set me back almost $400 NZD apiece too.
Instead I opted for more AMD based gear 🙂
Here is the spec I came up with:

3 x AMD FX-6100 six-core CPU, 3.3GHz – $180 NZD each

3 x Gigabyte GA-78LMT-USB3 – nice uATX form factor, supports FX CPUs, can take up to 32GB DDR3 with support for ECC unbuffered RAM – $115 NZD each

3 x Cooler Master 343 uATX cases (these are pretty cheap and reasonably small) – $97 NZD each

6 x OCZ Vertex 2 120GB SSDs – I got these because they were on special for $114 NZD each 🙂

6 x 8GB DDR3 1333MHz non-ECC – these were about $65 NZD each. Couldn’t afford to go with ECC and didn’t feel I really needed it…when money permits I’ll upgrade each host to 32GB of RAM

3 x HP NC364T four-port GbE NICs – I’m using some spare ones from work

2 x HP ProCurve 2910al-48G switches – Another loaner from work 😛 We had these surplus and aren’t planning on deploying them anywhere

3 x HP P4000 VSA licenses – yet another thing I was lucky to get from work; we had three licenses we purchased a while back and ended up putting in physical P4300 SANs instead, so I figured these would be perfect in a home lab!

Here are a few pics of the gear so far. Excuse the poor-quality photos…my HTC Sensation’s camera is not working that well running a beta JB ROM 🙂

HP Procurve 2910al-48G

HP switches – sweet!!!!

My three vSphere hosts

Cool, VCAP-DCA here I come!

The guts

Cheap and cheerful, no frills at its best! Notice I haven’t installed the additional NIC card or the SSDs…where are my adapters!!!!!

All up I’ve spent close to $2500 NZD which isn’t too bad, but certainly not a cheap exercise…oh well, it’s going to be a great tool for learning so it’s worth it for that!

Bear in mind that most of these parts won’t be on the VMware HCL but this isn’t a production environment, and as such they don’t need to be.

So, I’ve got all the gear mostly built, other than waiting on some 2.5″ to 3.5″ SSD drive adapters (the cases don’t have 2.5″ bays 😦 ), and I screwed up with one of the cases. I accidentally purchased the wrong model (I initially purchased only one case as a test) and didn’t realise that the included power supply didn’t have a 4+4 12V plug for the CPU power…argh! I’ve got an adapter cable coming that will fix the problem though. I also have three 4GB USB sticks on order for the hypervisor to boot from. This means I can allocate as much of the SSD storage as possible to the VSAs.

At this stage I think I’ll configure the VSA cluster volumes using Network RAID-5 (for those of you who haven’t used the HP Lefthand gear, it supports various network RAID levels when using multiple nodes), as this will give me close to 400GB of SSD storage. I’ll enable thin provisioning on both the datastores and in the VSAs, so I should get a reasonable number of VMs on it.
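As a rough sanity check on that figure, the capacity maths looks something like this. I’m assuming Network RAID-5 costs one node’s worth of parity across the cluster; real usable space will be a bit lower once VSA and filesystem overhead are taken out, which is why I’m calling it “close to 400GB”:

```python
# Rough usable-capacity estimate for a 3-node Lefthand VSA cluster on NRAID5.
# Assumption: parity costs one node's worth of capacity (1/n overhead);
# VSA and filesystem overhead will reduce the real-world figure further.

nodes = 3
ssds_per_node = 2
ssd_gb = 120

raw_gb = nodes * ssds_per_node * ssd_gb      # 720 GB raw across the cluster
usable_gb = raw_gb * (nodes - 1) / nodes     # one node's worth lost to parity

print(f"raw: {raw_gb} GB, usable (NRAID5): {usable_gb:.0f} GB")
```
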

If you are wondering “but what about TRIM support?”, I have thought about this. It seems that vSphere does not support the TRIM command, but to be honest I don’t really care. I figure it will probably take me a while to kill them, and they do have a three-year warranty :-). At one stage I was going to build a FreeNAS server or similar with all the SSDs (which does support TRIM), but I thought I’d get more out of running the VSAs. Since I use P4300 SANs at work, this gives me more opportunity to play around with the software and different configurations.

As for the network configuration, I haven’t quite decided on my layout yet. I am probably going to trunk two NICs for Management, vMotion, FT and VM traffic, possibly leaving two NICs for iSCSI. I probably won’t get the same benefit out of using two NICs per host for iSCSI as I would with a physical SAN, as the VSA only supports one virtual network adapter (I think…it’s been a long time since I looked at it), but I will still be able to simulate uplink failure, etc.
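For the trunked pair, the vSwitch side would look something along these lines from the ESXi shell. This is just a sketch: the vmnic names and VLAN IDs are example values rather than my final layout, and the Management network stays untagged on the native VLAN, so its port group simply gets no VLAN ID:

```shell
# Create a vSwitch and attach the two trunked uplinks (vmnic names are examples)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Tagged port groups for vMotion and FT (VLAN IDs are examples)
esxcfg-vswitch -A "vMotion" vSwitch1
esxcfg-vswitch -v 20 -p "vMotion" vSwitch1
esxcfg-vswitch -A "FT" vSwitch1
esxcfg-vswitch -v 30 -p "FT" vSwitch1
```

The VLAN IDs on the port groups just need to match whatever is tagged on the switch side of the trunk.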

Anyway, I’d better get back to trying to configure these switches…I went to plug my rollover cable into them and realised my PC doesn’t have a serial port…doh!
Stay tuned for part 2, building the hosts and setting up the VSA 😉