vSphere Home Lab: Part 3 – Procurves and static routing

So I’ve just spent the last three hours trying to work out why my ProCurve switch wasn’t routing between the various VLANs I have configured for my home lab.

I had to move my three hosts and switch into the garage because the heat in the office was becoming unbearable! Unfortunately this broke my connection to the iSCSI VLAN I had configured for my lab’s IP storage. Because I run my SAN management software on my main PC, I had a second NIC plugged directly into that VLAN. Nice and simple, right?

However, when I moved the gear I no longer had two cables running to my main PC, only one. I thought to myself, “surely I can set up some static routing!?!?”.

Anyway, as it turns out my little Thomson ADSL router supports static routing, cool! I configured this like so:

```
:rtadd 192.168.3.0/24 192.168.2.50
```

(where 192.168.3.0/24 is my iSCSI subnet and 192.168.2.50 is the management IP of the ProCurve). Step one done!
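As an aside, the same route could instead be added directly on the PC, so traffic for the iSCSI subnet goes straight to the switch rather than via the router. This is just a sketch for a Windows machine sitting on the 192.168.2.0/24 network, not something from my actual setup (the -p flag makes the route survive reboots):

```
route add 192.168.3.0 mask 255.255.255.0 192.168.2.50 -p
```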

Next I jumped onto my ProCurve 2910al and enabled IP routing, giving me this config:

```
hostname "SWITCH1"
module 1 type j9147a
no stack
ip default-gateway 192.168.2.1
ip route 0.0.0.0 0.0.0.0 192.168.2.1
ip routing
snmp-server community "public" unrestricted
spanning-tree legacy-path-cost
spanning-tree force-version stp-compatible
vlan 1
   name "DEFAULT_VLAN"
   no untagged 25-36
   untagged 1-24,37-48
   ip address 192.168.2.50 255.255.255.0
   exit
vlan 10
   name "vMotion"
   tagged 13-18
   no ip address
   exit
vlan 20
   name "FT"
   tagged 13-18
   no ip address
   exit
vlan 30
   name "iSCSI"
   untagged 25-36
   ip address 192.168.3.1 255.255.255.0
   exit
management-vlan 1
```

Now, a tracert from my main PC on VLAN 1 would get as far as the ProCurve, but the switch responded with “destination net unreachable”.

I kept trying different commands and read several blog posts on configuring static routes, and everything I had done looked fine!

I finally came across a comment someone had posted on a forum suggesting that specifying a management VLAN on the switch breaks routing to and from that VLAN! ARGHHHHHHH!

So, I ran “no management-vlan 1” and saved the config. Now the switch is routing all VLANs properly, yay!!!!!
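For anyone hitting the same wall, that fix from the switch CLI looks like the following (the final `show ip route` is just my suggested sanity check, not something the fix strictly requires):

```
SWITCH1(config)# no management-vlan 1
SWITCH1(config)# write memory
SWITCH1(config)# show ip route
```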

Now I can fire up my HP P4000 CMC and connect to my VSAs from my main PC on VLAN 1, woohoo.

vSphere Home Lab: Part 1

After getting the VCAP5-DCD exam out of the way I started working out what hardware I’d buy for a new home lab for my DCA study. Up till now I have used my main gaming rig as a home lab, running an 8-core AMD FX CPU and 32GB of RAM. While this has served me well, it isn’t ideal and doesn’t have the flexibility I’d like.

I started trawling through numerous blogs about other home labs and liked the idea of using the Supermicro uATX motherboards that support the E3 Xeons and IPMI. However, after a lot of looking, mostly on Amazon (here in NZ the only place I could find the boards would have cost me almost $400 NZD per board…), I gave up. It was going to be too risky ordering PC gear from overseas without the confidence I’d get the right memory modules, etc. Don’t get me wrong, I’d love to have some, in particular the MBD-X9SCM-iiF as it has two onboard 82574L LAN ports as well as a dedicated IPMI port. But for what I needed I could not justify almost doubling my budget, particularly as the E3 Xeons, such as the E3-1230, would set me back almost $400 NZD a piece too.
Instead I opted for more AMD based gear 🙂
Here is the spec I came up with:

3 x AMD FX 6100 six-core CPUs, 3.3GHz – $180 NZD each

3 x Gigabyte GA-78LMT-USB3 – nice uATX form factor, supports FX CPUs, can take up to 32GB DDR3 with support for unbuffered ECC RAM – $115 NZD each

3 x Coolermaster 343 uATX cases (these are pretty cheap and are reasonably small) – $97 NZD each

6 x OCZ Vertex2 120GB SSDs – I got these because they were on special for $114 NZD each 🙂

6 x 8GB DDR3 1333MHz non-ECC – these were about $65 NZD each. Couldn’t afford to go with ECC and didn’t feel I really needed it…when money permits I’ll upgrade each host to 32GB RAM

3 x HP NC364T 4-port GbE NICs – I’m using some spare ones from work

2 x HP ProCurve 2910al-48G switches – Another loaner from work 😛 We had these surplus and aren’t planning on deploying them anywhere

3 x HP P4000 VSA licenses – Yet another thing I was lucky to get from work; we had three licenses we purchased a while back and ended up putting in physical P4300 SANs, so I figured these would be perfect in a home lab!

Here’s a few pics of the gear so far. Excuse the poor quality photos…my HTC Sensation’s camera is not working that well running a beta JB ROM 🙂

HP Procurve 2910al-48G

HP switches – sweet!!!!

My three vSphere hosts

Cool, VCAP-DCA here I come!

The guts

Cheap and cheerful, no frills at its best! Notice I haven’t installed the additional NIC card or the SSDs…where are my adapters!!!!!

All up I’ve spent close to $2500 NZD which isn’t too bad, but certainly not a cheap exercise…oh well, it’s going to be a great tool for learning so it’s worth it for that!

Bear in mind that most of these parts won’t be on the VMware HCL but this isn’t a production environment, and as such they don’t need to be.

So, I’ve got all the gear mostly built, other than waiting on some 2.5″ to 3.5″ SSD drive adapters (the cases don’t have 2.5″ bays 😦 ), and I screwed up with one of the cases. I accidentally purchased the wrong model (I initially purchased only one case as a test) and didn’t realise that the included power supply didn’t have a 4+4 pin 12V plug for the CPU power…argh! I’ve got an adapter cable coming that will fix the problem though. I also have three 4GB USB sticks on order for the hypervisor to boot from. This means I can allocate as much of the SSD storage as possible to the VSAs.

At this stage I think I’ll configure the VSA cluster volumes using Network RAID-5 (for those of you who haven’t used the HP LeftHand gear, it supports various network RAID levels when using multiple nodes) as this will give me close to 400GB of SSD storage. I’ll enable thin provisioning on both the datastores and in the VSAs, so I should get a reasonable number of VMs on it.

If you are wondering “but what about TRIM support?”, I have thought about this. It seems that vSphere does not support the TRIM command, but to be honest I don’t really care. I figure it will probably take me a while to kill the drives, and they do have a three-year warranty :-). At one stage I was going to build a FreeNAS server or similar with all the SSDs (FreeNAS does support TRIM) but I thought I’d get more out of running the VSAs. Since I use P4300 SANs at work, this will give me more opportunity to play around with the software and different configurations.

As for the network configuration, I haven’t quite decided on my layout yet. I will probably trunk two NICs for Management, vMotion, FT and VM traffic, possibly leaving two NICs for iSCSI. I probably won’t get the same benefit out of using two NICs per host for iSCSI as I would with a physical SAN, as the VSA only supports one virtual network adapter (I think…it’s been a long time since I looked at it), but I will still be able to simulate uplink failure, etc.
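When I do get around to it, the host side of that layout could be sketched from the ESXi shell roughly like this. This is only a sketch: the vSwitch names, vmnic numbers and port group names are placeholders, and the VLAN IDs are just the ones from my lab:

```
# Trunked vSwitch carrying Management, vMotion, FT and VM traffic over two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A vMotion vSwitch1
esxcfg-vswitch -v 10 -p vMotion vSwitch1
esxcfg-vswitch -A FT vSwitch1
esxcfg-vswitch -v 20 -p FT vSwitch1

# Separate vSwitch for iSCSI on the remaining two NICs
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -A iSCSI vSwitch2
```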

Anyway, I’d better get back to trying to configure these switches…went to plug my rollover cable into them and realised my PC doesn’t have a serial port…doh!
Stay tuned for part 2, building the hosts and setting up the VSA 😉

My VCAP5-DCD experience

I thought I’d write this post to help others that are thinking about sitting the VCAP5-DCD exam.

I sat the exam a week ago and am happy to say I passed it first time! I was extremely nervous about sitting this exam, mostly because I didn’t really have much of an idea of what to expect other than the exam UI demo that VMware provide.

I got to the point where I thought that I need to just book the exam, giving myself a deadline (I work best with deadlines :-P) and give it a go. If I failed I would put it down to a learning experience, and if I passed then that’s great. I booked it and gave myself six weeks, and during that time I studied most nights for between one to two hours a night.

I spent a bit of time re-reading Scott Lowe’s Mastering VMware vSphere 5 book, which is very good and which I would highly recommend (http://www.amazon.com/Mastering-VMware-vSphere-Scott-Lowe/dp/0470890800). I went through the blueprint and read through as much of the referenced documentation as I could, sometimes skimming it if I felt comfortable with the content. I read a ton of blogs relating to vSphere design, including several on network design. Overall, a LOT of reading, but I really felt like I learnt a lot, and I would advise going through the blueprint even if you aren’t planning on sitting the exam but do a bit of vSphere design in your job.

UPDATE: Oh, how could I forget! A big thanks to Alastair Cooke and Nick Marshall for their fabulous work on the APAC vBrownbags (http://professionalvmware.com/brownbags/) and Autolab (www.labguides.com). These are such great resources and I’ll be using them heavily for my VCAP-DCA study. Thanks guys!!!

On the last day before my exam I got to the point where I thought that there was no point in reading any more, and that if I didn’t know the content by now I had no chance of cramming that night!

I got to the testing center, which is about a 90-minute drive from home. I signed in and started the exam. I hit the first question and immediately had a huge sinking feeling that I wasn’t prepared for the exam…I spent way too long on this first one (it was a select-and-place style question) and had to force myself to continue regardless of my choices.

Time ticked on and I was slowly getting through the 100 questions. At the time it felt like I was spending a long time on each one but in hindsight I was reasonably quick as I finished with about 30 minutes left. I had some issues getting through the design questions where I would accidentally drag the wrong object and mess up the whole drawing but managed to persevere. Finally I clicked submit thinking “Oh crap, I’ve failed this one” and was extremely pleased to find “Congratulations” on the screen!

So, for me the key takeaways from the exam…

– Don’t waste too much time on any one question, in particular the multi-choice ones

– Leave the design ones to the end as this helps with the momentum and helps to get accustomed to the interface

– Be careful when moving objects around in the design questions! If you really stuff things up you can start over if you need to

– Read the blueprint and associated material!!

– As with any of these types of exams, learn to skip past the waffle in the questions and quickly identify the key parts of the question!

– Try to have fun 🙂 (This one is optional)

I hope this helps. Seriously I would recommend giving it a shot, it is a great learning experience. I lacked a lot of confidence going into the exam and feel a LOT better now having done one. Now I am onto studying for the DCA with more of an idea on the style of questions and “look and feel” of the exams.

Using VMware Update Manager to upgrade ESX/ESXi 4 to ESXi 5

Today I spent some time configuring and testing the upgrade of some ESX and ESXi 4 hosts using VUM. We’ve got a project coming up that will involve upgrading about ten remote hosts. Being connected via relatively slow WAN links, I was unsure how well this would work, hence my testing in the office :-). Luckily our office has the same speed link as most of our remote sites, so it provided a good test scenario.

We had a few dev/test hosts not doing much so I chose one ESX and one ESXi install, both on HP DL385 G6 hardware. Both had different patch revisions and had old HP offline bundle extensions installed, further making them a good cross section of variables.

The first thing required was to upload the ESXi 5 image into VUM via the Admin view. The ESXi Images tab contains a link you click to import an ESXi image, which lets you select an ESXi ISO and import it into the VUM patch database. Once it has been uploaded you can create an upgrade baseline using this image.

Remember that this type of baseline must be set as a Host Upgrade before you can select the ESXi image.

Now that you have a baseline you can apply this to a baseline group or host directly. From the Update Manager tab within the host view you can attach this baseline or baseline group and scan the host to check for compliance against this baseline.

All going well you should see a Non-compliant status, meaning that the host is compatible with the upgrade but the upgrade is not currently applied.

Clicking remediate will initialise the remediation wizard as shown below.

Working through the wizard you need to accept the EULA before you come to the next important step. Here you can select whether any incompatible third-party extensions are removed before remediation. Select this if you have extensions such as the HP offline bundles that I have on my HP hosts. Bear in mind that the upgrade procedure is not reversible!

Any host extensions that you require after the upgrade can either be integrated into the ESXi image using the custom image builder or applied as a separate remediation task using VUM. I chose the latter because I didn’t have time to create a custom image 😛

Continuing through the remaining options you can finally choose to remediate the host. For me this process took about 20 minutes over a 10Mb/s WAN link.

When the host remediation has completed you should be presented with an upgraded 5.0 host! Yay! One thing to note is that the host will require re-licensing, which is simply done via the Licenses option within the vSphere client.

A few things I encountered were not major but are worth keeping in mind. My ESX host upgrade at first appeared to fail, but this was actually the result of the temporary host license having expired. I was able to apply a new license and reconnect the host. The next thing I noticed was a host alarm saying that system logging was not enabled on the host.

After a bit of reading I found that under the Advanced host settings the syslog default datastore location for logs (Syslog.global.logDir) was blank! Setting this to []/scratch/log fixed the issue. If a different datastore location for your logs is desired this can be changed, for example: [mydatastore]/logs.
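The same setting can also be changed from the ESXi shell rather than through the vSphere client. A sketch, assuming ESXi 5’s esxcli syslog namespace (the datastore name is just a placeholder):

```
esxcli system syslog config set --logdir="[mydatastore]/logs"
esxcli system syslog reload
```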

After all this I had two fully functional ESXi 5 hosts that were both previously ESX and ESXi 4!

One last thing to remember is to upgrade any VMFS volumes to VMFS-5. This can be done online and takes a matter of seconds from the datastore view or host storage view. Note that an upgraded volume retains its existing block size, whereas a newly created VMFS-5 datastore always uses a 1MB block size.

The next step as I mentioned earlier is to apply any host extensions. In my case I applied my HP offline bundles (make sure you select the baseline as a host extension) and now I can see all my hardware on the Hardware Status tab 🙂

You can normally tell when the HP bundles aren’t applied, as you only see a basic list of hardware without components such as the Storage and iLO devices.

Anyway, hope this helps! Thanks for reading.

VMware vSphere Auto Start and HA clusters

I’ve just been reading through various VMware documentation as part of my VCAP-DCD5 study and was going over HA-specific configuration. It was here that I found a particular note that in all honesty I had not realised until now…Auto Start of VMs is not supported with an HA cluster!

I guess I can see why this is the case but it never occurred to me as the option was always available to me even when HA was configured.

It then prompted me to look for more information and came across a blog post written by Michael Webster: http://longwhiteclouds.com/2012/03/28/auto-start-breaks-in-5-0-update-1-not-just-free-vsphere-hypervisor-version/

It turns out that in the latest vSphere 5 update this is finally disabled in the GUI and cannot be configured when HA is enabled.

This is good to know as we have always had both configured, particularly for power outage scenarios where we don’t want to rely on the manual startup of VMs. We have over 30 sites and at particular times of the year full site outages are common. Also, as Michael has mentioned, this could cause problems when using a virtual SAN appliance such as the HP P4000 VSA. These use local VMs that present the host’s local disk as iSCSI storage to a cluster. If these did not automatically start when a host was powered on there would be no shared storage and as such no VMs to power on.

I suppose with good process and procedure these sorts of situations can be dealt with but it is something to keep in mind particularly for remote sites where support is limited.

VCAP-DCD5 Exam Blueprint

So, I finally checked back to see if the new blueprint was there and voilà, it was…

Been reading through it, grabbing all of the referenced material. Crap, there is a lot to read through! A lot seems to cover topics such as understanding business requirements and translating those into real terms, as well as identifying business ROI, etc. I haven’t had any experience with the DCD exams before, so going through this is excellent. Not that I haven’t dealt with some of these areas before in my current work, but I’ve never actually read documentation to aid me in the past other than some Gartner stuff.

I did find in particular the SQL Solution Toolkit an interesting read, specifically around scaling up or scaling out your SQL infrastructure. We’ve traditionally had a mixture of both types but with some future projects in mind I’m favoring the scale out approach as it would seem to give us more flexibility. In the past some of our larger SQL instances hosting multiple databases have created inconveniences due to the fact that downtime affects multiple services. Now that I also understand the way Microsoft handle their per-proc licensing it makes even more sense for me to scale out!

Anyway, just a few things I wanted to talk about since reading through the blueprint. Interesting reading the Oracle, SAP and Exchange guides as in my job we don’t use any of them 🙂

Oh, I’ve made a bit more progress on the lab front, now I have two vCenter instances running with SRM partially configured…slowly getting there!

VCAP-DCD/DCA Home Lab – What should it look like?

I thought I’d create some discussion around what others thought a VMware home lab should consist of. When I studied for my VCP5 exam I had the following:

– Main desktop PC with Workstation 8, 8-core CPU with 16GB RAM, 1TB disk drive

– Domain controller VM running 2k8R2, with DNS and DHCP roles. It also ran the SolarWinds free TFTP server for Auto Deploy. This VM had 1 vCPU, 2GB vRAM and a 20GB disk. With this it ran OK; it didn’t need to be any bigger.

– vCenter Server Appliance with 2 vCPU, 4GB vRAM and default disk sizes. I found that with any less RAM I had issues with the database, and Auto Deploy wouldn’t start properly.

– 3 x ESXi 5 VMs, each with 2 vCPU, 3GB vRAM and a 10GB disk. This was enough to test building ESXi hosts with Auto Deploy, attaching them to vCenter and managing them. I was able to test HA and vMotion among other features. The only nested VM I used was the Nostalgia OVA; I was able to play Prince of Persia during a vMotion, cool!

This pretty much summed up my lab during the study. However, now that I’m studying for the VCAP exams I’m trying to work out the best use of my limited host resources (I have upgraded my RAM to 32GB though).

The question is, which VMs are the most appropriate and how many? At the moment I only have a domain controller, vCenter (full install with UM, etc.) and four ESXi guests (three ESXi 5 and one ESXi 4 – I want to test upgrading to 5 using Update Manager).

I thought I’d stand up some VMs to play with vShield Zones, View, SRM, etc., but there are so many products to choose from. Obviously these won’t all be covered by the VCAP exams, but I still see benefit in being familiar with them.

What do others out there think? What has worked well in your experience? Bear in mind that I only have 32GB of RAM 🙂

I look forward to hearing your ideas!