Posts Tagged ‘hyper-v’

So, my journey with Windows Server 2012 has started. The primary focus of my company’s journey is Hyper-V 3. So far, the results are impressive. We are running a four-node cluster connected to DAS (direct-attached storage).

So, here are my lessons learnt so far.

1. As per my previous article, be careful with your Hyper-V NICs. While I am on the subject of NICs, the inclusion of out-of-the-box NIC teaming is great, and so far I have had no issues at all. It works well, simplifies the administration of our virtual switches, and gives us more tolerance for the loss of a NIC. Previously, if we lost a NIC, all VMs attached to that NIC would lose connectivity; that is now a thing of the past, and our NIC teaming is fully supported by Microsoft. NICE!!!
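If you want to try the new teaming yourself, it is all driven through PowerShell. A minimal sketch, assuming two physical adapters named “NIC1” and “NIC2” (the adapter, team and switch names here are placeholders for illustration, not my actual configuration):

```powershell
# Create a switch-independent team from two physical adapters
# (run as Administrator on the Hyper-V host)
New-NetLbfoTeam -Name "VMTeam" `
                -TeamMembers "NIC1", "NIC2" `
                -TeamingMode SwitchIndependent `
                -LoadBalancingAlgorithm HyperVPort

# Bind a Hyper-V external virtual switch to the new team interface,
# keeping it dedicated to guest traffic
New-VMSwitch -Name "VM-LAN" `
             -NetAdapterName "VMTeam" `
             -AllowManagementOS $false
```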

2. Increased virtual hard drive sizes. This is achieved by a new file format, .VHDX, which allows for virtual hard drives larger than 2TB. It works nicely, and I have personally created a few VHDXs larger than 2TB. There is one little “gotcha”, or at least there was in my environment: creating a .vhdx file of this size is rather time-consuming and very IO intensive on DAS. I will try to remember to update you when I try the same procedure on my EqualLogic SAN. For now, I would recommend you test the process first and see whether you run into the same IO issues as I did before creating numerous VHDXs in your production environment.
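For reference, a minimal sketch of that creation step (path and size are placeholders). The IO pain comes from fixed-size disks, which are zeroed out in full at creation time; a dynamic disk is created almost instantly, so timing both on your own storage is a cheap test:

```powershell
# Fixed disk: allocates and zeroes all 3 TB up front - this is the
# slow, IO-heavy operation I hit on DAS
New-VHD -Path "D:\VHDs\BigData.vhdx" -SizeBytes 3TB -Fixed

# Dynamic disk: near-instant creation, grows as data is written
New-VHD -Path "D:\VHDs\BigData-Dyn.vhdx" -SizeBytes 3TB -Dynamic

# Time the fixed creation on your storage before committing to production
Measure-Command { New-VHD -Path "D:\VHDs\Test.vhdx" -SizeBytes 3TB -Fixed }
```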

3. Virtual CPU increase. This has been particularly helpful with our process of building a new environment; it is quite amazing how much more “CPU” can help with this. Finally I can create a VM with over 2TB of usable space in one drive and more than 4 vCPUs. What a pleasure. So I would build my “Core Services” VMs, throw massive CPU and RAM at them for the build and update phase, and once completed remove the additional resources and “sweat” the VMs.
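A rough sketch of that build-big-then-shrink routine (the VM name and figures are just examples, not my real values); note the VM has to be off while you change the processor count:

```powershell
$vm = "CoreServices01"   # hypothetical VM name

# Throw resources at the VM for the build/update phase (VM must be off)
Stop-VM -Name $vm
Set-VMProcessor -VMName $vm -Count 16
Set-VMMemory    -VMName $vm -StartupBytes 64GB
Start-VM -Name $vm

# ... build, patch and update the VM here ...

# Once done, trim it back down and let the VM "sweat"
Stop-VM -Name $vm
Set-VMProcessor -VMName $vm -Count 4
Set-VMMemory    -VMName $vm -StartupBytes 8GB
Start-VM -Name $vm
```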

4. Hyper-V Replica. This is a great tool for us and almost feels like it was custom written for us; I guess pretty much everyone is thinking that. However, there is a slight spanner in the works: a bit of a re-think in how we present storage, especially to the VMs we want as part of our “DR cloud”, if you will. We present a lot of our additional drives to the guests directly via iSCSI. This works well, VERY well, but in a DR situation it adds complexity: Hyper-V Replica replicates the VM and its virtual disks only, so you then need to replicate the directly presented iSCSI volumes separately and reconfigure the network cards at the DR site. Hence the “re-think”; we are looking at a better way to work around this, because we want and need Hyper-V Replica. A sketch of enabling replication follows below.
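For the VMs that do qualify (i.e. everything living on VHDs rather than direct iSCSI), enabling replication is short work once the replica server is configured to accept it. A hedged sketch, with the VM and server names as placeholders and Kerberos over port 80 assumed as the authentication choice:

```powershell
# Enable replication to a DR host over Kerberos/HTTP (port 80).
# Only the VM's attached VHD/VHDX files are replicated - any volumes
# the guest mounts directly over iSCSI are NOT included.
Enable-VMReplication -VMName "SQL01" `
                     -ReplicaServerName "drhost.contoso.local" `
                     -ReplicaServerPort 80 `
                     -AuthenticationType Kerberos

# Kick off the initial copy to the replica server
Start-VMInitialReplication -VMName "SQL01"
```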

5. ODX (Offloaded Data Transfer). This is one I am still waiting to test; our SAN vendor has promised support in the next release of their firmware. More to come as soon as I have something to share.
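In the meantime, you can at least confirm ODX is enabled on the Windows side. It is on by default in Server 2012 and governed by the FilterSupportedFeaturesMode registry value (0 = enabled, 1 = disabled); a quick check:

```powershell
# 0 = ODX enabled (the Server 2012 default), 1 = ODX disabled
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
                 -Name "FilterSupportedFeaturesMode"
```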

More to come, I promise.


Just recently I ran into some very strange issues with my Hyper-V cluster. We have numerous NICs to allow for redundancy and better throughput in our clustered environment: a total of 11 NICs per server to cater for guest LAN, iSCSI, guest iSCSI (as we present SAN storage directly to the guests), heartbeat and Live Migration. So I was getting a little concerned when, during maintenance and/or Live Migrations, my VMs were losing network connectivity on both LAN and iSCSI connections. This caused major issues for the VMs needing external storage, as these were mostly SQL or file shares. So, time to start troubleshooting.

The first stop was obviously the NICs on the hosts, and I spent some time checking all of them and ensuring they were cabled correctly. Not the world’s easiest job considering the number of network cables we have in our cabinets: numerous Hyper-V nodes, each with 11 NICs excluding management and iDRAC, so we are talking hundreds of cables. However, I trudged through it and put a check mark next to it. No issues there.

So, several hours and days later, after cursing and swearing at all the network cables, velcro and cable ties in my cabinets, it was time to move on. It was NOT a physical network issue. Time to dig a little deeper: my next stop was the hypervisor layer.

Digging through the hypervisor, I found “gold” (by which I actually mean the cause of my issue). Enter Hyper-V networking. Please allow me to clarify: the networking itself was not the issue, but rather what someone or something had done to it; the truth here will never be known. I found that duplicate virtual networks had been created.

So, for each physical NIC I had in the server and was using for virtualization, I had two networks: one connected as an “External” network and, here is the kicker, one connected as a “Private” network. With a little troubleshooting and a good deal of understanding of virtual switches, I was able to return my cluster to its former 100% redundant glory. After removing ALL the private networks (and there were many, on all my nodes), I was able to Live Migrate and patch/maintain to my heart’s content. A quick way to spot and clean these up is sketched below.
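If you suspect the same problem on your own hosts, PowerShell makes the hunt painless. A minimal sketch; review the first listing carefully before piping anything to Remove-VMSwitch, since a legitimate private switch would be removed too:

```powershell
# List every virtual switch on the host with its type -
# duplicates and rogue "Private" entries show up immediately
Get-VMSwitch | Format-Table Name, SwitchType, NetAdapterInterfaceDescription

# Remove the rogue Private switches once you have confirmed
# nothing legitimate is using them
Get-VMSwitch | Where-Object { $_.SwitchType -eq 'Private' } |
    Remove-VMSwitch -Force
```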

Gotta love Virtualization.

Hopefully this will help someone else as well.

The lesson here is to keep a good eye on your Virtual Environment and always check the basics.
