Hyper-V NIC Teaming, CSV, Snapshots, DCs (Virtualization FAQ 1): Presenting at VIR471-INT

By Nathan Lasnoski

Hello! I just had the opportunity to participate in the presentation of VIR471-INT at Tech Ed 2011. I really enjoy the opportunity to get to know people using Hyper-V and to share my experiences with the platform, so I thought I would take a moment to note some key questions we discussed in the session and address them in more detail here.

"How should I structure my CSVs?"

In addressing this question, we focused on the fact that CSV volumes depend on SMB, which means you need to be very careful with CSVs in relation to patching (and rebooting) domain controllers. We also addressed CSV quantity, with a key best practice of co-locating each CSV, its VMs, and the associated coordinator node; the idea is to use affinity to keep the VMs on a particular CSV together. The main reasons for having separate CSVs are to limit the impact of redirected mode during DPM backups and to limit single points of failure within the cluster.
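As a minimal sketch of checking and adjusting coordinator placement (using the FailoverClusters module that ships with Server 2008 R2; the CSV and node names are placeholders):

    # Load the failover clustering cmdlets
    Import-Module FailoverClusters

    # See which node currently coordinates each CSV
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode

    # Move coordination of a CSV to the node that hosts its VMs
    Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "HV-NODE2"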
"How do I enable Hyper-V NIC teaming?"

The short answer is that you can. Although Microsoft has offloaded this capability to the network card manufacturers, it is a capability that works, assuming you've configured the teaming software properly. There are several different types of load-balancing configurations (in Broadcom BASP and Intel's teaming software):
  • Smart Load Balancing with Failover: This implementation is somewhat like multicast, in that the switch ports all see different MAC addresses, so it can theoretically be implemented without any switch changes. We've found it relatively easy to configure, but prone to network integration issues.
  • Link Aggregation (802.3ad): This implementation aligns with the IEEE 802.3ad (LACP) specification. All adapters receive traffic on the same MAC address, so you'll need a switch that supports LACP. I've seen people have a lot of success with this option.
  • Generic Trunking (FEC / GEC) / 802.3ad-Draft Static: This implementation is similar to 802.3ad link aggregation, but instead of negotiating via LACP it uses a static trunking mechanism at the switch level, such as EtherChannel. We've had success with this on Cisco, HP, and Dell switches. It has been the predominant option we've used because of its ease of configuration and because we've experienced very few issues with it. Note that Intel's software calls this configuration "Static Link Aggregation," as opposed to "IEEE 802.3ad Dynamic Link Aggregation."
To configure the NIC teaming integration with Hyper-V follow these configuration steps:
  • Install the Hyper-V role and clear any existing virtual networks
  • Install and configure the teaming software
  • Connect the team to a Hyper-V virtual network (a scripted sketch follows these steps)
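On Windows Server 2008 R2 these steps are performed in the vendor's teaming utility and Hyper-V Manager. For reference, here is a minimal sketch of the same flow on later Windows Server releases (2012 and up), where teaming is built in; the adapter, team, and switch names are placeholders:

    # Create a static (switch-dependent) team, matching the "Generic
    # Trunking" style discussed above; NIC1/NIC2 are placeholder names
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Static

    # Bind an external Hyper-V virtual switch to the teamed adapter
    New-VMSwitch -Name "VM-External" -NetAdapterName "VMTeam" -AllowManagementOS $false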
Additional Tips:
  • We've found it useful to enable "VLAN Promiscuous Mode" if the feature is available, as that allows VLAN tagging to work properly.
  • Make sure to fully test your configuration before moving into production. This is especially true for live migration and for access to teams from other networks or VLANs. Also, if you run into issues with virtual machine networking, make sure you aren't hitting an integration components (IC) or hotfix issue that is unrelated to teaming.
  • We have tended to be very careful about offload features, often disabling them completely (see the sketch below).
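As a minimal sketch of auditing and disabling offloads (these cmdlets shipped with Windows Server 2012; on 2008 R2 the same settings live in the adapter's device properties; "NIC1" is a placeholder adapter name):

    # List offload-related advanced settings on a physical adapter
    Get-NetAdapterAdvancedProperty -Name "NIC1" |
        Where-Object { $_.DisplayName -like "*Offload*" }

    # Disable large send offload if it is implicated in VM networking issues
    Disable-NetAdapterLso -Name "NIC1"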
Here is a helpful link on configuring this feature, a guide from HP.com: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf

"What are the best practices for Hyper-V networking, quantity of cards, and structure?"

We've drafted a dedicated post addressing Hyper-V network structures and best practices: http://www.concurrency.com/blog/hyper-v-networking-best-practices/

"Should I use snapshots?"

We find that snapshots are suitable only for test or lab environments where snapshot roll-back makes sense. The biggest issue with snapshotting is that creating a snapshot adds an AVHD file that is linked to the parent VHD. This makes recovery and management challenging, as the files are chained together in the VM configuration. Also note that AVHDs will sooner or later need to be merged, and merging requires the virtual machine to be offline, which can be a significant problem. I've found that the best way to protect VMs and provide recoverability is to use a recovery technology like DPM rather than snapshots. For more information on DPM, check the following: http://www.concurrency.com/blog/back-me-up-im-going-in-hyper-v-and-backup/
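As a minimal sketch of inspecting a snapshot chain (these cmdlets are from the Hyper-V module in Windows Server 2012 and later; on 2008 R2 you'd inspect the AVHD files through Hyper-V Manager; the VM name and path are placeholders):

    # List the snapshot chain for a VM
    Get-VMSnapshot -VMName "TESTVM01" |
        Select-Object Name, ParentSnapshotName, CreationTime

    # Show how a differencing disk links back to its parent VHD
    Get-VHD -Path "D:\VMs\TESTVM01\Snapshots\disk_1.avhdx" |
        Select-Object VhdType, Path, ParentPath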
"Should I virtualize my domain controllers? / What are best practices in virtualizing domain controllers?"

We've addressed this one in a dedicated post called "Should I virtualize my domain controllers?": http://www.concurrency.com/blog/should-i-virtualize-my-domain-controllers/

"What are issues you've run into in moving to Hyper-V SP1 with Dynamic Memory?"

I'd start by checking out a few key posts:

Upgrading Hyper-V Integration Components: http://www.concurrency.com/blog/upgrading-hyper-v-integration-components/

How does Dynamic Memory work? http://www.concurrency.com/blog/how-does-hyper-v-dynamic-memory-work/

Here are some key issues we've hit (a node-comparison sketch follows this list):
  • Implement the post-SP1 hotfixes in addition to SP1 (e.g., the cluster validation KB)
  • Re-apply the Hyper-V networking KB
  • You may need to upgrade your SAN's firmware to avoid storage connectivity issues
  • You may need to reconnect your hosts to the SAN if using the Microsoft MPIO iSCSI initiator
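As a quick way to spot a node that is missing one of these updates, here is a minimal sketch that compares installed hotfixes across cluster nodes (assuming the FailoverClusters module and remote access to each node):

    Import-Module FailoverClusters

    # Dump the installed hotfix list for every node in the cluster;
    # differences between nodes point at missing post-SP1 updates
    foreach ($node in Get-ClusterNode) {
        "==== $($node.Name) ===="
        Get-HotFix -ComputerName $node.Name |
            Sort-Object HotFixID |
            Select-Object HotFixID, InstalledOn
    }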
"How do I deploy virtual machines faster without using BITS?" Janssen addressed this question in his overview and I'm sure people will have questions about it.  Here is a nice overview of the SCVMM rapid provisioning capabilities: http://blogs.technet.com/b/ddc_dudes/archive/2009/09/08/rapid-provisioning-with-scvmm-2008-r2.aspx  "Should I use dynamic disks or fixed VHDs?" We've been seeing very good performance from dynamic VHDs, especially for moderate IO volumes.  I've personally only been using fixed VHDs with very high IO applications vs. tying up the storage.  I've addressed the overview in this post: http://www.concurrency.com/blog/what-disk-type-do-i-use-with-hyper-v-r2/ I'd be careful about using pass-through disks, especially in a cluster.  We tend to use them only when we have high capacity needs and prefer the abstraction that VHDs bring to the table. THANKS!  I really enjoy the opportunity to address questions like this. If there are other questions I missed from the Q&A that were of significance, please let me know. Happy virtualizing! Nathan Lasnoski
THANKS! I really enjoy the opportunity to address questions like this. If there are other significant questions from the Q&A that I missed, please let me know. Happy virtualizing!

Nathan Lasnoski

Author

Nathan Lasnoski

Chief Technology Officer