Hyper-V NIC TEAMing

By Shannon Fritz

I often see questions about how to team network adapters for use with Hyper-V, and whether it can even be done.  Using a network team gives you the ability to provide additional bandwidth on a single logical link as well as resiliency against failures, so there are some compelling reasons to do it.  I can assure you that it absolutely CAN be done, and I’ll show you how I do it based on my experience with it.

Creating a NIC TEAM for Hyper-V depends partly on the drivers and software available for the network adapters that are installed in the host servers.  As a general rule, TEAMs should only be made up of members of the same make and model.  So, if your server has a mix of NICs from different vendors, you should consider creating separate TEAMs for them.  For example, if your server has 4 Intel NICs and 2 Broadcom NICs, you should create one TEAM for the Intel adapters (use it for the Guest network) and a separate TEAM using the Broadcom adapters (use it for the Host and LiveMig networks).

When planning the network adapters for a Hyper-V Cluster, you’ll generally want to design three networks:
  1. The Host (aka “Management”) Network
  • Allows the Hyper-V server to talk to the Windows domain.  This is the typical network that any server would be on, where it can reach update servers, domain controllers, the Internet, etc.
  2. The Live Migration (aka “Heartbeat” or “Cluster”) Network
  • A private network that only the members of the cluster are attached to.  This is a set of network adapters configured with IPs in their own subnet, with no DNS and no gateway to any other networks.  It is usually on a VLAN of its own to further isolate the traffic.
  3. The Guest (aka “Virtual Machine”) Network
  • How the virtual machines on the host are able to talk to the real network.  Often this is the same network as the Host, in which case it can share the network adapter being used for the Host network, but it is best practice to use separate network adapters when possible.  These NICs do not have IPs assigned to them and can be configured as a TRUNK to allow any number of VLANs to be presented to various virtual machines.
You’ll notice that I did not mention any iSCSI storage networks.  This is because you should never use NIC Teaming for iSCSI.  That is a completely different discussion, and it is exactly what MPIO is for.

Because the Host and Guest networks can share a NIC, this model can be accomplished using as few as 2 physical network adapters, but that limits the available bandwidth by dedicating one adapter to Live Migration and one to the Host and Guests.  It also does not provide any failover capability in the event of a failure along the network path.  One way to overcome these shortfalls is to TEAM the network adapters and create virtual NICs that are presented to the Hyper-V server.  This adds a layer of complexity to the network design, but it also grants some flexibility, adds capacity and provides some failsafe redundancy.  An illustration of this server might look something like this.

Of course, if your server has more than just 2 network adapters, you can create TEAMs that use many more physical links, and you can also create additional TEAMs.  This is sometimes necessary if the network adapters are from different vendors, or if you just want to physically isolate the traffic.

For the purposes of illustration, let’s say we are creating TEAMs for three servers.  Two of them have only 2 Broadcom adapters, and the third has a total of 8 NICs, four of which are Intel.  The network list might then look like this:
  • Server: HyperVHost-01
    • TEAM_Bcom (2 ports) Host, LiveMig, Guests
  • Server: HyperVHost-02
    • TEAM_Bcom (2 ports) Host, LiveMig, Guests
  • Server: HyperVHost-03
    • TEAM_Bcom (4 ports) Host, LiveMig
    • TEAM_Intel (4 ports) Guests
As you can see, the third host has far more networking capacity, but I need to ensure that the connectivity model (three networks) remains the same so I can join it to the cluster with the first two hosts.  Using TEAMs to create these three network links will accomplish that goal.

I should note, however, that Microsoft does not officially support NIC Teaming, because the TEAM is a creation of 3rd party software that is provided by the NIC hardware vendor.  So if you have a problem related to networking and you contact Microsoft Product Support Services, they may ask that you break the TEAM in order to continue troubleshooting the Host / Hyper-V.  That being said, there are plenty of people using NIC TEAMing with Hyper-V, and it does work great.  In fact, there are several Microsoft blog articles out there explaining how to do teaming, but you might still find yourself in a sort of grey area of support should you end up needing help from PSS one day.

Pick a TEAM

There are several different types of TEAMs that can be created, including “Smart Load Balancing” (SLB) and two variants of 802.3ad grouping.  One advantage of SLB is that the members of the team can be connected to different switches, but it has proven to be a rather unreliable type of team when used for Hyper-V (virtual machines sometimes lose network connectivity when they migrate from one host to another; it has something to do with the way the MAC addresses are handled).  Using a form of 802.3ad grouping is the better way to go, and you have two options.
  • 802.3ad “Static” (Intel) or “Generic” (Broadcom) or channel-group “mode on” (Cisco)
  • 802.3ad “Dynamic” (Intel) or “LACP” (Broadcom) or channel-group “mode active” (Cisco)
Both options are slight variants of the IEEE 802.3 standard, which defines the physical and data link layers of wired Ethernet; “ad” is the Link Aggregation (aka “LAG”) specification.  Whichever flavor you choose may depend partly on the kind of network switch equipment you have, as this type of teaming requires that the switch be made aware of the team and that it supports the kind of team you are making.  Most Cisco switches support both variants, so you can take your pick.  I’ve had good experiences with the Static mode, so I tend to stick with that.
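On the Cisco side, the difference between the two flavors comes down to the channel-group mode you set on the member ports (a sketch; the group number is just an example):

! Static (Intel) / Generic (Broadcom) - no negotiation protocol
channel-group 11 mode on
! Dynamic (Intel) / LACP (Broadcom) - negotiated with LACP
channel-group 11 mode active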

Configure the Switch Ports

Before doing anything to the network adapters on your servers, you’ll need to identify the ports on the switch that you will be using for each TEAM.  Remember that all ports of the same team must be on the same switch, and that switch must support 802.3ad; but if you are using stacked switches or a switch chassis, the NICs can be connected to different blades/stack members because they are all acting as one logical switch with a shared configuration.

We want the LiveMig traffic to be on its own network, but since it will be using the TEAM, it must share the physical adapters and switch ports.  When we create the TEAM we can specify which network the traffic belongs to by “tagging” the traffic with a VLAN ID.  We can use this method of tagging to logically separate the traffic on the physical network, but we need to prepare the switch so it can process these VLAN ID tags and pass the traffic to the correct logical network, and we do that by setting the switch ports as a TRUNK.

The ports must also be told what TEAM or “Group” they belong to, as well as what kind of group it is.  On Cisco switches this is called a “channel-group” or an “EtherChannel”, and you set the group type with the channel-group mode.  If you are using the Intel “Static” or Broadcom “Generic” team, use “mode on”; if you are using the Intel “Dynamic” team or the Broadcom “LACP” team, use “mode active”.  The configuration of the ports looks something like this:
interface GigabitEthernet1/21
 description HyperVHost-01 TEAM_Bcom_A Host, LiveMig, Guests
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 11 mode on
end

interface GigabitEthernet2/21
 description HyperVHost-01 TEAM_Bcom_B Host, LiveMig, Guests
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 11 mode on
end
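Once the ports are configured and the team is up, you can confirm that the switch sees the bundle with a couple of standard IOS show commands (output will vary with your hardware):

show etherchannel summary
show interfaces port-channel 11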
  TIP: You can clear an existing configuration on a port by using “default interface gi1/21”.

The configuration for the interface ports used by Hosts 02 and 03 would be very similar, except for the descriptions and the channel-group numbers.  Each TEAM on each host should have its own unique channel-group.  The number you choose for the channel-group is not significant, as long as ports that belong to the same team use the same number.  For example, all of the ports for TEAM_Bcom on Host 1 will belong to channel-group 11, then TEAM_Bcom on Host 2 will belong to channel-group 12, and so on.  This channel-group number is what binds the switch ports together to act as a single logical link, but only the switch cares what the number actually is (note: this is not a VLAN ID; it’s something else altogether).  If you’d like, you can also give the channel-group a description by using this:
interface Port-channel11
description HyperVHost-01 TEAM_Bcom Host, LiveMig, Guests
  Repeat this configuration for all of the ports that will be used in the various TEAMs you will be creating, and make sure to use the description field to help you keep track of what is what.  Once all of the switch ports are configured, you are ready to return to your Windows hosts and configure the NICs.

Prepare the Adapters

The default drivers that Microsoft provides for Intel and Broadcom adapters do not support TEAMing, so you’ll need to download and install the vendor-specific drivers for the adapters in your server.

Note: Both Intel and Broadcom regularly update their driver packages, so I recommend that you keep a copy of the version you install so that in the future, if you decide to add another server to the cluster, you can use the exact same NIC driver.  It’s a good idea to use the same version of the NIC drivers on all cluster members to minimize differences between hosts when possible.  If you decide to upgrade the drivers on one host, you should really consider upgrading the drivers on all of them.

Broadcom: From the Broadcom download site, download and install the appropriate driver pack as well as the Management Applications package.  This includes their “BACS” utility, which creates and manages the TEAM.  It also includes a utility called BACScli.exe, which can be used on Windows Core servers; you can find an overview of that utility at nullsession.

Intel: From the Intel download site, download and install their driver pack.  There are no other utilities to install because all team settings are configured within the driver from the Control Panel.  Because Windows Core servers do not include a Control Panel, Intel has provided a utility called prosetCL.exe since v15 of their driver suite.  You can read more about that in Intel’s support article.

Once the drivers are installed, do yourself a favor and identify which NIC is which.  Simply connect and disconnect each adapter from the switch and note when the link status changes between Enabled and Disconnected, then rename the network adapter to something that will help you keep them straight.  I like to look at the back of the server and, going left to right, top to bottom, give them alphabetical names.

Note: You can rename the adapters from the command line (useful on Core editions) like this:
netsh int set int name="Local Area Connection" newname="TEAM_Intel_A"
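If you are not sure what the current adapter names are (or want to watch the link state change as you plug and unplug cables), you can list them first with the built-in netsh command:

netsh interface show interface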
  So I end up with something that looks like this:  

Shannon Fritz

Infrastructure Architect & Server Team Lead