This code will migrate everything on a standard vSwitch to a distributed switch, cluster-wide.
***Note: If the hosts in the cluster have inconsistent vmnic numbering, this script will probably explode if run at the cluster level.
For example: If all hosts in the cluster use vmnic1 and vmnic2 for vSwitch1, then this works great.
Basic steps this code executes:
1 – Sets vars from a reference host in the cluster.
2 – Creates the distributed switch, scans the portgroup names on the specified standard vSwitch, and creates the new portgroups on the vDS.
– It names the switch "vDS-ClusterName" and the portgroups "ClusterName-YourOldPortGroupName"
3 – Sets all of the portgroups on the vDS to a "load based" policy.
4 – Adds the hosts to the vDS, then migrates all but one of the vmnics to the vDS. I have it mark the vmnic as unused before it's removed from the vSwitch.
5 – Migrates all of the VMs that were part of that vSwitch to the new vDS portgroups.
6 – Removes the last vmnic from the vSwitch and adds it to the vDS. I added a check to not migrate the vmnic if a VM still exists on the vSwitch.
This code could be cleaned up, but I shoot for slow and sure. I paste it in steps to be 100% sure everything completes.
Some quick notes before you start :
– I suggest setting DRS to manual to prevent any VM migrations. An active migration will cause a VM portgroup change to fail.
– If your vSwitch has more than two vmnics, you can add those to the vars below. Anything that is not listed in $hostvnic1 is removed from the standard vSwitch in the first vnic pull. For example: If you set $hostvnic1 to "vmnic3","vmnic4", it will keep vmnic3 and vmnic4 on the standard vSwitch and move all remaining vmnics to the vDS during the first vmnic migration. The end of the script will migrate the remaining vnics noted in $hostvnic1. Both of these notes are sketched right below.
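A quick, hedged sketch of those two notes, using placeholder names that match the vars in Step1 (adjust to your environment):

# Optional prep - set DRS to manual so nothing vMotions mid-script
Set-Cluster -Cluster "MyCluster" -DrsAutomationLevel Manual -Confirm:$false
# If the vSwitch has more than two vmnics, make $hostvnic1 an array of the nics that stay behind until the end
$hostvnic1 = "vmnic3","vmnic4"
$hostvnic2 = "vmnic5"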
Add-PSSnapin VMware.VimAutomation.Core
Add-PSSnapin VMware.VimAutomation.Vds
Connect-VIServer "MyvCenter"
####### Step1 – These are the only settings you need to change for a basic vDS migration
$datacentername = "MyDataCenter"
$clustername = "MyCluster"
$hostname = "MyHost.pcli.me"
$vswitch = "vswitch2"
$hostvnic1 = "vmnic4"
$hostvnic2 = "vmnic5"
####### Step2 – vDS and portgroup creation
$VDSName = "vDS-"+$Clustername
$UplinkName = $vdsname+"Uplinks"
new-vdswitch -name $VDSName -location $datacentername
get-vdswitch -name $vdsname | get-vdportgroup | ?{$_.name -like "*Uplinks*"} | set-vdportgroup -name "$uplinkName"
$Portgroups = get-vmhost -name $hostname | get-virtualswitch -name $vswitch | get-virtualportgroup
foreach($port in $portgroups){
$Pname = $clustername +"-"+ $port.name
$PVlanid = $port.vlanid
Get-VDSwitch -Name $vdsname | New-VDPortgroup -Name $Pname -Vlanid $PVlanid -numports 256
}
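After Step2 I pause and eyeball what was created before moving on. A small, optional check (not part of the migration itself):

# Sanity check - list the new vDS portgroups with their VLANs and port counts
Get-VDSwitch -Name $VDSName | Get-VDPortgroup | Select-Object Name, VlanConfiguration, NumPorts | Format-Table -AutoSize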
####### Step3 – Set the portgroups to a Load Based Policy
$DVSPortgroups = get-virtualswitch -name $vdsname | get-virtualportgroup
Function Set-VDPortGroupTeamingPolicy {
param (
[Parameter(Mandatory=$True,ValueFromPipeline=$True,ValueFromPipelinebyPropertyName=$True)]
$VDPortgroup
)
Process {
$spec = New-Object VMware.Vim.DVPortgroupConfigSpec
$spec.configVersion = $VDPortgroup.ExtensionData.Config.ConfigVersion
$spec.defaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting
$spec.defaultPortConfig.uplinkTeamingPolicy = New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy
$spec.defaultPortConfig.uplinkTeamingPolicy.inherited = $false
$spec.defaultPortConfig.uplinkTeamingPolicy.policy = New-Object VMware.Vim.StringPolicy
$spec.defaultPortConfig.uplinkTeamingPolicy.policy.inherited = $false
$spec.defaultPortConfig.uplinkTeamingPolicy.policy.value = "loadbalance_loadbased"
$VDPortgroup.ExtensionData.ReconfigureDVPortgroup_Task($spec)
}
}
foreach($port in $DVSportgroups){
$singlePG = $port.name
Get-VDPortgroup $singlePG | Set-VDPortgroupTeamingPolicy
}
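If your PowerCLI build ships the Get-VDUplinkTeamingPolicy cmdlet in the Vds module, a quick hedged check confirms every portgroup actually picked up the load-based policy (the value normally reads LoadBalanceLoadBased):

# Optional verify - report the teaming policy on every vDS portgroup
foreach($pg in (Get-VDSwitch -Name $VDSName | Get-VDPortgroup)){
"{0} : {1}" -f $pg.Name, ($pg | Get-VDUplinkTeamingPolicy).LoadBalancingPolicy
}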
####### Step4 – First vmnic pull – For each host in the cluster, mark the vmnics in var $hostvnic2 as unused, remove them from the vSwitch and add them to the vDS.
$clusterhosts = @()
$clusterhosts += get-cluster $clustername | get-vmhost
foreach($hosts in $ClusterHosts){
$singlehost = $hosts.name
Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | get-nicteamingpolicy | set-nicteamingpolicy -makenicunused $hostvnic2
sleep 10
Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Set-VirtualSwitch -Nic $hostvnic1 -confirm:$false
Add-VDSwitchVMHost -vdswitch $vdsname -vmhost $singlehost
$hostadapter = get-vmhost -name $singlehost | Get-vmhostnetworkadapter -physical -name $hostvnic2
get-vdswitch $VDSName |add-vdswitchphysicalnetworkadapter -vmhostnetworkadapter $hostadapter -confirm:$false
}
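After this first pull I scan the hosts to make sure the right vmnics stayed behind. A hedged spot check (the Nic property on the standard vSwitch should now only list what's in $hostvnic1):

# Spot check - each host's standard vSwitch should only show the $hostvnic1 nics now
Get-Cluster $clustername | Get-VMHost | Get-VirtualSwitch -Name $vswitch | Select-Object VMHost, Nic
# And every cluster host should be attached to the vDS (assumes Get-VMHost supports -DistributedSwitch in your build)
Get-VMHost -DistributedSwitch $VDSName | Select-Object Name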
####### Step5 – Migrate all of the VMs on the vSwitch to the vDS port groups.
foreach($hosts in $ClusterHosts){
$singlehost = $hosts.name
foreach($port in $portgroups){
$Pname = $clustername +"-"+ $port.name
$OldNetwork = $port.name
$NewNetwork = $Pname
Get-Cluster $clustername |get-vmhost $singlehost |Get-VM |Get-NetworkAdapter |Where {$_.NetworkName -eq $OldNetwork } |Set-NetworkAdapter -NetworkName $NewNetwork -Confirm:$false
}
}
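Before pasting Step6 I look for stragglers. A hedged check that lists any VM adapter still pointing at one of the old standard portgroup names:

# Straggler check - any adapters still on the old standard portgroup names?
$oldnames = $portgroups | Select-Object -ExpandProperty Name
Get-Cluster $clustername | Get-VM | Get-NetworkAdapter | Where-Object { $oldnames -contains $_.NetworkName } | Select-Object Parent, Name, NetworkName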
####### Step6 – For each host, check if any VMs still exist on the vSwitch; if it's empty, migrate the vmnics in var $hostvnic1 to the vDS.
$nic = @() # intentionally empty - used below to strip the last vmnics off the standard vSwitch
foreach($hosts in $ClusterHosts){
$singlehost = $hosts.name
$checkme = get-vmhost $singlehost | get-vm | get-virtualswitch | where {$_.name -eq $vswitch} | %{$_.name} | out-string |measure-object -character
$checkcount = $checkme.characters
if($checkcount -lt 1){
Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | get-nicteamingpolicy | set-nicteamingpolicy -makenicunused $hostvnic1
sleep 10
Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Set-VirtualSwitch -Nic $nic -confirm:$false
$hostadapter = get-vmhost -name $singlehost | Get-vmhostnetworkadapter -physical -name $hostvnic1
get-vdswitch $VDSName |add-vdswitchphysicalnetworkadapter -vmhostnetworkadapter $hostadapter -confirm:$false
}else {
Write-host "$singlehost vmnic $hostvnic1 Migration FAILED – VMs STILL EXIST ON THE VSWITCH!!"
}
}
####### End of script #######
Here are some checks I do before and after running this:
– Before I run it, I set the cluster DRS setting to manual or partial. If a VM is migrating during the script, it will not be able to migrate its portgroup config.
– I also like to pull a list of all VMs in the cluster and start a constant ping. This way I know if something didn't migrate correctly. For a quick list, click the cluster level and select the Virtual Machines tab, then you can right-click in the list and export to CSV.
– After Step2 I check the vDS to make sure everything was created correctly and that the names look good.
– After Step4 I like to scan a few hosts to make sure the correct vmnics are still active. You can also check the vDS and see all of the hosts and active vmnics.
– During Step5 I watch my active ping script to look for lost connections.
– Before pasting Step6, which removes the last of the vmnics from the hosts, I run through and look for VMs that did not migrate. The script will catch it, but I like to double-check.
– Once the script is complete, I launch the Web Client and enable Health Check. Click the Distributed Switch, go to Manage > Health Check, click Edit, change the two values "VLAN and MTU" and "Teaming and failover" to Enabled, then click OK. Once that is done, you can click the Monitor tab and then Health to see each host's vDS health. You can find out if VLANs are missing on your trunks or if you have an invalid MTU setting. If you prefer to stay in PowerCLI, there's a hedged sketch right after this list.
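Here's that sketch: enabling both health checks through the API instead of the Web Client. It assumes the vDS view exposes UpdateDVSHealthCheckConfig_Task and the two config types shown (vSphere 5.1+), so treat it as a starting point rather than the one true way:

# Hedged sketch - enable the vDS health checks from PowerCLI
$vds = Get-VDSwitch -Name $VDSName
$vlanMtu = New-Object VMware.Vim.VMwareDVSVlanMtuHealthCheckConfig
$vlanMtu.Enable = $true
$vlanMtu.Interval = 1
$teaming = New-Object VMware.Vim.VMwareDVSTeamingHealthCheckConfig
$teaming.Enable = $true
$teaming.Interval = 1
$vds.ExtensionData.UpdateDVSHealthCheckConfig_Task(@($vlanMtu,$teaming))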
Some extra things to note:
– The health check will complain if you leave the vDS uplink VLAN trunk range set to 0-4094. I change the range to match the VLANs I actually have configured on my portgroups.
– You can set the vDS to use a 9000 MTU by adding -Mtu 9000 to the New-VDSwitch line in Step2. Both tweaks are sketched below.
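A hedged sketch of both tweaks. The trunk range "10-20" is just a placeholder for whatever VLANs your portgroups really use, and it assumes Set-VDVlanConfiguration and the -Mtu parameter are available in your PowerCLI build:

# Step2 variant - create the vDS with jumbo frames from the start
New-VDSwitch -Name $VDSName -Location $datacentername -Mtu 9000
# Trim the uplink portgroup VLAN trunk range so Health Check stops flagging 0-4094
Get-VDSwitch -Name $VDSName | Get-VDPortgroup -Name $UplinkName | Set-VDVlanConfiguration -VlanTrunkRange "10-20"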