If your hosts can see CDP information from the physical switches, this script will help you check or map your port configurations.

$Clustername = "hotness"
$report = @()
$allhosts = Get-Cluster $Clustername | Get-VMHost
foreach ($singlehost in $allhosts) {
    Get-VMHost $singlehost | Where-Object { $_.State -eq "Connected" } |
    %{ Get-View $_.ID } |
    %{ $esxname = $_.Name; Get-View $_.ConfigManager.NetworkSystem } |
    %{
        foreach ($physnic in $_.NetworkInfo.Pnic) {
            $pnic = $_.QueryNetworkHint($physnic.Device)
            foreach ($thing in $pnic) {
                $reportRow = "" | Select Hostname,vmnic,Ttl,Samples,DevId,Address,PortId,HardwarePlatform,Vlan,FullDuplex,Mtu
                if ($thing.ConnectedSwitchPort) {
                    $reportRow.Hostname = $esxname
                    $reportRow.vmnic = $physnic.Device
                    $reportRow.Ttl = $thing.ConnectedSwitchPort.Ttl
                    $reportRow.Samples = $thing.ConnectedSwitchPort.Samples
                    $reportRow.DevId = $thing.ConnectedSwitchPort.DevId
                    $reportRow.Address = $thing.ConnectedSwitchPort.Address
                    $reportRow.PortId = $thing.ConnectedSwitchPort.PortId
                    $reportRow.HardwarePlatform = $thing.ConnectedSwitchPort.HardwarePlatform
                    $reportRow.Vlan = $thing.ConnectedSwitchPort.Vlan
                    $reportRow.FullDuplex = $thing.ConnectedSwitchPort.FullDuplex
                    $reportRow.Mtu = $thing.ConnectedSwitchPort.Mtu
                    $report += $reportRow
                }
            }
        }
    }
}

Then you can pipe the $report variable to Export-Csv, or just print it on the command line to see the output.
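For example, to write the results out to a file (the path here is just a placeholder):

```powershell
# Write the CDP report to a CSV file; the path is an example
$report | Export-Csv -Path "C:\temp\cdp-report.csv" -NoTypeInformation
```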

We found an issue where our SAN wasn't handling VAAI commands correctly, so we needed to disable it.
Here is the code to do that:

$allhosts = Get-VMHost
foreach ($singlehost in $allhosts) {
    Set-VMHostAdvancedConfiguration -VMHost $singlehost -Name DataMover.HardwareAcceleratedMove -Value 0
    Set-VMHostAdvancedConfiguration -VMHost $singlehost -Name DataMover.HardwareAcceleratedInit -Value 0
    Set-VMHostAdvancedConfiguration -VMHost $singlehost -Name VMFS3.HardwareAcceleratedLocking -Value 0
}

Value of 1 = Enabled
Value of 0 = Disabled
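To confirm the change (or to record the current state before you touch anything), the matching Get- cmdlet can read the same settings back; a quick sketch:

```powershell
# Read the current VAAI-related advanced settings from every host
foreach ($singlehost in Get-VMHost) {
    Get-VMHostAdvancedConfiguration -VMHost $singlehost -Name DataMover.HardwareAcceleratedMove
    Get-VMHostAdvancedConfiguration -VMHost $singlehost -Name DataMover.HardwareAcceleratedInit
    Get-VMHostAdvancedConfiguration -VMHost $singlehost -Name VMFS3.HardwareAcceleratedLocking
}
```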

This code will help you step through the creation of vMotion networks on your hosts.

######### the vars ####
$thehost = "host1.pcli.me"
$theVswitch = "vSwitch1"
$vmotion1IP = ""
$vmotion2IP = ""
$theSubnetMask = ""

### the code ###
### Create the Vmotion Port groups
get-vmhost -name $thehost | Get-VirtualSwitch -name $theVswitch | new-VirtualPortGroup -name vmotion1 -vlanid 0
get-vmhost -name $thehost | Get-VirtualSwitch -name $theVswitch | new-VirtualPortGroup -name vmotion2 -vlanid 0

### Create the Vmotion VMKernels
New-VMHostNetworkAdapter -VMHost $thehost -PortGroup vmotion1 -VirtualSwitch $theVswitch -IP $vmotion1IP -SubnetMask $theSubnetMask -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $thehost -PortGroup vmotion2 -VirtualSwitch $theVswitch -IP $vmotion2IP -SubnetMask $theSubnetMask -VMotionEnabled:$true

### Change the NIC Teaming Policy to Active/Standby on both vMotion port groups.
### Use this code if you are using two vMotion networks and want them to run on separate NICs
$thingy = Get-VirtualSwitch -VMHost $thehost -Name $theVswitch | Get-VirtualPortGroup -Name "vmotion1" | Get-NicTeamingPolicy
$thingy | Set-NicTeamingPolicy -MakeNicActive "vmnic1" -MakeNicStandby "vmnic2"
$thingy = Get-VirtualSwitch -VMHost $thehost -Name $theVswitch | Get-VirtualPortGroup -Name "vmotion2" | Get-NicTeamingPolicy
$thingy | Set-NicTeamingPolicy -MakeNicActive "vmnic2" -MakeNicStandby "vmnic1"
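To verify the result, you can list the VMkernel adapters on the host and confirm vMotion is enabled where you expect; something along these lines:

```powershell
# Show the VMkernel adapters on the host with their vMotion state
Get-VMHostNetworkAdapter -VMHost $thehost -VMKernel |
    Select-Object Name, PortGroupName, IP, SubnetMask, VMotionEnabled
```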


This will enable Storage I/O Control for every datastore that lives in a datastore cluster, across the entire vCenter.
Edit as needed:

set-datastore -datastore (Get-DatastoreCluster | get-datastore) -StorageIOControlEnabled $true


Quick edits if you want a narrower scope:
Single datastore cluster:
set-datastore -datastore (Get-DatastoreCluster "MyStorageCluster" | get-datastore) -StorageIOControlEnabled $true

With a filter to only change datastores with "ThisText" in the name:
set-datastore -datastore (Get-DatastoreCluster | get-datastore | where {$_.name -match "ThisText"}) -StorageIOControlEnabled $true



Run this to reduce the load time of your PowerShell windows when using the PowerCLI add-ons.
You will need to run it in both the 32-bit and 64-bit PowerShell environments, and again each time you install a new version of PowerCLI.
The window may take a few minutes to complete.

Launch a 64-bit and a 32-bit PowerShell window, each run as Administrator.
Paste this into each window.

Set-Alias ngen (Join-Path ([System.Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()) ngen.exe)

Get-ChildItem -Path $env:SystemRoot\assembly\GAC_MSIL\VimService*.XmlSerializers |
ForEach-Object {
    $Name = $_.Name
    Get-ChildItem -Path $_.FullName
} |
Select-Object -Property @{N="Name";E={$Name}},
    @{N="Version";E={$_.Name.Split("_")[0]}},
    @{N="PublicKeyToken";E={$_.Name.Split("_")[-1]}} |
ForEach-Object {
    ngen install "$($_.Name), Version=$($_.Version), Culture=neutral, PublicKeyToken=$($_.PublicKeyToken)"
}

We found that setting the server BIOS or ESX Host Advanced Settings to a Balanced CPU power policy creates additional CPU wait time for VMs with more than a single core. This is seen more often in VMs with four or more cores.
If you click on the cluster level object, it will populate more windows on the right.
Click on the “View Resource Distribution Chart” link within the vSphere DRS window to view the following chart:
With the BIOS and ESX Host setting at “Balanced” we see hosts with low CPU usage but higher CPU wait times.

When we change this setting to High Performance, the entire cluster turns green!

Check out this post on how to check and set this setting at the ESX Host level via PowerCLI.

This will set the Advanced Setting “CPU Power Policy” for a host.
Keep in mind that some servers require that the BIOS CPU Power management setting be changed to “OS Control” for the ESX Host setting to work. Setting the BIOS to High Performance may use more watts but may reduce CPU wait time for VMs. Waiting for a CPU to “power up / boost to full speed” will create CPU wait time.

Set-VMHostAdvancedConfiguration -vmhost (get-vmhost “MyHost”) -Name Power.cpupolicy -Value static

The value options are:
Static = High Performance
Dynamic = Balanced
Off = Not Configured

This code will get the current value of the host:
Get-VMHost "MyHost" | Select Name,@{N="Power Policy";E={
    $powSys = Get-View $_.ExtensionData.ConfigManager.PowerSystem
    $powSys.Info.CurrentPolicy.ShortName
}}

You can add a host list array, or remove the hostname from the Get-VMHost part, if you want to check vCenter- or cluster-wide settings. The same goes for the Set-VMHostAdvancedConfiguration line. Creating a foreach loop over each host in a cluster will keep all hosts configured the same.
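As a sketch of that loop (the cluster name is a placeholder), setting the policy on every host in a cluster might look like:

```powershell
# Set the CPU power policy to High Performance (static) on every host in a cluster
# "MyCluster" is an example name
foreach ($singlehost in (Get-Cluster "MyCluster" | Get-VMHost)) {
    Set-VMHostAdvancedConfiguration -VMHost $singlehost -Name Power.cpupolicy -Value static
}
```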


This code will migrate everything on a standard switch to a distributed switch, cluster wide.
***Note: If the hosts in the cluster have inconsistent vmnic numbering, this script will probably explode if run at the cluster level.
For example: if all hosts in the cluster use "vmnic1 and vmnic2" for "vswitch1", then this works great.
Basic steps this code executes:
1- Set vars from a reference host in the cluster.
2- Creates the Distributed switch, scans the portgroup names on the specified standard vSwitch and creates the new portgroups on the vDS.
– It names the switch “vds-ClusterName” and the portgroups “ClusterName-YourOldPortGroupName”
3 – Sets all of the portgroups on the vDS to a “load based” policy.
4 – Adds the hosts to the vDS, then migrates all but one of the vmnics to the vDS.  I have it mark the vmnic as unused before it’s removed from the host.
5 – Migrates all of the VMs that were part of that vSwitch to the new vDS portgroups.
6 – Removes the last vmnic from the vSwitch and adds it to the vDS.  I added a check that skips the vmnic migration if a VM still exists on the vSwitch.

This code could be cleaner, but I shoot for slow and sure: I paste it in steps to be 100% sure everything completes.
Some quick notes before you start :
– I suggest setting DRS to manual to prevent any vm migrations. An active migration will cause a VM portgroup change to fail.
– If your vSwitch has more than two vmnics, you can add those to the vars below. Anything that is not added to $hostvnic1 is removed from the host in the first vnic pull. For example: If you set $hostvnic1 to “vmnic3,vmnic4” then it will keep vmnic3 and vmnic4 on the host and move all remaining vmnics to the vDS during the first vmnic migration. The end of the script will migrate the remaining vnics noted in $hostvnic1.

Add-PSSnapin VMware.VimAutomation.Core
Add-PSSnapin VMware.VimAutomation.Vds
Connect-VIServer "MyvCenter"
####### Step1 = These are the only settings you need to change for a basic vDS migration
$datacentername = "MyDataCenter"
$clustername = "MyCluster"
$hostname = "MyHost.pcli.me"
$vswitch = "vswitch2"
$hostvnic1 = "vmnic4"
$hostvnic2 = "vmnic5"

####### Step2 – vDS and portgroup creation
$VDSName = "vDS-" + $Clustername
$UplinkName = $VDSName + "Uplinks"
New-VDSwitch -Name $VDSName -Location $datacentername
Get-VDSwitch -Name $VDSName | Get-VDPortgroup | ?{$_.Name -like "*Uplinks*"} | Set-VDPortgroup -Name "$UplinkName"
$Portgroups = Get-VMHost -Name $hostname | Get-VirtualSwitch -Name $vswitch | Get-VirtualPortGroup
foreach ($port in $Portgroups) {
    $Pname = $clustername + "-" + $port.Name
    $PVlanid = $port.VlanId
    Get-VDSwitch -Name $VDSName | New-VDPortgroup -Name $Pname -VlanId $PVlanid -NumPorts 256
}

####### Step3 – Set the portgroups to a Load Based Policy
$DVSPortgroups = Get-VirtualSwitch -Name $VDSName | Get-VirtualPortGroup
Function Set-VDPortGroupTeamingPolicy {
    param (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        $VDPortgroup
    )
    Process {
        $spec = New-Object VMware.Vim.DVPortgroupConfigSpec
        $spec.configVersion = $VDPortgroup.ExtensionData.Config.ConfigVersion
        $spec.defaultPortConfig = New-Object VMware.Vim.VMwareDVSPortSetting
        $spec.defaultPortConfig.uplinkTeamingPolicy = New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy
        $spec.defaultPortConfig.uplinkTeamingPolicy.inherited = $false
        $spec.defaultPortConfig.uplinkTeamingPolicy.policy = New-Object VMware.Vim.StringPolicy
        $spec.defaultPortConfig.uplinkTeamingPolicy.policy.inherited = $false
        $spec.defaultPortConfig.uplinkTeamingPolicy.policy.value = "loadbalance_loadbased"
        $VDPortgroup.ExtensionData.ReconfigureDVPortgroup($spec)
    }
}
foreach ($port in $DVSPortgroups) {
    $singlePG = $port.Name
    Get-VDPortgroup $singlePG | Set-VDPortGroupTeamingPolicy
}

####### Step4 – First vmnic pull – For each host in the cluster, mark the vmnics in var $hostvnic2 as unused, remove them from the host and add them to the vDS.
$clusterhosts = @()
$clusterhosts += Get-Cluster $clustername | Get-VMHost
foreach ($hosts in $clusterhosts) {
    $singlehost = $hosts.Name
    Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicUnused $hostvnic2
    sleep 10
    Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Set-VirtualSwitch -Nic $hostvnic1 -Confirm:$false
    Add-VDSwitchVMHost -VDSwitch $VDSName -VMHost $singlehost
    $hostadapter = Get-VMHost -Name $singlehost | Get-VMHostNetworkAdapter -Physical -Name $hostvnic2
    Get-VDSwitch $VDSName | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $hostadapter -Confirm:$false
}

####### Step5 – Migrate all of the VMs on the vSwitch to the vDS port groups.
foreach ($hosts in $ClusterHosts) {
    $singlehost = $hosts.Name
    foreach ($port in $Portgroups) {
        $Pname = $clustername + "-" + $port.Name
        $OldNetwork = $port.Name
        $NewNetwork = $Pname
        Get-Cluster $clustername | Get-VMHost $singlehost | Get-VM | Get-NetworkAdapter | Where {$_.NetworkName -eq $OldNetwork} | Set-NetworkAdapter -NetworkName $NewNetwork -Confirm:$false
    }
}

####### Step6 – For each host, check if any VMs still exist on the vSwitch; if it's empty, migrate the vmnics in var $hostvnic1 to the vDS.
$nic = @()
foreach ($hosts in $ClusterHosts) {
    $singlehost = $hosts.Name
    $checkme = Get-VMHost $singlehost | Get-VM | Get-VirtualSwitch | where {$_.Name -eq $vswitch} | %{$_.Name} | Out-String | Measure-Object -Character
    $checkcount = $checkme.Characters
    if ($checkcount -lt 1) {
        Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicUnused $hostvnic1
        sleep 10
        # $nic is an empty array, so this strips the remaining vmnics from the vSwitch
        Get-VMHost $singlehost | Get-VirtualSwitch | Where-Object {$_.Name -eq $vswitch} | Set-VirtualSwitch -Nic $nic -Confirm:$false
        $hostadapter = Get-VMHost -Name $singlehost | Get-VMHostNetworkAdapter -Physical -Name $hostvnic1
        Get-VDSwitch $VDSName | Add-VDSwitchPhysicalNetworkAdapter -VMHostNetworkAdapter $hostadapter -Confirm:$false
    } else {
        Write-Host "$singlehost vmnic $hostvnic1 Migration FAILED - VMs STILL EXIST ON THE VSWITCH!!"
    }
}
####### End of script #######

Here are some checks I do before and after running this:
– Before I run it, I set the cluster DRS Setting to manual or partial. If a VM is migrating during the script it will not be able to migrate its portgroup config.
– I also like to pull a list of all VMs in the cluster and start a constant Ping. This way I know if something didn’t migrate correctly. For a click list, click the cluster level and select the Virtual Machine tab, then you can right click in the list and export to csv.
– After Step2 I check the vDS to make sure everything was created correctly and that the names look good.
– After Step4 I like to scan a few hosts to make sure the correct vmnics are still active. You can also check the vDS and see all of the hosts and active vmnics.
– During Step5 I watch my active ping script to look for lost connections.
– Before pasting Step6, to remove the last of the vmnics from the hosts, I run through and look for vms that did not migrate. The script will catch it but I like to double check.
– Once the script is complete, I launch the web client interface and enable "Health Check". You can enable it by clicking on a Distributed Switch, then Manage > Health Check, then clicking EDIT and changing the two values "VLAN and MTU" and "Teaming and failover" to Enabled, then clicking OK. Once that is done, you can click the Monitor tab, then Health, to see each host's vDS health. This will tell you if VLANs are missing from your trunks or if you have an invalid MTU setting.

Some extra things to note :
– The health check will complain if you leave the vDS uplink vlans set to 0-4094. I change the list to match what vlans I have configured on my portgroups.
– You can set the vDS to use a 9000 MTU by placing (-mtu 9000) in Step2 on the “New-vdswitch” line

This code will remove any mounted ISOs from a VM or set of VMs.
A VM with an ISO attached will not migrate off a host when the host is placed into Maintenance Mode or for a manual vmotion.

Single VM:
get-vm “MyVM” | get-cddrive |set-cddrive -nomedia -confirm:$false

VMs on a host:
get-vmhost “MyHost” | get-vm | get-cddrive |set-cddrive -nomedia -confirm:$false

VMs in a cluster:
get-cluster “MyCluster” | get-vm | get-cddrive |set-cddrive -nomedia -confirm:$false

VMs on a datastore:
get-vm -datastore “MyDatastore” | get-cddrive |set-cddrive -nomedia -confirm:$false
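If you want to see which VMs have an ISO mounted before you start disconnecting them, a quick report along these lines works (the cluster name is a placeholder):

```powershell
# List VMs with an ISO currently mounted; "MyCluster" is an example name
Get-Cluster "MyCluster" | Get-VM | Get-CDDrive |
    Where-Object { $_.IsoPath } |
    Select-Object Parent, IsoPath
```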



This code will create a new distributed switch at the datacenter level of your vCenter.
Keep in mind that distributed switches are vCenter objects instead of host objects.
If you are running vCenter 5.1 and deploying distributed switches, I highly suggest the vDS backup script here > http://www.pcli.me/?paged=41

Add-PSSnapin VMware.VimAutomation.Vds
New-VDSwitch -Name "MyNewvDS" -Location "MyDataCenterName" -Mtu 9000
