What’s New for vMotion in vSphere 6.0

I found this posted on Julian Wood’s blog in reference to the keynotes at VMworld 2014, so please visit his blog!

vMotion is one of the most basic yet coolest features of vSphere. People generally consider the first time they saw vMotion work as their “wow” moment showing the power of virtualisation. In vSphere 5.5, vMotion is possible within a single cluster and across clusters within the same datacenter and vCenter. With vSphere 6.0, vMotion is being expanded to include vMotion across vCenters, across virtual switches, across long distances and over routed vMotion networks, aligning vMotion’s capabilities with larger data center environments.

vMotion across vCenters will simultaneously change compute, storage, networks, and management. This leverages vMotion with unshared storage and will support local, metro and cross-continental distances.


You will need the same SSO domain for both vCenters if you use the GUI to initiate the vMotion, as the VM UUID can be maintained across vCenter Server instances, but with the API it is possible to use a different SSO domain. VM historical data such as events, alarms and task history is preserved. Performance data will be preserved once the VM is moved but is not aggregated in the vCenter UI; the information can still be accessed using third-party tools or the API, using the VM instance ID, which remains the same across vCenters.

When a VM moves across vCenters, HA properties are preserved and DRS anti-affinity rules are honoured. The standard vMotion compatibility checks are executed. You will need 250 Mbps network bandwidth per vMotion operation.
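To put that 250 Mbps figure in perspective, here is a rough back-of-the-envelope sketch (my own illustration, not a VMware formula) of how long a single pass over a VM's memory would take at a given bandwidth. Real vMotion times will differ, since dirtied pages are re-copied in further rounds and protocol overhead is ignored here.

```python
def vmotion_transfer_estimate_s(memory_gib: float, bandwidth_mbps: float = 250.0) -> float:
    """Rough lower bound on the time to copy a VM's memory once over
    the vMotion network, ignoring page re-copy rounds, compression,
    and protocol overhead."""
    memory_bits = memory_gib * 1024**3 * 8      # GiB -> bits
    bandwidth_bps = bandwidth_mbps * 1_000_000  # Mbps -> bits per second
    return memory_bits / bandwidth_bps

# A 16 GiB VM over the 250 Mbps minimum:
print(f"{vmotion_transfer_estimate_s(16):.0f} s")  # ≈ 550 s, roughly 9 minutes
```

This makes it clear why the per-operation bandwidth requirement matters: a large-memory VM on a link at the bare minimum will take many minutes for even one memory pass.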

Another new function is being able to vMotion or clone powered-off VMs across vCenters. This uses the VMware Network File Copy (NFC) protocol.

vMotion previously could only occur within a network managed by a single virtual switch, either a Virtual Standard Switch (VSS) or Virtual Distributed Switch (VDS). vMotion across vCenters will allow VMs to vMotion to a network managed by a different virtual switch effectively switching the networks seamlessly. This will include:

  • from VSS to VSS
  • from VSS to VDS
  • from VDS to VDS

You will not be able to vMotion from a VDS to a VSS. VDS port metadata will be transferred and cross vCenter vMotion is still transparent to the guest OS. You will still need Layer 2 VM network connectivity.

In vSphere 5.5, vMotion requires Layer 2 connectivity for the vMotion network. vSphere 6.0 will allow VMs to vMotion using routed vMotion networks.
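Routed vMotion is enabled by the dedicated vMotion TCP/IP stack that ships with ESXi 6.0, which gets its own routing table and default gateway. The sketch below shows the general esxcli shape; the interface name (vmk1), port group name, IP addresses and gateway are all placeholders for illustration, so check the esxcli reference for your ESXi build before running anything.

```shell
# List the available TCP/IP stacks; 6.0 includes a dedicated "vmotion" stack:
esxcli network ip netstack list

# Create a VMkernel interface bound to the vmotion stack
# (vmk1 and the port group name are assumptions for this example):
esxcli network ip interface add --interface-name=vmk1 \
    --portgroup-name="vMotion-PG" --netstack=vmotion

# Assign it an address on the local vMotion subnet:
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Give the vmotion stack its own default gateway so traffic to
# remote vMotion subnets can be routed:
esxcli network ip route ipv4 add --gateway=192.168.10.1 \
    --network=default --netstack=vmotion
```

Because the vMotion stack has its own gateway, vMotion traffic can reach a destination host on a different subnet without touching the management network's routing.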

Another great addition in vSphere 6.0 is Long-distance vMotion. The idea is to support cross-continental US distances with RTTs of 100 ms or more while still maintaining the standard vMotion guarantees. Use cases are:

  • Disaster avoidance
  • SRM and disaster avoidance testing
  • Multi-site load balancing and capacity utilisation
  • Follow-the-sun scenarios

You can also use Long-distance vMotion to live-move VMs onto vSphere-based public clouds, including VMware vCHS, now called vCloud Air.

This may be long-distance vMotion but it’s still vMotion: a Layer 2 connection is required for the VM network at both source and destination, and the same VM IP address needs to be available at the destination. The vCenters need to connect over Layer 3, and the vMotion network itself can now be a Layer 3 connection. The vMotion network should be secured, either by being dedicated or encrypted, as VM memory is copied across this network.

vMotion not only involves moving a VM’s CPU and memory state; storage needs to be taken into consideration if you are moving VMs across sites and arrays. There are various storage replication architectures to allow this. Active-active replication over a shared site, as with a metro cluster, appears as shared storage to a VM, so this works like classic vMotion. For geo-distance vMotion, where active-active storage replication is not possible, VVols will be required, which creates a whole new use case for VVols.

Author: Jon
