What’s New in #vSphere 6.0: Virtual Volumes #vvols #vmware #vmworld 2014

Virtual Volumes (VVols) is one of the new additions to the recently announced vSphere 6.0. VMware has been talking about it publicly since VMworld 2011 (I called VVols “VMware’s game changer for storage”), and it is a very significant update. VVols completely changes the way storage is presented, managed and consumed, and certainly for the better. Most storage vendors are on board, as their software needs to be able to support VVols, and they’ve been champing at the bit for VVols to be released. Talk was that it was technically ready for vSphere 5.5, but VMware decided to hold it back, perhaps to let VSAN have its year in the sun and to give 6.0 something big.

VVols is all about changing the way storage is deployed, managed and consumed, making the storage system VM-centric. VMware likes to use the term “making the VMDK a first-class citizen in the storage world”.


Virtual Volumes is part of VMware’s Software Defined Storage story, which is split between the control plane (Virtual Data Services, which is all policy driven) and the data plane (the Virtual Data Plane, which is where the data is actually stored).


Currently all storage is LUN-centric or volume-centric, especially when it comes to snapshots, clones and replication. VVols makes storage VM-centric.

With VVols, most data operations can be offloaded to the storage arrays. VVols goes much further, though, and makes storage arrays aware of individual VMDK files.


To provide the management capabilities of VVols, the following concepts are being introduced:

  • Vendor Provider (VP) – management of data operations
  • Storage Containers (SC) – management of data capacity
  • Protocol Endpoints (PE) – management of access control and communications
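To make the division of labour between these three concepts concrete, here is a minimal sketch of them as an object model. All class and field names are illustrative assumptions, not actual VASA 2.0 API types:

```python
from dataclasses import dataclass, field

# Hypothetical object model of the three VVols concepts described above;
# these are NOT real VASA 2.0 types, just an illustration of their roles.

@dataclass
class VendorProvider:
    """Vendor-written plug-in: exports array capabilities to vSphere via VASA."""
    vendor: str
    vasa_version: str = "2.0"
    capabilities: list = field(default_factory=list)

@dataclass
class StorageContainer:
    """Chunk of physical storage that groups VVols; replaces the datastore."""
    name: str
    capacity_gb: int
    capabilities: list  # e.g. ["replication", "encryption"]

@dataclass
class ProtocolEndpoint:
    """Access point from host to array; handles paths and access control."""
    protocol: str       # "FC", "iSCSI" or "NFS"
    address: str

sc = StorageContainer("gold-sc", 10240, ["replication", "snapshot-retention"])
pe = ProtocolEndpoint("iSCSI", "10.0.0.50")
```

The key point the model captures is the separation of concerns: capabilities flow through the VP, capacity lives in the SC, and all I/O paths go through the PE.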

Vendor Provider (VP)

The VP is a plug-in written by the storage vendor. It uses a set of out-of-band management APIs, VASA (updated to version 2.0). The VP exports storage array capabilities and presents them to vSphere through the VASA APIs.

Storage Containers (SC)

Storage containers are chunks of physical storage in which you create and logically group VVols; SCs are what would previously have been datastores. SCs are based on groupings of VMDKs, onto which application-specific SLAs are translated into capabilities through the VASA APIs. SC capacity is limited only by hardware capacity. You need at least one SC per storage system but can have multiple SCs per array. Storage array capabilities such as replication, encryption and snapshot retention are assigned to an SC, so you would have different SCs for different groupings of requirements. If you had one chunk of physical storage with SSD disks and another with SAS disks, for example (forgetting any auto-tiering for simplicity), you would present these as different SCs.

Protocol Endpoint (PE)

Protocol endpoints are the access points from the hosts to the storage systems and are created by storage administrators. All paths and policies are administered through protocol endpoints. PEs work with FC, iSCSI and NFS alike. They are intended to replace the concept of LUNs and mount points. At last, the storage transport protocol is independent of the disk storage mechanism.

VVols are then “bound” and “unbound” to a PE; ESXi/vCenter initiates the bind or unbind operation. Existing multipath policies and NFS topology requirements can be applied to the PE.
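The bind/unbind lifecycle can be sketched roughly as follows. This is a simplified illustration with hypothetical names, not the real VASA protocol: the essential idea is that the array hands back a secondary identifier the host uses to address the VVol through the PE while it is bound.

```python
# Illustrative sketch of the bind/unbind lifecycle: ESXi/vCenter asks the
# array (via the Vendor Provider) to bind a VVol to a Protocol Endpoint
# before I/O starts, and unbinds it when no longer needed (e.g. VM power-off).
# All names here are hypothetical, not real VASA calls.

class ProtocolEndpoint:
    def __init__(self, address):
        self.address = address
        self.bound = {}          # vvol_id -> secondary id used for I/O

    def bind(self, vvol_id):
        # The array returns a secondary identifier; the host addresses
        # this VVol through the PE using that identifier.
        secondary_id = f"{self.address}:{len(self.bound) + 1}"
        self.bound[vvol_id] = secondary_id
        return secondary_id

    def unbind(self, vvol_id):
        self.bound.pop(vvol_id, None)

pe = ProtocolEndpoint("naa.600a0980")
sid = pe.bind("vvol-data-01")    # e.g. on VM power-on
pe.unbind("vvol-data-01")        # e.g. on VM power-off
```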

Obviously this completely changes the way storage is allocated, managed and connected to.

Storage administrators no longer need to configure LUNs or NFS shares. They just set up a Protocol Endpoint for the array to establish a data path from VMs to VVols, and create SCs based on groupings of required capabilities and physical storage. The VM administrator then creates VVols in these SCs through vCenter/ESXi.

Policy Based Management

Importantly, this also adds Policy Based Management, which is integrated with Virtual Volumes. Policy is one of the key tenets of the SDDC, and I’ve written previously about how this can work.

Policies are set based on application needs for capacity, performance and availability. These are capabilities that the array advertises through the VASA APIs. You define the policies you want and then assign VVols to particular policies, and the external storage array automates control of where these VVols ultimately end up, replicates/encrypts/etc. them, and manages the service levels. This gives you per-VM storage service levels from a single self-tuning datastore. You can define performance rules as part of a VVol, for example specifying minimum and maximum ReadOPs, WriteOPs, ReadLatency and WriteLatency.
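As a rough illustration of what such a policy might look like, here is a sketch using the rule names from the text. The structure and the compliance check are my own assumptions for illustration, not VMware’s actual policy format:

```python
# Hypothetical per-VVol performance policy using the min/max rules named
# above (ReadOPs, WriteOPs, ReadLatency, WriteLatency). The layout is
# illustrative only, not the real SPBM schema.

policy = {
    "name": "tier1-db",
    "rules": {
        "ReadOPs":      {"min": 2000, "max": 10000},
        "WriteOPs":     {"min": 1000, "max": 5000},
        "ReadLatency":  {"max_ms": 5},
        "WriteLatency": {"max_ms": 10},
    },
}

def compliant(policy, observed):
    """Check observed per-VVol stats against the policy's bounds."""
    for metric, bounds in policy["rules"].items():
        value = observed.get(metric)
        if value is None:
            continue  # no measurement for this metric
        if "min" in bounds and value < bounds["min"]:
            return False
        if "max" in bounds and value > bounds["max"]:
            return False
        if "max_ms" in bounds and value > bounds["max_ms"]:
            return False
    return True

print(compliant(policy, {"ReadOPs": 3000, "ReadLatency": 4}))  # True
```

In the real system the array, not the host, is responsible for continuously meeting these bounds; the check above just shows what “compliance” means for such a rule set.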


To get an idea of how these policies are implemented, have a look at VSAN. VSAN creates a single datastore and then manages capacity, performance and availability per VM using policy within that single datastore. VSAN is implemented a little differently from VVols, though: VVols uses VASA 2.0 to communicate with an array’s VASA Provider to manage VVols on the array, while VSAN uses its own APIs to manage virtual disks. Both use Storage Policy Based Management to present and consume storage-specific capabilities.

A VM is made up of a number of VVols, which can each have different policies associated with them if you want:

  • a single VVol containing the VM config
  • one VVol for every virtual disk
  • one VVol for swap, if needed
  • one VVol per disk snapshot
  • one VVol per memory snapshot
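The breakdown of which VVols back a single VM can be sketched as a small helper. The function and naming scheme are hypothetical, purely to make the per-VM object count concrete:

```python
# Sketch of the VVol objects backing one VM: a config VVol, one per
# virtual disk, an optional swap VVol, and one per disk/memory snapshot.
# Function name and naming scheme are illustrative assumptions.

def vvols_for_vm(name, disks, swap=True, snapshots=0, memory_snapshots=0):
    vols = [f"{name}-config"]                       # one config VVol per VM
    vols += [f"{name}-{d}" for d in disks]          # one per virtual disk
    if swap:
        vols.append(f"{name}-swap")                 # swap VVol, if needed
    vols += [f"{name}-disksnap-{i}" for i in range(1, snapshots + 1)]
    vols += [f"{name}-memsnap-{i}" for i in range(1, memory_snapshots + 1)]
    return vols

print(vvols_for_vm("db01", ["disk1", "disk2"], snapshots=1, memory_snapshots=1))
# ['db01-config', 'db01-disk1', 'db01-disk2', 'db01-swap',
#  'db01-disksnap-1', 'db01-memsnap-1']
```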

When you snapshot a VM using the Web Client, this is translated into simultaneous snapshots of all the VM’s virtual disks, together with a snapshot of the VM’s memory if requested. Through the API, however, you can snapshot individual VMDK VVols.

There’s plenty of additional information on VVols available from the following links:

Virtual Volumes are certainly an exciting and interesting storage addition.
