With VMware’s vision of the Software-Defined Data Center, compute, networking, and storage resources all need to be virtualized and managed centrally. In Software-Defined Storage, heterogeneous storage resources are abstracted into logical pools that are consumed and managed through policies set at the datacenter level.
Why the focus on storage?
Customers consistently raise two concerns. First, storage is costly: based on informal data, storage accounts for 60–70% of overall infrastructure costs. Part of the reason is the complexity of mapping many tiers of applications onto only a few tiers of storage, which leads to gross over-provisioning and under-utilization of storage resources and dramatically adds to the cost. Second, storage is complex to manage. While customers can tailor storage to match an application’s needs, doing so requires manual management of SLAs. Compound that with the rapid growth of data and VMs, and data management at scale becomes extremely challenging.
Enter VSAN:
VSAN is a Software-Defined Storage solution fully integrated with VMware vSphere. It is an object-based storage system that aggregates local HDDs and SSDs, and it provides automated, policy-based storage management for VMs.


Requirements for VSAN

  1. At least 3 x ESXi hosts running version 5.5 Update 1 or later
  2. 1 x vCenter Server running version 5.5 Update 1 or later
  3. A minimum of 6 GB of memory on each ESXi host
  4. A 1 Gb or 10 Gb network on each host
  5. A VMkernel port configured for Virtual SAN traffic on each host
Want to know which hardware is supported? Head over to the HCL located at http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
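Requirement 5 can be met from the ESXi command line as well as the Web Client. A minimal sketch, assuming an existing VMkernel interface named vmk1 (substitute your own):

```shell
# Tag an existing VMkernel interface for Virtual SAN traffic
# on an ESXi 5.5 host (vmk1 is an assumed example name)
esxcli vsan network ipv4 add -i vmk1

# Verify which VMkernel interfaces carry Virtual SAN traffic
esxcli vsan network list
```

The same configuration can be done per host in the vSphere Web Client by ticking the Virtual SAN traffic checkbox on a VMkernel adapter.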

Disk Groups

Disk groups may be considered as containers in which a relationship between SSDs and magnetic disks is formed. When a virtual machine is created, its data is placed on magnetic disks, but its I/O is accelerated through an SSD: the SSD acts as a read cache and write buffer for that VM’s I/O. Each disk group can contain a maximum of 7 magnetic disks and exactly 1 SSD, and there can be up to 5 disk groups per ESXi host.
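A disk group can be claimed from the ESXi command line. This is a sketch only; the naa.* device identifiers below are placeholders, so list your host’s real devices first:

```shell
# List candidate devices (note which are SSDs)
esxcli storage core device list

# Claim one SSD (cache) and two magnetic disks (capacity) into a
# disk group; the naa.* identifiers are placeholders for real devices
esxcli vsan storage add -s naa.5000000000000001 \
                        -d naa.5000000000000002 \
                        -d naa.5000000000000003

# Review the resulting disk-group membership
esxcli vsan storage list
```

Alternatively, the Web Client can claim disks automatically when the cluster’s Virtual SAN mode is set to automatic.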

DEFINING VM REQUIREMENTS

When the Virtual SAN cluster is created, the shared Virtual SAN datastore, which has a set of capabilities that are surfaced up to vCenter, is also created. When a vSphere administrator begins to design a virtual machine, that design is influenced by the application it will host. That application might have many sets of requirements, including storage requirements.
The vSphere administrator uses a virtual machine storage policy to specify and contain the application’s storage requirements in the form of storage capabilities that will be attached to the virtual machine hosting the application; the specific storage requirements will be based on capabilities surfaced by the storage system.
In effect, the storage system provides the capabilities, and virtual machines consume them via requirements placed in the virtual machine storage policy.
Virtual SAN uses the concept of distributed RAID, by which a vSphere cluster can contend with the failure of a vSphere host, or of a component within a host (for example, magnetic disks, flash-based devices, and network interfaces), and continue to provide complete functionality for all virtual machines. Availability is defined on a per-virtual-machine basis through virtual machine storage policies.
vSphere administrators can specify the number of host component failures that a virtual machine can tolerate within the Virtual SAN cluster. If a vSphere administrator sets zero as the number of failures to tolerate in the virtual machine storage policy, one host or disk failure can impact the availability of the virtual machine.
Using virtual machine storage policies along with the Virtual SAN distributed RAID architecture, virtual machines and copies of their contents are distributed across multiple vSphere hosts in the cluster, so it is not necessary to migrate data from a failed node to a surviving host in the event of a failure.
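The arithmetic behind the "number of failures to tolerate" (FTT) setting is simple: with RAID-1 mirroring, tolerating n failures requires n + 1 replicas of the data, and witness components push the minimum host count to 2n + 1 (which is why the default FTT of 1 maps onto the 3-host minimum above). A quick sketch:

```shell
# Number of host/component failures to tolerate (FTT)
ftt=1

# RAID-1 mirroring keeps one replica per tolerated failure, plus the original
replicas=$((ftt + 1))

# Witness components raise the minimum host count to 2n+1
min_hosts=$((2 * ftt + 1))

echo "FTT=$ftt -> $replicas replicas across at least $min_hosts hosts"
```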


Limits

  • Number of nodes supported: 32
  • Max number of components per host: 3000
  • Max number of components per object: 64
  • Max size of a single object: 256 GB
  • Max number of disks per disk group: 7 magnetic disks and 1 SSD
  • Max number of disk groups per host: 5
  • Max number of VMs per host: 100