vSphere 7 - Identify And Differentiate Storage Access Protocols For vSphere

VMware vSphere 7.x Study Guide for the VMware Certified Professional – Data Center Virtualization certification. This article covers Section 1: Architectures and Technologies, Objective 1.3 – Identify and differentiate storage access protocols for vSphere (NFS, iSCSI, SAN, etc.).

This article is part of the VMware vSphere 7.x - VCP-DCV Study Guide. Check out this page first for an introduction, disclaimer, and updates on the guide. The page also includes a collection of articles matching each objective of the official VCP-DCV.

Identify and differentiate storage access protocols for vSphere

For objective 1.3 of the VMware vSphere 7.x exam, you must know the traditional storage virtualization models: local and networked storage. It is also critical to study essential VMware storage concepts to better understand how virtual machines communicate with their virtual disks stored on a datastore. Software-defined storage models also get a brief look here.

The topics below will help you identify and differentiate storage access protocols for vSphere.

vSphere supports various storage options and functionalities in traditional and software-defined storage environments. A high-level overview of vSphere storage elements and aspects will help you plan a proper storage strategy for your virtual data center.

Traditional Storage Virtualization Models

Generally, storage virtualization refers to a logical abstraction of physical storage resources and capacities from virtual machines and their applications. ESXi provides host-level storage virtualization.

Software-Defined Storage Models

In addition to abstracting underlying storage capacities from VMs, as traditional storage models do, software-defined storage abstracts storage capabilities.

See Introduction to Storage

1. Key VMware Storage Concepts

Storage Area Networks

A storage area network (SAN) is a specialized high-speed network that connects computer systems, or ESXi hosts, to high-performance storage systems. ESXi can use Fibre Channel or iSCSI protocols to connect to storage systems.

Storage Device or LUN

In the ESXi context, the terms device and LUN are used interchangeably. Typically, both terms mean a storage volume that is presented to the host from a block storage system and is available for formatting.
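If you want to see which devices (LUNs) an ESXi host has detected, you can query them programmatically. Below is a minimal sketch using the pyVmomi Python SDK; the vCenter address and credentials are placeholders for your own environment.

```python
# Minimal sketch, assuming pyVmomi is installed and a vCenter is reachable.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; keep certificate checks in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Walk every host and print the block devices (LUNs) it has detected
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    print(f"Host: {host.name}")
    for lun in host.config.storageDevice.scsiLun:
        # canonicalName is the identifier ESXi shows for the device (naa./mpx./eui.)
        print(f"  {lun.canonicalName}  type={lun.deviceType}  model={lun.model}")
view.Destroy()
Disconnect(si)
```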

VMware vSphere VMFS

The datastores you deploy on block storage devices use the native vSphere Virtual Machine File System (VMFS) format. It is a special high-performance file system format optimized for storing virtual machines.
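A quick way to tell VMFS datastores apart from NFS, vSAN, or vVols datastores is to read each datastore's summary. This is a minimal pyVmomi sketch with the same placeholder connection details as above; the type field reports VMFS, NFS, NFS41, vsan, or VVOL.

```python
# Minimal sketch: print each datastore's filesystem type and capacity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print(f"{s.name:30} type={s.type:6} "
          f"capacity={s.capacity / 2**30:.0f} GiB  free={s.freeSpace / 2**30:.0f} GiB")
view.Destroy()
Disconnect(si)
```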

Virtual Disks

Virtual disks are large physical files, or sets of files, that can be copied, moved, archived, and backed up like any other file. You can configure virtual machines with multiple virtual disks.

Raw Device Mapping

In addition to virtual disks, vSphere offers a raw device mapping mechanism (RDM). RDM is useful when a guest operating system inside a virtual machine requires direct access to a storage device. 

See more About Raw Device Mapping
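To see the difference between regular virtual disks and RDMs in practice, you can inspect a VM's disk backings: a flat VMDK file versus a raw device mapping. The sketch below uses pyVmomi; the VM name app01 and the connection details are hypothetical.

```python
# Minimal sketch: list a VM's virtual disks and flag which ones are RDMs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")  # hypothetical VM name

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            kind = f"RDM, {backing.compatibilityMode}, mapped device {backing.deviceName}"
        else:
            kind = f"VMDK {getattr(backing, 'fileName', '?')}"
        print(f"{dev.deviceInfo.label}: {dev.capacityInKB // 2**20} GiB ({kind})")

view.Destroy()
Disconnect(si)
```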

2. Traditional Storage Virtualization Models

In traditional storage environments, the ESXi storage management process starts with storage space that your storage administrator preallocates on different storage systems. ESXi supports local storage and networked storage.

Local Storage can be internal hard disks located inside your ESXi host. It can also be external storage systems located outside the host and connected to it directly through protocols such as SAS or SATA.

In a typical local storage setup, the ESXi host uses a single connection to a storage device. You can create a VMFS datastore on that device, which you use to store virtual machine disk files.

ESXi supports various local storage devices, including SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices.

Local storage does not require a storage network to communicate with your host. You need a cable connected to the storage unit and, when needed, a compatible HBA in your host.

Networked Storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely. Typically, the host accesses these systems over a high-speed storage network.

2.1 Internet SCSI (iSCSI)

ESXi can connect to external SAN storage using the Internet SCSI (iSCSI) protocol. iSCSI is a SAN transport that uses Ethernet connections (TCP/IP networks) between computer systems (ESXi hosts) and high-performance storage systems. 

ESXi offers the following types of iSCSI connections:

Hardware iSCSI

Your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent.

Software iSCSI

Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity.
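To check which iSCSI connection type a host uses, you can list its iSCSI adapters and see whether each one is the software initiator. A minimal pyVmomi sketch, again with placeholder connection details:

```python
# Minimal sketch: list iSCSI adapters and report software vs. hardware initiators.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            kind = "software" if hba.isSoftwareBased else "hardware"
            print(f"{host.name} {hba.device}: {kind} iSCSI initiator, IQN {hba.iScsiName}")
view.Destroy()
Disconnect(si)
```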

2.2 Fibre Channel

Fibre Channel (FC) is a storage protocol that the SAN uses to transfer data traffic from ESXi host servers to shared storage. The protocol packages SCSI commands into FC frames. Your ESXi host must use Fibre Channel host bus adapters (HBAs) to connect to the FC SAN.
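You can likewise confirm that a host has FC HBAs and read their World Wide Names, which the storage administrator needs for zoning and LUN masking. Another small pyVmomi sketch with placeholder credentials:

```python
# Minimal sketch: list Fibre Channel HBAs and print their WWNs in hex.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelHba):
            # The API returns WWNs as 64-bit integers; format them as hex
            print(f"{host.name} {hba.device}: "
                  f"WWNN {hba.nodeWorldWideName:016x}  WWPN {hba.portWorldWideName:016x}")
view.Destroy()
Disconnect(si)
```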

2.3 Network File System (NFS)

The NFS client built into ESXi uses Network File System (NFS) protocol versions 3 and 4.1 to communicate with NAS/NFS servers. For network connectivity, the host requires a standard network adapter.
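Mounting an NFS export as a datastore is one call against the host's datastore system. The sketch below is a hedged pyVmomi example: the ESXi host esxi01.lab.local, the NFS server nfs.lab.local, the export path, and the datastore name are all placeholders.

```python
# Minimal sketch: mount an NFS 3 export as a datastore on a single host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.lab.local")  # hypothetical host

spec = vim.host.NasVolume.Specification(
    remoteHost="nfs.lab.local",    # NFS server (placeholder)
    remotePath="/export/vmstore",  # exported path (placeholder)
    localPath="nfs-vmstore",       # datastore name as it appears in vSphere
    accessMode="readWrite",
    type="NFS")                    # "NFS41" selects NFS 4.1 instead
host.configManager.datastoreSystem.CreateNasDatastore(spec)

view.Destroy()
Disconnect(si)
```

Because NFS datastores are mounted per host, you would repeat the call on every host that should see the datastore.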

2.4 Other Protocols

NVM Express (NVMe)

NVMe is a method for connecting and transferring data between a host and a target storage system. The NVMe protocol is designed for faster storage media equipped with non-volatile memory, such as flash devices.

Fibre Channel over Ethernet (FCoE)

The FCoE protocol encapsulates Fibre Channel frames into Ethernet frames. As a result, your host does not need special Fibre Channel links to connect to Fibre Channel storage. The host can use 10 Gbit lossless Ethernet to deliver Fibre Channel traffic.

Note: Starting from vSphere 7.0, VMware no longer supports software FCoE in production environments.

3. How Virtual Machines Access Storage

When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to a storage device.

ESXi supports Fibre Channel (FC), Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and NFS protocols. Regardless of the type of storage device your host uses, the virtual disk always appears to the virtual machine as a mounted SCSI device. The virtual disk hides a physical storage layer from the virtual machine’s operating system. This allows you to run operating systems that are not certified for specific storage equipment, such as SAN, inside the virtual machine.

After studying the basic concepts and access protocols of the traditional storage virtualization models, we can recap how virtual machines use the different types of storage and how the types differ.

[Diagram: how virtual machines access the different types of storage]
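One way to see this in practice is to list a VM's disks together with the virtual controller each disk sits behind and the type of datastore that holds it; the controller the guest sees stays the same whatever the underlying storage is. The sketch below uses pyVmomi with the same placeholder connection details and a hypothetical VM named app01.

```python
# Minimal sketch: show the virtual controller and backing datastore for each disk.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")  # hypothetical VM name

devices = vm.config.hardware.device
controllers = {d.key: d for d in devices if isinstance(d, vim.vm.device.VirtualController)}
for disk in (d for d in devices if isinstance(d, vim.vm.device.VirtualDisk)):
    ctrl = controllers[disk.controllerKey]
    ds = getattr(disk.backing, "datastore", None)  # datastore backing this disk, if any
    where = f"{ds.name} ({ds.summary.type})" if ds else "no datastore backing"
    print(f"{disk.deviceInfo.label}: controller {type(ctrl).__name__}, stored on {where}")

view.Destroy()
Disconnect(si)
```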

4. Software-Defined Storage Models

In addition to abstracting underlying storage capacities from VMs, as traditional storage models do, software-defined storage abstracts storage capabilities.

Note: This part is a basic introduction to software-defined storage models, so we can quickly compare them against traditional storage virtualization models. More about software-defined models is covered in later VMware vSphere 7.x exam objectives.

With the software-defined storage model, a virtual machine becomes a unit of storage provisioning and can be managed through a flexible policy-based mechanism. The model involves the following vSphere technologies.

VMware vSphere Virtual Volumes (vVols)

The vVols functionality changes the storage management paradigm from managing space inside datastores to managing abstract storage objects handled by storage arrays. With vVols, an individual virtual machine (not the datastore) becomes a storage management unit, and storage hardware gains complete control over virtual disk content, layout, and management.

VMware vSAN

vSAN is a distributed layer of software that runs natively as a part of the hypervisor. vSAN aggregates local or direct-attached capacity devices of an ESXi host cluster and creates a single storage pool shared across all hosts in the vSAN cluster.

Storage Policy Based Management

Storage Policy Based Management (SPBM) is a framework that provides a single control panel across various data services and storage solutions, including vSAN and vVols. Using storage policies, the framework aligns the application demands of your virtual machines with the capabilities provided by storage entities.

I/O Filters

I/O filters are software components that can be installed on ESXi hosts to offer virtual machines additional data services. Depending on the implementation, the services might include replication, encryption, caching, etc.

Resources

vSphere Storage

Conclusion

The topic reviewed in this article is part of the VMware vSphere 7.x Exam (2V0-21.20), which leads to the VMware Certified Professional – Data Center Virtualization 2021 certification. 

Section 1 - Architectures and Technologies. 

Objective 1.3 – Identify and differentiate storage access protocols for vSphere (NFS, iSCSI, SAN, etc.)

See the full exam preparation guide and all exam sections from VMware.
