VMware vSphere 7.x Study Guide for VMware Certified Professional – Data Center Virtualization certification. This article covers Section 1: Architectures and Technologies. Objective 1.4 – Differentiate between vSphere Network I/O Control (NIOC) and vSphere Storage I/O Control (SIOC).
This article is part of the VMware vSphere 7.x - VCP-DCV Study Guide. Check out this page first for an introduction, disclaimer, and updates on the guide. The page also includes a collection of articles matching each objective of the official VCP-DCV.
Differentiate Between vSphere NIOC and vSphere SIOC
There are three ways (or features) to manage I/O traffic in vSphere: Network I/O Control (NIOC), Storage I/O Control (SIOC), and Storage Distributed Resource Scheduler (SDRS). This article (Objective 1.4) focuses on how to differentiate between vSphere Network I/O Control (NIOC) and vSphere Storage I/O Control (SIOC).
Note: Although SDRS is not explicitly highlighted in any objective, it is a key feature and is discussed in Objective 1.6.5 – Describe datastore clusters.
In short, the key differences between NIOC and SIOC are the following:
- NIOC manages Network I/O for the System and VMs, whereas SIOC manages Storage I/O for VMs.
- NIOC is configured on distributed switches, whereas SIOC is configured on datastores.
- NIOC is invoked if there is contention for network bandwidth, whereas SIOC is invoked if device latency exceeds a threshold.
- Shares and limits for NIOC can be configured on distributed switches and VMs, whereas shares and limits for SIOC are configured on VMs only.
1. Network I/O Control (NIOC)
vSphere Network I/O Control is used to set Quality of Service (QoS) levels on network traffic. It is a feature available ONLY with vSphere Distributed Switches. It is particularly useful for vSAN when vSAN traffic must share the physical NIC with other traffic types, such as vMotion, management, and virtual machine traffic.
There are two primary uses for vSphere Network I/O Control:
- Allocate network bandwidth to business-critical applications
- Resolve situations where several types of traffic compete for common resources.
vSphere NIOC is currently on version 3. This latest version:
- Introduces a mechanism to reserve bandwidth for system traffic based on the capacity of the physical adapters on a host.
- Enables fine-grained resource control at the VM network adapter level, similar to the model used for allocating CPU and memory resources.
- Offers improved network resource reservation and allocation across the entire switch.
1.1 Bandwidth Allocation for System Traffic
You can use Network I/O Control on a distributed switch to configure bandwidth allocation for the traffic that is related to the main vSphere features:
- Management
- Fault Tolerance
- NFS
- vSAN
- vMotion
- vSphere Replication
- vSphere Data Protection Backup
- Virtual machine
1.2 Parameters for Bandwidth Allocation - System Traffic
By using several configuration parameters, Network I/O Control allocates bandwidth to traffic from basic vSphere system features; a small numeric sketch after the parameter definitions shows how these parameters interact.
Shares
- Shares, from 1 to 100, reflect the relative priority of a system traffic type against the other system traffic types that are active on the same physical adapter.
- The amount of bandwidth available to a system traffic type is determined by its relative shares and by the amount of data that the other system features are transmitting.
Reservation
- This is the minimum bandwidth, in Mbps, that must be guaranteed on a single physical adapter.
- The total bandwidth reserved among all system traffic types cannot exceed 75 percent of the bandwidth that the physical network adapter with the lowest capacity can provide.
Limit
- The maximum bandwidth, in Mbps or Gbps, that a system traffic type can consume on a single physical adapter.
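To make the interaction of these parameters concrete, here is a minimal Python sketch (not a vSphere API call; all capacities, shares, and reservations are example values) of how shares split bandwidth on a single physical adapter and how the 75 percent reservation cap is checked:

```python
# Illustrative model only: how NIOC system traffic shares and reservations
# behave on one physical adapter. All numbers are examples.

ADAPTER_CAPACITY_MBPS = 10_000          # a 10 GbE uplink
MAX_RESERVABLE_FRACTION = 0.75          # total system reservations are capped at 75%

# Example share values for system traffic types active on this uplink.
shares = {"management": 50, "vmotion": 100, "vsan": 100, "virtual_machine": 100}

# Example reservations (Mbps); their sum may not exceed 75% of the capacity
# of the lowest-capacity physical adapter.
reservations = {"management": 500, "vmotion": 1000, "vsan": 2000, "virtual_machine": 2500}

assert sum(reservations.values()) <= MAX_RESERVABLE_FRACTION * ADAPTER_CAPACITY_MBPS, \
    "Total system reservations exceed the 75% reservable limit"

def share_based_split(active, capacity_mbps):
    """Split adapter bandwidth among the traffic types that are actually
    transmitting, in proportion to their shares (simplified; reservations
    are honoured before this share-based division)."""
    total_shares = sum(shares[t] for t in active)
    return {t: capacity_mbps * shares[t] / total_shares for t in active}

print(share_based_split(["vmotion", "vsan", "virtual_machine"], ADAPTER_CAPACITY_MBPS))
# vMotion, vSAN, and VM traffic each hold 100 shares here, so each gets ~3333 Mbps.
```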
1.3 Bandwidth Allocation for Virtual Machine Traffic
Network I/O Control allocates bandwidth for virtual machines by using two models:
- Allocation across the entire vSphere Distributed Switch based on network resource pools
- Allocation on the physical adapter that carries the traffic of a virtual machine.
Network Resource Pools
A network resource pool represents a part of the aggregated bandwidth that is reserved for the virtual machine system traffic on all physical adapters connected to the distributed switch.
- The bandwidth quota that is dedicated to a network resource pool is shared among the distributed port groups associated with the pool.
- A virtual machine receives bandwidth from the pool through the distributed port group the VM is connected to.
- By default, distributed port groups on the switch are assigned to the default network resource pool, whose quota is not configured.
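As a rough illustration of the pool model, the following Python sketch (example values only, not an API call) assumes the aggregate quota available to resource pools equals the virtual machine system traffic reservation per uplink multiplied by the number of uplinks connected to the distributed switch:

```python
# Illustrative sketch of the network resource pool quota model (example numbers).

vm_traffic_reservation_per_uplink_mbps = 2500   # reservation set for VM system traffic
uplinks_on_switch = 4
aggregate_quota_mbps = vm_traffic_reservation_per_uplink_mbps * uplinks_on_switch  # 10,000 Mbps

# Reservation quotas assigned to user-defined network resource pools; port groups
# not explicitly assigned stay in the unconfigured default pool.
pool_quotas_mbps = {"prod-web": 4000, "prod-db": 3000}

assert sum(pool_quotas_mbps.values()) <= aggregate_quota_mbps, \
    "Pool quotas may not exceed the aggregate VM system traffic reservation"
```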
Defining Bandwidth Requirements for a Virtual Machine
You allocate bandwidth to an individual virtual machine similarly to allocating CPU and memory resources.
- NIOC provisions bandwidth to a virtual machine according to shares, reservation, and limits that are defined for a network adapter in the VM hardware settings.
- The reservation represents a guarantee that the traffic from the virtual machine can consume at least the specified bandwidth.
- If a physical adapter has more capacity, the virtual machine may use additional bandwidth according to the specified shares and limit.
Bandwidth Provisioning to a Virtual Machine on the Host
To guarantee bandwidth, Network I/O Control implements a traffic placement engine that becomes active if a virtual machine has bandwidth reservation configured.
- The distributed switch attempts to place the traffic from a VM network adapter to the physical adapter that can supply the required bandwidth and is in the scope of the active teaming policy.
- The total bandwidth reservation of the virtual machines on a host cannot exceed the reserved bandwidth that is configured for the virtual machine system traffic.
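The admission control implied by the last point can be pictured with a small Python sketch (example numbers; not how ESXi is implemented):

```python
# Illustrative admission check: combined reservations of VM network adapters
# placed on a host may not exceed the bandwidth reserved for virtual machine
# system traffic on that host's physical adapters.

vm_system_traffic_reservation_mbps = 2500        # per physical adapter (example)
physical_adapters_on_host = 2
host_reservable_mbps = vm_system_traffic_reservation_mbps * physical_adapters_on_host

vm_adapter_reservations_mbps = [500, 1000, 1500, 750]   # existing VM reservations
new_vm_reservation_mbps = 1000

if sum(vm_adapter_reservations_mbps) + new_vm_reservation_mbps > host_reservable_mbps:
    print("Placement fails: the reservation cannot be guaranteed on this host")
else:
    print("Reservation admitted")
```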
1.4 Parameters for Bandwidth Allocation - Virtual Machine
NIOC allocates bandwidth to individual virtual machines based on configured shares, reservation, and limit for the network adapters in the VM hardware settings.
Shares
The relative priority, from 1 to 100, of the traffic through this VM network adapter against the capacity of the physical adapter that is carrying the VM traffic to the network.
Reservation
The minimum bandwidth, in Mbps, that the VM network adapter must receive on the physical adapter.
Limit
The maximum bandwidth on the VM network adapter for traffic to other virtual machines on the same or on another host.
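A simplified Python model (example values; capped excess is not redistributed here, which the real scheduler does handle) of how these three settings shape a VM adapter's bandwidth on a saturated uplink:

```python
# Illustrative model only: shares, reservation, and limit on VM network adapters.

UPLINK_CAPACITY_MBPS = 10_000

# Per-adapter settings as they would appear in the VM hardware configuration (examples).
vm_adapters = {
    "app-vm":  {"shares": 100, "reservation": 1000, "limit": None},
    "web-vm":  {"shares": 50,  "reservation": 0,    "limit": 2000},
    "test-vm": {"shares": 50,  "reservation": 0,    "limit": None},
}

def allocate(adapters, capacity):
    """Grant reservations first, then split the remainder by shares,
    capping each adapter at its configured limit."""
    alloc = {name: cfg["reservation"] for name, cfg in adapters.items()}
    remaining = capacity - sum(alloc.values())
    total_shares = sum(cfg["shares"] for cfg in adapters.values())
    for name, cfg in adapters.items():
        alloc[name] += remaining * cfg["shares"] / total_shares
        if cfg["limit"] is not None:
            alloc[name] = min(alloc[name], cfg["limit"])
    return alloc

print(allocate(vm_adapters, UPLINK_CAPACITY_MBPS))
```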
2. Storage I/O Control (SIOC)
vSphere Storage I/O Control provides cluster-wide storage I/O prioritization, which enables better workload consolidation and helps reduce the extra costs associated with overprovisioning.
- With SIOC, ESXi monitors datastore latency and throttles the I/O load if the datastore average latency exceeds the threshold.
- Storage I/O Control extends the constructs of shares and limits to handle storage I/O resources.
- You can control the amount of storage I/O that is allocated to virtual machines during periods of I/O congestion.
When you enable Storage I/O Control on a datastore:
- ESXi begins to monitor the device latency that hosts observe when communicating with that datastore.
- When device latency exceeds a threshold, the datastore is considered to be congested.
- Each virtual machine that accesses that datastore is allocated I/O resources in proportion to its shares.
- You set shares per virtual machine.
- You can adjust the number for each based on need.
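A minimal sketch in Python (example figures, not the ESXi implementation) of the congestion trigger described above:

```python
# Illustrative model only: SIOC monitors device latency per datastore and
# throttling starts once the observed average exceeds the congestion threshold.

CONGESTION_THRESHOLD_MS = 30        # example value; vSphere can also derive the
                                    # threshold as a percentage of peak throughput
latency_samples_ms = [12, 18, 25, 41, 55, 48]

avg_latency_ms = sum(latency_samples_ms) / len(latency_samples_ms)
datastore_congested = avg_latency_ms > CONGESTION_THRESHOLD_MS

print(f"avg latency = {avg_latency_ms:.1f} ms, congested = {datastore_congested}")
# Only when congested is True are VM I/O workloads throttled according to shares.
```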
Configuring Storage I/O Control is a two-step process:
- Enable Storage I/O Control for the datastore.
- Set the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine.
By Default:
- All virtual machine shares are set to Normal (1000) with unlimited IOPS.
- Storage I/O Control is enabled by default on Storage DRS-enabled datastore clusters.
2.1 SIOC Requirements
Storage I/O Control has several requirements and limitations.
- Datastores that are Storage I/O Control-enabled must be managed by a single vCenter Server system.
- Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected, and NFS-connected storage. Raw Device Mapping (RDM) is not supported.
- Storage I/O Control does not support datastores with multiple extents.
Note: Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control.
You allocate the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine.
- When storage I/O congestion is detected for a datastore, the I/O workloads of the virtual machines accessing that datastore are adjusted according to the proportion of virtual machine shares each virtual machine has.
- Storage I/O shares are similar to shares used for memory and CPU resource allocation, which are described in Resource Allocation Shares.
- Under resource contention, virtual machines with higher share values have greater access to the storage array.
- When you allocate storage I/O resources, you can limit the IOPS allowed for a virtual machine. By default, IOPS are unlimited.
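The interplay of shares and IOPS limits under congestion can be sketched in a few lines of Python (example values; this is an illustration, not the actual SIOC scheduler):

```python
# Illustrative model only: per-VM IOPS limits always apply, while shares
# determine the proportional split once the datastore is congested.

vm_config = {
    "db-vm":    {"shares": 2000, "iops_limit": None},   # Normal default is 1000 shares
    "web-vm":   {"shares": 1000, "iops_limit": 1500},
    "batch-vm": {"shares": 500,  "iops_limit": 800},
}

def congested_allocation(config, available_iops):
    """Split the IOPS the datastore can sustain in proportion to shares,
    then apply each VM's configured IOPS limit."""
    total_shares = sum(c["shares"] for c in config.values())
    alloc = {}
    for vm, c in config.items():
        fair_share = available_iops * c["shares"] / total_shares
        alloc[vm] = min(fair_share, c["iops_limit"]) if c["iops_limit"] else fair_share
    return alloc

print(congested_allocation(vm_config, available_iops=7000))
```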
You can also monitor SIOC shares. Datastore performance charts allow you to monitor the following information:
- Average latency and aggregated IOPS on the datastore
- Latency among hosts
- Queue depth among hosts
- Read/write IOPS among hosts
- Read/write latency among virtual machine disks
- Read/write IOPS among virtual machine disks
Conclusion
The topic reviewed in this article is part of the VMware vSphere 7.x Exam (2V0-21.20), which leads to the VMware Certified Professional – Data Center Virtualization 2021 certification.
Section 1 - Architectures and Technologies.
Objective 1.4 – Differentiate between vSphere Network I/O Control (NIOC) and vSphere Storage I/O Control (SIOC)
See the full exam preparation guide and all exam sections from VMware.