Theory of Operation & Deployment Choices

 Construct Mappings between Cinder and Data ONTAP

Cinder Backends and Storage Virtual Machines.  Storage Virtual Machines (SVMs, formerly known as Vservers) contain data volumes and one or more LIFs through which they serve data to clients. SVMs can either contain one or more FlexVol volumes, or a single Infinite Volume.

SVMs securely isolate the shared virtualized data storage and network, and each SVM appears as a single dedicated storage virtual machine to clients. Each SVM has a separate administrator authentication domain and can be managed independently by its SVM administrator.

In a cluster, SVMs facilitate data access. A cluster must have at least one SVM to serve data. SVMs use the storage and network resources of the cluster. However, the volumes and LIFs are exclusive to the SVM. Multiple SVMs can coexist in a single cluster without being bound to any node in a cluster. However, they are bound to the physical cluster on which they exist.

[Important]Important

When deploying Cinder with clustered Data ONTAP, NetApp recommends that each Cinder backend refer to a single SVM within a cluster through the use of the netapp_vserver configuration option. While the driver can operate without the explicit declaration of a mapping between a backend and an SVM, a variety of advanced functionality (e.g. volume type extra-specs) will be disabled.
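A minimal cinder.conf sketch of this recommended mapping is shown below; the backend section name, hostname, credentials, and SVM name are placeholders, and the full set of options for a given release should be confirmed against the configuration reference.

[cdot-iscsi-svm1]
volume_backend_name = cdot-iscsi-svm1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = openstack
netapp_password = changeme
# Scope this backend to exactly one SVM within the cluster
netapp_vserver = svm-openstack-1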

Cinder volumes and FlexVol volumes.  Data ONTAP FlexVol volumes (commonly referred to as volumes) and OpenStack Block Storage volumes (commonly referred to as Cinder volumes) are not semantically analogous. A FlexVol volume is a container of logical data elements (for example: files, Snapshot™ copies, clones, LUNs, et cetera) that is abstracted from physical elements (for example: individual disks, and RAID groups). A Cinder volume is a block device. Most commonly, these block devices are made available to OpenStack Compute instances. NetApp’s various driver options for deployment of FAS as a provider of Cinder storage place Cinder volumes, snapshot copies, and clones within FlexVol volumes.

[Important]Important

The FlexVol volume is an overarching container for one or more Cinder volumes.

[Note]Note

NetApp's OpenStack Cinder drivers are not supported for use with Infinite Volumes, as Data ONTAP currently only supports FlexClone files and FlexClone LUNs with FlexVol volumes.

Cinder volume representation within a FlexVol volume.  A Cinder volume has a different representation in Data ONTAP when stored in a FlexVol volume, dependent on storage protocol utilized with Cinder:

  • iSCSI: When utilizing the iSCSI storage protocol, a Cinder volume is stored as an iSCSI LUN.
  • NFS: When utilizing the NFS storage protocol, a Cinder volume is a file on an NFS export.
  • Fibre Channel: When utilizing the Fibre Channel storage protocol, a Cinder volume is stored as a Fibre Channel LUN.

Cinder scheduling and resource pool selection.  When Cinder volumes are created, the Cinder scheduler selects a resource pool from the available storage pools: see the section called “Storage Pools” for an overview. Table 4.8, “Behavioral Differences in Cinder Volume Placement” details the behavioral changes in NetApp's Cinder drivers when scheduling the provisioning of new Cinder volumes.

Table 4.8. Behavioral Differences in Cinder Volume Placement
Clustered Data ONTAP: iSCSI

Old behavior (Icehouse and prior):
  • Total and available capacity of largest available volume (only) is reported to scheduler.
  • SSC data is aggregated across all volumes and reported to scheduler.
  • During volume creation, driver filters volumes by extra specs and filters/weighs volumes by capacity (largest available space first).

New behavior (as of Juno):
  • Each FlexVol volume’s capacity and SSC data is reported separately as a pool to the Cinder scheduler. The Cinder filters and weighers decide which pool a new volume goes into, and the driver honors that request.

Data ONTAP operating in 7-Mode: iSCSI

Old behavior (Icehouse and prior):
  • Total and available capacity of all volumes are accumulated and reported to the scheduler as a combined value.
  • No SSC data is available.
  • During volume creation, driver filters volumes by capacity but does no weighing.

New behavior (as of Juno):
  • Each FlexVol volume’s capacity is reported separately as a pool to the Cinder scheduler. The Cinder filters and weighers decide which pool a new volume goes into, and the driver honors that request.

Clustered Data ONTAP: NFS

Old behavior (Icehouse and prior):
  • Total and available capacity of largest available volume (only) is reported to scheduler.
  • SSC data is aggregated across all volumes and reported to scheduler.
  • During volume creation, driver filters volumes by extra specs and filters/weighs volumes by capacity (largest available space first).

New behavior (as of Juno):
  • Each FlexVol volume’s capacity and SSC data is reported separately as a pool to the Cinder scheduler. The Cinder filters and weighers decide which pool a new volume goes into, and the driver honors that request.

Data ONTAP operating in 7-Mode: NFS

Old behavior (Icehouse and prior):
  • Total and available capacity of all volumes are accumulated and reported to the scheduler as a combined value.
  • No SSC data is available.
  • During volume creation, the base NFS driver filters/weighs volumes by capacity (smallest allocated space first).

New behavior (as of Juno):
  • Each FlexVol volume’s capacity is reported separately as a pool to the Cinder scheduler. The Cinder filters and weighers decide which pool a new volume goes into, and the driver honors that request.

E-Series

Old behavior (Icehouse and prior):
  • Total and available capacity of all dynamic disk pools are accumulated and reported to the scheduler as a combined value.
  • E-Series volume groups are not supported.
  • No SSC data is available.
  • During volume creation, driver filters/weighs volumes by capacity (largest available space first).

New behavior (as of Juno):
  • Each dynamic disk pool's and volume group’s capacity is reported separately as a pool to the Cinder scheduler. The Cinder filters and weighers decide which pool a new volume goes into, and the driver honors that request.
  • E-Series volume groups are supported as of the Liberty release.
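Once a pool-aware driver is running, the pools that each backend reports (one per FlexVol volume, dynamic disk pool, or volume group) can be inspected with the Cinder admin CLI. The command below is standard Cinder; the host and pool names in the comment are illustrative only.

# List scheduler pools and their reported capacities/capabilities
cinder get-pools --detail
# Pool names take the form <host>@<backend>#<pool>, e.g.
#   cinder-host@cdot-iscsi-svm1#flexvol_cinder_01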

Cinder snapshots versus NetApp Snapshots.  A NetApp Snapshot copy is a point-in-time file system image. Low-overhead NetApp Snapshot copies are made possible by the unique features of the WAFL storage virtualization technology that is part of Data ONTAP. The high performance of the NetApp Snapshot makes it highly scalable. A NetApp Snapshot takes only a few seconds to create — typically less than one second, regardless of the size of the volume or the level of activity on the NetApp storage system. After a Snapshot copy has been created, changes to data objects are reflected in updates to the current version of the objects, as if NetApp Snapshot copies did not exist. Meanwhile, the NetApp Snapshot version of the data remains completely stable. A NetApp Snapshot incurs no performance overhead; users can comfortably store up to 255 NetApp Snapshot copies per FlexVol volume, all of which are accessible as read-only and online versions of the data.

Since NetApp Snapshots are taken at the FlexVol level, they cannot be directly leveraged within an OpenStack context, as a user of Cinder requests a snapshot be taken of a particular Cinder volume (not the containing FlexVol volume). As a Cinder volume is represented as either a file on NFS or as a LUN (in the case of iSCSI or Fibre Channel), Cinder snapshots are created through the use of Data ONTAP's FlexClone technology. By leveraging FlexClone technology to facilitate Cinder snapshots, it is possible to create many thousands of Cinder snapshots for a single Cinder volume.

FlexClone files or FlexClone LUNs and their parent files or LUNs that are present in the FlexClone volume continue to share blocks the same way they do in the parent FlexVol volume. In fact, all the FlexClone entities and their parents share the same underlying physical data blocks, minimizing physical disk space usage.
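Because both Cinder snapshots and Cinder volume clones map to FlexClone operations on the backing FlexVol volume, the standard Cinder CLI exercises them directly. The volume names, sizes, and UUID placeholders below are illustrative, and flag spellings (for example --name versus --display-name) vary slightly between python-cinderclient releases.

# Take a Cinder snapshot of an existing volume (a FlexClone file or LUN on the backend)
cinder snapshot-create --name vol1-snap1 vol1
# Create a new volume from that snapshot
cinder create --snapshot-id <snapshot-uuid> --name vol1-from-snap 10
# Or clone an existing volume directly
cinder create --source-volid <volume-uuid> --name vol1-clone 10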

E-Series snapshots.  The Cinder driver can create hardware-based snapshots on E-Series. E-Series uses copy-on-write snapshots, which can be created within seconds. In order to ensure snapshots can be deleted in any order, the Cinder driver limits E-Series to 3 snapshots per volume. Snapshots on E-Series do not require an additional license.

E-Series snapshots are typically used for relatively brief operations, such as making a backup. If you require many snapshots or long-lasting snapshots, consider FAS.

[Important]Important

When Cinder is deployed with Data ONTAP, Cinder snapshots are created leveraging the FlexClone feature of Data ONTAP. As such, a license option for FlexClone must be enabled.

 Deployment Choice: Direct versus Intermediated

The NetApp Cinder driver can operate in two independent modes: a direct mode where the Cinder processes directly interact with NetApp FAS storage systems, and an intermediated mode where the Cinder processes interact with an additional software entity that issues provisioning and management requests on behalf of Cinder.

OnCommand® Workflow Automator.  OnCommand® Workflow Automator (WFA) is a flexible framework that provides automation for storage-related tasks, customization, scripting capabilities, and integration with higher-order IT systems such as orchestration software through web services.

While WFA can be utilized in conjunction with the NetApp unified Cinder driver, a deployment of Cinder and WFA does introduce additional complexity, management entities, and potential points of failure within a cloud architecture. If you have an existing set of workflows that are written within the WFA framework, and are looking to leverage them in lieu of the default provisioning behavior of the Cinder driver operating directly against a FAS system, then it may be desirable to use the intermediated mode.

SANtricity® Web Services Proxy.  The NetApp SANtricity® Web Services Proxy provides access through standard HTTPS mechanisms for configuring management services on E-Series storage arrays. You can install the Web Services Proxy on either Linux or Windows. As the Web Services Proxy satisfies client requests by collecting data or executing configuration changes against a target storage array, it issues SYMbol requests to the target storage arrays. The Web Services Proxy provides a Representational State Transfer (REST)-style API for managing E-Series controllers. The API enables you to integrate storage array management into other applications or ecosystems.
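A sketch of an E-Series backend stanza that reaches the array through a Web Services Proxy instance (the intermediated mode) follows; the hostnames, credentials, and controller addresses are placeholders, and option availability should be verified against the driver release in use.

[eseries-iscsi]
volume_backend_name = eseries-iscsi
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
# Connection details for the SANtricity Web Services Proxy
netapp_server_hostname = wsp.example.com
netapp_server_port = 8443
netapp_transport_type = https
netapp_webservice_path = /devmgr/v2
netapp_login = ws_proxy_user
netapp_password = ws_proxy_password
# E-Series controllers (and array password) managed through the proxy
netapp_controller_ips = 10.0.0.10,10.0.0.11
netapp_sa_password = array_password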

[Important]Recommendation

Unless you have a significant existing investment with OnCommand Workflow Automator that you wish to leverage in an OpenStack deployment, it is recommended that you start with the direct mode of operation when deploying Cinder with a NetApp FAS system. When Cinder is used with a NetApp E-Series system, use of the SANtricity Web Services Proxy in the intermediated mode is currently required. The SANtricity Web Services Proxy may be deployed in a highly-available topology using an active/passive strategy.

 Deployment Choice: FAS vs E-Series

FAS.  If rich data management, deep data protection, and storage efficiency are desired and should be provided directly by the storage system, the NetApp FAS product line is a natural fit for use within Cinder deployments. Massive scalability, nondisruptive operations, proven storage efficiencies, and a unified architecture (NAS and SAN) are key features offered by the Data ONTAP storage operating system. These capabilities are frequently leveraged in existing virtualization deployments and thus align naturally to OpenStack use cases.

E-Series.  For cloud environments where higher performance is critical, or where higher-value data management features are not needed or are implemented within an application, the NetApp E-Series product line can provide a cost-effective underpinning for a Cinder deployment. NetApp E-Series storage offers a feature called Dynamic Disk Pools (DDP), which simplifies data protection by removing the complexity of configuring RAID groups and allocating hot spares. Utilization is improved by dynamically spreading data, parity, and spare capacity across all drives in a pool, reducing performance bottlenecks due to hot spots. Additionally, should a drive failure occur, DDP enables the pool to return to an optimal state significantly faster than RAID 6, while reducing the performance impact during the reconstruction of a failed drive.

[Note]Note

As of the Icehouse release, NetApp has integrations with Cinder for both FAS and E-Series, and either storage solution can be included as part of a Cinder deployment to leverage the native benefits that either platform has to offer.

 Deployment Choice: Clustered Data ONTAP vs Data ONTAP operating in 7-Mode

Clustered Data ONTAP represents NetApp’s platform for delivering future innovation in the FAS product line. Its inherent qualities of virtualization of network interfaces, disk subsystem, and administrative storage controller map well to OpenStack constructs. A Storage Virtual Machine (SVM, historically referred to as a Vserver) can span all nodes of a given clustered Data ONTAP deployment, for example. The elasticity provided to expand or contract a Storage Virtual Machine across horizontally scalable resources is a capability critical to cloud deployments and unique to the clustered Data ONTAP mode of operation.

The Data ONTAP 7-Mode drivers are primarily provided to allow rapid use of prior deployed FAS systems for OpenStack block storage requirements. There is no current intention to enhance the 7-Mode driver’s capabilities beyond providing basic bug fixes.

[Important]Recommendation

NetApp strongly recommends that all OpenStack deployments built upon the NetApp FAS product set leverage clustered Data ONTAP.

 Deployment Choice: NFS versus iSCSI

A frequent question from customers and partners is whether to utilize NFS or iSCSI as the storage protocol with a Cinder deployment on top of the NetApp FAS product line. Both protocol options are TCP/IP-based and deliver similar throughputs and latencies; both support Cinder features, support snapshot copies and cloning to similar degrees, and advertise the other storage efficiency, data protection, and high availability features.

iSCSI. 

  • At the time of publishing, the maximum number of iSCSI LUNs per NetApp cluster is either 8,192 or 49,152, depending on the FAS model number (refer to the Hardware Universe for detailed information on a particular model). Cinder can be configured to operate with multiple NetApp clusters via multi-backend support to increase this number for an OpenStack deployment, as sketched after this list.
  • LUNs consume more management resources and some management tools also have limitations on the number of LUNs.
  • When Cinder is used independently of OpenStack Compute, use of iSCSI is essential to provide direct access to block devices. The Cinder driver used in conjunction with NFS relies on libvirt and the hypervisor to represent files on NFS as virtual block devices. When Cinder is utilized in bare-metal or non-virtualized environments, the NFS storage protocol is not an option.
  • The number of volumes on E-Series varies based on platform. The E5x00 series supports 2,048 volumes per system while the E2x00 series supports 512. In both cases, the number of Cinder volumes is limited to 256 per physical server. If live migration is enabled, E-Series is limited to 256 volumes. See the netapp_enable_multiattach option for more information.
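As noted in the first bullet above, per-cluster LUN limits can be worked around by defining one Cinder backend per cluster. A compact sketch follows; the backend and SVM names are placeholders, and the remaining NetApp options of each stanza are elided for brevity.

[DEFAULT]
enabled_backends = cluster1-iscsi,cluster2-iscsi

[cluster1-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = cluster1-mgmt.example.com
netapp_vserver = svm-openstack-1
# ... remaining NetApp options as in a single-backend configuration ...

[cluster2-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname = cluster2-mgmt.example.com
netapp_vserver = svm-openstack-2
# ... remaining NetApp options as in a single-backend configuration ...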

NFS. 

  • The maximum number of files in a single FlexVol volume exported through NFS is dependent on the size of the FlexVol volume; a 1 TB FlexVol can have 33,554,432 files (assuming 32 KB per inode). The theoretical maximum number of files is roughly two billion.
  • NFS drivers require support from the hypervisor to virtualize files and present them as block devices to an instance.
  • As of the Icehouse release, the use of parallel NFS (pNFS) is supported with the NetApp unified driver, providing enhanced performance and scalability characteristics.
  • You cannot apply Cinder QoS specs to NFS backends on clustered Data ONTAP through an SVM-scoped admin user. In order to do so, you must use a cluster-scoped admin user.
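For reference, a minimal clustered Data ONTAP NFS backend resembles the sketch below; the SVM, hostname, credentials, and shares file path are placeholders, and the shares file is assumed to contain one "lif-address:/export" entry per line.

[cdot-nfs-svm1]
volume_backend_name = cdot-nfs-svm1
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_server_hostname = cluster-mgmt.example.com
netapp_login = openstack
netapp_password = changeme
netapp_vserver = svm-openstack-1
# Each export listed here is reported to the scheduler as a separate pool
nfs_shares_config = /etc/cinder/nfs_shares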

[Important]Recommendation

Deploying the NetApp Cinder driver with clustered Data ONTAP utilizing the NFS storage protocol yields a more scalable OpenStack deployment than iSCSI with negligible performance differences. If Cinder is being used to provide block storage services independent of other OpenStack services, the iSCSI protocol must be utilized.

[Tip]Tip

A related use case for the use of iSCSI with OpenStack deployments involves creating a FlexVol volume to serve as the storage for OpenStack compute nodes. As more hypervisor nodes are added, a master boot LUN can simply be cloned for each node, and compute nodes can become completely stateless. Since the configuration of hypervisor nodes is usually nearly identical (except for node-specific data like configuration files, logs, etc.), the boot disk lends itself well to optimizations like deduplication and compression.

Currently this configuration must be done outside of the management scope of Cinder, but it serves as another example of how the differentiated capabilities of NetApp storage can be leveraged to ease the deployment and ongoing operation of an OpenStack cloud deployment.

 Fibre Channel Switch Fabric With Cinder

Cinder includes a Fibre Channel zone manager facility for configuring zoning in Fibre Channel fabrics, specifically supporting Cisco and Brocade Fibre Channel switches. The user is required to configure the zoning parameters in the Cinder configuration file (cinder.conf). An example configuration using Brocade is given below:

zoning_mode=fabric 1

[fc-zone-manager]
fc_fabric_names=fabricA,fabricB
zoning_policy=initiator-target
brcd_sb_connector=cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI
fc_san_lookup_service=cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
zone_driver=cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver

[fabricA] 2
fc_fabric_address=hostname
fc_fabric_user=username
fc_fabric_password=password
principal_switch_wwn=00:00:00:00:00:00:00:00

[fabricB] 3
fc_fabric_address=hostname
fc_fabric_user=username
fc_fabric_password=password
principal_switch_wwn=00:00:00:00:00:00:00:00
            

 1

This option will need to be set in the DEFAULT configuration stanza and its value must be fabric.

 2

Be sure that the name of the stanza matches one of the values given in the fc_fabric_names option in the fc-zone-manager configuration stanza.

 3

Be sure that the name of the stanza matches the other value given in the fc_fabric_names option in the fc-zone-manager configuration stanza.

[Important]Important

While OpenStack has support for several Fibre Channel fabric switch vendors, NetApp has validated their drivers with the use of Brocade switches. For more information on other vendors, refer to the upstream documentation.

 Using Cinder Volume Types to Create a Storage Service Catalog

The Storage Service Catalog (SSC) is a concept that describes a set of capabilities that enables efficient, repeated, and consistent use and management of storage resources by the definition of policy-based services and the mapping of those services to the backend storage technology. It is meant to abstract away the actual technical implementations of the features at a storage backend into a set of simplified configuration options.

The storage features are organized or combined into groups based on customer needs to achieve a particular scenario or use case. Based on the catalog of the storage features, intelligent provisioning decisions are made by infrastructure or software enabling the storage service catalog. In OpenStack, this is achieved by the Cinder filter scheduler and the NetApp driver through the use of volume type extra-specs support. Prominent features exposed in the NetApp driver include mirroring, de-duplication, compression, and thin provisioning.

When the NetApp unified driver is used with clustered Data ONTAP and E-Series storage systems, you can leverage extra specs with Cinder volume types to ensure that Cinder volumes are created on storage backends that have certain properties (e.g. QoS, mirroring, compression) configured.

Extra specs are associated with Cinder volume types, so that when users request volumes of a particular volume type, they are created on storage backends that meet the list of requirements (e.g. available space, extra specs, etc). You can use the specs in Table 4.9, “NetApp supported Extra Specs for use with Cinder Volume Types” later in this section when defining Cinder volume types with the cinder type-key command.

Table 4.9. NetApp supported Extra Specs for use with Cinder Volume Types
Extra spec Type Products Supported Description
netapp_raid_type String Clustered Data ONTAP, E-Series Limit the candidate volume list based on one of the following raid types: raid0, raid1, raid4, raid5[a], raid6, raidDiskPool, and raid_dp. Note that raid4 and raid_dp are for Clustered Data ONTAP only and raid0, raid1, raid5, raid6, and raidDiskPool are for E-Series only.
netapp_disk_type String Clustered Data ONTAP, E-Series Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp_eseries_disk_spindle_speed String E-Series Limit the candidate volume list based on the spindle speed of the drives. Select from the following options: spindleSpeedSSD, spindleSpeed5400, spindleSpeed7200, spindleSpeed10k, spindleSpeed15k. Note: If mixed spindle speeds are present in the same pool, the filtering behavior is undefined.
netapp:qos_policy_group[b] String Clustered Data ONTAP Specify the name of a QoS policy group, which defines measurable Service Level Objectives (SLO), that should be applied to the Cinder volume at the time of volume creation. Ensure that the QoS policy group is defined within clustered Data ONTAP before a Cinder volume is created. The QoS policy group specified will be shared among all Cinder volumes whose volume types reference the policy group in their extra specs. Since the SLO is shared with multiple Cinder volumes, the QoS policy group should not be associated with the destination FlexVol volume. If you want to apply an SLO uniquely on a per Cinder volume basis use Cinder backend QoS specs. See Table 4.1, “NetApp Supported Backend QoS Spec Options”.
netapp_disk_encryption Boolean E-Series Limit the candidate volume list to only the ones that have Full Disk Encryption (FDE) enabled on the storage controller.
netapp_eseries_data_assurance Boolean E-Series Limit the candidate volume list to only the ones that support the Data Assurance (DA) capability. DA provides an additional level of data integrity by computing a checksum for every block of data that is written to the drives. DA is not supported with iSCSI interconnect.
netapp_eseries_flash_read_cache Boolean E-Series Limit the candidate volume list to only the ones that support being added to a Flash Cache. Adding volumes to a Flash Cache can increase read performance. An SSD cache must be defined on the storage controller for this feature to be available.
netapp:read_cache Boolean E-Series Explicitly enable or disable read caching for the Cinder volume at the time of volume creation.
netapp:write_cache Boolean E-Series Explicitly enable or disable write caching for the Cinder volume at the time of volume creation.
netapp_mirrored Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored[c] Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup[c] Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression[c] Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned Boolean Clustered Data ONTAP, E-Series Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned[c] Boolean Clustered Data ONTAP Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.

[a] Note that RAID3 is a deprecated RAID type on E-Series storage controllers and operates as RAID5.

[b] Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.

[c] In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
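As a worked illustration of the workflow described above, the commands below create a volume type and attach a few of the extra specs from Table 4.9 to it; the type name, spec values, and QoS policy group name are arbitrary examples, and boolean values may also be written as plain "true"/"false" depending on the scheduler's matching rules.

# Create a volume type and associate NetApp extra specs with it
cinder type-create gold
cinder type-key gold set netapp_mirrored="<is> True" netapp_dedup="<is> True"
# The referenced QoS policy group must already exist on the cluster
cinder type-key gold set netapp:qos_policy_group=gold-slo
# New volumes of this type land only on pools that satisfy the specs
cinder create --volume-type gold --name gold-vol-1 100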

 Over-Subscription and Thin-Provisioning

When a thick-provisioned Cinder volume is created, an amount of space is reserved from the backend storage system equal to the size of the requested volume. Because users typically do not actually consume all the space in the Cinder volume, overall storage efficiency is reduced. With thin-provisioned Cinder volumes, on the other hand, space is only carved from the backend storage system as required for actual usage. A thin-provisioned Cinder volume can grow up to its nominal size, but for space-accounting only the actual physically used space counts.

Thin-provisioning allows for over-subscription because you can present more storage space to the hosts connecting to the storage controller than is actually currently available on the storage controller. As an example, in a 1TB storage pool, if four 250GB thick-provisioned volumes are created, it would be necessary to add more storage capacity to the pool in order to create another 250GB volume, even if all volumes are at less than 25% utilization. With thin-provisioning, it is possible to allocate a new volume without exhausting the physical capacity of the storage pool, as only the utilized storage capacity of the volumes impacts the available capacity of the pool.

Thin-provisioning with over-subscription allows flexibility in capacity planning and reduces waste of storage capacity. The storage administrator is able to simply grow storage pools as needed to fill capacity requirements.

As of the Liberty release, all NetApp drivers conform to the standard Cinder scheduler-based over-subscription framework as described here, in which the max_over_subscription_ratio and reserved_percentage configuration options are used to control the degree of over-subscription allowed in the relevant storage pool. Note that the Cinder scheduler only allows over-subscription of a storage pool if the pool reports the thin-provisioning-support capability, as described for each type of NetApp platform below.

The default max_over_subscription_ratio for all drivers is 20.0, and the default reserved_percentage is 0. With these values and thin-provisioning-support capability on (see below), if there is 5TB of actual free space currently available in the backing store for a Cinder pool, then up to 1,000 Cinder volumes of 100GB capacity may be provisioned before getting a failure, assuming actual physical space used averages 5% of nominal capacity.
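These two options are set per backend stanza; the following sketch simply restates the 5 TB example above with the default values and is not a sizing recommendation.

[cdot-nfs-svm1]
# ... NetApp driver options as shown earlier ...
# Allow nominal (virtual) capacity up to 20x the free physical space (the default)
max_over_subscription_ratio = 20.0
# Do not hold back any pool capacity from the scheduler (the default)
reserved_percentage = 0
# With 5 TB free: 5 TB x 20.0 = 100 TB virtual capacity, i.e. roughly
# 1,000 x 100 GB thin-provisioned Cinder volumes before scheduling failures.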

Data ONTAP Thin Provisioning.  In Data ONTAP multiple forms of thin provisioning are possible. By default, the nfs_sparsed_volumes configuration option is True, so that files that back Cinder volumes with our NFS drivers are sparsely provisioned, occupying essentially no space when they are created and growing as data is actually written into the file. With block drivers, on the other hand, the default netapp_lun_space_reservation configuration option is 'enabled' and the corresponding behavior is to reserve space for the entire LUN backing a Cinder volume. For thick-provisioned Cinder volumes with NetApp NFS drivers, set nfs_sparsed_volumes to False. For thin-provisioned Cinder volumes with NetApp block drivers, set netapp_lun_space_reservation to 'disabled'.
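In cinder.conf terms, the provisioning modes just described map to the following per-backend options; the values shown are the defaults.

# NFS driver: sparse (thin) files by default; set to False for thick-provisioned files
nfs_sparsed_volumes = True
# Block (iSCSI/FC) driver: space-reserved LUNs by default; set to 'disabled' for thin LUNs
netapp_lun_space_reservation = enabled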

With Data ONTAP, the FlexVol volumes that act as storage pools for Cinder volumes may themselves be thin-provisioned, since when these FlexVol volumes are carved from storage aggregates this may be done without space guarantees, i.e. the FlexVol volumes themselves grow up to their nominal size as actual physical space is consumed.

Data ONTAP drivers report the thin-provisioning-support capability if either the files or LUNs backing Cinder volumes in a storage pool are thin-provisioned, or if the FlexVol volume backing the storage pool itself is thin-provisioned. Note that with Data ONTAP drivers, the thin-provisioning-support and thick-provisioning-support capabilities are mutually exclusive.

E-Series Thin Provisioning.  E-Series thin-provisioned volumes may only be created on Dynamic Disk Pools (DDP). They have two relevant capacities: virtual capacity and physical capacity. Virtual capacity is the capacity that is reported by the volume, while physical (repository) capacity is the actual storage capacity of the pool being utilized by the volume. Physical capacity must be defined/increased in 4 GB increments. Thin volumes have two different growth options for physical capacity: automatic and manual. Automatically expanding thin volumes increase in capacity in 4 GB increments, as needed. A thin volume configured as manually expanding must be manually expanded using the appropriate storage management software.

With E-Series, thin-provisioned volumes and thick-provisioned volumes may be created in the same storage pool, so the thin-provisioning-support and thick-provisioning-support capabilities may both be reported to the scheduler for the same storage pool.

Table 4.10. NetApp supported configuration options for use with Over-Subscription
Configuration option Default value Description
max_over_subscription_ratio 20.0 A floating point representation of the oversubscription ratio when thin-provisioning is enabled for the pool.
reserved_percentage 0 Percentage of total pool capacity that is reserved, not available for provisioning.

