Key Concepts

 Volume

A Cinder volume is the fundamental resource unit allocated by the Block Storage service. It represents an allocation of persistent, readable, and writable block storage that can be used as the root disk for a compute instance, or as secondary storage that can be attached to and detached from a compute instance. The underlying connection between the consumer of the volume and the Cinder service providing the volume can be achieved with the iSCSI, NFS, or Fibre Channel storage protocols (depending on the protocols supported by the Cinder driver deployed).

[Warning]Warning

A Cinder volume is an abstract storage object that may or may not directly map to a "volume" concept from the underlying backend provider of storage. It is critically important to understand this distinction, particularly in the context of a Cinder deployment that leverages NetApp storage solutions.

Cinder volumes are uniquely identified by a UUID assigned by the Cinder service at the time of volume creation. A Cinder volume may also optionally be referred to by a human-readable name, though this string is not guaranteed to be unique within a single tenant or deployment of Cinder.
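
As a brief, illustrative example (not taken from this guide's own CLI sections), a volume could be created and its UUID inspected as follows; the size, name, and flag spelling (--display-name versus --name, depending on the client version) are placeholders:

    # Create a 10 GiB volume with an optional, non-unique human-readable name;
    # the command output includes the UUID that uniquely identifies the volume.
    $ cinder create --display-name demo-vol-1 10

    # List volumes to see the generated UUID alongside the display name.
    $ cinder list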

The actual blocks provisioned in support of a Cinder volume reside on a single Cinder backend. Starting in the Havana release, a Cinder volume can be migrated from one storage backend to another within a deployment of the Cinder service; refer to the section called “Cinder Command Line Interface (CLI)” for an example of volume migration.
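
A minimal, hypothetical invocation of volume migration is sketched below; the volume identifier and destination host are placeholders, and the host string generally takes the form hostname@backend (or hostname@backend#pool once pools are in use):

    # Migrate an existing volume to a different backend (admin-only);
    # <volume> is the volume UUID or name.
    $ cinder migrate <volume> openstack1@ntap-nfs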

The cinder manage command allows existing storage objects that are not managed by Cinder to be imported as new Cinder volumes. The operation attempts to locate the object within a specified Cinder backend and creates the necessary metadata within the Cinder database so that it can be managed like any other Cinder volume. The operation also renames the storage object to a name appropriate to the particular Cinder driver in use; the original source name or path is preserved in the volume_admin_metadata database table, under the key original-source-name-when-managed, for future reference. The imported storage object may be a file, a LUN, or a volume, depending on the protocol (iSCSI/NFS) and driver (Data ONTAP operating in 7-mode, clustered Data ONTAP) in use. This feature is useful in migration scenarios where virtual machines or other data need to be managed by Cinder; refer to the section called “Cinder manage usage” for an example of the cinder manage command.
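
The exact identifier expected by cinder manage depends on the driver and protocol in use (for example, an NFS file path or a LUN path); the sketch below uses hypothetical placeholders, and flag names may vary slightly by client version:

    # Bring an existing, unmanaged storage object under Cinder management.
    # <host> takes the form hostname@backend (or hostname@backend#pool) and
    # <identifier> names the existing file or LUN on that backend.
    $ cinder manage --name imported-vol-1 <host> <identifier>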

The cinder unmanage command allows Cinder to cease management of a particular Cinder volume. All data stored in the Cinder database related to the volume is removed, but the volume's backing file, LUN, or appropriate storage object is not deleted. This allows the volume to be transferred to another environment for other use cases; refer to the section called “Cinder unmanage usage” for an example of the cinder unmanage command.
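
A minimal example of the command follows; the volume identifier is a placeholder:

    # Remove a volume from Cinder's control without deleting the backing
    # file, LUN, or other storage object; <volume> is the volume UUID or name.
    $ cinder unmanage <volume>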

 Snapshot

A Cinder snapshot is a point-in-time, read-only copy of a Cinder volume. Snapshots can be created from an existing Cinder volume that is operational and either attached to an instance or in a detached state. A Cinder snapshot can serve as the content source for a new Cinder volume when the Cinder volume is created with the create from snapshot option specified.
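
As an illustrative sketch (names, sizes, and flag spellings such as --display-name versus --name are placeholders that depend on the client version):

    # Create a point-in-time, read-only snapshot of an existing volume.
    $ cinder snapshot-create --display-name demo-snap-1 <volume>

    # Create a new 10 GiB volume whose initial contents are taken from
    # the snapshot.
    $ cinder create --snapshot-id <snapshot-uuid> 10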

 Backend

A Cinder backend is the configuration object that represents a single provider of block storage upon which provisioning requests may be fulfilled. A Cinder backend communicates with the storage system through a Cinder driver. As of the Grizzly release, Cinder supports the simultaneous configuration and management of multiple backends, even backends that use the same Cinder driver.

[Note]Note

A single Cinder backend may be defined in the [DEFAULT] stanza of cinder.conf; however, NetApp recommends that the enabled_backends configuration option be set to a comma-separated list of backend names, and that each backend name have its own configuration stanza with the same name as listed in the enabled_backends option. Refer to the section called “Cinder” for an example of the use of this option.
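
A minimal sketch of this layout in cinder.conf follows; the backend names, hostnames, and credentials are illustrative placeholders, and the NetApp-specific options shown are covered in detail later in this guide:

    [DEFAULT]
    enabled_backends = ntap-nfs, ntap-iscsi

    [ntap-nfs]
    volume_backend_name = ntap-nfs
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_server_hostname = cluster1.example.com
    netapp_login = admin
    netapp_password = password
    netapp_vserver = svm1
    nfs_shares_config = /etc/cinder/nfs_shares

    [ntap-iscsi]
    volume_backend_name = ntap-iscsi
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = iscsi
    netapp_server_hostname = cluster1.example.com
    netapp_login = admin
    netapp_password = password
    netapp_vserver = svm1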

 Driver

A Cinder driver is a particular implementation of a Cinder backend that maps the abstract APIs and primitives of Cinder to appropriate constructs within the particular storage solution underpinning the Cinder backend.

[Caution]Caution

The use of the term "driver" often creates confusion, given the common understanding of the behavior of “device drivers” in operating systems. The term can connote software that provides a data I/O path. In the case of Cinder driver implementations, the software provides provisioning and other manipulation of storage devices but does not lie in the data I/O path. For this reason, the term "driver" is often used interchangeably with the alternative (and perhaps more appropriate) term “provider”.

 Volume Type

A Cinder volume type is an abstract collection of criteria used to characterize Cinder volumes. Volume types are most commonly used to create a hierarchy of functional capabilities that represent a tiered level of storage services; for example, a cloud administrator might define a premium volume type that indicates a greater level of performance than a basic volume type, which would represent a best-effort level of performance.

The collection of criteria is specified as a list of key/value pairs, which are inspected by the Cinder scheduler when determining which Cinder backend(s) are able to fulfill a provisioning request. Individual Cinder drivers (and subsequently Cinder backends) may advertise arbitrary key/value pairs (also referred to as capabilities) to the Cinder scheduler, which are then compared against volume type definitions when determining which backend will fulfill a provisioning request.

Extra Spec.  An extra spec is a key/value pair, expressed in the style of key=value. Extra specs are associated with Cinder volume types, so that when users request volumes of a particular volume type, the volumes are created on storage backends that meet the specified criteria.
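
As a hedged illustration, a volume type could be defined and extra specs attached to it with the Cinder CLI; the type name and spec values below are examples only:

    # Define a new volume type.
    $ cinder type-create premium

    # Attach extra specs to the type; the scheduler will only place volumes
    # of this type on backends whose reported capabilities match.
    $ cinder type-key premium set volume_backend_name=ntap-nfs

    # Review the extra specs associated with each volume type.
    $ cinder extra-specs-list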

[Note]Note

The default capabilities that may be reported by a Cinder driver and referenced in a volume type definition include:

  • volume_backend_name: The name of the backend as defined in cinder.conf
  • vendor_name: The name of the vendor who has implemented the driver (e.g. NetApp)
  • driver_version: The version of the driver (e.g. 1.0)
  • storage_protocol: The protocol used by the backend to export block storage to clients (e.g. iSCSI or nfs)

For a table of NetApp supported extra specs, refer to Table 4.7, “NetApp supported Extra Specs for use with Cinder Volume Types”.

QoS Spec.  QoS specs are used to apply generic Quality-of-Service (QoS) support for volumes, enforced at the hypervisor (front-end), at the storage subsystem (back-end), or both. QoS specifications are added as standalone objects that can then be associated with Cinder volume types.
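
A brief, illustrative sequence for creating a QoS spec and associating it with a volume type is shown below; the spec keys (such as a front-end read IOPS limit) are placeholders whose support depends on the hypervisor and backend in use:

    # Create a standalone QoS spec; the consumer key may be front-end,
    # back-end, or both, controlling where the limits are enforced.
    $ cinder qos-create demo-qos consumer=front-end read_iops_sec=2000

    # Associate the QoS spec with an existing volume type.
    $ cinder qos-associate <qos-spec-id> <volume-type-id>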

[Note]Note

The NetApp Cinder drivers do not currently support any back-end QoS specs; however, the NetApp construct of QoS policy groups can be assigned to Cinder volumes managed through a clustered Data ONTAP backend that uses the NFS storage protocol. For more information, see Table 4.7, “NetApp supported Extra Specs for use with Cinder Volume Types”.

 Storage Pools

The Juno release of OpenStack introduced the concept of "storage pools" to Cinder. A backend may present one or more logical storage resource pools from which Cinder selects a storage location when provisioning volumes. In releases prior to Juno, NetApp's Cinder drivers contained logic that determined which FlexVol volume or DDP a Cinder volume would be placed into; with the introduction of pools, all scheduling logic is performed entirely within the Cinder scheduler.

For NetApp's Cinder drivers, a Cinder pool corresponds to a single storage container. The container that is mapped to a Cinder pool depends on the storage protocol used:

  • iSCSI: a Cinder pool is created for every FlexVol volume within the SVM specified by the configuration option netapp_vserver, or, for Data ONTAP operating in 7-mode, for every FlexVol volume within the system unless limited by the list of volumes specified in the configuration option netapp_volume_list.
  • NFS: a Cinder pool is created for each junction path from FlexVol volumes that are listed in the configuration option nfs_shares_config.
  • E-Series: a Cinder pool is created for each DDP listed in the configuration option netapp_storage_pools.

For additional information, refer to Cinder Scheduling and resource pool selection.
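
A hedged example of inspecting the pools reported to the scheduler is shown below; the pool names returned take the form hostname@backend#pool, where the pool portion corresponds to a FlexVol volume name, an NFS junction path, or a DDP name as described above:

    # List the storage pools reported to the Cinder scheduler (admin-only).
    $ cinder get-pools

    # Include per-pool capacity and capability details.
    $ cinder get-pools --detail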


