Operational Concerns with Data ONTAP

 Considerations for the use of thin provisioning with FlexVol volumes

Using thin provisioning, you can configure your volumes so that they appear to provide more storage than is physically available, provided that the storage actually consumed does not exceed the physical capacity that is available.

To use thin provisioning with FlexVol volumes, you create the volume with a guarantee of none. With a guarantee of none, the volume size is not limited by the aggregate size. In fact, each volume could, if required, be larger than the containing aggregate. The storage provided by the aggregate is used up only as data is written to the LUN or file.
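As an example, the following clustered Data ONTAP command creates a thinly provisioned FlexVol volume; the SVM, volume, and aggregate names shown are placeholders that should be replaced with values appropriate to your environment:

::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 10TB -space-guarantee none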

If the volumes associated with an aggregate present more available storage than the physical resources available to that aggregate, the aggregate is overcommitted. When an aggregate is overcommitted, writes to LUNs or files in volumes contained by that aggregate can fail if sufficient free space is not available to accommodate the write.

If you have overcommitted your aggregate, you must monitor your available space and add storage to the aggregate as needed to avoid write errors due to insufficient space.
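For example, you can check aggregate space utilization periodically from the clustered Data ONTAP CLI with a command similar to the following (the exact fields available may vary between Data ONTAP releases):

::> storage aggregate show -fields availsize, percent-used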

Aggregates can provide storage to FlexVol volumes associated with more than one Storage Virtual Machine (SVM). When sharing aggregates for thin-provisioned volumes in a multi-tenancy environment, be aware that one tenant's aggregate space availability can be adversely affected by the growth of another tenant's volumes.

For more information about thin provisioning, refer to the relevant NetApp technical reports.

 NFS Best Practices

Be sure to refer to the Clustered Data ONTAP NFS Best Practices and Implementation Guide for information on how to optimally set up NFS exports for use with the various OpenStack services described in this guide.

 Volume Migration

Volume migration is a new feature added to the OpenStack Block Storage service in the Havana release. Volume migration allows the movement of data associated with a Cinder volume from one storage backend to another in a manner that is completely transparent to any instances that may have the volume attached. Volume migration is agnostic to the storage protocol used to access the volume, and can be used to switch from iSCSI to NFS, or vice versa, even on the same physical NetApp system.

Please be aware of the following caveats:

  1. Volume migrations can only be performed by a cloud administrator, not a tenant or user.
  2. Migration of volumes that have existing snapshots is not currently allowed. If it is important to retain the data stored in a snapshot for a volume you wish to migrate, one option is to create another Cinder volume from the snapshot (which creates a copy of the snapshot) and then delete the snapshot associated with the volume you wish to migrate, as shown in the example after this list.
  3. Transparent migration of volumes that are attached to Nova instances is only supported when the Nova instances are running on a hypervisor that supports live migration. NetApp has qualified live migrations with the KVM hypervisor on Ubuntu 13.04 with the Havana release; note that earlier versions of Ubuntu (including 12.04 LTS) are known not to support migration of attached volumes.
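
As an illustration of the second caveat, the following commands create a new 10 GB Cinder volume from an existing snapshot and then delete that snapshot; <snapshot-id> is a placeholder for the identifier of the snapshot associated with the volume you wish to migrate:

$ cinder create --snapshot-id <snapshot-id> 10

$ cinder snapshot-delete <snapshot-id>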

Migrating from Data ONTAP operating in 7-Mode to clustered Data ONTAP.  The volume migration feature of the OpenStack Block Storage service can be used to enable a seamless transition from Data ONTAP operating in 7-Mode to clustered Data ONTAP. If you have volumes managed by Cinder on a Data ONTAP operating in 7-Mode storage system, you can configure the clustered Data ONTAP instance as a new backend in the Cinder configuration, use the migration feature to move existing volumes to the new backend, and then retire the Data ONTAP operating in 7-Mode system, if desired.
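For example, a cinder.conf defining both systems as Cinder backends might contain stanzas similar to the following; the hostnames, credentials, and backend names are illustrative placeholders, and the full set of available driver options is described elsewhere in this guide:

[DEFAULT]
enabled_backends=7mode,cDOT

[7mode]
volume_backend_name=7mode
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_7mode
netapp_storage_protocol=iscsi
netapp_server_hostname=10.0.0.10
netapp_login=admin
netapp_password=changeme

[cDOT]
volume_backend_name=cDOT
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family=ontap_cluster
netapp_storage_protocol=iscsi
netapp_server_hostname=10.0.0.20
netapp_vserver=svm1
netapp_login=admin
netapp_password=changeme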

Once you have configured both storage systems to operate with Cinder, you can verify that both backends have been enabled successfully and are ready to support the migration process.

$ cinder service-list
+------------------+-------------------+------+---------+-------+----------------------------+
|      Binary      |       Host        | Zone |  Status | State |         Updated_at         |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler |     openstack1    | nova | enabled |   up  | 2013-01-01T19:01:26.000000 |
|  cinder-volume   |  openstack1@7mode | nova | enabled |   up  | 2013-01-01T19:01:18.000000 |
|  cinder-volume   |  openstack1@cDOT  | nova | enabled |   up  | 2013-01-01T19:01:27.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

The host openstack1@7mode represents the backend for the Data ONTAP operating in 7-Mode system, and openstack1@cDOT represents the backend for the clustered Data ONTAP system. Volumes can be migrated individually to the new backend through the use of the cinder migrate CLI command. For example, suppose we have a volume with ID 781501e1-af79-4d3e-be90-f332a5841f5e on the openstack1@7mode storage backend. We can move it to the openstack1@cDOT backend with the following CLI command:

$ cinder migrate 781501e1-af79-4d3e-be90-f332a5841f5e openstack1@cDOT

The command is asynchronous and completes in the background. You can check the status of the migration with the cinder show command:

$ cinder show 781501e1-af79-4d3e-be90-f332a5841f5e

While a volume migration is in progress, Cinder commands from tenants that involve operations on the volume (such as attach/detach, snapshot, clone, etc.) will fail.

During this process, the data is copied inefficiently through the cinder-volume node. This compares unfavorably to data-efficient migration techniques, but it has the significant advantage of being completely non-disruptive (provided live migration is supported), and it can be done in small increments of one Cinder volume at a time, so the operations can be scheduled during periods when load is minimal.

If you are utilizing a hypervisor that does not support live migration of volumes and the volume is currently attached, you must detach the volume from the Nova instance before performing the migration. If the volume is the boot volume or otherwise critical to the operation of the instance, you must shut down the Nova instance before performing the migration.
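
For example, on a hypervisor without live migration support, the sequence might resemble the following; the <instance-id> and <volume-id> tokens are placeholders, the device name supplied to volume-attach is only an example, and the migration should be allowed to complete (as reported by cinder show) before the volume is reattached:

$ nova volume-detach <instance-id> <volume-id>

$ cinder migrate <volume-id> openstack1@cDOT

$ nova volume-attach <instance-id> <volume-id> /dev/vdb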


