ONTAP Metrics
This document describes how Harvest metrics map to their relevant ONTAP ZAPI and REST counters, including:
- Details about which Harvest metrics each dashboard uses. These can be generated on demand by running bin/harvest grafana metrics. See #1577 for details.
- More information about ONTAP REST performance counters can be found here.
Creation Date: 2025-Aug-12
ONTAP Version: 9.16.1
Understanding the structure¶
Below is an annotated example of how to interpret the structure of each of the metrics.
disk_io_queued: the name of the metric exported by Harvest
Number of I/Os queued to the disk but not yet issued: the description of the ONTAP metric
- API: one of REST or ZAPI, depending on which collector is used to collect the metric
- Endpoint: name of the REST or ZAPI API used to collect this metric
- Metric: name of the ONTAP metric
- Template: path of the template that collects the metric
Performance-related metrics also include:
- Unit: the unit of the metric
- Type: describes how to calculate a cooked metric from two consecutive ONTAP raw metrics
- Base: some counters require a base counter for post-processing. When required, this property lists the base counter
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
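The Type and Base properties describe the post-processing Harvest applies: a cooked value is computed from the delta between two consecutive raw ONTAP samples, divided by the elapsed time (rate) or by the delta of the named base counter (average, percent). Below is a minimal sketch of that calculation, assuming simple deltas; the function and sample values are illustrative, not Harvest's actual implementation.

```python
# Illustrative sketch (not Harvest source): turning two consecutive raw ONTAP
# samples into a cooked value, per the Type/Base properties described above.
def cook(metric_type, raw_prev, raw_cur, elapsed_sec, base_prev=0, base_cur=0):
    delta = raw_cur - raw_prev
    if metric_type == "raw":
        return raw_cur                           # reported as-is
    if metric_type == "rate":
        return delta / elapsed_sec               # e.g. per_sec, b_per_sec counters
    base_delta = base_cur - base_prev            # "average" and "percent" need a base counter
    if base_delta == 0:
        return 0.0
    if metric_type == "average":
        return delta / base_delta                # e.g. latency in microsec per operation
    if metric_type == "percent":
        return 100.0 * delta / base_delta        # e.g. disk_busy_percent
    raise ValueError(f"unknown metric type: {metric_type}")

# Example: io_queued (Type: average, Base: base_for_disk_busy)
# cook("average", 1_000, 1_600, 60, base_prev=200, base_cur=500) -> 2.0
```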
Metrics¶
aggr_disk_busy¶
The utilization percent of the disk. aggr_disk_busy is disk_busy aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | disk_busy_percent<br>Unit: percent<br>Type: percent<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | disk_busy<br>Unit: percent<br>Type: percent<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Disk Utilization | table | Top $TopResources Average Disk Utilization Per Aggregate |
| ONTAP: Aggregate | Disk Utilization | timeseries | Top $TopResources Average Disk Utilization Per Aggregate |
| ONTAP: Cluster | Throughput | timeseries | Average Disk Utilization by Aggregate |
| ONTAP: Disk | Highlights | stat | Raid Groups |
| ONTAP: Disk | Highlights | stat | Plexes |
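Every aggr_disk_* metric in the sections below is derived by Harvest from the per-disk metrics, grouped on the disk's aggr label: the plain aggr_disk_* series combine all member disks of an aggregate, while the aggr_disk_max_* series keep the value of the busiest member. Below is a minimal sketch of that grouping, with made-up sample data and an averaging step chosen purely for illustration (the actual combining function depends on the metric and is done by Harvest's aggregation plugin, not this code).

```python
# Illustrative sketch (not the Harvest Aggregator plugin): deriving per-aggregate
# series from per-disk series by grouping on the `aggr` label.
from collections import defaultdict

# (aggr, disk) -> disk_busy sample; labels and values are made up for illustration
disk_busy = {("aggr1", "1.0.1"): 12.0, ("aggr1", "1.0.2"): 40.0, ("aggr2", "1.0.3"): 7.0}

by_aggr = defaultdict(list)
for (aggr, _disk), value in disk_busy.items():
    by_aggr[aggr].append(value)

# aggr_disk_busy-style metric: one combined value per aggregate (averaged here for illustration)
aggr_disk_busy = {aggr: sum(vals) / len(vals) for aggr, vals in by_aggr.items()}
# aggr_disk_max_busy-style metric: the maximum across the aggregate's member disks
aggr_disk_max_busy = {aggr: max(vals) for aggr, vals in by_aggr.items()}
print(aggr_disk_busy)      # {'aggr1': 26.0, 'aggr2': 7.0}
print(aggr_disk_max_busy)  # {'aggr1': 40.0, 'aggr2': 7.0}
```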
aggr_disk_capacity¶
Disk capacity in MB. aggr_disk_capacity is disk_capacity aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | capacity<br>Unit: mb<br>Type: raw | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | disk_capacity<br>Unit: mb<br>Type: raw | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_cp_read_chain is disk_cp_read_chain aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_chain<br>Unit: none<br>Type: average<br>Base: cp_read_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_read_chain<br>Unit: none<br>Type: average<br>Base: cp_reads | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. aggr_disk_cp_read_latency is disk_cp_read_latency aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_latency<br>Unit: microsec<br>Type: average<br>Base: cp_read_blocks | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_read_latency<br>Unit: microsec<br>Type: average<br>Base: cp_read_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. aggr_disk_cp_reads is disk_cp_reads aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_reads<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_io_pending is disk_io_pending aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | io_pending<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | io_pending<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_io_queued¶
Number of I/Os queued to the disk but not yet issued. aggr_disk_io_queued is disk_io_queued aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_busy¶
The utilization percent of the disk. aggr_disk_max_busy is the maximum of disk_busy for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | disk_busy_percent<br>Unit: percent<br>Type: percent<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | disk_busy<br>Unit: percent<br>Type: percent<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_max_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Highlights | table | Top $TopResources Aggregates by Disk Utilization |
| ONTAP: Disk | Highlights | timeseries | Top $TopResources Aggregates by Max Disk Utilization |
| ONTAP: MetroCluster | Highlights | gauge | Max Disk Utilization Per Aggregate |
aggr_disk_max_capacity¶
Disk capacity in MB. aggr_disk_max_capacity is the maximum of disk_capacity for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | capacity<br>Unit: mb<br>Type: raw | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | disk_capacity<br>Unit: mb<br>Type: raw | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. aggr_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_chain<br>Unit: none<br>Type: average<br>Base: cp_read_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_read_chain<br>Unit: none<br>Type: average<br>Base: cp_reads | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. aggr_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_latency<br>Unit: microsec<br>Type: average<br>Base: cp_read_blocks | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_read_latency<br>Unit: microsec<br>Type: average<br>Base: cp_read_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. aggr_disk_max_cp_reads is the maximum of disk_cp_reads for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | cp_read_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | cp_reads<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. aggr_disk_max_io_pending is the maximum of disk_io_pending for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | io_pending<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | io_pending<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_io_queued¶
Number of I/Os queued to the disk but not yet issued. aggr_disk_max_io_queued is the maximum of disk_io_queued for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | io_queued<br>Unit: none<br>Type: average<br>Base: base_for_disk_busy | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_total_data¶
Total throughput for user operations per second. aggr_disk_max_total_data is the maximum of disk_total_data for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | total_data<br>Unit: b_per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | total_data<br>Unit: b_per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_total_transfers¶
Total number of disk operations involving data transfer initiated per second. aggr_disk_max_total_transfers is the maximum of disk_total_transfers for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | total_transfer_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | total_transfers<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_max_total_transfers metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Highlights | table | Top $TopResources Aggregates by Disk Utilization |
| ONTAP: Disk | Highlights | timeseries | Top $TopResources Aggregates by Disk Transfers |
aggr_disk_max_user_read_blocks¶
Number of blocks transferred for user read operations per second. aggr_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_block_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_blocks<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_user_read_chain¶
Average number of blocks transferred in each user read operation. aggr_disk_max_user_read_chain is the maximum of disk_user_read_chain for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_chain<br>Unit: none<br>Type: average<br>Base: user_read_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_chain<br>Unit: none<br>Type: average<br>Base: user_reads | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_max_user_read_chain metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Highlights | timeseries | Top $TopResources Aggregates by User Read Chain Length |
aggr_disk_max_user_read_latency¶
Average latency per block in microseconds for user read operations. aggr_disk_max_user_read_latency is the maximum of disk_user_read_latency for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_latency<br>Unit: microsec<br>Type: average<br>Base: user_read_block_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_latency<br>Unit: microsec<br>Type: average<br>Base: user_read_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_max_user_reads is the maximum of disk_user_reads for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_reads<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_user_write_blocks¶
Number of blocks transferred for user write operations per second. aggr_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_block_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_blocks<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_user_write_chain¶
Average number of blocks transferred in each user write operation. aggr_disk_max_user_write_chain is the maximum of disk_user_write_chain for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_chain<br>Unit: none<br>Type: average<br>Base: user_write_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_chain<br>Unit: none<br>Type: average<br>Base: user_writes | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_max_user_write_chain metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Highlights | timeseries | Top $TopResources Aggregates by User Write Chain Length |
aggr_disk_max_user_write_latency¶
Average latency per block in microseconds for user write operations. aggr_disk_max_user_write_latency is the maximum of disk_user_write_latency for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_latency<br>Unit: microsec<br>Type: average<br>Base: user_write_block_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_latency<br>Unit: microsec<br>Type: average<br>Base: user_write_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_max_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_max_user_writes is the maximum of disk_user_writes for label aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_writes<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_total_data¶
Total throughput for user operations per second. aggr_disk_total_data is disk_total_data aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | total_data<br>Unit: b_per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | total_data<br>Unit: b_per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_total_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Aggregate | table | Aggregates |
aggr_disk_total_transfers¶
Total number of disk operations involving data transfer initiated per second. aggr_disk_total_transfers is disk_total_transfers aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | total_transfer_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | total_transfers<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_total_transfers metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Aggregate | table | Aggregates |
aggr_disk_user_read_blocks¶
Number of blocks transferred for user read operations per second. aggr_disk_user_read_blocks is disk_user_read_blocks aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_block_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_blocks<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_read_chain¶
Average number of blocks transferred in each user read operation. aggr_disk_user_read_chain is disk_user_read_chain aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_chain<br>Unit: none<br>Type: average<br>Base: user_read_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_chain<br>Unit: none<br>Type: average<br>Base: user_reads | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_read_latency¶
Average latency per block in microseconds for user read operations. aggr_disk_user_read_latency is disk_user_read_latency aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_latency<br>Unit: microsec<br>Type: average<br>Base: user_read_block_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_read_latency<br>Unit: microsec<br>Type: average<br>Base: user_read_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. aggr_disk_user_reads is disk_user_reads aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_read_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_reads<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_user_reads metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Highlights | timeseries | Top $TopResources Aggregates by IOPS Per Power Consumed |
aggr_disk_user_write_blocks¶
Number of blocks transferred for user write operations per second. aggr_disk_user_write_blocks is disk_user_write_blocks aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_block_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_blocks<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_write_chain¶
Average number of blocks transferred in each user write operation. aggr_disk_user_write_chain is disk_user_write_chain aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_chain<br>Unit: none<br>Type: average<br>Base: user_write_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_chain<br>Unit: none<br>Type: average<br>Base: user_writes | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_write_latency¶
Average latency per block in microseconds for user write operations. aggr_disk_user_write_latency is disk_user_write_latency aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_latency<br>Unit: microsec<br>Type: average<br>Base: user_write_block_count | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_write_latency<br>Unit: microsec<br>Type: average<br>Base: user_write_blocks | conf/zapiperf/cdot/9.8.0/disk.yaml |
aggr_disk_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. aggr_disk_user_writes is disk_user_writes aggregated by aggr.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent | user_write_count<br>Unit: per_sec<br>Type: rate | conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent | user_writes<br>Unit: per_sec<br>Type: rate | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_disk_user_writes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Highlights | timeseries | Top $TopResources Aggregates by IOPS Per Power Consumed |
aggr_efficiency_savings¶
Space saved by storage efficiencies (logical_used - used)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency.savings | conf/rest/9.12.0/aggr.yaml |
aggr_efficiency_savings_wo_snapshots¶
Space saved by storage efficiencies (logical_used - used)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots.savings | conf/rest/9.12.0/aggr.yaml |
aggr_efficiency_savings_wo_snapshots_flexclones¶
Space saved by storage efficiencies (logical_used - used)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots_flexclones.savings | conf/rest/9.12.0/aggr.yaml |
aggr_hybrid_cache_size_total¶
Total usable space in bytes of SSD cache. Only provided when hybrid_cache.enabled is 'true'.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.hybrid_cache.size | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.hybrid-cache-size-total | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_hybrid_disk_count¶
Number of disks used in the cache tier of the aggregate. Only provided when hybrid_cache.enabled is 'true'.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.hybrid_cache.disk_count | conf/rest/9.12.0/aggr.yaml |
aggr_inode_files_private_used¶
Number of system metadata files used. If the referenced file system is restricted or offline, a value of 0 is returned. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.files_private_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.files-private-used | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_inode_files_total¶
Maximum number of user-visible files that this referenced file system can currently hold. If the referenced file system is restricted or offline, a value of 0 is returned.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.files_total | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.files-total | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_inode_files_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inodes Files |
aggr_inode_files_used¶
Number of user-visible files used in the referenced file system. If the referenced file system is restricted or offline, a value of 0 is returned.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.files_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.files-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_inode_files_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inodes Files |
aggr_inode_inodefile_private_capacity¶
Number of files that can currently be stored on disk for system metadata files. This number will dynamically increase as more system files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.file_private_capacity | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.inodefile-private-capacity | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_inode_inodefile_private_capacity metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inode Capacity |
aggr_inode_inodefile_public_capacity¶
Number of files that can currently be stored on disk for user-visible files. This number will dynamically increase as more user-visible files are created. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.file_public_capacity | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.inodefile-public-capacity | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_inode_inodefile_public_capacity metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inode Capacity |
aggr_inode_maxfiles_available¶
The count of the maximum number of user-visible files currently allowable on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.max_files_available | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.maxfiles-available | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_inode_maxfiles_possible¶
The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.max_files_possible | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.maxfiles-possible | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_inode_maxfiles_used¶
The number of user-visible files currently in use on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.max_files_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.maxfiles-used | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_inode_used_percent¶
The percentage of disk space currently in use based on user-visible file count on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | inode_attributes.used_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-inode-attributes.percent-inode-used-capacity | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_inode_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inodes Used % |
aggr_labels¶
This metric provides information about Aggregate
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | Harvest generated | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | Harvest generated | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Aggregates |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Aggregates |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
aggr_logical_used_wo_snapshots¶
Logical used
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots.logical_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter | aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots | conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_logical_used_wo_snapshots metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction with FlexClones |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Logical Used with FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction with FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Logical Used with FlexClones |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction with FlexClones |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Logical Used with FlexClones by Cluster |
aggr_logical_used_wo_snapshots_flexclones¶
Logical used
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots_flexclones.logical_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter | aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-logical-used-wo-snapshots-flexclones | conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_logical_used_wo_snapshots_flexclones metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Logical Used |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Logical Used |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Logical Used by Cluster |
aggr_new_status¶
This metric indicates a value of 1 if the aggregate state is online (indicating the aggregate is operational) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA | Harvest generated | conf/rest/9.12.0/aggr.yaml |
| ZAPI | NA | Harvest generated | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Node | Highlights | stat | Aggregates |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
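aggr_new_status is generated by Harvest rather than read from an ONTAP counter: the aggregate's state is mapped to 1 when it is online and 0 otherwise. A minimal sketch of that mapping follows (the function name and state strings are illustrative, not Harvest's implementation).

```python
# Illustrative sketch: derive the 0/1 aggr_new_status value from an aggregate state string.
def new_status(state: str) -> int:
    return 1 if state == "online" else 0

assert new_status("online") == 1
assert new_status("offline") == 0   # any non-online state maps to 0
```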
aggr_object_store_logical_used¶
Logical space usage of aggregates in the attached object store.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/aggr/show-space | object_store_logical_used | conf/rest/9.12.0/aggr.yaml |
The aggr_object_store_logical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by logical space usage in Object Store |
aggr_object_store_physical_used¶
Physical space usage of aggregates in the attached object store.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/aggr/show-space | object_store_physical_used | conf/rest/9.12.0/aggr.yaml |
The aggr_object_store_physical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by physical space usage in Object Store |
aggr_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.throughput_raw.other<br>Unit: b_per_sec<br>Type: rate | conf/keyperf/9.15.0/aggr.yaml |
aggr_other_latency¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.latency_raw.other<br>Unit: microsec<br>Type: average<br>Base: aggr_statistics.iops_raw.other | conf/keyperf/9.15.0/aggr.yaml |
aggr_other_ops¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.iops_raw.other<br>Unit: per_sec<br>Type: rate | conf/keyperf/9.15.0/aggr.yaml |
aggr_physical_used_wo_snapshots¶
Total Data Reduction Physical Used Without Snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots.logical_used, space.efficiency_without_snapshots.savings | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter | aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots | conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_physical_used_wo_snapshots metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction with FlexClones |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Physical Used with FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction with FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Physical Used with FlexClones |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction with FlexClones |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Physical Used with FlexClones by Cluster |
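Note that the REST mapping above derives this metric from two fields rather than reading it directly, which implies physical used = logical used - savings; the Data Reduction panels built from these metrics are commonly the ratio of logical to physical used. A small worked example, assuming that arithmetic (the byte values are made up):

```python
# Worked example of the storage-efficiency arithmetic implied by the REST mapping above.
logical_used = 10_000_000_000  # space.efficiency_without_snapshots.logical_used (bytes)
savings      =  6_000_000_000  # space.efficiency_without_snapshots.savings (bytes)

physical_used = logical_used - savings                 # 4,000,000,000 bytes
data_reduction_ratio = logical_used / physical_used    # 2.5, i.e. a 2.5:1 data reduction
```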
aggr_physical_used_wo_snapshots_flexclones¶
Total Data Reduction Physical Used without snapshots and flexclones
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.efficiency_without_snapshots_flexclones.logical_used, space.efficiency_without_snapshots_flexclones.savings | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter | aggr-efficiency-info.aggr-efficiency-cumulative-info.total-data-reduction-physical-used-wo-snapshots-flexclones | conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_physical_used_wo_snapshots_flexclones metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Physical Used |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Physical Used |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Physical Used by Cluster |
aggr_power¶
Power consumed by aggregate in Watts.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA | Harvest generated | conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA | Harvest generated | conf/zapiperf/cdot/9.8.0/disk.yaml |
The aggr_power metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Highlights | timeseries | Top $TopResources Aggregates by Power Consumed |
| ONTAP: Power | Highlights | timeseries | Aggregates Power by Disk Type |
| ONTAP: Power | Aggregate | table | Aggregates |
aggr_primary_disk_count¶
Number of disks used in the aggregate. This includes parity disks, but excludes disks in the hybrid cache.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.primary.disk_count | conf/rest/9.12.0/aggr.yaml |
aggr_raid_disk_count¶
Number of disks in the aggregate.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.primary.disk_count, block_storage.hybrid_cache.disk_count | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-raid-attributes.disk-count | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_raid_disk_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Disks |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Disk | Highlights | stat | Total Disks by Aggregate(s) |
| ONTAP: Disk | Highlights | table | Disk Capacity Per Aggregate |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Disks |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
aggr_raid_plex_count¶
Number of plexes in the aggregate
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.plexes.# | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-raid-attributes.plex-count | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_raid_size¶
Option to specify the maximum number of disks that can be included in a RAID group.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | block_storage.primary.raid_size | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-raid-attributes.raid-size | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_read_data¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.throughput_raw.read<br>Unit: b_per_sec<br>Type: rate | conf/keyperf/9.15.0/aggr.yaml |
aggr_read_latency¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.latency_raw.read<br>Unit: microsec<br>Type: average<br>Base: aggr_statistics.iops_raw.read | conf/keyperf/9.15.0/aggr.yaml |
aggr_read_ops¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates | statistics.iops_raw.read<br>Unit: per_sec<br>Type: rate | conf/keyperf/9.15.0/aggr.yaml |
aggr_snapshot_files_total¶
Total files allowed in snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | snapshot.files_total | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.files-total | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_files_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot Files |
aggr_snapshot_files_used¶
Total files created in snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | snapshot.files_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.files-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_files_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot Files |
aggr_snapshot_inode_used_percent¶
The percentage of disk space currently in use based on user-visible file (inode) count on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.percent-inode-used-capacity | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_inode_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot Inodes Used % |
aggr_snapshot_maxfiles_available¶
Maximum files available for snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | snapshot.max_files_available | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.maxfiles-available | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_maxfiles_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot MaxFiles |
aggr_snapshot_maxfiles_possible¶
The largest value to which the maxfiles-available parameter can be increased by reconfiguration, on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | snapshot.max_files_available, snapshot.max_files_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.maxfiles-possible | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_maxfiles_possible metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot MaxFiles |
aggr_snapshot_maxfiles_used¶
Files in use by snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | snapshot.max_files_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.maxfiles-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_maxfiles_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Snapshot MaxFiles |
aggr_snapshot_reserve_percent¶
Percentage of space reserved for snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.snapshot.reserve_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.snapshot-reserve-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_reserve_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Space Reserved for Snapshots % |
aggr_snapshot_size_available¶
Available space for snapshots in bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.snapshot.available | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.size-available | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_size_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Space Used by Snapshots |
aggr_snapshot_size_total¶
Total space for snapshots in bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.snapshot.total | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.size-total | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_snapshot_size_used¶
Space used by snapshots in bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.snapshot.used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.size-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_size_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Space Used by Snapshots |
aggr_snapshot_used_percent¶
Percentage of disk space used by snapshots
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.snapshot.used_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-snapshot-attributes.percent-used-capacity | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_snapshot_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Space Used by Snapshots % |
aggr_space_available¶
Space available in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.available | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.size-available | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Available Space |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Space Available |
| ONTAP: Cluster | Highlights | stat | Available Space |
| ONTAP: Datacenter | Highlights | stat | Available Space |
| ONTAP: Datacenter | Highlights | timeseries | Top $TopResources Available Space by Cluster |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Available Space |
| ONTAP: StorageGrid FabricPool | Highlights | timeseries | Space Available |
aggr_space_capacity_tier_used¶
Used space in bytes in the cloud store. Only applicable for aggregates with a cloud store tier.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.cloud_storage.used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.capacity-tier-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_capacity_tier_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Capacity Tier Used |
| ONTAP: Aggregate | FabricPool | timeseries | Top $TopResources Aggregates by Capacity Tier Footprint |
| ONTAP: StorageGrid FabricPool | Highlights | timeseries | Capacity Tier Used |
aggr_space_data_compacted_count¶
Amount of compacted data in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.data_compacted_count | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.data-compacted-count | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_space_data_compaction_saved¶
Space saved in bytes by compacting the data.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.data_compaction_space_saved | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.data-compaction-space-saved | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_data_compaction_saved metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Data Compaction space saved |
aggr_space_data_compaction_saved_percent¶
Percentage saved by compacting the data.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.data_compaction_space_saved_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.data-compaction-space-saved-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_space_performance_tier_inactive_user_data¶
The size that is physically used in the block storage and has a cold temperature, in bytes. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.inactive_user_data | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_performance_tier_inactive_user_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inactive Data |
aggr_space_performance_tier_inactive_user_data_percent¶
The percentage of inactive user data in the block storage. This property is only supported if the aggregate is either attached to a cloud store or can be attached to a cloud store. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either block_storage.inactive_user_data_percent or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.inactive_user_data_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.performance-tier-inactive-user-data-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_performance_tier_inactive_user_data_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Inactive Data % |
aggr_space_performance_tier_used¶
A summation of volume footprints (including volume guarantees), in bytes. This includes all of the volume footprints in the block_storage tier and the cloud_storage tier. This is an advanced property; there is an added computational cost to retrieving its value. The field is not populated for either a collection GET or an instance GET unless it is explicitly requested using the fields query parameter containing either footprint or **.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.footprint | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-space-get-iter | volume-footprints | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_performance_tier_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | FabricPool | timeseries | Top $TopResources Aggregates by Performance Tier Footprint |
aggr_space_performance_tier_used_percent¶
A summation of volume footprints inside the aggregate, as a percentage. A volume's footprint is the amount of space being used for the volume in the aggregate.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.footprint_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-space-get-iter | volume-footprints-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_performance_tier_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | FabricPool | timeseries | Top $TopResources Aggregates by Performance Tier Footprint % |
aggr_space_physical_used¶
Total physical used size of an aggregate in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.physical_used | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.physical-used | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_physical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Physical Space Used |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Physical Used with Snapshots & FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Physical Used with Snapshots & FlexClones |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Physical Used with Snapshots & FlexClones by Cluster |
| ONTAP: StorageGrid FabricPool | Highlights | timeseries | Physical Space Used |
aggr_space_physical_used_percent¶
Physical used percentage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.physical_used_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.physical-used-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_physical_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Physical Space Used % |
aggr_space_reserved¶
The total disk space in bytes that is reserved on the referenced file system. The reserved space is already counted in the used space, so this element can be used to see what portion of the used space represents space reserved for future use.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.total-reserved-space | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_space_sis_saved¶
Amount of space saved in bytes by storage efficiency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.volume_deduplication_space_saved | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.sis-space-saved | conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_sis_saved metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by SIS space saved |
aggr_space_sis_saved_percent¶
Percentage of space saved by storage efficiency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates | space.block_storage.volume_deduplication_space_saved_percent | conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter | aggr-attributes.aggr-space-attributes.sis-space-saved-percent | conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_space_sis_shared_count¶
Amount of shared bytes counted by storage efficiency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.block_storage.volume_deduplication_shared_count |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter |
aggr-attributes.aggr-space-attributes.sis-shared-count |
conf/zapi/cdot/9.8.0/aggr.yaml |
aggr_space_total¶
Total usable space in bytes, not including WAFL reserve and aggregate snapshot reserve.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.block_storage.size |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter |
aggr-attributes.aggr-space-attributes.size-total |
conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Total Space |
| ONTAP: Aggregate | Highlights | stat | Space Used % |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Aggregate | Highlights | timeseries | Top $TopResources Aggregates by Total Space |
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources Aggregates by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources Aggregates by Capacity Used % |
| ONTAP: Cluster | Highlights | stat | Total Space |
| ONTAP: Cluster | Highlights | stat | Space Used % |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | Capacity used |
| ONTAP: Datacenter | Highlights | stat | Total Space |
| ONTAP: Datacenter | Highlights | stat | Space Used % |
| ONTAP: Datacenter | Highlights | timeseries | Top $TopResources Total Space by Cluster |
| ONTAP: Datacenter | Highlights | timeseries | Top $TopResources Space Used % by Cluster |
| ONTAP: Disk | Highlights | table | Disk Capacity Per Aggregate |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Total Space |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Space Used % |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
aggr_space_used¶
Space used or reserved in bytes. Includes volume guarantees and aggregate metadata.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.block_storage.used |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter |
aggr-attributes.aggr-space-attributes.size-used |
conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Used and Reserved Space |
| ONTAP: Aggregate | Highlights | stat | Space Used % |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources Aggregates by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources Aggregates by Capacity Used % |
| ONTAP: Cluster | Highlights | stat | Used and Reserved Space |
| ONTAP: Cluster | Highlights | stat | Space Used % |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | Capacity used |
| ONTAP: Datacenter | Highlights | stat | Space Used % |
| ONTAP: Datacenter | Highlights | stat | Used and Reserved Space |
| ONTAP: Datacenter | Highlights | timeseries | Top $TopResources Used and Reserved Space by Cluster |
| ONTAP: Datacenter | Highlights | timeseries | Top $TopResources Space Used % by Cluster |
| ONTAP: Datacenter | Power and Temperature | stat | Average Power/Used_TB |
| ONTAP: Power | Highlights | stat | Average Power/Used_TB |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Space Used % |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
aggr_space_used_percent¶
The percentage of disk space currently in use on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.block_storage.used, space.block_storage.size |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter |
aggr-attributes.aggr-space-attributes.percent-used-capacity |
conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_space_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: Cluster | Throughput | timeseries | Average Aggregate Space Used |
| ONTAP: Disk | Highlights | table | Disk Capacity Per Aggregate |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
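For the REST case, the Metric column above lists two raw fields (space.block_storage.used and space.block_storage.size) rather than a ready-made percentage, which suggests the exported value is computed from them. The sketch below shows that assumed arithmetic with illustrative example numbers; it is not template logic copied from Harvest.

```python
# Assumed derivation of aggr_space_used_percent from the two REST fields above.
used = 41_400_000_000_000   # space.block_storage.used, bytes (example value)
size = 90_000_000_000_000   # space.block_storage.size, bytes (example value)
aggr_space_used_percent = used / size * 100
print(round(aggr_space_used_percent, 1))  # 46.0
```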
aggr_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/aggr.yaml |
aggr_total_latency¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.latency_raw.totalUnit: microsec Type: average Base: aggr_statistics.iops_raw.total |
conf/keyperf/9.15.0/aggr.yaml |
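The KeyPerf counters above illustrate the two common counter types: throughput and ops counters are Type: rate (cooked against elapsed time), while latency counters are Type: average (cooked against the listed Base counter, here iops_raw.total). The following is a minimal sketch of how such cooked values are typically derived from two consecutive raw samples; the function names and sample numbers are illustrative assumptions, not Harvest code.

```python
# Illustrative cooking of rate- and average-type counters from two raw samples.

def cooked_rate(raw_t0: float, raw_t1: float, t0: float, t1: float) -> float:
    """Per-second rate: raw delta divided by elapsed seconds."""
    return (raw_t1 - raw_t0) / (t1 - t0)

def cooked_average(raw_t0: float, raw_t1: float,
                   base_t0: float, base_t1: float) -> float:
    """Average per base unit (e.g. microseconds per op) using the Base counter."""
    delta_base = base_t1 - base_t0
    return (raw_t1 - raw_t0) / delta_base if delta_base else 0.0

# Example: latency_raw.total grew by 450,000 microsec while iops_raw.total grew
# by 900 ops over the polling interval -> 500 microsec per operation.
print(cooked_average(1_000_000, 1_450_000, 10_000, 10_900))  # 500.0

# Example: throughput_raw.total grew by 6,000,000 bytes over a 60 s interval
# -> 100,000 bytes per second.
print(cooked_rate(2_000_000, 8_000_000, 0, 60))  # 100000.0
```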
aggr_total_logical_used¶
Logical space used in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.efficiency.logical_used |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter |
aggr-efficiency-info.aggr-efficiency-cumulative-info.total-logical-used |
conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_total_logical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction with Snapshots & FlexClones |
| ONTAP: Aggregate | Storage Efficiency Ratios | timeseries | Top $TopResources Aggregates by Logical Used with Snapshots & FlexClones |
| ONTAP: Aggregate | Growth Rate | timeseries | Top $TopResources Aggregates Per Growth Rate of Logical Used |
| ONTAP: Aggregate | Growth Rate | table | Top $TopResources Aggregates by Logical Usage: Delta Report |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction with Snapshots & FlexClones |
| ONTAP: Cluster | Storage Efficiency Ratios | timeseries | Logical Used with Snapshots & FlexClones |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction with Snapshots & FlexClones |
| ONTAP: Datacenter | Storage Efficiency | timeseries | Top $TopResources Logical Used with Snapshots & FlexClones by Cluster |
aggr_total_ops¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/aggr.yaml |
aggr_total_physical_used¶
Total physical space used in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
space.efficiency.logical_used, space.efficiency.savings |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-efficiency-get-iter |
aggr-efficiency-info.aggr-efficiency-cumulative-info.total-physical-used |
conf/zapi/cdot/9.9.0/aggr_efficiency.yaml |
The aggr_total_physical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Storage Efficiency Ratios | stat | Data Reduction with Snapshots & FlexClones |
| ONTAP: Aggregate | Growth Rate | timeseries | Top $TopResources Aggregates Per Growth Rate of Physical Used |
| ONTAP: Aggregate | Growth Rate | table | Top $TopResources Aggregates by Physical Usage: Delta Report |
| ONTAP: Cluster | Storage Efficiency Ratios | stat | Data Reduction with Snapshots & FlexClones |
| ONTAP: Datacenter | Storage Efficiency | stat | Data Reduction with Snapshots & FlexClones |
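As with aggr_space_used_percent, the REST row lists two source fields (space.efficiency.logical_used and space.efficiency.savings), implying the exported value is derived from them rather than read directly. A plausible, assumed derivation:

```python
# Assumed derivation of aggr_total_physical_used from the two REST fields above.
logical_used = 120_000_000_000_000  # space.efficiency.logical_used, bytes (example)
savings      =  80_000_000_000_000  # space.efficiency.savings, bytes (example)
aggr_total_physical_used = logical_used - savings
print(aggr_total_physical_used)     # 40000000000000
```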
aggr_volume_count¶
The aggregate's volume count, which includes both FlexVols and FlexGroup constituents.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/aggregates |
volume_count |
conf/rest/9.12.0/aggr.yaml |
| ZAPI | aggr-get-iter |
aggr-attributes.aggr-volume-count-attributes.flexvol-count |
conf/zapi/cdot/9.8.0/aggr.yaml |
The aggr_volume_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Highlights | stat | Volumes |
| ONTAP: Aggregate | Highlights | table | Aggregates |
| ONTAP: StorageGrid FabricPool | Highlights | stat | Volumes |
| ONTAP: StorageGrid FabricPool | Highlights | table | Aggregates |
aggr_write_data¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/aggr.yaml |
aggr_write_latency¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.latency_raw.writeUnit: microsec Type: average Base: aggr_statistics.iops_raw.write |
conf/keyperf/9.15.0/aggr.yaml |
aggr_write_ops¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/aggregates |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/aggr.yaml |
cifs_session_connection_count¶
A counter used to track requests that are sent to the volumes to the node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/cifs/sessions |
connection_count |
conf/rest/9.8.0/cifs_session.yaml |
| ZAPI | cifs-session-get-iter |
cifs-session.connection-count |
conf/zapi/cdot/9.8.0/cifs_session.yaml |
The cifs_session_connection_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | Highlights | timeseries | Top $TopResources Connection Count |
| ONTAP: SMB | Highlights | timeseries | Connection Count By SMB version |
cifs_session_idle_duration¶
Specifies an ISO-8601 format of date and time used to retrieve the idle time duration in hours, minutes, and seconds format.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/cifs/sessions |
idle_duration |
conf/rest/9.8.0/cifs_session.yaml |
The cifs_session_idle_duration metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | Highlights | table | CIFS Sessions |
cifs_session_labels¶
This metric provides information about CIFSSession
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/cifs/sessions |
Harvest generated |
conf/rest/9.8.0/cifs_session.yaml |
| ZAPI | cifs-session-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/cifs_session.yaml |
The cifs_session_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | Highlights | table | CIFS Sessions |
cifs_share_labels¶
This metric provides information about CIFSShare
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/vserver/cifs/share |
Harvest generated |
conf/rest/9.6.0/cifs_share.yaml |
| ZAPI | cifs-share-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/cifs_share.yaml |
cloud_target_labels¶
This metric provides information about CloudTarget
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cloud/targets |
Harvest generated |
conf/rest/9.12.0/cloud_target.yaml |
| ZAPI | aggr-object-store-config-get-iter |
Harvest generated |
conf/zapi/cdot/9.10.0/aggr_object_store_config.yaml |
cloud_target_used¶
The amount of cloud space used by all the aggregates attached to the target, in bytes. This field is only populated for FabricPool targets. The value is recalculated once every 5 minutes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cloud/targets |
used |
conf/rest/9.12.0/cloud_target.yaml |
| ZAPI | aggr-object-store-config-get-iter |
aggr-object-store-config-info.used-space |
conf/zapi/cdot/9.10.0/aggr_object_store_config.yaml |
cluster_new_status¶
It is an indicator of the overall health status of the cluster, with a value of 1 indicating a healthy status and a value of 0 indicating an unhealthy status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/status.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/status.yaml |
The cluster_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | $Cluster |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | stat | cluster health status |
| ONTAP: Datacenter | Health | table | Cluster Health |
cluster_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_other_latency¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.latency_raw.otherUnit: microsec Type: average Base: cluster_statistics.iops_raw.other |
conf/keyperf/9.15.0/cluster.yaml |
cluster_other_ops¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_peer_labels¶
This metric provides information about ClusterPeer
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/peers |
Harvest generated |
conf/rest/9.12.0/clusterpeer.yaml |
| ZAPI | cluster-peer-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/clusterpeer.yaml |
The cluster_peer_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
cluster_peer_non_encrypted¶
This metric indicates a value of 1 if the cluster peer encryption state is none (indicating the connection is not encrypted) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/clusterpeer.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/clusterpeer.yaml |
The cluster_peer_non_encrypted metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
cluster_read_data¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_read_latency¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.latency_raw.readUnit: microsec Type: average Base: cluster_statistics.iops_raw.read |
conf/keyperf/9.15.0/cluster.yaml |
cluster_read_ops¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_schedule_labels¶
This metric provides information about ClusterSchedule
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/schedules |
Harvest generated |
conf/rest/9.6.0/clusterschedule.yaml |
The cluster_schedule_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Local Policy | table | Schedules |
cluster_software_status¶
Displays the software job with its status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/clustersoftware.yaml |
The cluster_software_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Software | table | Cluster Software Status |
cluster_software_update¶
Displays the software update phase with its status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/clustersoftware.yaml |
The cluster_software_update metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Software | table | Cluster Software Update |
cluster_software_validation¶
Displays the software pre-validation checks with their status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/clustersoftware.yaml |
The cluster_software_validation metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Software | table | Cluster Software Validation |
cluster_subsystem_new_status¶
This metric indicates a value of 1 if the subsystem health is ok (indicating the subsystem is operational) and a value of 0 for any other health status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/subsystem.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/subsystem.yaml |
The cluster_subsystem_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | subsystems |
cluster_subsystem_outstanding_alerts¶
Number of outstanding alerts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/system/health/subsystem |
outstanding_alert_count |
conf/rest/9.12.0/subsystem.yaml |
| ZAPI | diagnosis-subsystem-config-get-iter |
diagnosis-subsystem-config-info.outstanding-alert-count |
conf/zapi/cdot/9.8.0/subsystem.yaml |
The cluster_subsystem_outstanding_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | subsystems |
cluster_subsystem_suppressed_alerts¶
Number of suppressed alerts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/system/health/subsystem |
suppressed_alert_count |
conf/rest/9.12.0/subsystem.yaml |
| ZAPI | diagnosis-subsystem-config-get-iter |
diagnosis-subsystem-config-info.suppressed-alert-count |
conf/zapi/cdot/9.8.0/subsystem.yaml |
The cluster_subsystem_suppressed_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | subsystems |
cluster_tags¶
Displays tags at the cluster level.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/status.yaml |
cluster_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_total_latency¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.latency_raw.totalUnit: microsec Type: average Base: cluster_statistics.iops_raw.total |
conf/keyperf/9.15.0/cluster.yaml |
cluster_total_ops¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_write_data¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
cluster_write_latency¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.latency_raw.writeUnit: microsec Type: average Base: cluster_statistics.iops_raw.write |
conf/keyperf/9.15.0/cluster.yaml |
cluster_write_ops¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/cluster |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cluster.yaml |
copy_manager_bce_copy_count_curr¶
Current number of copy requests being processed by the Block Copy Engine.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/copy_manager |
block_copy_engine_current_copy_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/copy_manager.yaml |
| ZAPI | perf-object-get-instances copy_manager |
bce_copy_count_currUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/copy_manager.yaml |
copy_manager_kb_copied¶
Sum of kilo-bytes copied.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/copy_manager |
KB_copiedUnit: none Type: delta Base: |
conf/restperf/9.12.0/copy_manager.yaml |
| ZAPI | perf-object-get-instances copy_manager |
KB_copiedUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/copy_manager.yaml |
The copy_manager_kb_copied metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Copy Offload | timeseries | Copy Offload Data Copied |
copy_manager_ocs_copy_count_curr¶
Current number of copy requests being processed by the ONTAP copy subsystem.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/copy_manager |
ontap_copy_subsystem_current_copy_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/copy_manager.yaml |
| ZAPI | perf-object-get-instances copy_manager |
ocs_copy_count_currUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/copy_manager.yaml |
copy_manager_sce_copy_count_curr¶
Current number of copy requests being processed by the System Continuous Engineering.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/copy_manager |
system_continuous_engineering_current_copy_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/copy_manager.yaml |
| ZAPI | perf-object-get-instances copy_manager |
sce_copy_count_currUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/copy_manager.yaml |
copy_manager_spince_copy_count_curr¶
Current number of copy requests being processed by the SpinCE.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/copy_manager |
spince_current_copy_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/copy_manager.yaml |
| ZAPI | perf-object-get-instances copy_manager |
spince_copy_count_currUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/copy_manager.yaml |
disk_busy¶
The utilization percent of the disk
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
disk_busy_percentUnit: percent Type: percent Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_busyUnit: percent Type: percent Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
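The disk utilization counters above use Type: percent with base_for_disk_busy as the Base. A hedged sketch of how a percent-type counter is commonly cooked from two consecutive raw samples follows; the function and sample values are illustrative, not Harvest source.

```python
# Illustrative percent-type calculation: busy delta over base delta, scaled to 100.
def cooked_percent(raw_t0: float, raw_t1: float,
                   base_t0: float, base_t1: float) -> float:
    delta_base = base_t1 - base_t0
    return (raw_t1 - raw_t0) / delta_base * 100 if delta_base else 0.0

# Example: the disk was busy for 12 of the last 60 base units -> 20% utilization.
print(cooked_percent(raw_t0=480, raw_t1=492, base_t0=2_400, base_t1=2_460))  # 20.0
```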
disk_bytes_per_sector¶
Bytes per sector.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
bytes_per_sector |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-inventory-info.bytes-per-sector |
conf/zapi/cdot/9.8.0/disk.yaml |
disk_capacity¶
Disk capacity in MB
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
capacityUnit: mb Type: raw Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_capacityUnit: mb Type: raw Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_cp_reads¶
Number of disk read operations initiated each second for consistency point processing
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_io_queued¶
Number of I/Os queued to the disk but not yet issued
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_labels¶
This metric provides information about Disk
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
Harvest generated |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/disk.yaml |
The disk_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Disk | Highlights | stat | Total Disks |
| ONTAP: Disk | Highlights | stat | Failed Disks |
| ONTAP: Disk | List of Disks | table | Disks in Cluster |
| ONTAP: Health | Disks | table | Disks Issues |
disk_new_status¶
This metric indicates a value of 1 if the disk is not in an outage (i.e., the outage label is empty) and a value of 0 if the disk is in an outage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/disk.yaml |
disk_power_on_hours¶
Hours powered on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
stats.power_on_hours |
conf/rest/9.12.0/disk.yaml |
disk_sectors¶
Number of sectors on the disk.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
sector_count |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-inventory-info.capacity-sectors |
conf/zapi/cdot/9.8.0/disk.yaml |
The disk_sectors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | List of Disks | table | Disks in Cluster |
disk_stats_average_latency¶
Average I/O latency across all active paths, in milliseconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
stats.average_latency |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-stats-info.average-latency |
conf/zapi/cdot/9.8.0/disk.yaml |
The disk_stats_average_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | List of Disks | table | Disks in Cluster |
disk_stats_io_kbps¶
Total disk throughput in KBps across all active paths.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/disk |
disk_io_kbps_total |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-stats-info.disk-io-kbps |
conf/zapi/cdot/9.8.0/disk.yaml |
The disk_stats_io_kbps metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | List of Disks | table | Disks in Cluster |
disk_stats_sectors_read¶
Number of sectors read.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/disk |
sectors_read |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-stats-info.sectors-read |
conf/zapi/cdot/9.8.0/disk.yaml |
disk_stats_sectors_written¶
Number of sectors written.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/disk |
sectors_written |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-stats-info.sectors-written |
conf/zapi/cdot/9.8.0/disk.yaml |
disk_total_data¶
Total throughput for user operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_total_transfers¶
Total number of disk operations involving data transfer initiated per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_transfer_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_transfersUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_uptime¶
Number of seconds the drive has been powered on
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
stats.power_on_hours, 60, 60 |
conf/rest/9.12.0/disk.yaml |
| ZAPI | storage-disk-get-iter |
storage-disk-info.disk-stats-info.power-on-time-interval |
conf/zapi/cdot/9.8.0/disk.yaml |
The disk_uptime metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | List of Disks | table | Disks in Cluster |
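The REST Metric column for this metric lists stats.power_on_hours together with two factors of 60, which suggests the exported uptime is the power-on time converted from hours to seconds. A small sketch of that assumed conversion:

```python
# Assumed conversion implied by "stats.power_on_hours, 60, 60".
power_on_hours = 18_500                        # example value from api/storage/disks
disk_uptime_seconds = power_on_hours * 60 * 60
print(disk_uptime_seconds)                     # 66600000
```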
disk_usable_size¶
Usable size of each disk, in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/disks |
usable_size |
conf/rest/9.12.0/disk.yaml |
disk_user_read_blocks¶
Number of blocks transferred for user read operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_read_chain¶
Average number of blocks transferred in each user read operation
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_chainUnit: none Type: average Base: user_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_chainUnit: none Type: average Base: user_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_read_latency¶
Average latency per block in microseconds for user read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_write_blocks¶
Number of blocks transferred for user write operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_write_chain¶
Average number of blocks transferred in each user write operation
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_chainUnit: none Type: average Base: user_write_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_chainUnit: none Type: average Base: user_writes |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_write_latency¶
Average latency per block in microseconds for user write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
disk_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_writesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
ems_destination_labels¶
This metric provides information about EmsDestination
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/support/ems/destinations |
Harvest generated |
conf/rest/9.12.0/ems_destination.yaml |
| ZAPI | ems-event-notification-destination-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/ems_destination.yaml |
The ems_destination_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
ems_events¶
Indicates EMS events that have occurred in ONTAP, as configured in ems.yaml.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/support/ems/events |
Harvest generated |
conf/ems/9.6.0/ems.yaml |
The ems_events metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | table | Active Emergency EMS |
environment_sensor_average_ambient_temperature¶
Average temperature of all ambient sensors for node in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_average_ambient_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_average_fan_speed¶
Average fan speed for node in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_average_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_average_temperature¶
Average temperature of all non-ambient sensors for node in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_average_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Highlights | timeseries | Top $TopResources Nodes by Average Temperature |
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_max_fan_speed¶
Maximum fan speed for node in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_max_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Max Node Fan Speed |
| ONTAP: Power | Highlights | stat | Max Node Fan Speed |
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_max_temperature¶
Maximum temperature of all non-ambient sensors for node in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_max_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Max Node Temp |
| ONTAP: Power | Highlights | stat | Max Node Temp |
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_min_ambient_temperature¶
Minimum temperature of all ambient sensors for node in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_min_ambient_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_min_fan_speed¶
Minimum fan speed for node in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_min_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_min_temperature¶
Minimum temperature of all non-ambient sensors for node in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_min_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_power¶
Power consumed by a node in Watts.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_power metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Total Power |
| ONTAP: Datacenter | Power and Temperature | stat | Average Power/Used_TB |
| ONTAP: Datacenter | Power and Temperature | stat | Average IOPs/Watt |
| ONTAP: Datacenter | Power and Temperature | timeseries | Total Power Consumed |
| ONTAP: Power | Highlights | stat | Total Power |
| ONTAP: Power | Highlights | stat | Average Power/Used_TB |
| ONTAP: Power | Highlights | stat | Average IOPs/Watt |
| ONTAP: Power | Highlights | timeseries | Total Power Consumed |
| ONTAP: Power | Highlights | timeseries | Average Power Consumption (kWh) Over Last Hour |
| ONTAP: Power | Highlights | timeseries | Top $TopResources Nodes by Power Consumed |
| ONTAP: Power | Nodes | table | Storage Nodes |
environment_sensor_status¶
This metric indicates a value of 1 if the sensor threshold state is normal (indicating the sensor is operating within normal parameters) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/sensor.yaml |
environment_sensor_threshold_value¶
Provides the sensor reading.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/sensors |
value |
conf/rest/9.12.0/sensor.yaml |
| ZAPI | environment-sensors-get-iter |
environment-sensors-info.threshold-sensor-value |
conf/zapi/cdot/9.8.0/sensor.yaml |
The environment_sensor_threshold_value metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Sensor | table | Sensor Issues |
| ONTAP: Power | Sensor Problems | table | Sensor Problems |
ethernet_switch_port_receive_discards¶
Total number of discarded packets.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
receive_raw.discardsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_receive_discards metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Drops |
ethernet_switch_port_receive_errors¶
Number of packet errors.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
receive_raw.errorsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_receive_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Errors |
ethernet_switch_port_receive_packets¶
Total packet count.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
receive_raw.packetsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_receive_packets metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Receive Packets |
ethernet_switch_port_transmit_discards¶
Total number of discarded packets.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
transmit_raw.discardsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_transmit_discards metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Drops |
ethernet_switch_port_transmit_errors¶
Number of packet errors.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
transmit_raw.errorsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_transmit_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Errors |
ethernet_switch_port_transmit_packets¶
Total packet count.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ethernet/switch/ports |
transmit_raw.packetsUnit: Type: delta Base: |
conf/keyperf/9.15.0/ethernet_switch_port.yaml |
The ethernet_switch_port_transmit_packets metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Switch | Traffic | timeseries | Top $TopResources Interface Transmit Packets |
export_rule_labels¶
This metric provides information about ExportRule
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/vserver/export-policy/rule |
Harvest generated |
conf/rest/9.8.0/exports.yaml |
external_service_op_num_not_found_responses¶
Number of 'Not Found' responses for calls to this operation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_not_found_responsesUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_not_found_responses metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of 'Not Found' Responses Per Operation |
external_service_op_num_request_failures¶
A cumulative count of all request failures.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_request_failuresUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_request_failures metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of Request Failures |
external_service_op_num_requests_sent¶
Number of requests sent to this service.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_requests_sentUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_requests_sent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of Request Sent |
external_service_op_num_responses_received¶
Number of responses received from the server (does not include timeouts).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_responses_receivedUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_responses_received metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of Responses Received |
external_service_op_num_successful_responses¶
Number of successful responses to this operation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_successful_responsesUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_successful_responses metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of Successful Responses |
external_service_op_num_timeouts¶
Number of times requests to the server for this operation timed out, meaning no response was received in a given time period.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
num_timeoutsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_num_timeouts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Number of Timeouts |
external_service_op_request_latency¶
Average latency of requests for operations of this type on this server.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
request_latencyUnit: microsec Type: average Base: num_requests_sent |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
The external_service_op_request_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: External Service Operation | Highlights | timeseries | Top $TopResources Request Latency to Server |
external_service_op_request_latency_hist¶
This histogram holds the latency values for requests of this operation to the specified server.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances external_service_op |
request_latency_histUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/external_service_operation.yaml |
fabricpool_average_latency¶
This counter is deprecated. Average latencies measured during various phases of command execution. The execution-start latency represents the average time taken to start executing an operation. The request-prepare latency represents the average time taken to prepare the complete request that needs to be sent to the server. The send latency represents the average time taken to send requests to the server. The execution-start-to-send-complete latency represents the average time taken to send an operation out since its execution started. The execution-start-to-first-byte-received latency represents the average time taken to receive the first byte of a response since the command's request execution started. These counters can be used to identify performance bottlenecks within the object store client module.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances object_store_client_op |
average_latencyUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml |
fabricpool_cloud_bin_op_latency_average¶
Cloud bin operation latency average in milliseconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_comp_aggr_vol_bin |
cloud_bin_op_latency_averageUnit: millisec Type: raw Base: |
conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml |
| ZAPI | perf-object-get-instances wafl_comp_aggr_vol_bin |
cloud_bin_op_latency_averageUnit: millisec Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml |
The fabricpool_cloud_bin_op_latency_average metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Object Storage | timeseries | Top $TopResources Volumes by Object Storage GET Latency |
| ONTAP: Volume | Object Storage | timeseries | Top $TopResources Volumes by Object Storage PUT Latency |
fabricpool_cloud_bin_operation¶
Cloud bin operation counters.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_comp_aggr_vol_bin |
cloud_bin_opUnit: none Type: delta Base: |
conf/restperf/9.12.0/wafl_comp_aggr_vol_bin.yaml |
| ZAPI | perf-object-get-instances wafl_comp_aggr_vol_bin |
cloud_bin_operationUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/wafl_comp_aggr_vol_bin.yaml |
The fabricpool_cloud_bin_operation metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Object Storage | timeseries | Top $TopResources Volumes by Object Storage GET Request Count |
| ONTAP: Volume | Object Storage | timeseries | Top $TopResources Volumes by Object Storage PUT Request Count |
| ONTAP: Volume | Object Storage | table | Top $TopResources Volumes by Object Storage Requests |
fabricpool_get_throughput_bytes¶
This counter is deprecated. Counter that indicates the throughput for the GET command in bytes per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances object_store_client_op |
get_throughput_bytesUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml |
fabricpool_put_throughput_bytes¶
This counter is deprecated. Counter that indicates the throughput for the PUT command in bytes per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances object_store_client_op |
put_throughput_bytesUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml |
fabricpool_stats¶
This counter is deprecated. Counter that indicates the number of object store operations sent, and their success and failure counts. The objstore_client_op_name array indicates the operation name, such as PUT, GET, etc. The objstore_client_op_stats_name array contains the total number of operations and their success and failure counters for each operation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances object_store_client_op |
statsUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml |
fabricpool_throughput_ops¶
Counter that indicates the throughput for commands in ops per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances object_store_client_op |
throughput_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/object_store_client_op.yaml |
fcp_avg_other_latency¶
Average latency for operations other than read and write
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Send Latency |
fcp_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Receive Latency |
fcp_discarded_frames_count¶
Number of discarded frames.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
discarded_frames_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
discarded_frames_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_discarded_frames_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_fabric_connected_speed¶
The negotiated data rate between the target FC port and the fabric in gigabits per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/fc/ports |
fabric.connected_speed |
conf/rest/9.6.0/fcp.yaml |
The fcp_fabric_connected_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | table | FC ports with Fabric detail |
fcp_int_count¶
Number of interrupts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
interrupt_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
int_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_int_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission interrupts |
fcp_invalid_crc¶
Number of invalid cyclic redundancy checks (CRC count)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
invalid.crcUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
invalid_crcUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_invalid_crc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission interrupts |
fcp_invalid_transmission_word¶
Number of invalid transmission words
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
invalid.transmission_wordUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
invalid_transmission_wordUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_invalid_transmission_word metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission interrupts |
fcp_isr_count¶
Number of interrupt responses
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
isr.countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
isr_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_isr_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission interrupts |
fcp_labels¶
This metric provides information about FCP
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/fc/ports |
Harvest generated |
conf/rest/9.6.0/fcp.yaml |
The fcp_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Network | FibreChannel | table | FC ports with Fabric detail |
fcp_lif_avg_latency¶
Average latency for FCP operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | FCP Frontend | stat | FCP Latency |
| ONTAP: Node | FCP Frontend | timeseries | FCP Average Latency by Port / LIF |
| ONTAP: SVM | FCP | stat | SVM FCP Average Latency |
| ONTAP: SVM | FCP | timeseries | SVM FCP Average Latency |
fcp_lif_avg_other_latency¶
Average latency for operations other than read and write
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_avg_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | timeseries | SVM FCP Average Latency |
fcp_lif_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | stat | SVM FCP Average Read Latency |
| ONTAP: SVM | FCP | timeseries | SVM FCP Average Latency |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Average Read Latency |
fcp_lif_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | stat | SVM FCP Average Write Latency |
| ONTAP: SVM | FCP | timeseries | SVM FCP Average Latency |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Average Write Latency |
fcp_lif_other_ops¶
Number of operations that are not read or write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | timeseries | SVM FCP IOPs |
fcp_lif_read_data¶
Amount of data read from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | stat | SVM FCP Read Throughput |
| ONTAP: SVM | FCP | timeseries | SVM FCP Throughput |
| ONTAP: SVM | FCP | timeseries | Top $TopResources FCP LIFs by Send Throughput |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Read Throughput |
fcp_lif_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | stat | SVM FCP Read IOPs |
| ONTAP: SVM | FCP | timeseries | SVM FCP IOPs |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Read IOPs |
fcp_lif_total_ops¶
Total number of operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | FCP Frontend | stat | FCP IOPs |
| ONTAP: Node | FCP Frontend | timeseries | FCP IOPs by Port / LIF |
| ONTAP: SVM | FCP | stat | SVM FCP IOPs |
fcp_lif_write_data¶
Amount of data written to the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | FCP Frontend | timeseries | FCP Throughput by Port / LIF |
| ONTAP: SVM | FCP | stat | SVM FCP Throughput |
| ONTAP: SVM | FCP | stat | SVM FCP Write Throughput |
| ONTAP: SVM | FCP | timeseries | SVM FCP Throughput |
| ONTAP: SVM | FCP | timeseries | Top $TopResources FCP LIFs by Receive Throughput |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Write Throughput |
fcp_lif_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp_lif |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp_lif.yaml |
| ZAPI | perf-object-get-instances fcp_lif |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp_lif.yaml |
The fcp_lif_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | FCP | stat | SVM FCP Write IOPs |
| ONTAP: SVM | FCP | timeseries | SVM FCP IOPs |
| ONTAP: SVM | NVMe/FC | stat | SVM FCP Write IOPs |
fcp_link_down¶
Number of times the Fibre Channel link was lost
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
link.downUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
link_downUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_link_down metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Link Down |
fcp_link_failure¶
Number of link failures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
link_failureUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
link_failureUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_link_failure metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Link Failure |
fcp_link_up¶
Number of times the Fibre Channel link was established
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
link.upUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
link_upUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_loss_of_signal¶
Number of times this port lost signal
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
loss_of_signalUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
loss_of_signalUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_loss_of_signal metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_loss_of_sync¶
Number of times this port lost sync
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
loss_of_syncUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
loss_of_syncUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_loss_of_sync metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_max_speed¶
The maximum speed supported by the FC port in gigabits per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/fc/ports |
speed.maximum |
conf/rest/9.6.0/fcp.yaml |
The fcp_max_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | table | FC ports with Fabric detail |
fcp_nvmf_avg_other_latency¶
Average latency for operations other than read and write (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_other_latencyUnit: microsec Type: average Base: nvmf.other_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_other_latencyUnit: microsec Type: average Base: nvmf_other_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_avg_read_latency¶
Average latency for read operations (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_read_latencyUnit: microsec Type: average Base: nvmf.read_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_read_latencyUnit: microsec Type: average Base: nvmf_read_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | NVMe/FC | timeseries | Top $TopResources FCP_NVMFs by Send Latency |
fcp_nvmf_avg_remote_other_latency¶
Average latency for remote operations other than read and write (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_remote_other_latencyUnit: microsec Type: average Base: nvmf_remote.other_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_remote_other_latencyUnit: microsec Type: average Base: nvmf_remote_other_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_avg_remote_read_latency¶
Average latency for remote read operations (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_remote_read_latencyUnit: microsec Type: average Base: nvmf_remote.read_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_remote_read_latencyUnit: microsec Type: average Base: nvmf_remote_read_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_avg_remote_write_latency¶
Average latency for remote write operations (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_remote_write_latencyUnit: microsec Type: average Base: nvmf_remote.write_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_remote_write_latencyUnit: microsec Type: average Base: nvmf_remote_write_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_avg_write_latency¶
Average latency for write operations (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.average_write_latencyUnit: microsec Type: average Base: nvmf.write_ops |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_avg_write_latencyUnit: microsec Type: average Base: nvmf_write_ops |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | NVMe/FC | timeseries | Top $TopResources FCP_NVMFs by Receive Latency |
fcp_nvmf_caw_data¶
Amount of CAW data sent to the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.caw_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_caw_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_caw_ops¶
Number of FC-NVMe CAW operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.caw_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_caw_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_command_slots¶
Number of command slots that have been used by initiators logging into this port. This shows the command fan-in on the port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.command_slotsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_command_slotsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_other_ops¶
Number of NVMF operations that are not read or write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_read_data¶
Amount of data read from the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Read Throughput |
| ONTAP: Network | NVMe/FC | table | NVMe/FC ports |
| ONTAP: Network | NVMe/FC | timeseries | Top $TopResources FCP_NVMFs by Send Throughput |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources FC Ports by Throughput |
fcp_nvmf_read_ops¶
Number of FC-NVMe read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Read Throughput |
fcp_nvmf_remote_caw_data¶
Amount of remote CAW data sent to the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.caw_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_caw_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_caw_ops¶
Number of FC-NVMe remote CAW operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.caw_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_caw_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_other_ops¶
Number of NVMF remote operations that are not read or write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_read_data¶
Amount of remote data read from the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_read_ops¶
Number of FC-NVMe remote read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_total_data¶
Amount of remote FC-NVMe traffic to and from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_total_ops¶
Total number of remote FC-NVMe operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_write_data¶
Amount of remote data written to the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_remote_write_ops¶
Number of FC-NVMe remote write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf_remote.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_remote_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
fcp_nvmf_total_data¶
Amount of FC-NVMe traffic to and from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_total_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Throughput |
fcp_nvmf_total_ops¶
Total number of FC-NVMe operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Throughput |
fcp_nvmf_write_data¶
Amount of data written to the storage system (FC-NVMe)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Write Throughput |
| ONTAP: Network | NVMe/FC | table | NVMe/FC ports |
| ONTAP: Network | NVMe/FC | timeseries | Top $TopResources FCP_NVMFs by Receive Throughput |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources FC Ports by Throughput |
fcp_nvmf_write_ops¶
Number of FC-NVMe write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
nvmf.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
nvmf_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/fcp.yaml |
The fcp_nvmf_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | FC Write Throughput |
fcp_other_ops¶
Number of operations that are not read or write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_prim_seq_err¶
Number of primitive sequence errors
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
primitive_seq_errUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
prim_seq_errUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_prim_seq_err metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_queue_full¶
Number of times a queue full condition occurred.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
queue_fullUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
queue_fullUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_queue_full metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_read_data¶
Amount of data read from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | table | FC ports |
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Send Throughput |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources FC Ports by Throughput |
fcp_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_reset_count¶
Number of physical port resets
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
reset_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
reset_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_shared_int_count¶
Number of shared interrupts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
shared_interrupt_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
shared_int_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_spurious_int_count¶
Number of spurious interrupts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
spurious_interrupt_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
spurious_int_countUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_spurious_int_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission interrupts |
fcp_threshold_full¶
Number of times the total number of outstanding commands on the port exceeded the threshold supported by this port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
threshold_fullUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
threshold_fullUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_threshold_full metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | timeseries | FCPs Transmission errors |
fcp_total_data¶
Amount of FCP traffic to and from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_total_ops¶
Total number of FCP operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcp_write_data¶
Amount of data written to the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
The fcp_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | FibreChannel | table | FC ports |
| ONTAP: Network | FibreChannel | timeseries | Top $TopResources FCPs by Receive Throughput |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources FC Ports by Throughput |
fcp_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcp |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/fcp.yaml |
| ZAPI | perf-object-get-instances fcp_port |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcp.yaml |
fcvi_firmware_invalid_crc_count¶
Firmware reported invalid CRC count
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.invalid_crc_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_invalid_crcUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_invalid_crc_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Invalid CRC Count |
fcvi_firmware_invalid_transmit_word_count¶
Firmware reported invalid transmit word count
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.invalid_transmit_word_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_invalid_xmit_wordsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_invalid_transmit_word_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Invalid Transmit Word Count |
fcvi_firmware_link_failure_count¶
Firmware reported link failure count
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.link_failure_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_link_failureUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_link_failure_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Link Failure Count |
fcvi_firmware_loss_of_signal_count¶
Firmware reported loss of signal count
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.loss_of_signal_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_loss_of_signalUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_loss_of_signal_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Loss of Signal Count |
fcvi_firmware_loss_of_sync_count¶
Firmware reported loss of sync count
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.loss_of_sync_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_loss_of_syncUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_loss_of_sync_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Loss of Sync Count |
fcvi_firmware_systat_discard_frames¶
Firmware reported SyStatDiscardFrames value
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
firmware.systat.discard_framesUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
fw_SyStatDiscardFramesUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_firmware_systat_discard_frames metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | SyStatDiscardFrames Value |
fcvi_hard_reset_count¶
Number of times a hard reset of the FCVI adapter was issued.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
hard_reset_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
hard_reset_cntUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_hard_reset_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Hard Reset Count |
fcvi_rdma_write_avg_latency¶
Average RDMA write I/O latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
rdma.write_average_latencyUnit: microsec Type: average Base: rdma.write_ops |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
rdma_write_avg_latencyUnit: microsec Type: average Base: rdma_write_ops |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_rdma_write_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | Highlights | stat | FCVI Write Latency |
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Write Latency |
fcvi_rdma_write_ops¶
Number of RDMA write I/Os issued per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
rdma.write_opsUnit: none Type: rate Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
rdma_write_opsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_rdma_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | Highlights | stat | FCVI Write IOPs |
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Write IOPs |
fcvi_rdma_write_throughput¶
RDMA write throughput in bytes per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
rdma.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
rdma_write_throughputUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_rdma_write_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | Highlights | stat | FCVI Write Throughput |
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Write Throughput |
fcvi_soft_reset_count¶
Number of times a soft reset of the FCVI adapter was issued.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/fcvi |
soft_reset_countUnit: none Type: delta Base: |
conf/restperf/9.12.0/fcvi.yaml |
| ZAPI | perf-object-get-instances fcvi |
soft_reset_cntUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fcvi.yaml |
The fcvi_soft_reset_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FCVI | timeseries | Soft Reset Count |
flashcache_accesses¶
External cache accesses per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
accessesUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
accessesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_disk_reads_replaced¶
Estimated number of disk reads per second replaced by cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
disk_reads_replacedUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
disk_reads_replacedUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
The flashcache_disk_reads_replaced metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Flash Cache |
flashcache_evicts¶
Number of blocks evicted from the external cache to make room for new blocks
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
evictsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
evictsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit¶
Number of WAFL buffers served off the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.totalUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hitUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit_directory¶
Number of directory buffers served off the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.directoryUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hit_directoryUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit_indirect¶
Number of indirect file buffers served off the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.indirectUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hit_indirectUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit_metadata_file¶
Number of metadata file buffers served off the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.metadata_fileUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hit_metadata_fileUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit_normal_lev0¶
Number of normal level 0 WAFL buffers served off the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.normal_level_zeroUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hit_normal_lev0Unit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_hit_percent¶
External cache hit rate
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
hit.percentUnit: percent Type: average Base: accesses |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
hit_percentUnit: percent Type: percent Base: accesses |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
The flashcache_hit_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Flash Cache |
flashcache_inserts¶
Number of WAFL buffers inserted into the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
insertsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
insertsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_invalidates¶
Number of blocks invalidated in the external cache
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
invalidatesUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
invalidatesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_miss¶
External cache misses
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
miss.totalUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
missUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_miss_directory¶
External cache misses accessing directory buffers
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
miss.directoryUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
miss_directoryUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_miss_indirect¶
External cache misses accessing indirect file buffers
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
miss.indirectUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
miss_indirectUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_miss_metadata_file¶
External cache misses accessing metadata file buffers
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
miss.metadata_fileUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
miss_metadata_fileUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_miss_normal_lev0¶
External cache misses accessing normal level 0 buffers
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
miss.normal_level_zeroUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
miss_normal_lev0Unit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashcache_usage¶
Percentage of blocks in external cache currently containing valid data
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/external_cache |
usageUnit: percent Type: raw Base: |
conf/restperf/9.12.0/ext_cache_obj.yaml |
| ZAPI | perf-object-get-instances ext_cache_obj |
usageUnit: percent Type: raw Base: |
conf/zapiperf/cdot/9.8.0/ext_cache_obj.yaml |
flashpool_cache_stats¶
Automated Working-set Analyzer (AWA) per-interval pseudo cache statistics for the most recent intervals. The number of intervals treated as recent is defined by CM_WAFL_HYAS_INT_DIS_CNT. This array is a table whose fields correspond to the enum type hyas_cache_stat_type_t.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_sizer |
cache_statsUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_sizer.yaml |
| ZAPI | perf-object-get-instances wafl_hya_sizer |
cache_statsUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_sizer.yaml |
flashpool_evict_destage_rate¶
Number of blocks destaged per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
evict_destage_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
evict_destage_rateUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_evict_destage_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Cache Removals |
flashpool_evict_remove_rate¶
Number of blocks freed per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
evict_remove_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
evict_remove_rateUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_evict_remove_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Cache Removals |
flashpool_hya_read_hit_latency_average¶
Average RAID I/O latency on read hits.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
hya_read_hit_latency_averageUnit: millisec Type: average Base: hya_read_hit_latency_count |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
hya_read_hit_latency_averageUnit: millisec Type: average Base: hya_read_hit_latency_count |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_hya_read_hit_latency_average metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by SSD and HDD Latency |
flashpool_hya_read_miss_latency_average¶
Average read miss latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
hya_read_miss_latency_averageUnit: millisec Type: average Base: hya_read_miss_latency_count |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
hya_read_miss_latency_averageUnit: millisec Type: average Base: hya_read_miss_latency_count |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_hya_read_miss_latency_average metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by SSD and HDD Latency |
flashpool_hya_write_hdd_latency_average¶
Average write latency to HDD.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
hya_write_hdd_latency_averageUnit: millisec Type: average Base: hya_write_hdd_latency_count |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
hya_write_hdd_latency_averageUnit: millisec Type: average Base: hya_write_hdd_latency_count |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flashpool_hya_write_ssd_latency_average¶
Average RAID I/O latency on writes to SSD.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
hya_write_ssd_latency_averageUnit: millisec Type: average Base: hya_write_ssd_latency_count |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
hya_write_ssd_latency_averageUnit: millisec Type: average Base: hya_write_ssd_latency_count |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flashpool_read_cache_ins_rate¶
Cache insert rate in blocks per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
read_cache_insert_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
read_cache_ins_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_read_cache_ins_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Cache Inserts |
flashpool_read_ops_replaced¶
Number of HDD read operations replaced by SSD reads per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
read_ops_replacedUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
read_ops_replacedUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_read_ops_replaced metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Flash Pool Activity |
| ONTAP: Disk | Disk Utilization | timeseries | Flash Pool |
flashpool_read_ops_replaced_percent¶
Percentage of HDD read operations replaced by SSD reads.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
read_ops_replaced_percentUnit: percent Type: percent Base: read_ops_total |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
read_ops_replaced_percentUnit: percent Type: percent Base: read_ops_total |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_read_ops_replaced_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Flash Pool Activity |
| ONTAP: Disk | Disk Utilization | timeseries | Flash Pool |
flashpool_ssd_available¶
Total SSD blocks available.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
ssd_availableUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
ssd_availableUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flashpool_ssd_read_cached¶
Total read cached SSD blocks.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
ssd_read_cachedUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
ssd_read_cachedUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_ssd_read_cached metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Flash Pool Capacity Used |
flashpool_ssd_total¶
Total SSD blocks.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
ssd_totalUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
ssd_totalUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_ssd_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Flash Pool Capacity Used |
flashpool_ssd_total_used¶
Total SSD blocks used.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
ssd_total_usedUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
ssd_total_usedUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flashpool_ssd_write_cached¶
Total write cached SSD blocks.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
ssd_write_cachedUnit: none Type: raw Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
ssd_write_cachedUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_ssd_write_cached metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Flash Pool Capacity Used |
flashpool_wc_write_blks_total¶
Number of write-cache blocks written per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
wc_write_blocks_totalUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
wc_write_blks_totalUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
The flashpool_wc_write_blks_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Flash Pool | timeseries | Top $TopResources Aggregates by Cache Inserts |
flashpool_write_blks_replaced¶
Number of HDD write blocks replaced by SSD writes per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
write_blocks_replacedUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
write_blks_replacedUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flashpool_write_blks_replaced_percent¶
Percentage of blocks overwritten to write-cache among all disk writes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl_hya_per_aggregate |
write_blocks_replaced_percentUnit: percent Type: average Base: estimated_write_blocks_total |
conf/restperf/9.12.0/wafl_hya_per_aggr.yaml |
| ZAPI | perf-object-get-instances wafl_hya_per_aggr |
write_blks_replaced_percentUnit: percent Type: average Base: est_write_blks_total |
conf/zapiperf/cdot/9.8.0/wafl_hya_per_aggr.yaml |
flexcache_blocks_requested_from_client¶
Total blocks requested by the client.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.flexcache_raw.client_requested_blocksUnit: Type: delta Base: |
conf/keyperf/9.15.0/flexcache.yaml |
| StatPerf | flexcache_per_volume |
blocks_requested_from_clientUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
blocks_requested_from_clientUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_blocks_requested_from_client metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Highlights | timeseries | Top $TopResources Blocks requested from Client |
flexcache_blocks_retrieved_from_origin¶
Blocks retrieved from the origin in case of a cache miss. Dividing this value by the raw client_requested_blocks and multiplying by 100 yields the cache miss percentage (see the query sketch below).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.flexcache_raw.cache_miss_blocksUnit: Type: delta Base: |
conf/keyperf/9.15.0/flexcache.yaml |
| StatPerf | flexcache_per_volume |
blocks_retrieved_from_originUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
blocks_retrieved_from_originUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_blocks_retrieved_from_origin metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Highlights | timeseries | Top $TopResources Blocks requested from Origin |
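To make the cache-miss arithmetic above concrete, the two block counters can be divided directly, since both cover the same polling interval. A minimal PromQL sketch, assuming both series share the same volume labels; illustrative only:

```promql
# FlexCache cache-miss percentage per exported instance
100 * flexcache_blocks_retrieved_from_origin / flexcache_blocks_requested_from_client
```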
flexcache_evict_rw_cache_skipped_reason_disconnected¶
Total number of read-write cache evict operations skipped because the cache is disconnected.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
evict_rw_cache_skipped_reason_disconnectedUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
evict_rw_cache_skipped_reason_disconnectedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_evict_rw_cache_skipped_reason_disconnected metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Evict | timeseries | Top $TopResources Read-Write Cache Evictions Skipped Due to Cache Disconnection |
flexcache_evict_skipped_reason_config_noent¶
Total number of evict operations skipped because the cache configuration is not available.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
evict_skipped_reason_config_noentUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
evict_skipped_reason_config_noentUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_evict_skipped_reason_config_noent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Evict | timeseries | Top $TopResources Evictions Skipped Due to Configuration Issues |
flexcache_evict_skipped_reason_disconnected¶
Total number of evict operations skipped because the cache is disconnected.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
evict_skipped_reason_disconnectedUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
evict_skipped_reason_disconnectedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_evict_skipped_reason_disconnected metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Evict | timeseries | Top $TopResources Evictions Skipped Due to Cache Disconnection |
flexcache_evict_skipped_reason_offline¶
Total number of evict operations skipped because the cache volume is offline.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
evict_skipped_reason_offlineUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
evict_skipped_reason_offlineUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_evict_skipped_reason_offline metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Evict | timeseries | Top $TopResources Evictions Skipped When Cache is Offline |
| ONTAP: FlexCache | Invalidate | timeseries | Top $TopResources Invalidate Operations Skipped When Cache Volume is Offline |
flexcache_invalidate_skipped_reason_config_noent¶
Total number of invalidate operations skipped because the cache configuration is not available.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
invalidate_skipped_reason_config_noentUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
invalidate_skipped_reason_config_noentUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_invalidate_skipped_reason_config_noent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Invalidate | timeseries | Top $TopResources Invalidate Operations Skipped Due to Unavailable Cache Configuration |
flexcache_invalidate_skipped_reason_disconnected¶
Total number of invalidate operations skipped because the cache is disconnected.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
invalidate_skipped_reason_disconnectedUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
invalidate_skipped_reason_disconnectedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_invalidate_skipped_reason_disconnected metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Invalidate | timeseries | Top $TopResources Invalidate Operations Skipped Due to Cache Disconnection |
flexcache_invalidate_skipped_reason_offline¶
Total number of invalidate operations skipped because the cache volume is offline.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
invalidate_skipped_reason_offlineUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
invalidate_skipped_reason_offlineUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
flexcache_miss_percent¶
This metric represents the percentage of block requests from a client that resulted in a "miss" in the FlexCache. A "miss" occurs when the requested data is not found in the cache and has to be retrieved from the origin volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
blocks_retrieved_from_origin, blocks_requested_from_clientUnit: Type: Base: |
conf/keyperf/9.15.0/flexcache.yaml |
| StatPerf | flexcache_per_volume |
blocks_retrieved_from_origin, blocks_requested_from_clientUnit: Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | flexcache_per_volume |
blocks_retrieved_from_origin, blocks_requested_from_clientUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_miss_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Highlights | timeseries | Top $TopResources Cache Miss percent |
flexcache_nix_retry_skipped_reason_initiator_retrieve¶
Total number of retry nix operations skipped because the initiator is a retrieve operation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
nix_retry_skipped_reason_initiator_retrieveUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
nix_retry_skipped_reason_initiator_retrieveUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_nix_retry_skipped_reason_initiator_retrieve metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Nix | timeseries | Top $TopResources Retry Nix Operations Skipped Due to Retrieve Operation Initiator |
flexcache_nix_skipped_reason_config_noent¶
Total number of nix operations skipped because the cache configuration is not available.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
nix_skipped_reason_config_noentUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
nix_skipped_reason_config_noentUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_nix_skipped_reason_config_noent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Nix | timeseries | Top $TopResources Nix Operations Skipped Due to Unavailable Cache Configuration |
flexcache_nix_skipped_reason_disconnected¶
Total number of nix operations skipped because the cache is disconnected.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
nix_skipped_reason_disconnectedUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
nix_skipped_reason_disconnectedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_nix_skipped_reason_disconnected metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Nix | timeseries | Top $TopResources Nix Operations Skipped Due to Cache Disconnection |
flexcache_nix_skipped_reason_in_progress¶
Total nix operations skipped because of an in-progress nix.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
nix_skipped_reason_in_progressUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
nix_skipped_reason_in_progressUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_nix_skipped_reason_in_progress metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Nix | timeseries | Top $TopResources Nix Operations Skipped Due to In-Progress Nix Operation |
flexcache_nix_skipped_reason_offline¶
Total number of nix operations skipped because the cache volume is offline.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
nix_skipped_reason_offlineUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
nix_skipped_reason_offlineUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_nix_skipped_reason_offline metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Nix | timeseries | Top $TopResources Nix Operations Skipped When Cache Volume is Offline |
flexcache_reconciled_data_entries¶
Total number of reconciled data entries at the cache side.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
reconciled_data_entriesUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
reconciled_data_entriesUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_reconciled_data_entries metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Reconcile Metrics | timeseries | Top $TopResources Reconciled data entries |
flexcache_reconciled_lock_entries¶
Total number of reconciled lock entries at the cache side.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| StatPerf | flexcache_per_volume |
reconciled_lock_entriesUnit: none Type: Base: |
conf/statperf/9.8.0/flexcache.yaml |
| ZAPI | perf-object-get-instances flexcache_per_volume |
reconciled_lock_entriesUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/flexcache.yaml |
The flexcache_reconciled_lock_entries metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Reconcile Metrics | timeseries | Top $TopResources Reconciled Lock Entries |
flexcache_size¶
Physical size of the volume, in bytes. The minimum size for a FlexVol volume is 20MB and the minimum size for a FlexGroup volume is 200MB per constituent. The recommended size for a FlexGroup volume is a minimum of 100GB per constituent. For all volumes, the default size is equal to the minimum size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/flexcache/flexcaches |
size |
conf/rest/9.12.0/flexcache.yaml |
| ZAPI | flexcache-get-iter |
flexcache-info.size |
conf/zapi/cdot/9.8.0/flexcache.yaml |
The flexcache_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexCache | Highlights | table | FlexCache Details |
fpolicy_aborted_requests¶
Number of screen requests aborted
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
aborted_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_aborted_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by Aborted Requests |
fpolicy_denied_requests¶
Number of screen requests for which a deny was received from the fpolicy server
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
denied_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_denied_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by Denied Requests |
fpolicy_io_processing_latency¶
Average IO processing latency for screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
io_processing_latencyUnit: microsec Type: average Base: io_processing_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_io_processing_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by IO Processing Latency |
fpolicy_io_thread_wait_latency¶
Average IO thread wait latency for the screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
io_thread_wait_latencyUnit: microsec Type: average Base: io_thread_wait_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_io_thread_wait_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by IO Thread Wait Latency |
fpolicy_processed_requests¶
Number of screen requests that went through fpolicy processing
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
processed_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_processed_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by Processed Requests |
fpolicy_processing_latency¶
Average policy processing latency for screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_policy |
policy_processing_latencyUnit: microsec Type: average Base: policy_processing_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy.yaml |
The fpolicy_processing_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Highlights | timeseries | Top $TopResources Policy by Processing Latency |
fpolicy_server_cancelled_requests¶
Number of screen requests whose processing was cancelled (cancel timeout)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
cancelled_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_cancelled_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Cancelled Requests |
fpolicy_server_failed_requests¶
Number of screen requests the node failed to send to the fpolicy server
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
failed_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_failed_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Failed Requests |
fpolicy_server_max_request_latency¶
Maximum latency for a screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
max_request_latencyUnit: microsec Type: raw Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_max_request_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Max Request Latency |
fpolicy_server_outstanding_requests¶
Total number of screen requests waiting for response
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
outstanding_requestsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_outstanding_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Outstanding Requests |
fpolicy_server_processed_requests¶
Total number of screen requests processed (sync and async)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
processed_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_processed_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Processed Requests |
fpolicy_server_request_latency¶
Average latency for screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy_server |
request_latencyUnit: microsec Type: average Base: request_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy_server.yaml |
The fpolicy_server_request_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | Server | timeseries | Top $TopResources Servers by Request Latency |
fpolicy_svm_aborted_requests¶
Number of screen requests aborted
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy |
aborted_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_svm.yaml |
The fpolicy_svm_aborted_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | SVM | timeseries | Top $TopResources SVM by Aborted Requests |
fpolicy_svm_cifs_requests¶
Number of CIFS screen requests sent to the fpolicy server
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy |
cifs_requestsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_svm.yaml |
The fpolicy_svm_cifs_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | SVM | timeseries | Top $TopResources SVM by Cifs Requests |
fpolicy_svm_failedop_notifications¶
Number of failed file operation notifications sent to fpolicy server
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy |
failedop_notificationsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/fpolicy_svm.yaml |
The fpolicy_svm_failedop_notifications metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | SVM | timeseries | Top $TopResources SVM by Failed File Operation |
fpolicy_svm_io_processing_latency¶
Average IO processing latency for screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy |
io_processing_latencyUnit: microsec Type: average Base: io_processing_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy_svm.yaml |
The fpolicy_svm_io_processing_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | SVM | timeseries | Top $TopResources SVM by IO Processing Latency |
fpolicy_svm_io_thread_wait_latency¶
Average IO thread wait latency for screen request
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances fpolicy |
io_thread_wait_latencyUnit: microsec Type: average Base: io_thread_wait_latency_base |
conf/zapiperf/cdot/9.8.0/fpolicy_svm.yaml |
The fpolicy_svm_io_thread_wait_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FPolicy | SVM | timeseries | Top $TopResources SVM by IO Thread Wait Latency |
fru_status¶
This metric indicates a value of 1 if the FRU status is ok and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/fru.yaml |
The fru_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Power | Field Replaceable Unit (FRU) | table | Field Replaceable Unit (FRU) |
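Because fru_status is 1 only when a FRU reports an ok state, FRUs in any other state can be selected directly. A minimal PromQL sketch, for illustration rather than the exact dashboard query:

```promql
# FRUs that are not in an "ok" state
fru_status == 0
```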
headroom_aggr_current_latency¶
This is the storage aggregate average latency per message at the disk level.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
current_latencyUnit: microsec Type: average Base: current_ops |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
current_latencyUnit: microsec Type: average Base: current_ops |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_current_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Aggregate Headroom | timeseries | Current Latency |
headroom_aggr_current_ops¶
Total number of I/Os processed by the aggregate per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
current_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
current_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_current_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Highlights | timeseries | Available Ops: Aggregate |
| ONTAP: Headroom | Aggregate Headroom | timeseries | Current IOP/s |
headroom_aggr_current_utilization¶
This is the storage aggregate average utilization of all the data disks in the aggregate.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
current_utilizationUnit: percent Type: percent Base: current_utilization_denominator |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
current_utilizationUnit: percent Type: percent Base: current_utilization_total |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_current_utilization metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Aggregate Headroom | timeseries | Current Utilization |
headroom_aggr_ewma_daily¶
Daily exponential weighted moving average.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
ewma.dailyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
ewma_dailyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
headroom_aggr_ewma_hourly¶
Hourly exponential weighted moving average.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
ewma.hourlyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
ewma_hourlyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
headroom_aggr_ewma_monthly¶
Monthly exponential weighted moving average.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
ewma.monthlyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
ewma_monthlyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
headroom_aggr_ewma_weekly¶
Weekly exponential weighted moving average.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
ewma.weeklyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
ewma_weeklyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
headroom_aggr_optimal_point_confidence_factor¶
The confidence factor for the optimal point value based on the observed resource latency and utilization.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
optimal_point.confidence_factorUnit: none Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
optimal_point_confidence_factorUnit: none Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
headroom_aggr_optimal_point_latency¶
The latency component of the optimal point of the latency/utilization curve.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
optimal_point.latencyUnit: microsec Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
optimal_point_latencyUnit: microsec Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_optimal_point_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Aggregate Headroom | timeseries | Optimal-Point Latency |
headroom_aggr_optimal_point_ops¶
The ops component of the optimal point derived from the latency/utilization curve.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
optimal_point.opsUnit: per_sec Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
optimal_point_opsUnit: per_sec Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_optimal_point_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Highlights | timeseries | Available Ops: Aggregate |
| ONTAP: Headroom | Aggregate Headroom | timeseries | Optimal-Point IOP/s |
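The "Available Ops: Aggregate" panel listed above is built on the gap between the optimal-point ops and the current ops. A minimal PromQL sketch of that idea, assuming the two series carry matching labels; illustrative, not the exact dashboard query:

```promql
# Remaining aggregate headroom in ops per second
headroom_aggr_optimal_point_ops - headroom_aggr_current_ops
```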
headroom_aggr_optimal_point_utilization¶
The utilization component of the optimal point of the latency/utilization curve.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_aggregate |
optimal_point.utilizationUnit: none Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_aggr.yaml |
| ZAPI | perf-object-get-instances resource_headroom_aggr |
optimal_point_utilizationUnit: none Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_aggr.yaml |
The headroom_aggr_optimal_point_utilization metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Aggregate Headroom | timeseries | Optimal-Point Utilization |
headroom_cpu_current_latency¶
Current operation latency of the resource.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
current_latencyUnit: microsec Type: average Base: current_ops |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
current_latencyUnit: microsec Type: average Base: current_ops |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_current_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | CPU Headroom | timeseries | Current Latency |
headroom_cpu_current_ops¶
Total number of operations per second (also referred to as dblade ops).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
current_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
current_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_current_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Highlights | timeseries | Available Ops: CPU |
| ONTAP: Headroom | CPU Headroom | timeseries | Current CPU Ops |
| ONTAP: NFS Troubleshooting | Highlights | table | Headroom Overview (Average by Time Range) |
headroom_cpu_current_utilization¶
Average processor utilization across all processors in the system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
current_utilizationUnit: percent Type: percent Base: elapsed_time |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
current_utilizationUnit: percent Type: percent Base: current_utilization_total |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_current_utilization metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | CPU Headroom | timeseries | Current Utilization |
| ONTAP: NFS Troubleshooting | Highlights | table | Headroom Overview (Average by Time Range) |
headroom_cpu_ewma_daily¶
Daily exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
ewma.dailyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
ewma_dailyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_ewma_daily metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Troubleshooting | Highlights | timeseries | Weighted Avg Daily (Headroom) |
headroom_cpu_ewma_hourly¶
Hourly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
ewma.hourlyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
ewma_hourlyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
headroom_cpu_ewma_monthly¶
Monthly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
ewma.monthlyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
ewma_monthlyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
headroom_cpu_ewma_weekly¶
Weekly exponential weighted moving average for current_ops, optimal_point_ops, current_latency, optimal_point_latency, current_utilization, optimal_point_utilization and optimal_point_confidence_factor.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
ewma.weeklyUnit: none Type: raw Base: |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
ewma_weeklyUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_ewma_weekly metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Troubleshooting | Highlights | timeseries | Weighted Avg Weekly (Headroom) |
headroom_cpu_optimal_point_confidence_factor¶
Confidence factor for the optimal point value based on the observed resource latency and utilization. The possible values are: 0 - unknown, 1 - low, 2 - medium, 3 - high. This counter can provide an average confidence factor over a range of time.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
optimal_point.confidence_factorUnit: none Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
optimal_point_confidence_factorUnit: none Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
headroom_cpu_optimal_point_latency¶
Latency component of the optimal point of the latency/utilization curve. This counter can provide an average latency over a range of time.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
optimal_point.latencyUnit: microsec Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
optimal_point_latencyUnit: microsec Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_optimal_point_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | CPU Headroom | timeseries | Optimal-Point Latency |
headroom_cpu_optimal_point_ops¶
Ops component of the optimal point derived from the latency/utilization curve. This counter can provide an average ops over a range of time.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
optimal_point.opsUnit: per_sec Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
optimal_point_opsUnit: per_sec Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_optimal_point_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | Highlights | timeseries | Available Ops: CPU |
| ONTAP: Headroom | CPU Headroom | timeseries | Optimal-Point Ops |
| ONTAP: NFS Troubleshooting | Highlights | table | Headroom Overview (Average by Time Range) |
headroom_cpu_optimal_point_utilization¶
Utilization component of the optimal point of the latency/utilization curve. This counter can provide an average utilization over a range of time.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/headroom_cpu |
optimal_point.utilizationUnit: none Type: average Base: optimal_point.samples |
conf/restperf/9.12.0/resource_headroom_cpu.yaml |
| ZAPI | perf-object-get-instances resource_headroom_cpu |
optimal_point_utilizationUnit: none Type: average Base: optimal_point_samples |
conf/zapiperf/cdot/9.8.0/resource_headroom_cpu.yaml |
The headroom_cpu_optimal_point_utilization metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Headroom | CPU Headroom | timeseries | Optimal-Point Utilization |
| ONTAP: NFS Troubleshooting | Highlights | table | Headroom Overview (Average by Time Range) |
health_disk_alerts¶
Provides any issues related to the Disks health check, such as disks that are broken or unassigned. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_disk_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | Disks | table | Disks Issues |
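Since each alert series is 1 while the underlying issue is active, the number of currently active disk issues can be obtained with a simple sum. A minimal PromQL sketch, shown for illustration only:

```promql
# Count of currently active disk health issues
sum(health_disk_alerts)
```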
health_ems_alerts¶
The health_ems_alerts metric monitors EMS (Event Management System) events, providing a count based on their severity and other attributes. This metric includes labels such as node, message, source, and severity (e.g., emergency, alert, error). By default, it monitors alerts with emergency severity (see the query sketch below).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_ems_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Highlights | stat | Active Emergency EMS Alerts (Last 24 Hours) |
| ONTAP: Health | Highlights | table | Active Emergency EMS Alerts (Last 24 Hours) |
| ONTAP: Health | Emergency EMS | table | Active Emergency EMS Alerts (Last 24 Hours) |
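Because the description above lists node and severity among the exported labels, active emergency alerts can be grouped per node. A minimal PromQL sketch; the exact severity label value is an assumption, and this is not the query the dashboard ships with:

```promql
# Active emergency EMS alerts per node
sum by (node) (health_ems_alerts{severity="emergency"})
```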
health_ha_alerts¶
Provides any issues related to the HA health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_ha_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | HA | table | HA Issues |
health_license_alerts¶
Provides any issues related to the License health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_license_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | License | table | Non Compliant License |
health_lif_alerts¶
Provides any issues related to the LIF health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_lif_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | Lif | table | Lif not at home port |
health_network_ethernet_port_alerts¶
Provides any issues related to the Network Ethernet Port health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_network_ethernet_port_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Network Port | table | Ethernet ports are down |
health_network_fc_port_alerts¶
Provides any issues related to the Network FC Port health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_network_fc_port_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Network Port | table | FC ports are down |
health_node_alerts¶
Provides any issues related to the Node health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_node_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Node | table | Node Issues |
health_shelf_alerts¶
Provides any issues related to the Shelf health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_shelf_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Errors |
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Errors |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Errors |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
health_support_alerts¶
Provides any issues related to the Support health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_support_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | System Health Alerts | table | System Alerts |
health_volume_move_alerts¶
Provides any issues related to the Volume Move health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_volume_move_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | Volume | table | Volumes Move Issues |
health_volume_ransomware_alerts¶
Provides any issues related to the Volume Ransomware health check. A value of 1 means an issue is present and 0 means the issue is resolved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.6.0/health.yaml |
The health_volume_ransomware_alerts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Issues | piechart | Warnings |
| ONTAP: Health | Highlights | stat | Total Warnings |
| ONTAP: Health | Highlights | piechart | Warnings |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
hostadapter_bytes_read¶
Bytes read through a host adapter
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/host_adapter |
bytes_readUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/hostadapter.yaml |
| ZAPI | perf-object-get-instances hostadapter |
bytes_readUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/hostadapter.yaml |
The hostadapter_bytes_read metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Disk and Tape Drives Throughput by Node |
| ONTAP: Disk | Disk Utilization | timeseries | Top $TopResources Disk and Tape Drives Throughput by Host Adapter |
| ONTAP: MetroCluster | Disk and Tape Adapter | timeseries | Top $TopResources Adapters by Read Data |
hostadapter_bytes_written¶
Bytes written through a host adapter
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/host_adapter |
bytes_writtenUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/hostadapter.yaml |
| ZAPI | perf-object-get-instances hostadapter |
bytes_writtenUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/hostadapter.yaml |
The hostadapter_bytes_written metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Disk and Tape Drives Throughput by Node |
| ONTAP: Disk | Disk Utilization | timeseries | Top $TopResources Disk and Tape Drives Throughput by Host Adapter |
| ONTAP: MetroCluster | Disk and Tape Adapter | timeseries | Top $TopResources Adapters by Write Data |
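The "Disk and Tape Drives Throughput" panels above combine read and write throughput, which can be expressed by adding the two rates. A minimal PromQL sketch, assuming both series carry matching adapter labels; illustrative, not the exact dashboard query:

```promql
# Total host-adapter throughput (read + write), bytes per second
hostadapter_bytes_read + hostadapter_bytes_written
```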
iscsi_lif_avg_latency¶
Average latency for iSCSI operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
average_latencyUnit: microsec Type: average Base: cmd_transferred |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
avg_latencyUnit: microsec Type: average Base: cmd_transfered |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | iSCSI Frontend | stat | iSCSI Latency |
| ONTAP: Node | iSCSI Frontend | timeseries | Average Latency by LIF |
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Average Latency |
iscsi_lif_avg_other_latency¶
Average latency for operations other than read and write (for example, Inquiry, Report LUNs, SCSI Task Management Functions)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
average_other_latencyUnit: microsec Type: average Base: iscsi_other_ops |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
avg_other_latencyUnit: microsec Type: average Base: iscsi_other_ops |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_avg_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | iSCSI | timeseries | SVM iSCSI Average Latency |
iscsi_lif_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
average_read_latencyUnit: microsec Type: average Base: iscsi_read_ops |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
avg_read_latencyUnit: microsec Type: average Base: iscsi_read_ops |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Average Read Latency |
| ONTAP: SVM | iSCSI | timeseries | SVM iSCSI Average Latency |
iscsi_lif_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
average_write_latencyUnit: microsec Type: average Base: iscsi_write_ops |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
avg_write_latencyUnit: microsec Type: average Base: iscsi_write_ops |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Average Write Latency |
| ONTAP: SVM | iSCSI | timeseries | SVM iSCSI Average Latency |
iscsi_lif_cmd_transfered¶
Commands transferred by this iSCSI connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
cmd_transferredUnit: none Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
cmd_transferedUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
iscsi_lif_iscsi_other_ops¶
iSCSI other operations per second on this logical interface (LIF)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
iscsi_other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
iscsi_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_iscsi_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | iSCSI Frontend | stat | iSCSI IOPs |
| ONTAP: Node | iSCSI Frontend | timeseries | IOPs by LIF |
| ONTAP: SVM | iSCSI | stat | SVM iSCSI IOPs |
iscsi_lif_iscsi_read_ops¶
iSCSI read operations per second on this logical interface (LIF)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
iscsi_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
iscsi_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_iscsi_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Read IOPs |
iscsi_lif_iscsi_write_ops¶
iSCSI write operations per second on this logical interface (LIF)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
iscsi_write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
iscsi_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_iscsi_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Write IOPs |
iscsi_lif_protocol_errors¶
Number of protocol errors from iSCSI sessions on this logical interface (LIF)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
protocol_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
protocol_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
iscsi_lif_read_data¶
Amount of data read from the storage system in bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | LIF | timeseries | Top $TopResources iSCSI LIFs by Send Throughput |
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Read Throughput |
| ONTAP: SVM | iSCSI | timeseries | Top $TopResources iSCSI LIFs by Send Throughput |
| ONTAP: SVM | iSCSI | timeseries | SVM iSCSI Throughput |
iscsi_lif_write_data¶
Amount of data written to the storage system in bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iscsi_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/iscsi_lif.yaml |
| ZAPI | perf-object-get-instances iscsi_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iscsi_lif.yaml |
The iscsi_lif_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | iSCSI Frontend | timeseries | Throughput by LIF |
| ONTAP: SVM | LIF | timeseries | Top $TopResources iSCSI LIFs by Receive Throughput |
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Throughput |
| ONTAP: SVM | iSCSI | stat | SVM iSCSI Write Throughput |
| ONTAP: SVM | iSCSI | timeseries | Top $TopResources iSCSI LIFs by Receive Throughput |
| ONTAP: SVM | iSCSI | timeseries | SVM iSCSI Throughput |
iw_avg_latency¶
Average RDMA I/O latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iwarp |
average_latencyUnit: microsec Type: average Base: ops |
conf/restperf/9.14.1/iwarp.yaml |
| ZAPI | perf-object-get-instances iwarp |
iw_avg_latencyUnit: microsec Type: average Base: iw_ops |
conf/zapiperf/cdot/9.8.0/iwarp.yaml |
The iw_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Iwarp | timeseries | Average Latency |
iw_ops¶
Number of RDMA I/Os issued.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iwarp |
opsUnit: none Type: rate Base: |
conf/restperf/9.14.1/iwarp.yaml |
| ZAPI | perf-object-get-instances iwarp |
iw_opsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iwarp.yaml |
The iw_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Iwarp | timeseries | IOPs |
iw_read_ops¶
Number of RDMA read I/Os issued.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iwarp |
read_opsUnit: none Type: rate Base: |
conf/restperf/9.14.1/iwarp.yaml |
| ZAPI | perf-object-get-instances iwarp |
iw_read_opsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iwarp.yaml |
The iw_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Iwarp | timeseries | Read IOPs |
iw_write_ops¶
Number of RDMA write I/Os issued.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/iwarp |
write_opsUnit: none Type: rate Base: |
conf/restperf/9.14.1/iwarp.yaml |
| ZAPI | perf-object-get-instances iwarp |
iw_write_opsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/iwarp.yaml |
The iw_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Iwarp | timeseries | Write IOPs |
lif_labels¶
This metric provides information about LIF
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/ip/interfaces |
Harvest generated |
conf/rest/9.12.0/lif.yaml |
| ZAPI | net-interface-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/lif.yaml |
The lif_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Health | Lif | table | Lif not at home port |
| ONTAP: SVM | LIF | table | LIF Details |
lif_recv_data¶
Number of bytes received per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
received_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| KeyPerf | api/network/ip/interfaces |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
recv_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
The lif_recv_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | LIF | timeseries | Top $TopResources NAS LIFs by Receive Throughput |
lif_recv_errors¶
Number of received errors per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
received_errorsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
recv_errorsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
lif_recv_packet¶
Number of packets received per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
received_packetsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
recv_packetUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
lif_sent_data¶
Number of bytes sent per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
sent_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| KeyPerf | api/network/ip/interfaces |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
sent_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
The lif_sent_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | LIF | timeseries | Top $TopResources NAS LIFs by Send Throughput |
lif_sent_errors¶
Number of sent errors per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
sent_errorsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
sent_errorsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
lif_sent_packet¶
Number of packets sent per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
sent_packetsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
sent_packetUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
lif_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/network/ip/interfaces |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lif.yaml |
lif_uptime¶
Interface up time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lif |
up_timeUnit: millisec Type: raw Base: |
conf/restperf/9.12.0/lif.yaml |
| ZAPI | perf-object-get-instances lif |
up_timeUnit: millisec Type: raw Base: |
conf/zapiperf/cdot/9.8.0/lif.yaml |
The lif_uptime metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | LIF | table | LIF Details |
lun_avg_read_latency¶
Average read latency in microseconds for all operations on the LUN
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.latency_raw.readUnit: microsec Type: average Base: lun_statistics.iops_raw.read |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Average Read Latency |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Read Latency |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Average Read Latency |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Latency |
lun_avg_write_latency¶
Average write latency in microseconds for all operations on the LUN
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.latency_raw.writeUnit: microsec Type: average Base: lun_statistics.iops_raw.write |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Average Write Latency |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Write Latency |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Average Write Latency |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Latency |
lun_avg_xcopy_latency¶
Average latency in microseconds for xcopy requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
average_xcopy_latencyUnit: microsec Type: average Base: xcopy_requests |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
avg_xcopy_latencyUnit: microsec Type: average Base: xcopy_reqs |
conf/zapiperf/cdot/9.8.0/lun.yaml |
lun_caw_reqs¶
Number of compare and write requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
caw_requestsUnit: none Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
caw_reqsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_caw_reqs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | vStorage Offload Operations |
lun_enospc¶
Number of operations receiving ENOSPC errors
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
enospcUnit: none Type: delta Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
enospcUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
lun_labels¶
This metric provides information about Lun
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/luns |
Harvest generated |
conf/rest/9.12.0/lun.yaml |
| ZAPI | lun-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/lun.yaml |
The lun_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: LUN | LUN Table | table | LUNS in Cluster |
lun_new_status¶
This metric indicates a value of 1 if the LUN state is online (indicating the LUN is operational) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/lun.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/lun.yaml |
The lun_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | LUN Table | table | LUNS in Cluster |
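As a rough sketch of how a derived status like this behaves (illustrative only, not Harvest's actual plugin code), the ONTAP LUN state can be mapped to 1 only when it is `online`:

```python
# Illustrative sketch only; Harvest's plugin logic may differ.
def lun_new_status(state: str) -> int:
    """Return 1 when the LUN state is 'online', 0 for any other state."""
    return 1 if state == "online" else 0

print(lun_new_status("online"))   # 1
print(lun_new_status("offline"))  # 0
```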
lun_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
lun_other_latency¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.latency_raw.otherUnit: microsec Type: average Base: lun_statistics.iops_raw.other |
conf/keyperf/9.15.0/lun.yaml |
lun_other_ops¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
lun_queue_full¶
Queue full responses
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
queue_fullUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
queue_fullUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
lun_read_align_histo¶
Histogram of WAFL read alignment (number of sectors off WAFL block start)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
read_align_histogramUnit: percent Type: percent Base: read_ops_sent |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
read_align_histoUnit: percent Type: percent Base: read_ops_sent |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_read_align_histo metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top LUN Performance Efficiency | timeseries | Top $TopResources Luns by Read Misalignment Buckets |
lun_read_data¶
Read bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Read Throughput |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Read Throughput |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Read Throughput |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Throughput |
lun_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Read IOPs |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Read IOPS |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Read IOPs |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | IOPs |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | IO Size |
lun_read_partial_blocks¶
Percentage of reads whose size is not a multiple of WAFL block size
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
read_partial_blocksUnit: percent Type: percent Base: read_ops |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
read_partial_blocksUnit: percent Type: percent Base: read_ops |
conf/zapiperf/cdot/9.8.0/lun.yaml |
lun_remote_bytes¶
I/O to or from a LUN which is not owned by the storage system handling the I/O.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
remote_bytesUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
remote_bytesUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_remote_bytes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Indirect Access |
lun_remote_ops¶
Number of operations received by a storage system that does not own the LUN targeted by the operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
remote_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
remote_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_remote_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top LUN Performance Efficiency | timeseries | Top $TopResources Luns by Indirect Access IOPS |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Indirect Access |
lun_size¶
The total provisioned size of the LUN. The LUN size can be increased but not decreased using the REST interface.
The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes, in bytes. The actual minimum and maximum sizes vary depending on the ONTAP version, ONTAP platform and the available space in the containing volume and aggregate.
For more information, see Size properties in the docs section of the ONTAP REST API documentation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/luns |
space.size |
conf/rest/9.12.0/lun.yaml |
| ZAPI | lun-get-iter |
lun-info.size |
conf/zapi/cdot/9.8.0/lun.yaml |
The lun_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | LUN Table | table | LUNS in Cluster |
lun_size_used¶
The amount of space consumed by the main data stream of the LUN.
This value is the total space consumed in the volume by the LUN, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways SAN filesystems and applications utilize blocks within a LUN, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the LUN blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition.
For more information, see Size properties in the docs section of the ONTAP REST API documentation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/luns |
space.used |
conf/rest/9.12.0/lun.yaml |
| ZAPI | lun-get-iter |
lun-info.size-used |
conf/zapi/cdot/9.8.0/lun.yaml |
The lun_size_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | LUN Table | table | LUNS in Cluster |
lun_size_used_percent¶
This metric represents the percentage of a LUN that is currently being used.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/luns |
size_used, size |
conf/rest/9.12.0/lun.yaml |
| ZAPI | lun-get-iter |
size_used, size |
conf/zapi/cdot/9.8.0/lun.yaml |
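Given the `size_used` and `size` fields listed above, a plausible derivation of this percentage (shown only as an illustration with hypothetical values, not Harvest's exact calculation) is:

```python
# Hypothetical byte values, purely for illustration.
lun_size_used = 322_122_547_200   # space consumed by the LUN
lun_size = 536_870_912_000        # provisioned size of the LUN

lun_size_used_percent = 100 * lun_size_used / lun_size
print(lun_size_used_percent)  # 60.0
```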
The lun_size_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources LUNs by Percent Most Filled |
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources LUNs by Percent Least Filled |
lun_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
lun_total_latency¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.latency_raw.totalUnit: microsec Type: average Base: lun_statistics.iops_raw.total |
conf/keyperf/9.15.0/lun.yaml |
lun_total_ops¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/luns |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
lun_unmap_reqs¶
Number of unmap command requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
unmap_requestsUnit: none Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
unmap_reqsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_unmap_reqs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | vStorage Offload Operations |
lun_write_align_histo¶
Histogram of WAFL write alignment (number of sectors off WAFL block start)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
write_align_histogramUnit: percent Type: percent Base: write_ops_sent |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
write_align_histoUnit: percent Type: percent Base: write_ops_sent |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_write_align_histo metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top LUN Performance Efficiency | timeseries | Top $TopResources Luns by Write Misalignment Buckets |
lun_write_data¶
Write bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Write Throughput |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Write Throughput |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Write Throughput |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | Throughput |
lun_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| KeyPerf | api/storage/luns |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Highlights | stat | Top $TopResources Luns by Write IOPs |
| ONTAP: LUN | LUN Table | table | Top $TopResources Luns by Write IOPS |
| ONTAP: LUN | Top LUN Performance | timeseries | Top $TopResources Luns by Write IOPs |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | IOPs |
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | IO Size |
lun_write_partial_blocks¶
Percentage of writes whose size is not a multiple of WAFL block size
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
write_partial_blocksUnit: percent Type: percent Base: write_ops |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
write_partial_blocksUnit: percent Type: percent Base: write_ops |
conf/zapiperf/cdot/9.8.0/lun.yaml |
lun_writesame_reqs¶
Number of write same command requests
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
writesame_requestsUnit: none Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
writesame_reqsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_writesame_reqs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | vStorage Offload Operations |
lun_writesame_unmap_reqs¶
Number of write same command requests with the unmap bit set
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
writesame_unmap_requestsUnit: none Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
writesame_unmap_reqsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_writesame_unmap_reqs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | vStorage Offload Operations |
lun_xcopy_reqs¶
Total number of xcopy operations on the LUN
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/lun |
xcopy_requestsUnit: none Type: rate Base: |
conf/restperf/9.12.0/lun.yaml |
| ZAPI | perf-object-get-instances lun |
xcopy_reqsUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/lun.yaml |
The lun_xcopy_reqs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Per LUN (Must Select Cluster/SVM/Volume/LUN) | timeseries | vStorage Offload Operations |
mav_request_approve_expiry_time¶
Shows the deadline by which requests must be approved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/multi-admin-verify/requests |
approve_expiry_time |
conf/rest/9.12.0/mav_request.yaml |
The mav_request_approve_expiry_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MAV Request | Highlights | table | MAV Requests |
mav_request_approve_time¶
Shows the date and time when requests were approved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/multi-admin-verify/requests |
approve_time |
conf/rest/9.12.0/mav_request.yaml |
The mav_request_approve_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MAV Request | Highlights | table | MAV Requests |
mav_request_create_time¶
Displays the date and time each MAV request was initiated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/multi-admin-verify/requests |
create_time |
conf/rest/9.12.0/mav_request.yaml |
The mav_request_create_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MAV Request | Highlights | table | MAV Requests |
mav_request_details¶
This metric provides information about MAV requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/multi-admin-verify/requests |
Harvest generated. |
conf/rest/9.12.0/mav_request.yaml |
The mav_request_details metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MAV Request | Highlights | table | MAV Requests |
mav_request_execution_expiry_time¶
Shows the deadline by which approved operations must be executed.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/multi-admin-verify/requests |
execution_expiry_time |
conf/rest/9.12.0/mav_request.yaml |
The mav_request_execution_expiry_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MAV Request | Highlights | table | MAV Requests |
mediator_labels¶
This metric provides information about Mediator
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/mediators |
Harvest generated |
conf/rest/9.12.0/mediator.yaml |
metadata_collector_api_time¶
amount of time to collect data from monitored cluster object
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
The metadata_collector_api_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Collectors | timeseries | API Time |
metadata_collector_bytesRx¶
The amount of data received by the collector from the monitored cluster.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: bytes |
NA |
| ZAPI | NA |
Harvest generatedUnit: bytes |
NA |
metadata_collector_calc_time¶
amount of time it took to compute metrics between two successive polls, specifically using properties like raw, delta, rate, average, and percent. This metric is available for ZapiPerf/RestPerf collectors.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
The metadata_collector_calc_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Collectors | timeseries | Postprocessing Time |
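To make the post-processing step concrete, the following minimal sketch cooks a rate and an average from two successive raw polls; it mirrors the Type/Base idea used on this page but is not Harvest's actual implementation, and the sample values are hypothetical:

```python
# Two hypothetical raw polls, 60 seconds apart.
prev = {"read_ops": 1_000, "read_latency_us": 50_000}
curr = {"read_ops": 1_600, "read_latency_us": 86_000}
elapsed = 60  # seconds between polls

delta_ops = curr["read_ops"] - prev["read_ops"]        # Type: delta
read_ops_rate = delta_ops / elapsed                    # Type: rate -> 10.0 ops/sec
avg_read_latency = (curr["read_latency_us"] - prev["read_latency_us"]) / delta_ops
# Type: average with Base: read_ops -> 60.0 microseconds per operation

print(read_ops_rate, avg_read_latency)
```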
metadata_collector_instances¶
number of objects collected from monitored cluster
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
The metadata_collector_instances metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Collectors | timeseries | Instances Per Poll |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
metadata_collector_metrics¶
number of counters collected from monitored cluster
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
The metadata_collector_metrics metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Collectors | timeseries | Data Points Per Poll |
metadata_collector_numCalls¶
The number of API calls made by the collector to the monitored cluster.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
metadata_collector_numPartials¶
The number of partial responses received by the collector from the monitored cluster.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
metadata_collector_parse_time¶
amount of time to parse XML, JSON, etc. for cluster object
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
The metadata_collector_parse_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Collectors | timeseries | Parse Time |
metadata_collector_pluginInstances¶
The number of plugin instances generated by the collector.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
metadata_collector_plugin_time¶
amount of time for all plugins to post-process metrics
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
metadata_collector_poll_time¶
amount of time it took for the poll to finish
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
The metadata_collector_poll_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | timeseries | Average Poll Time Per Poller |
| Harvest Metadata | Highlights | timeseries | Average Time Per Collector |
| Harvest Metadata | Collectors | timeseries | Time Per Data Poll |
metadata_collector_skips¶
number of metrics that were not calculated between two successive polls. This metric is available for ZapiPerf/RestPerf collectors.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
metadata_collector_task_time¶
amount of time it took for each collector's subtasks to complete
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
metadata_component_count¶
number of metrics collected for each object
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
The metadata_component_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | stat | Collected/24h |
| Harvest Metadata | Highlights | stat | Collected/m |
| Harvest Metadata | Highlights | stat | Exported/m |
| Harvest Metadata | Prometheus | timeseries | Data Points Per Export |
metadata_component_status¶
status of the collector - 0 means running, 1 means standby, 2 means failed
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: enum |
NA |
| ZAPI | NA |
Harvest generatedUnit: enum |
NA |
The metadata_component_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | stat | Total Object Count Across Collectors |
| Harvest Metadata | Highlights | stat | Failed Object Count Across Collectors |
| Harvest Metadata | Highlights | stat | Exporters |
| Harvest Metadata | Highlights | table | Collectors |
| Harvest Metadata | Highlights | table | Exporters |
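Because the value is an enum (0 running, 1 standby, 2 failed), failed collectors can be listed directly from Prometheus. The sketch below uses the standard Prometheus HTTP query API; the Prometheus address is an assumption for your environment:

```python
# Illustrative: list collectors currently reporting status == 2 (failed).
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",          # assumed Prometheus address
    params={"query": "metadata_component_status == 2"},
    timeout=10,
)
for result in resp.json()["data"]["result"]:
    print(result["metric"])  # labels identify the poller and failing component
```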
metadata_exporter_count¶
number of metrics and labels exported
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
The metadata_exporter_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Prometheus | timeseries | Data Points Per Export |
metadata_exporter_time¶
amount of time it took to render, export, and serve exported data
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: microseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: microseconds |
NA |
The metadata_exporter_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | timeseries | Average Time Per Exporter |
| Harvest Metadata | Prometheus | timeseries | Average Time Per Export |
metadata_target_goroutines¶
number of goroutines that exist within the poller
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: scalar |
NA |
| ZAPI | NA |
Harvest generatedUnit: scalar |
NA |
metadata_target_ping¶
The response time (in milliseconds) of the ping to the target system. If the ping is successful, the metric records the time it took for the ping to complete.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: milliseconds |
NA |
| ZAPI | NA |
Harvest generatedUnit: milliseconds |
NA |
The metadata_target_ping metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | table | Target Systems |
metadata_target_status¶
status of the system being monitored. 0 means reachable, 1 means unreachable
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: enum |
NA |
| ZAPI | NA |
Harvest generatedUnit: enum |
NA |
The metadata_target_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | stat | Datacenters |
| Harvest Metadata | Highlights | table | Target Systems |
metrocluster_check_aggr_status¶
Details the type of diagnostic operation run for the Aggregate, along with the diagnostic operation result.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/metrocluster_check.yaml |
The metrocluster_check_aggr_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Diagnostics | table | Metrocluster Aggregate Diagnostics Check Details |
metrocluster_check_cluster_status¶
Details the type of diagnostic operation run for the Cluster, along with the diagnostic operation result.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/metrocluster_check.yaml |
The metrocluster_check_cluster_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Diagnostics | table | Metrocluster Cluster Diagnostics Check Details |
metrocluster_check_node_status¶
Details the type of diagnostic operation run for the Node, along with the diagnostic operation result.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/metrocluster_check.yaml |
The metrocluster_check_node_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Diagnostics | table | Metrocluster Node Diagnostics Check Details |
metrocluster_check_volume_status¶
Details the type of diagnostic operation run for the Volume, along with the diagnostic operation result.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/metrocluster_check.yaml |
The metrocluster_check_volume_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Diagnostics | table | Metrocluster Volume Diagnostics Check Details |
namespace_avg_other_latency¶
Average other ops latency in microseconds for all operations on the Namespace
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.latency_raw.otherUnit: microsec Type: average Base: namespace_statistics.iops_raw.other |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_avg_read_latency¶
Average read latency in microseconds for all operations on the Namespace
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.latency_raw.readUnit: microsec Type: average Base: namespace_statistics.iops_raw.read |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Average Read Latency |
namespace_avg_total_latency¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/namespaces |
statistics.latency_raw.totalUnit: microsec Type: average Base: namespace_statistics.iops_raw.total |
conf/keyperf/9.15.0/namespace.yaml |
namespace_avg_write_latency¶
Average write latency in microseconds for all operations on the Namespace
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.latency_raw.writeUnit: microsec Type: average Base: namespace_statistics.iops_raw.write |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Average Write Latency |
namespace_block_size¶
The size of blocks in the namespace in bytes. The default for namespaces with an os_type of vmware is 512. All other namespaces default to 4096.
Valid in POST when creating an NVMe namespace that is not a clone of another. Disallowed in POST when creating a namespace clone.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
space.block_size |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
nvme-namespace-info.block-size |
conf/zapi/cdot/9.8.0/namespace.yaml |
namespace_labels¶
This metric provides information about Namespace
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
Harvest generated |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/namespace.yaml |
The namespace_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: NVMe Namespaces | NVMe Namespaces Table | table | NVMe Namespaces |
namespace_other_ops¶
Number of other operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_read_data¶
Read bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Read Throughput |
namespace_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Read IOPs |
namespace_remote_other_ops¶
Number of remote other operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
remote.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
remote_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_remote_read_data¶
Remote read bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
remote.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
remote_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_remote_read_ops¶
Number of remote read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
remote.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
remote_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_remote_write_data¶
Remote write bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
remote.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
remote_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_remote_write_ops¶
Number of remote write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
remote.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
remote_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
namespace_size¶
The total provisioned size of the NVMe namespace. Valid in POST and PATCH. The NVMe namespace size can be increased but not decreased using the REST interface.
The maximum and minimum sizes listed here are the absolute maximum and absolute minimum sizes in bytes. The maximum size is variable with respect to large NVMe namespace support in ONTAP. If large namespaces are supported, the maximum size is 128 TB (140737488355328 bytes) and if not supported, the maximum size is just under 16 TB (17557557870592 bytes). The minimum size supported is always 4096 bytes.
For more information, see Size properties in the docs section of the ONTAP REST API documentation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
space.size |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
nvme-namespace-info.size |
conf/zapi/cdot/9.8.0/namespace.yaml |
The namespace_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | NVMe Namespaces Table | table | NVMe Namespaces |
namespace_size_available¶
This metric represents the amount of available space in a namespace.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
size, size_used |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
size, size_used |
conf/zapi/cdot/9.8.0/namespace.yaml |
namespace_size_available_percent¶
This metric represents the percentage of available space in a namespace.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
size_available, size |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
size_available, size |
conf/zapi/cdot/9.8.0/namespace.yaml |
The namespace_size_available_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | NVMe Namespaces Table | table | NVMe Namespaces |
namespace_size_used¶
The amount of space consumed by the main data stream of the NVMe namespace.
This value is the total space consumed in the volume by the NVMe namespace, including filesystem overhead, but excluding prefix and suffix streams. Due to internal filesystem overhead and the many ways NVMe filesystems and applications utilize blocks within a namespace, this value does not necessarily reflect actual consumption/availability from the perspective of the filesystem or application. Without specific knowledge of how the namespace blocks are utilized outside of ONTAP, this property should not be used as an indicator for an out-of-space condition.
For more information, see Size properties in the docs section of the ONTAP REST API documentation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/namespaces |
space.used |
conf/rest/9.12.0/namespace.yaml |
| ZAPI | nvme-namespace-get-iter |
nvme-namespace-info.size-used |
conf/zapi/cdot/9.8.0/namespace.yaml |
The namespace_size_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | NVMe Namespaces Table | table | NVMe Namespaces |
namespace_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/namespaces |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
namespace_total_ops¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/namespaces |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
namespace_write_data¶
Write bytes
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Write Throughput |
namespace_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/namespace |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/namespace.yaml |
| KeyPerf | api/storage/namespaces |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/namespace.yaml |
| ZAPI | perf-object-get-instances namespace |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/namespace.yaml |
The namespace_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NVMe Namespaces | Highlights | timeseries | Top $TopResources NVMe Namespaces by Write IOPs |
ndmp_session_data_bytes_processed¶
Indicates the NDMP data bytes processed.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/ndmp/sessions |
data.bytes_processed |
conf/rest/9.7.0/ndmp_session.yaml |
ndmp_session_mover_bytes_moved¶
Indicates the NDMP mover bytes moved.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/ndmp/sessions |
mover.bytes_moved |
conf/rest/9.7.0/ndmp_session.yaml |
net_connection_labels¶
This metric provides information about NetConnections
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/network/connections/active |
Harvest generated |
conf/rest/9.12.0/netconnections.yaml |
net_port_mtu¶
Maximum transmission unit, largest packet size on this network
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/ethernet/ports |
mtu |
conf/rest/9.12.0/netport.yaml |
| ZAPI | net-port-get-iter |
net-port-info.mtu |
conf/zapi/cdot/9.8.0/netport.yaml |
The net_port_mtu metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | Ethernet ports |
net_port_status¶
This metric indicates a value of 1 if the port state is up and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/netport.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/netport.yaml |
The net_port_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | Ethernet ports |
net_route_labels¶
This metric provides information about NetRoute
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/network/ip/routes |
Harvest generated |
conf/rest/9.8.0/netroute.yaml |
The net_route_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Routes | table | Routes |
netstat_bytes_recvd¶
Number of bytes received by a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
bytes_recvdUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_bytes_sent¶
Number of bytes sent by a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
bytes_sentUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_cong_win¶
Congestion window of a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
cong_winUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_cong_win_th¶
Congestion window threshold of a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
cong_win_thUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_ooorcv_pkts¶
Number of out-of-order packets received by this TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
ooorcv_pktsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_recv_window¶
Receive window size of a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
recv_windowUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_rexmit_pkts¶
Number of packets retransmitted by this TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
rexmit_pktsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
netstat_send_window¶
Send window size of a TCP connection
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances netstat |
send_windowUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/netstat.yaml |
nfs_clients_idle_duration¶
The idle duration of the client connection, expressed as an ISO-8601 duration in hours, minutes, and seconds (for example, PT1H20M35S).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/nfs/connected-clients |
idle_duration |
conf/rest/9.7.0/nfs_clients.yaml |
The nfs_clients_idle_duration metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Clients | Highlights | stat | Total NFS Connections |
| ONTAP: NFS Clients | Highlights | piechart | NFS Connections by Protocol |
| ONTAP: NFS Clients | Highlights | table | NFS Clients (active in the past 48 hours) |
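The idle_duration value arrives as an ISO-8601 duration string rather than a number. Below is a minimal Python sketch for converting such a string to seconds, assuming durations of the form PTxHxMxS (optionally with a day component).

```python
import re

# Minimal sketch: convert an ISO-8601 duration such as "PT3H12M45S" into seconds.
# The example value is illustrative; extend the pattern if other components appear.
def iso8601_duration_to_seconds(duration: str) -> float:
    match = re.fullmatch(
        r"P(?:(\d+)D)?T?(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", duration
    )
    if match is None:
        raise ValueError(f"unrecognized duration: {duration}")
    days, hours, minutes, seconds = (float(g) if g else 0.0 for g in match.groups())
    return days * 86400 + hours * 3600 + minutes * 60 + seconds

print(iso8601_duration_to_seconds("PT3H12M45S"))  # 11565.0
```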
nfs_diag_storePool_ByteLockAlloc¶
Current number of byte range lock objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.byte_lock_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ByteLockAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ByteLockAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ByteLockAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_ByteLockMax¶
Maximum number of byte range lock objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.byte_lock_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ByteLockMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ByteLockMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ByteLockAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
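The "Allocations over 50%" panels track how full each NFSv4 store pool is: the current allocation divided by the pool maximum. Below is a minimal sketch with illustrative values; the same Alloc/Max ratio applies to every storePool pair on this page.

```python
# Minimal sketch: store pool utilization as allocated objects over the pool maximum.
# The values are illustrative, not real counter samples.
byte_lock_alloc = 1_250    # example value of nfs_diag_storePool_ByteLockAlloc
byte_lock_max = 102_400    # example value of nfs_diag_storePool_ByteLockMax

utilization_pct = 100.0 * byte_lock_alloc / byte_lock_max if byte_lock_max else 0.0
print(f"ByteLock pool utilization: {utilization_pct:.2f}%")  # 1.22%
```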
nfs_diag_storePool_ClientAlloc¶
Current number of client objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.client_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ClientAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ClientAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ClientAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_ClientMax¶
Maximum number of client objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.client_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ClientMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ClientMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ClientAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_ConnectionParentSessionReferenceAlloc¶
Current number of connection parent session reference objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.connection_parent_session_reference_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ConnectionParentSessionReferenceAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ConnectionParentSessionReferenceAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ConnectionParentSessionReferenceAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_ConnectionParentSessionReferenceMax¶
Maximum number of connection parent session reference objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.connection_parent_session_reference_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_ConnectionParentSessionReferenceMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_ConnectionParentSessionReferenceMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | ConnectionParentSessionReferenceAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_CopyStateAlloc¶
Current number of copy state objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.copy_state_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_CopyStateAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_CopyStateAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | CopyStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_CopyStateMax¶
Maximum number of copy state objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.copy_state_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_CopyStateMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_CopyStateMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | CopyStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_DelegAlloc¶
Current number of delegation lock objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.delegation_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_DelegAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_DelegAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | DelegAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_DelegMax¶
Maximum number of delegation lock objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.delegation_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_DelegMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_DelegMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | DelegAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_DelegStateAlloc¶
Current number of delegation state objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.delegation_state_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_DelegStateAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_DelegStateAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | DelegStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_DelegStateMax¶
Maximum number of delegation state objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.delegation_state_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_DelegStateMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_DelegStateMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | DelegStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_LayoutAlloc¶
Current number of layout objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.layout_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LayoutAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LayoutAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LayoutAlloc |
nfs_diag_storePool_LayoutMax¶
Maximum number of layout objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.layout_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LayoutMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LayoutMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LayoutAlloc |
nfs_diag_storePool_LayoutStateAlloc¶
Current number of layout state objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.layout_state_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LayoutStateAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LayoutStateAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LayoutStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_LayoutStateMax¶
Maximum number of layout state objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.layout_state_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LayoutStateMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LayoutStateMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LayoutStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_LockStateAlloc¶
Current number of lock state objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.lock_state_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LockStateAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LockStateAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LockStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_LockStateMax¶
Maximum number of lock state objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.lock_state_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_LockStateMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_LockStateMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | LockStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OpenAlloc¶
Current number of share objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.open_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OpenAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OpenAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OpenAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OpenMax¶
Maximum number of share lock objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.open_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OpenMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OpenMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OpenAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OpenStateAlloc¶
Current number of open state objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.openstate_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OpenStateAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OpenStateAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OpenStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OpenStateMax¶
Maximum number of open state objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.openstate_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OpenStateMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OpenStateMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OpenStateAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OwnerAlloc¶
Current number of owner objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.owner_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OwnerAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OwnerAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OwnerAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_OwnerMax¶
Maximum number of owner objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.owner_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_OwnerMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_OwnerMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | OwnerAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionAlloc¶
Current number of session objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionConnectionHolderAlloc¶
Current number of session connection holder objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_connection_holder_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionConnectionHolderAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionConnectionHolderAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionConnectionHolderAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionConnectionHolderMax¶
Maximum number of session connection holder objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_connection_holder_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionConnectionHolderMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionConnectionHolderMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionConnectionHolderAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionHolderAlloc¶
Current number of session holder objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_holder_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionHolderAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionHolderAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionHolderAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionHolderMax¶
Maximum number of session holder objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_holder_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionHolderMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionHolderMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionHolderAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_SessionMax¶
Maximum number of session objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.session_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_SessionMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_SessionMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | SessionAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_StateRefHistoryAlloc¶
Current number of state reference callstack history objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.state_reference_history_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_StateRefHistoryAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_StateRefHistoryAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | StateRefHistoryAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_StateRefHistoryMax¶
Maximum number of state reference callstack history objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.state_reference_history_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_StateRefHistoryMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_StateRefHistoryMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | StateRefHistoryAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_StringAlloc¶
Current number of string objects allocated.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.string_allocatedUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_StringAllocUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_StringAlloc metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | StringAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nfs_diag_storePool_StringMax¶
Maximum number of string objects.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nfs_v4_diag |
storepool.string_maximumUnit: none Type: raw Base: |
conf/restperf/9.12.0/nfsv4_pool.yaml |
| ZAPI | perf-object-get-instances nfsv4_diag |
storePool_StringMaxUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_pool.yaml |
The nfs_diag_storePool_StringMax metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFSv4 StorePool Monitors | Allocations over 50% | timeseries | Allocations over 50% |
| ONTAP: NFSv4 StorePool Monitors | Lock | timeseries | StringAlloc |
| ONTAP: NFS Troubleshooting | Highlights | timeseries | All nodes with 1% or more allocations in $Datacenter |
nic_ifgrp_rx_bytes¶
Bytes received by the Link Aggregation Group (LAG).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_ifgrp_rx_bytes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Link Aggregation Group (LAG) | table | Link Aggregation Groups |
| ONTAP: Network | Link Aggregation Group (LAG) | timeseries | Top $TopResources LAGs by Receive Throughput |
nic_ifgrp_tx_bytes¶
Bytes sent by the Link Aggregation Group (LAG).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_ifgrp_tx_bytes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Link Aggregation Group (LAG) | table | Link Aggregation Groups |
| ONTAP: Network | Link Aggregation Group (LAG) | timeseries | Top $TopResources LAGs by Send Throughput |
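Both LAG counters are Harvest generated rather than read from a single ONTAP counter. Below is a rough sketch of the kind of aggregation involved, grouping per-port receive bytes by a hypothetical ifgrp label and summing them; the labels and values are assumptions for illustration only.

```python
from collections import defaultdict

# Illustrative sketch: per-port receive bytes grouped by their ifgrp and summed.
# Label names (ifgrp, port) and sample values are assumptions for this sketch.
port_rx_bytes = [
    ({"ifgrp": "a0a", "port": "e0a"}, 1_200_000.0),
    ({"ifgrp": "a0a", "port": "e0b"}, 800_000.0),
    ({"ifgrp": "a0b", "port": "e1a"}, 450_000.0),
]

ifgrp_rx_bytes: dict[str, float] = defaultdict(float)
for labels, value in port_rx_bytes:
    ifgrp_rx_bytes[labels["ifgrp"]] += value

print(dict(ifgrp_rx_bytes))  # {'a0a': 2000000.0, 'a0b': 450000.0}
```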
nic_labels¶
This metric provides label information about NIC ports (NicCommon)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | nic_common |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | NIC ports |
| ONTAP: Network | Ethernet | table | Ethernet port errors |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
nic_link_up_to_downs¶
Number of link state changes from UP to DOWN.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
link_up_to_downUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
link_up_to_downsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_link_up_to_downs metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | Ethernet port errors |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
nic_new_status¶
This metric indicates a value of 1 if the NIC state is up (indicating the NIC is operational) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | NIC ports |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
nic_rx_alignment_errors¶
Alignment errors detected on received packets
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_alignment_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_alignment_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_rx_alignment_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Receive Errors by Cluster |
nic_rx_bytes¶
Bytes received
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_bytesUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_bytesUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_rx_bytes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | Ethernet Throughput |
| ONTAP: Network | Highlights | stat | Ethernet Receive |
| ONTAP: Network | Ethernet | table | NIC ports |
| ONTAP: Network | Ethernet | timeseries | Top $TopResources NICs by Receive Throughput |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources Ethernet Ports by Throughput |
nic_rx_crc_errors¶
CRC errors detected on received packets
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_crc_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_crc_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_rx_crc_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Receive Errors by Cluster |
| ONTAP: Network | Ethernet | table | Ethernet port errors |
nic_rx_errors¶
Errors received
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_errorsUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_errorsUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
nic_rx_length_errors¶
Length errors detected on received packets
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_length_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_length_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_rx_length_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Receive Errors by Cluster |
nic_rx_percent¶
Bytes received percentage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
nic_rx_total_errors¶
Total errors received
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
receive_total_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
rx_total_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_rx_total_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Receive Errors by Cluster |
| ONTAP: Network | Ethernet | table | Ethernet port errors |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
nic_tx_bytes¶
Bytes sent
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
transmit_bytesUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
tx_bytesUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_tx_bytes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Highlights | stat | Ethernet Send |
| ONTAP: Network | Ethernet | table | NIC ports |
| ONTAP: Network | Ethernet | timeseries | Top $TopResources NICs by Send Throughput |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources Ethernet Ports by Throughput |
nic_tx_errors¶
Errors sent
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
transmit_errorsUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
tx_errorsUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
nic_tx_hw_errors¶
Transmit errors reported by hardware
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
transmit_hw_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
tx_hw_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_tx_hw_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Send Errors by Cluster |
| ONTAP: Network | Ethernet | table | Ethernet port errors |
nic_tx_percent¶
Bytes sent percentage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
nic_tx_total_errors¶
Total errors sent
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nic_common |
transmit_total_errorsUnit: none Type: delta Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | perf-object-get-instances nic_common |
tx_total_errorsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_tx_total_errors metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | timeseries | NICs Send Errors by Cluster |
| ONTAP: Network | Ethernet | table | Ethernet port errors |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
nic_util_percent¶
The maximum of the bytes-received percentage and the bytes-sent percentage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/nic_common.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/nic_common.yaml |
The nic_util_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Network | Ethernet | table | NIC ports |
| ONTAP: Network | Ethernet | timeseries | Top $TopResources NICs by Port Utilization % |
| ONTAP: NFS Troubleshooting | Network Port Table | table | Ethernet ports |
| ONTAP: Node | Network Layer | timeseries | Top $TopResources Ethernet Ports by Utilization % |
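Below is a minimal sketch of how the three Harvest-generated percentages relate, assuming the receive and send percentages are computed against the port's link speed; the values are illustrative.

```python
# Minimal sketch, assuming port utilization is measured against link speed:
# rx% and tx% are throughput relative to capacity, and nic_util_percent is the
# larger of the two. Values below are illustrative.
link_speed_bytes_per_sec = 10e9 / 8        # 10 GbE expressed in bytes/sec
rx_bytes_per_sec = 312_500_000.0           # example nic_rx_bytes sample
tx_bytes_per_sec = 187_500_000.0           # example nic_tx_bytes sample

rx_pct = 100.0 * rx_bytes_per_sec / link_speed_bytes_per_sec
tx_pct = 100.0 * tx_bytes_per_sec / link_speed_bytes_per_sec
util_pct = max(rx_pct, tx_pct)
print(f"rx={rx_pct:.1f}% tx={tx_pct:.1f}% util={util_pct:.1f}%")  # rx=25.0% tx=15.0% util=25.0%
```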
node_avg_processor_busy¶
Average processor utilization across active processors in the system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
average_processor_busy_percentUnit: percent Type: percent Base: cpu_elapsed_time |
conf/restperf/9.12.0/system_node.yaml |
| KeyPerf | api/cluster/nodes |
statistics.processor_utilization_rawUnit: statistics.processor_utilization_base Type: percent Base: |
conf/keyperf/9.15.0/system_node.yaml |
| StatPerf | system:node |
avg_processor_busyUnit: percent Type: Base: cpu_elapsed_time |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
avg_processor_busyUnit: percent Type: percent Base: cpu_elapsed_time |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_avg_processor_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources Clusters by Average CPU Utilization |
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Average CPU Utilization |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | Average CPU Utilization |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | timeseries | Node Average CPU Utilization |
| ONTAP: Cluster | Throughput | timeseries | Average CPU Utilization |
| ONTAP: Datacenter | Performance | timeseries | Top $TopResources Average CPU Utilization by Cluster |
| ONTAP: MetroCluster | Highlights | gauge | Average CPU Utilization |
| ONTAP: Node | Highlights | bargauge | Average CPU Utilization |
| ONTAP: Node | CPU Layer | timeseries | Average CPU Utilization |
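This counter is of type percent with base cpu_elapsed_time, so the cooked value is derived from two consecutive raw samples: the delta of the busy counter divided by the delta of the elapsed-time base, scaled to a percentage. Below is a minimal sketch with illustrative numbers.

```python
# Minimal sketch of a percent-type counter with a base: delta(busy) / delta(base) * 100.
# Sample values are illustrative, not real counter readings.
busy_prev, busy_curr = 8_400_000, 9_000_000            # raw busy counter samples
elapsed_prev, elapsed_curr = 60_000_000, 62_000_000    # raw cpu_elapsed_time (microsec)

avg_processor_busy = 100.0 * (busy_curr - busy_prev) / (elapsed_curr - elapsed_prev)
print(f"{avg_processor_busy:.1f}%")  # 30.0%
```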
node_cifs_connections¶
Number of connections
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
connectionsUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
connectionsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
The node_cifs_connections metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CIFS Frontend | timeseries | CIFS Connections |
node_cifs_established_sessions¶
Number of established SMB and SMB2 sessions
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
established_sessionsUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
established_sessionsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
The node_cifs_established_sessions metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CIFS Frontend | timeseries | CIFS Connections |
node_cifs_latency¶
Average latency for CIFS operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
latencyUnit: microsec Type: average Base: latency_base |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_latencyUnit: microsec Type: average Base: cifs_latency_base |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
The node_cifs_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CIFS Frontend | stat | CIFS Latency |
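node_cifs_latency is an average-type counter with an operation-count base, so the cooked value is the latency delta divided by the operation delta between two samples. Below is a minimal sketch with illustrative numbers.

```python
# Minimal sketch of an average-type counter: delta(latency) / delta(base ops),
# yielding microseconds per CIFS operation. Sample values are illustrative.
latency_prev, latency_curr = 5_000_000, 5_180_000   # raw latency (microsec, cumulative)
ops_prev, ops_curr = 40_000, 40_600                 # raw latency_base (operation count)

avg_latency_us = (latency_curr - latency_prev) / (ops_curr - ops_prev)
print(f"{avg_latency_us:.1f} microseconds per op")  # 300.0 microseconds per op
```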
node_cifs_op_count¶
Array of select CIFS operation counts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
op_countUnit: none Type: rate Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_op_countUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
The node_cifs_op_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CIFS Frontend | timeseries | CIFS IOPs by Type |
node_cifs_open_files¶
Number of open files over SMB and SMB2
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
open_filesUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
open_filesUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
The node_cifs_open_files metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CIFS Frontend | timeseries | CIFS Connections |
node_cifs_ops¶
Number of CIFS operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
cifs_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
cifs_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
cifs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_cifs_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources CIFS IOPs by Cluster |
| ONTAP: Node | Backend | timeseries | Protocol Backend IOPs |
| ONTAP: Node | CIFS Frontend | stat | CIFS IOPs |
node_cifs_read_latency¶
Average latency for CIFS read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
average_read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_read_latencyUnit: microsec Type: average Base: cifs_read_ops |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
node_cifs_read_ops¶
Total number of CIFS read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
total_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
node_cifs_total_ops¶
Total number of CIFS operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
node_cifs_write_latency¶
Average latency for CIFS write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
average_write_latencyUnit: microsec Type: average Base: total_write_ops |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_write_latencyUnit: microsec Type: average Base: cifs_write_ops |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
node_cifs_write_ops¶
Total number of CIFS write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs:node |
total_write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_node.yaml |
| ZAPI | perf-object-get-instances cifs:node |
cifs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_node.yaml |
node_cpu_busy¶
System CPU resource utilization. Returns a computed percentage for the default CPU field; essentially a 'CPU usage summary' value that indicates how busy the system is based on the most heavily utilized domain. The idea is to determine how much CPU headroom remains before either a domain maxes out or all available idle CPU cycles are exhausted, whichever occurs first.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
cpu_busyUnit: percent Type: percent Base: cpu_elapsed_time |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
cpu_busyUnit: percent Type: Base: cpu_elapsed_time |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
cpu_busyUnit: percent Type: percent Base: cpu_elapsed_time |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_cpu_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources Clusters by CPU busy |
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by CPU busy |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | CPU busy |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | timeseries | Node CPU Busy |
| ONTAP: Cluster | Throughput | timeseries | CPU Busy |
| ONTAP: Datacenter | Performance | timeseries | Top $TopResources CPU Busy by Cluster |
| ONTAP: Node | Highlights | bargauge | CPU Busy |
| ONTAP: Node | Backend | timeseries | System Utilization |
node_cpu_busytime¶
The time (in hundredths of a second) that the CPU has been doing useful work since the last boot
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/node |
cpu_busy_time |
conf/rest/9.12.0/node.yaml |
| ZAPI | system-node-get-iter |
node-details-info.cpu-busytime |
conf/zapi/cdot/9.8.0/node.yaml |
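Because node_cpu_busytime is reported in hundredths of a second, divide by 100 to get seconds. Below is a one-line sketch with an illustrative value.

```python
# Minimal sketch: convert node_cpu_busytime (hundredths of a second) to seconds.
cpu_busytime_centiseconds = 123_456_789          # illustrative value
cpu_busy_seconds = cpu_busytime_centiseconds / 100
print(f"{cpu_busy_seconds:.0f} s (~{cpu_busy_seconds / 3600:.1f} h)")  # 1234568 s (~342.9 h)
```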
node_cpu_domain_busy¶
Array of processor time, in percent, spent in the various processor domains
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
domain_busyUnit: percent Type: percent Base: cpu_elapsed_time |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
domain_busyUnit: percent Type: array Base: cpu_elapsed_time |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
domain_busyUnit: percent Type: percent Base: cpu_elapsed_time |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_cpu_domain_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | CPU Layer | timeseries | CPU Busy Domains |
| ONTAP: Node | Backend | timeseries | System Utilization |
node_cpu_elapsed_time¶
Elapsed time since boot
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
cpu_elapsed_timeUnit: microsec Type: delta Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
cpu_elapsed_timeUnit: none Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
cpu_elapsed_timeUnit: none Type: delta,no-display Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_disk_busy¶
The utilization percent of the disk. node_disk_busy is disk_busy aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
disk_busy_percentUnit: percent Type: percent Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_busyUnit: percent Type: percent Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The node_disk_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Disk Utilization |
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | Avg Disk Utilization by Cluster |
| ONTAP: Node | Backend | timeseries | System Utilization |
node_disk_capacity¶
Disk capacity in MB. node_disk_capacity is disk_capacity aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
capacityUnit: mb Type: raw Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_capacityUnit: mb Type: raw Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. node_disk_cp_read_chain is disk_cp_read_chain aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. node_disk_cp_read_latency is disk_cp_read_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. node_disk_cp_reads is disk_cp_reads aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_data_read¶
Number of disk kilobytes (KB) read per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
disk_data_readUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
disk_data_readUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
disk_data_readUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_disk_data_read metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Disk Throughput by Node |
node_disk_data_written¶
Number of disk kilobytes (KB) written per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
disk_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
disk_data_writtenUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
disk_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_disk_data_written metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Disk Throughput by Node |
node_disk_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_io_pending is disk_io_pending aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_io_queued¶
Number of I/Os queued to the disk but not yet issued. node_disk_io_queued is disk_io_queued aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_busy¶
The utilization percent of the disk. node_disk_max_busy is the maximum of disk_busy for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
disk_busy_percentUnit: percent Type: percent Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_busyUnit: percent Type: percent Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The node_disk_max_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | bargauge | Max Disk Utilization by Cluster |
| ONTAP: Node | Highlights | bargauge | Max Disk Utilization |
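node_disk_max_busy takes the maximum of per-disk disk_busy for each node label, unlike the node_disk_* metrics above that are aggregated by node. A minimal Python sketch of that max roll-up over hypothetical per-disk samples:

```python
# Sketch of the "maximum of disk_busy for label node" roll-up behind
# node_disk_max_busy. The per-disk samples below are hypothetical.
from collections import defaultdict

# (node label, disk_busy percent) for each constituent disk
disk_busy_samples = [
    ("node-01", 35.0), ("node-01", 80.0),
    ("node-02", 20.0), ("node-02", 25.0),
]

node_disk_max_busy = defaultdict(float)
for node, busy in disk_busy_samples:
    node_disk_max_busy[node] = max(node_disk_max_busy[node], busy)

print(dict(node_disk_max_busy))  # {'node-01': 80.0, 'node-02': 25.0}
```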
node_disk_max_capacity¶
Disk capacity in MB. node_disk_max_capacity is the maximum of disk_capacity for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
capacityUnit: mb Type: raw Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_capacityUnit: mb Type: raw Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. node_disk_max_cp_read_chain is the maximum of disk_cp_read_chain for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. node_disk_max_cp_read_latency is the maximum of disk_cp_read_latency for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. node_disk_max_cp_reads is the maximum of disk_cp_reads for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. node_disk_max_io_pending is the maximum of disk_io_pending for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_io_queued¶
Number of I/Os queued to the disk but not yet issued. node_disk_max_io_queued is the maximum of disk_io_queued for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_total_data¶
Total throughput for user operations per second. node_disk_max_total_data is the maximum of disk_total_data for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_total_transfers¶
Total number of disk operations involving data transfer initiated per second. node_disk_max_total_transfers is the maximum of disk_total_transfers for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_transfer_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_transfersUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_read_blocks¶
Number of blocks transferred for user read operations per second. node_disk_max_user_read_blocks is the maximum of disk_user_read_blocks for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_read_chain¶
Average number of blocks transferred in each user read operation. node_disk_max_user_read_chain is the maximum of disk_user_read_chain for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_chainUnit: none Type: average Base: user_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_chainUnit: none Type: average Base: user_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_read_latency¶
Average latency per block in microseconds for user read operations. node_disk_max_user_read_latency is the maximum of disk_user_read_latency for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_max_user_reads is the maximum of disk_user_reads for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_write_blocks¶
Number of blocks transferred for user write operations per second. node_disk_max_user_write_blocks is the maximum of disk_user_write_blocks for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_write_chain¶
Average number of blocks transferred in each user write operation. node_disk_max_user_write_chain is the maximum of disk_user_write_chain for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_chainUnit: none Type: average Base: user_write_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_chainUnit: none Type: average Base: user_writes |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_write_latency¶
Average latency per block in microseconds for user write operations. node_disk_max_user_write_latency is the maximum of disk_user_write_latency for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_max_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_max_user_writes is the maximum of disk_user_writes for label node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_writesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_total_data¶
Total throughput for user operations per second. node_disk_total_data is disk_total_data aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_total_transfers¶
Total number of disk operations involving data transfer initiated per second. node_disk_total_transfers is disk_total_transfers aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_transfer_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_transfersUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_read_blocks¶
Number of blocks transferred for user read operations per second. node_disk_user_read_blocks is disk_user_read_blocks aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_read_chain¶
Average number of blocks transferred in each user read operation. node_disk_user_read_chain is disk_user_read_chain aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_chainUnit: none Type: average Base: user_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_chainUnit: none Type: average Base: user_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_read_latency¶
Average latency per block in microseconds for user read operations. node_disk_user_read_latency is disk_user_read_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. node_disk_user_reads is disk_user_reads aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_write_blocks¶
Number of blocks transferred for user write operations per second. node_disk_user_write_blocks is disk_user_write_blocks aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_write_chain¶
Average number of blocks transferred in each user write operation. node_disk_user_write_chain is disk_user_write_chain aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_chainUnit: none Type: average Base: user_write_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_chainUnit: none Type: average Base: user_writes |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_write_latency¶
Average latency per block in microseconds for user write operations. node_disk_user_write_latency is disk_user_write_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_disk_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. node_disk_user_writes is disk_user_writes aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_writesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
node_failed_fan¶
Number of chassis fans that are not operating within the recommended RPM range.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/nodes |
controller.failed_fan.count |
conf/rest/9.12.0/node.yaml |
| ZAPI | system-node-get-iter |
node-details-info.env-failed-fan-count |
conf/zapi/cdot/9.8.0/node.yaml |
The node_failed_fan metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Highlights | table | Node Details |
node_failed_power¶
Number of failed power supply units.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/nodes |
controller.failed_power_supply.count |
conf/rest/9.12.0/node.yaml |
| ZAPI | system-node-get-iter |
node-details-info.env-failed-power-supply-count |
conf/zapi/cdot/9.8.0/node.yaml |
The node_failed_power metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Highlights | table | Node Details |
node_fcp_data_recv¶
Number of FCP kilobytes (KB) received per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
fcp_data_receivedUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
fcp_data_recvUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
fcp_data_recvUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_fcp_data_sent¶
Number of FCP kilobytes (KB) sent per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
fcp_data_sentUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
fcp_data_sentUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
fcp_data_sentUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_fcp_ops¶
Number of FCP operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
fcp_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
fcp_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
fcp_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_fcp_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | Protocol Backend IOPs |
node_hdd_data_read¶
Number of HDD Disk kilobytes (KB) read per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
hdd_data_readUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
hdd_data_readUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
hdd_data_readUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_hdd_data_written¶
Number of HDD kilobytes (KB) written per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
hdd_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
hdd_data_writtenUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
hdd_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_iscsi_ops¶
Number of iSCSI operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
iscsi_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
iscsi_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
iscsi_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_iscsi_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | Protocol Backend IOPs |
node_labels¶
This metric provides information about Node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/nodes |
Harvest generated |
conf/rest/9.12.0/node.yaml |
| ZAPI | system-node-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/node.yaml |
The node_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | $Cluster |
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Datacenter | Health | table | Node Health |
| ONTAP: Datacenter | Power and Temperature | stat | Average Power/Used_TB |
| ONTAP: Health | HA | table | HA Issues |
| ONTAP: Health | Node | table | Node Issues |
| ONTAP: Node | Highlights | table | Node Details |
| ONTAP: Power | Highlights | stat | Average Power/Used_TB |
| ONTAP: Power | Nodes | table | Storage Nodes |
node_memory¶
Total memory in megabytes (MB)
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
memoryUnit: none Type: raw Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
memoryUnit: none Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
memoryUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_net_data_recv¶
Number of network kilobytes (KB) received per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
network_data_receivedUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
net_data_recvUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
net_data_recvUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_net_data_sent¶
Number of network kilobytes (KB) sent per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
network_data_sentUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
net_data_sentUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
net_data_sentUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_new_status¶
A value of 1 indicates the node is healthy (its state is true or up, meaning the node is operational); a value of 0 indicates any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/node.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/node.yaml |
The node_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Nodes & Subsystems - $Cluster | table | $Cluster |
| ONTAP: Datacenter | Health | table | Node Health |
| ONTAP: Node | Highlights | stat | Nodes |
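node_new_status is Harvest generated; per the description above it collapses node health into 1 (true or up) or 0 (any other state). A small Python sketch of that mapping, treating the set of healthy values as an assumption drawn from the description:

```python
# Sketch of the health-to-gauge mapping described for node_new_status:
# 1 when the reported health is "true" or "up", 0 for any other state.
# The accepted healthy values are assumed from the description above.

def node_new_status(health: str) -> int:
    return 1 if health.strip().lower() in {"true", "up"} else 0

for state in ("true", "up", "false", "down", "unknown"):
    print(state, "->", node_new_status(state))
# true -> 1, up -> 1, false -> 0, down -> 0, unknown -> 0
```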
node_nfs_access_avg_latency¶
Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_access_total¶
Total number of Access procedure requests. It is the total number of access success and access error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_backchannel_ctl_avg_latency¶
Average latency of BACKCHANNEL_CTL operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
backchannel_ctl.average_latencyUnit: microsec Type: average Base: backchannel_ctl.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
backchannel_ctl.average_latencyUnit: microsec Type: average Base: backchannel_ctl.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
backchannel_ctl_avg_latencyUnit: microsec Type: average,no-zero-values Base: backchannel_ctl_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
backchannel_ctl_avg_latencyUnit: microsec Type: average,no-zero-values Base: backchannel_ctl_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_backchannel_ctl_total¶
Total number of BACKCHANNEL_CTL operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
backchannel_ctl.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
backchannel_ctl.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
backchannel_ctl_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
backchannel_ctl_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_bind_conn_to_session_avg_latency¶
Average latency of BIND_CONN_TO_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
bind_connections_to_session.average_latencyUnit: microsec Type: average Base: bind_connections_to_session.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
bind_conn_to_session.average_latencyUnit: microsec Type: average Base: bind_conn_to_session.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
bind_conn_to_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: bind_conn_to_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
bind_conn_to_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: bind_conn_to_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_bind_conn_to_session_total¶
Total number of BIND_CONN_TO_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
bind_connections_to_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
bind_conn_to_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
bind_conn_to_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
bind_conn_to_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_close_avg_latency¶
Average latency of CLOSE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_close_total¶
Total number of CLOSE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_commit_avg_latency¶
Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_commit_total¶
Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_create_avg_latency¶
Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_create_session_avg_latency¶
Average latency of CREATE_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
create_session.average_latencyUnit: microsec Type: average Base: create_session.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
create_session.average_latencyUnit: microsec Type: average Base: create_session.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
create_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
create_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_create_session_total¶
Total number of CREATE_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
create_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
create_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
create_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
create_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_create_total¶
Total number of Create procedure requests. It is the total number of create success and create error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_delegpurge_avg_latency¶
Average latency of DELEGPURGE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_delegpurge_total¶
Total number of DELEGPURGE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_delegreturn_avg_latency¶
Average latency of DELEGRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_delegreturn_total¶
Total number of DELEGRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_destroy_clientid_avg_latency¶
Average latency of DESTROY_CLIENTID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
destroy_clientid.average_latencyUnit: microsec Type: average Base: destroy_clientid.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
destroy_clientid.average_latencyUnit: microsec Type: average Base: destroy_clientid.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
destroy_clientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_clientid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
destroy_clientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_clientid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_destroy_clientid_total¶
Total number of DESTROY_CLIENTID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
destroy_clientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
destroy_clientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
destroy_clientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
destroy_clientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_destroy_session_avg_latency¶
Average latency of DESTROY_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
destroy_session.average_latencyUnit: microsec Type: average Base: destroy_session.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
destroy_session.average_latencyUnit: microsec Type: average Base: destroy_session.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
destroy_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
destroy_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_destroy_session_total¶
Total number of DESTROY_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
destroy_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
destroy_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
destroy_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
destroy_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_exchange_id_avg_latency¶
Average latency of EXCHANGE_ID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
exchange_id.average_latencyUnit: microsec Type: average Base: exchange_id.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
exchange_id.average_latencyUnit: microsec Type: average Base: exchange_id.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
exchange_id_avg_latencyUnit: microsec Type: average,no-zero-values Base: exchange_id_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
exchange_id_avg_latencyUnit: microsec Type: average,no-zero-values Base: exchange_id_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_exchange_id_total¶
Total number of EXCHANGE_ID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
exchange_id.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
exchange_id.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
exchange_id_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
exchange_id_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_free_stateid_avg_latency¶
Average latency of FREE_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
free_stateid.average_latencyUnit: microsec Type: average Base: free_stateid.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
free_stateid.average_latencyUnit: microsec Type: average Base: free_stateid.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
free_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: free_stateid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
free_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: free_stateid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_free_stateid_total¶
Total number of FREE_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
free_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
free_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
free_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
free_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_fsinfo_avg_latency¶
Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
fsinfo.average_latencyUnit: microsec Type: average Base: fsinfo.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
fsinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: fsinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_fsinfo_total¶
Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
fsinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
fsinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_fsstat_avg_latency¶
Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
fsstat.average_latencyUnit: microsec Type: average Base: fsstat.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
fsstat_avg_latencyUnit: microsec Type: average,no-zero-values Base: fsstat_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_fsstat_total¶
Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
fsstat.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
fsstat_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_get_dir_delegation_avg_latency¶
Average latency of GET_DIR_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
get_dir_delegation.average_latencyUnit: microsec Type: average Base: get_dir_delegation.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
get_dir_delegation.average_latencyUnit: microsec Type: average Base: get_dir_delegation.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
get_dir_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: get_dir_delegation_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
get_dir_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: get_dir_delegation_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_get_dir_delegation_total¶
Total number of GET_DIR_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
get_dir_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
get_dir_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
get_dir_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
get_dir_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_getattr_avg_latency¶
Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_getattr_total¶
Total number of Getattr procedure requests. It is the total number of getattr success and getattr error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_getdeviceinfo_avg_latency¶
Average latency of GETDEVICEINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getdeviceinfo.average_latencyUnit: microsec Type: average Base: getdeviceinfo.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getdeviceinfo.average_latencyUnit: microsec Type: average Base: getdeviceinfo.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getdeviceinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdeviceinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getdeviceinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdeviceinfo_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_getdeviceinfo_total¶
Total number of GETDEVICEINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getdeviceinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getdeviceinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getdeviceinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getdeviceinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_getdevicelist_avg_latency¶
Average latency of GETDEVICELIST operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getdevicelist.average_latencyUnit: microsec Type: average Base: getdevicelist.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getdevicelist.average_latencyUnit: microsec Type: average Base: getdevicelist.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getdevicelist_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdevicelist_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getdevicelist_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdevicelist_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_getdevicelist_total¶
Total number of GETDEVICELIST operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getdevicelist.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getdevicelist.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getdevicelist_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getdevicelist_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_getfh_avg_latency¶
Average latency of GETFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_getfh_total¶
Total number of GETFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_latency¶
Average latency of NFSv3 requests. This counter keeps track of the average response time of NFSv3 requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | stat | NFSv3 Avg Latency |
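
Since this counter is collected with `Type: average` and `Base: total_ops`, the cooked value is the change in the raw latency counter divided by the change in its base counter over the same poll interval, yielding microseconds per operation. The sketch below illustrates that arithmetic; the function name and the sample values are hypothetical.

```python
# Minimal sketch of cooking an "average" counter that has a base counter,
# e.g. the raw latency counter (microsec) with base total_ops.
# All sample values are hypothetical.

def cook_average(raw_prev: float, raw_curr: float,
                 base_prev: float, base_curr: float) -> float:
    """Average per-operation value between two consecutive raw samples."""
    ops = base_curr - base_prev
    if ops == 0:
        return 0.0  # no operations in the interval, so nothing to average
    return (raw_curr - raw_prev) / ops

# The latency counter grew by 450_000 microsec while total_ops grew by 1_500.
print(cook_average(10_000_000, 10_450_000, 20_000, 21_500))  # -> 300.0 microsec/op
```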
node_nfs_layoutcommit_avg_latency¶
Average latency of LAYOUTCOMMIT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutcommit.average_latencyUnit: microsec Type: average Base: layoutcommit.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutcommit.average_latencyUnit: microsec Type: average Base: layoutcommit.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutcommit_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutcommit_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutcommit_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutcommit_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_layoutcommit_total¶
Total number of LAYOUTCOMMIT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutcommit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutcommit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutcommit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutcommit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_layoutget_avg_latency¶
Average latency of LAYOUTGET operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutget.average_latencyUnit: microsec Type: average Base: layoutget.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutget.average_latencyUnit: microsec Type: average Base: layoutget.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutget_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutget_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutget_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutget_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_layoutget_total¶
Total number of LAYOUTGET operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutget.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutget.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutget_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutget_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_layoutreturn_avg_latency¶
Average latency of LAYOUTRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutreturn.average_latencyUnit: microsec Type: average Base: layoutreturn.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutreturn.average_latencyUnit: microsec Type: average Base: layoutreturn.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutreturn_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_layoutreturn_total¶
Total number of LAYOUTRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
layoutreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
layoutreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
layoutreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
layoutreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_link_avg_latency¶
Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_link_total¶
Total number of Link procedure requests. It is the total number of Link success and Link error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lock_avg_latency¶
Average latency of LOCK operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lock_total¶
Total number of LOCK operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lockt_avg_latency¶
Average latency of LOCKT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lockt_total¶
Total number of LOCKT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_locku_avg_latency¶
Average latency of LOCKU operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_locku_total¶
Total number of LOCKU operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lookup_avg_latency¶
Average latency of LookUp procedure requests. This shows the average time it takes for the LookUp operation to reply to the request.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lookup_total¶
Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lookupp_avg_latency¶
Average latency of LOOKUPP operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_lookupp_total¶
Total number of LOOKUPP operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_mkdir_avg_latency¶
Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
mkdir.average_latencyUnit: microsec Type: average Base: mkdir.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
mkdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: mkdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_mkdir_total¶
Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
mkdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
mkdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_mknod_avg_latency¶
Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
mknod.average_latencyUnit: microsec Type: average Base: mknod.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
mknod_avg_latencyUnit: microsec Type: average,no-zero-values Base: mknod_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_mknod_total¶
Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
mknod.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
mknod_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_null_avg_latency¶
Average latency of Null procedure requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_null_total¶
Total number of Null procedure requests. It is the total of null success and null error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_nverify_avg_latency¶
Average latency of NVERIFY operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_nverify_total¶
Total number of NVERIFY operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_avg_latency¶
Average latency of OPEN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_confirm_avg_latency¶
Average latency of OPEN_CONFIRM procedures.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open_confirm.average_latencyUnit: microsec Type: average Base: open_confirm.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_confirm_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_confirm_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_confirm_total¶
Total number of OPEN_CONFIRM procedures.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open_confirm.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_confirm_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_downgrade_avg_latency¶
Average latency of OPEN_DOWNGRADE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_downgrade_total¶
Total number of OPEN_DOWNGRADE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_open_total¶
Total number of OPEN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_openattr_avg_latency¶
Average latency of OPENATTR operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_openattr_total¶
Total number of OPENATTR operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_ops¶
Number of NFS operations per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nfs_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nfs_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nfs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_nfs_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources NFS IOPs by Cluster |
| ONTAP: Node | Backend | timeseries | Protocol Backend IOPs |
| ONTAP: Node | NFSv3 Frontend | table | NFS Avg IOPS |
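
The panels above read the exported `node_nfs_ops` series back from Prometheus. A minimal sketch of the same instant query a stat panel would issue follows; the Prometheus URL and the `node` label name are assumptions, so adjust them for your environment.

```python
# Minimal sketch: fetch the current node_nfs_ops values from Prometheus, the
# same way a stat panel would. The Prometheus URL and the "node" label name
# are assumptions; adjust them for your environment.
import requests

PROM = "http://localhost:9090"  # hypothetical Prometheus address
resp = requests.get(f"{PROM}/api/v1/query", params={"query": "node_nfs_ops"})
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    node = series["metric"].get("node", "?")
    _, value = series["value"]  # [timestamp, value-as-string]
    print(node, value)
```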
node_nfs_pathconf_avg_latency¶
Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
pathconf.average_latencyUnit: microsec Type: average Base: pathconf.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
pathconf_avg_latencyUnit: microsec Type: average,no-zero-values Base: pathconf_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_pathconf_total¶
Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
pathconf.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
pathconf_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_putfh_avg_latency¶
Average latency of PUTFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putfh.average_latencyUnit: none Type: delta Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putfh.average_latencyUnit: microsec Type: average Base: putfh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putfh.average_latencyUnit: microsec Type: average Base: putfh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_putfh_total¶
Total number of PUTFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_putpubfh_avg_latency¶
Average latency of PUTPUBFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_putpubfh_total¶
Total number of PUTPUBFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_putrootfh_avg_latency¶
Average latency of PUTROOTFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_putrootfh_total¶
Total number of PUTROOTFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_read_avg_latency¶
Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_read_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | stat | NFSv3 Avg Read Latency |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write Latency |
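
The timeseries panel above plots `node_nfs_read_avg_latency` over a time window. A minimal sketch of an equivalent range query against Prometheus follows; the Prometheus URL, the one-hour window, the 30-second step, and the `node` label name are assumptions.

```python
# Minimal sketch: pull node_nfs_read_avg_latency as a time series, similar to
# the "NFSv3 Read and Write Latency" panel. The Prometheus URL, the one-hour
# window, the 30-second step, and the "node" label name are assumptions.
import time
import requests

PROM = "http://localhost:9090"  # hypothetical Prometheus address
end = time.time()
params = {
    "query": "node_nfs_read_avg_latency",
    "start": end - 3600,  # last hour
    "end": end,
    "step": "30",         # 30-second resolution
}
resp = requests.get(f"{PROM}/api/v1/query_range", params=params)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    print(series["metric"].get("node", "?"), len(series["values"]), "samples")
```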
node_nfs_read_ops¶
Total observed NFSv3 read operations per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_read_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
The node_nfs_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | table | NFS Avg IOPS |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write IOPs |
node_nfs_read_symlink_avg_latency¶
Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read_symlink.average_latencyUnit: microsec Type: average Base: read_symlink.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
read_symlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_symlink_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_read_symlink_total¶
Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read_symlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
read_symlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_read_throughput¶
Rate of NFSv3 read data transfers per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
nfs41_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
nfs42_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
nfs4_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_read_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | table | NFSv3 Avg Throughput |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write Throughput |
node_nfs_read_total¶
Total number of Read procedure requests. It is the total number of read success and read error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_readdir_avg_latency¶
Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_readdir_total¶
Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_readdirplus_avg_latency¶
Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
readdirplus.average_latencyUnit: microsec Type: average Base: readdirplus.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
readdirplus_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdirplus_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_readdirplus_total¶
Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
readdirplus.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
readdirplus_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_readlink_avg_latency¶
Average latency of READLINK operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_readlink_total¶
Total number of READLINK operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_reclaim_complete_avg_latency¶
Average latency of RECLAIM_COMPLETE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
reclaim_complete.average_latencyUnit: microsec Type: average Base: reclaim_complete.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
reclaim_complete.average_latencyUnit: microsec Type: average Base: reclaim_complete.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
reclaim_complete_avg_latencyUnit: microsec Type: average,no-zero-values Base: reclaim_complete_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
reclaim_complete_avg_latencyUnit: microsec Type: average,no-zero-values Base: reclaim_complete_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_reclaim_complete_total¶
Total number of RECLAIM_COMPLETE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
reclaim_complete.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
reclaim_complete.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
reclaim_complete_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
reclaim_complete_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_release_lock_owner_avg_latency¶
Average latency of RELEASE_LOCKOWNER procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
release_lock_owner.average_latencyUnit: microsec Type: average Base: release_lock_owner.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
release_lock_owner_avg_latencyUnit: microsec Type: average,no-zero-values Base: release_lock_owner_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_release_lock_owner_total¶
Total number of RELEASE_LOCKOWNER procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
release_lock_owner.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
release_lock_owner_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_remove_avg_latency¶
Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_remove_total¶
Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_rename_avg_latency¶
Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_rename_total¶
Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_renew_avg_latency¶
Average latency of RENEW procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
renew.average_latencyUnit: microsec Type: average Base: renew.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
renew_avg_latencyUnit: microsec Type: average,no-zero-values Base: renew_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_renew_total¶
Total number of RENEW procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
renew.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
renew_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_restorefh_avg_latency¶
Average latency of RESTOREFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_restorefh_total¶
Total number of RESTOREFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_rmdir_avg_latency¶
Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
rmdir.average_latencyUnit: microsec Type: average Base: rmdir.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
rmdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: rmdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_rmdir_total¶
Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
rmdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
rmdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_savefh_avg_latency¶
Average latency of SAVEFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_savefh_total¶
Total number of SAVEFH operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_secinfo_avg_latency¶
Average latency of SECINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_secinfo_no_name_avg_latency¶
Average latency of SECINFO_NO_NAME operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
secinfo_no_name.average_latencyUnit: microsec Type: average Base: secinfo_no_name.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
secinfo_no_name.average_latencyUnit: microsec Type: average Base: secinfo_no_name.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
secinfo_no_name_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_no_name_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
secinfo_no_name_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_no_name_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_secinfo_no_name_total¶
Total number of SECINFO_NO_NAME operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
secinfo_no_name.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
secinfo_no_name.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
secinfo_no_name_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
secinfo_no_name_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_secinfo_total¶
Total number of SECINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_sequence_avg_latency¶
Average latency of SEQUENCE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
sequence.average_latencyUnit: microsec Type: average Base: sequence.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
sequence.average_latencyUnit: microsec Type: average Base: sequence.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
sequence_avg_latencyUnit: microsec Type: average,no-zero-values Base: sequence_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
sequence_avg_latencyUnit: microsec Type: average,no-zero-values Base: sequence_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_sequence_total¶
Total number of SEQUENCE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
sequence.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
sequence.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
sequence_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
sequence_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_set_ssv_avg_latency¶
Average latency of SET_SSV operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
set_ssv.average_latencyUnit: microsec Type: average Base: set_ssv.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
set_ssv.average_latencyUnit: microsec Type: average Base: set_ssv.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
set_ssv_avg_latencyUnit: microsec Type: average,no-zero-values Base: set_ssv_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
set_ssv_avg_latencyUnit: microsec Type: average,no-zero-values Base: set_ssv_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_set_ssv_total¶
Total number of SET_SSV operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
set_ssv.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
set_ssv.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
set_ssv_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
set_ssv_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_setattr_avg_latency¶
Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_setattr_total¶
Total number of SetAttr procedure requests. It is the total number of SetAttr success and SetAttr error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_setclientid_avg_latency¶
Average latency of SETCLIENTID procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setclientid.average_latencyUnit: microsec Type: average Base: setclientid.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setclientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: setclientid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_setclientid_confirm_avg_latency¶
Average latency of SETCLIENTID_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setclientid_confirm.average_latencyUnit: microsec Type: average Base: setclientid_confirm.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setclientid_confirm_avg_latencyUnit: microsec Type: average,no-zero-values Base: setclientid_confirm_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_setclientid_confirm_total¶
Total number of SETCLIENTID_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setclientid_confirm.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setclientid_confirm_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_setclientid_total¶
Total number of SETCLIENTID procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
setclientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
setclientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_symlink_avg_latency¶
Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
symlink.average_latencyUnit: microsec Type: average Base: symlink.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
symlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: symlink_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_symlink_total¶
Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
symlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
symlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
node_nfs_test_stateid_avg_latency¶
Average latency of TEST_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
test_stateid.average_latencyUnit: microsec Type: average Base: test_stateid.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
test_stateid.average_latencyUnit: microsec Type: average Base: test_stateid.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
test_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: test_stateid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
test_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: test_stateid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_test_stateid_total¶
Total number of TEST_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
test_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
test_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
test_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
test_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_throughput¶
Rate of NFSv3 data transfers per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
nfs41_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
nfs42_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
nfs4_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | table | NFSv3 Avg Throughput |
node_nfs_total_ops¶
Total number of NFSv3 procedure requests per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write IOPs |
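Counters of type `rate`, such as the `total_ops` counters above, are cooked differently: the delta between two consecutive raw samples is divided by the elapsed time between the polls. A minimal sketch, with invented sample values:

```python
# Illustrative only: cook a "rate"-type counter (e.g. total_ops, unit per_sec)
# from two consecutive raw samples and their timestamps.

def cook_rate(prev_raw: int, curr_raw: int, prev_ts: float, curr_ts: float) -> float:
    """Return delta(raw counter) / delta(seconds), or 0.0 if no time elapsed."""
    elapsed = curr_ts - prev_ts
    if elapsed <= 0:
        return 0.0
    return (curr_raw - prev_raw) / elapsed

# Two polls, 60 seconds apart (hypothetical values):
ops_per_sec = cook_rate(prev_raw=9_000_000, curr_raw=9_120_000, prev_ts=0.0, curr_ts=60.0)
print(f"total_ops: {ops_per_sec:.0f} per_sec")  # -> 2000 per_sec
```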
node_nfs_verify_avg_latency¶
Average latency of VERIFY operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_verify_total¶
Total number of VERIFY operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nfs_want_delegation_avg_latency¶
Average latency of WANT_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
want_delegation.average_latencyUnit: microsec Type: average Base: want_delegation.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
want_delegation.average_latencyUnit: microsec Type: average Base: want_delegation.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
want_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: want_delegation_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
want_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: want_delegation_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_want_delegation_total¶
Total number of WANT_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
want_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
want_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
want_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
want_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
node_nfs_write_avg_latency¶
Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_write_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | stat | NFSv3 Avg Write Latency |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write Latency |
node_nfs_write_ops¶
Total observed NFSv3 write operations per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_write_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
The node_nfs_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | table | NFS Avg IOPS |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write IOPs |
node_nfs_write_throughput¶
Rate of NFSv3 write data transfers per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
nfsv3_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
nfs41_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
nfs42_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
nfs4_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
The node_nfs_write_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NFSv3 Frontend | table | NFSv3 Avg Throughput |
| ONTAP: Node | NFSv3 Frontend | timeseries | NFSv3 Read and Write Throughput |
node_nfs_write_total¶
Total number of Write procedure requests. It is the total number of Write success and Write error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3:node |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41:node |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42:node |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2_node.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4:node |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_node.yaml |
| ZAPI | perf-object-get-instances nfsv3:node |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_1:node |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1_node.yaml |
| ZAPI | perf-object-get-instances nfsv4_2:node |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2_node.yaml |
| ZAPI | perf-object-get-instances nfsv4:node |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_node.yaml |
node_nvme_fc_data_recv¶
NVMe/FC kilobytes (KB) received per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_data_receivedUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_data_recvUnit: kb_per_sec Type: Base: |
conf/statperf/9.15.1/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvme_fc_data_recvUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.15.1/system_node.yaml |
node_nvme_fc_data_sent¶
NVMe/FC kilobytes (KB) sent per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_data_sentUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_data_sentUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvme_fc_data_sentUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.15.1/system_node.yaml |
node_nvme_fc_ops¶
NVMe/FC operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvme_fc_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.15.1/system_node.yaml |
node_nvmf_data_recv¶
NVMe/FC kilobytes (KB) received per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_data_received, 1Unit: Type: Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_data_recv, 1Unit: Type: Base: |
conf/statperf/9.15.1/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvmf_data_recvUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_nvmf_data_sent¶
NVMe/FC kilobytes (KB) sent per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_data_sent, 1Unit: Type: Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_data_sent, 1Unit: Type: Base: |
conf/statperf/9.15.1/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvmf_data_sentUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_nvmf_ops¶
NVMe/FC operations per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
nvme_fc_ops, 1Unit: Type: Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
nvme_fc_ops, 1Unit: Type: Base: |
conf/statperf/9.15.1/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
nvmf_opsUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_nvmf_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | Protocol Backend IOPs |
node_other_data¶
Other throughput
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
other_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
other_dataUnit: b_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
other_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_other_latency¶
Average latency for all other operations in the system in microseconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
other_latencyUnit: microsec Type: Base: other_ops |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_other_ops¶
All other operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
other_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_read_data¶
Read throughput
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
read_dataUnit: b_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_read_latency¶
Average latency for all read operations in the system in microseconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
read_latencyUnit: microsec Type: Base: read_ops |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_read_ops¶
Read operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
read_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_ssd_data_read¶
Number of SSD Disk kilobytes (KB) read per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
ssd_data_readUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
ssd_data_readUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
ssd_data_readUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_ssd_data_written¶
Number of SSD Disk kilobytes (KB) written per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
ssd_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
ssd_data_writtenUnit: kb_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
ssd_data_writtenUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_total_data¶
Represents the total data throughput in bytes for a node, as reported by ONTAP.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
total_dataUnit: b_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_total_latency¶
Average latency for all operations in the system in microseconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
total_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
total_latencyUnit: microsec Type: Base: total_ops |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
total_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_total_ops¶
Total number of operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
total_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
The node_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Average IOPs/Watt |
| ONTAP: Power | Highlights | stat | Average IOPs/Watt |
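The "Average IOPs/Watt" panels divide a node's operation rate by its measured power draw. The toy sketch below shows only that ratio; both inputs are invented sample values, and the power figure would come from a separate power metric that is not part of the counters listed in this section.

```python
# Illustrative only: the "IOPs/Watt" style ratio shown on the Datacenter and Power dashboards.
# Both inputs are invented sample values.
total_ops_per_sec = 52_000.0   # hypothetical node_total_ops sample
power_watts = 650.0            # hypothetical node power draw (from a separate power metric)

iops_per_watt = total_ops_per_sec / power_watts if power_watts > 0 else 0.0
print(f"Average IOPs/Watt: {iops_per_watt:.1f}")  # -> 80.0
```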
node_uptime¶
The total time, in seconds, that the node has been up.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/nodes |
uptime |
conf/rest/9.12.0/node.yaml |
| ZAPI | system-node-get-iter |
node-details-info.node-uptime |
conf/zapi/cdot/9.8.0/node.yaml |
The node_uptime metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Highlights | table | Node Details |
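The `node_uptime` value comes from the `uptime` field of the `api/cluster/nodes` REST endpoint listed above. The sketch below shows one way to query that field directly for a quick sanity check; the cluster address and credentials are placeholders, and certificate verification is disabled only for the example.

```python
# Illustrative only: fetch the per-node uptime field from the ONTAP REST API.
# Replace the address and credentials with values for your cluster.
import requests

CLUSTER = "https://cluster.example.com"  # placeholder

resp = requests.get(
    f"{CLUSTER}/api/cluster/nodes",
    params={"fields": "uptime"},
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # example only; verify certificates in practice
)
resp.raise_for_status()

for node in resp.json().get("records", []):
    name = node.get("name", node.get("uuid", "?"))
    print(f"{name}: {node.get('uptime', 'n/a')} seconds")
```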
node_vol_cifs_other_latency¶
Average time for the WAFL filesystem to process other CIFS operations to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.other_latencyUnit: microsec Type: average Base: cifs.other_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_other_latencyUnit: microsec Type: average Base: cifs_other_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_other_ops¶
Number of other CIFS operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_read_data¶
Bytes read per second via CIFS
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_read_latency¶
Average time for the WAFL filesystem to process CIFS read requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.read_latencyUnit: microsec Type: average Base: cifs.read_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_read_latencyUnit: microsec Type: average Base: cifs_read_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_read_ops¶
Number of CIFS read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_write_data¶
Bytes written per second via CIFS
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_write_latency¶
Average time for the WAFL filesystem to process CIFS write requests to the volume; not including CIFS protocol request processing or network communication time which will also be included in client observed CIFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.write_latencyUnit: microsec Type: average Base: cifs.write_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_write_latencyUnit: microsec Type: average Base: cifs_write_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_cifs_write_ops¶
Number of CIFS write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
cifs.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
cifs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_other_latency¶
Average time for the WAFL filesystem to process other FCP protocol operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.other_latencyUnit: microsec Type: average Base: fcp.other_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_other_latencyUnit: microsec Type: average Base: fcp_other_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_other_ops¶
Number of other block protocol operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_read_data¶
Bytes read per second via block protocol
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_read_latency¶
Average time for the WAFL filesystem to process FCP protocol read operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.read_latencyUnit: microsec Type: average Base: fcp.read_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_read_latencyUnit: microsec Type: average Base: fcp_read_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_read_ops¶
Number of block protocol read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_write_data¶
Bytes written per second via block protocol
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_write_latency¶
Average time for the WAFL filesystem to process FCP protocol write operations to the volume; not including FCP protocol request processing or network communication time which will also be included in client observed FCP protocol request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.write_latencyUnit: microsec Type: average Base: fcp.write_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_write_latencyUnit: microsec Type: average Base: fcp_write_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_fcp_write_ops¶
Number of block protocol write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
fcp.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
fcp_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_other_latency¶
Average time for the WAFL filesystem to process other iSCSI protocol operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.other_latencyUnit: microsec Type: average Base: iscsi.other_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_other_latencyUnit: microsec Type: average Base: iscsi_other_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_other_ops¶
Number of other block protocol operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_read_data¶
Bytes read per second via block protocol
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_read_latency¶
Average time for the WAFL filesystem to process iSCSI protocol read operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI protocol request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.read_latencyUnit: microsec Type: average Base: iscsi.read_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_read_latencyUnit: microsec Type: average Base: iscsi_read_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_read_ops¶
Number of block protocol read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_write_data¶
Bytes written per second via block protocol
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_write_latency¶
Average time for the WAFL filesystem to process iSCSI protocol write operations to the volume; not including iSCSI protocol request processing or network communication time which will also be included in client observed iSCSI request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.write_latencyUnit: microsec Type: average Base: iscsi.write_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_write_latencyUnit: microsec Type: average Base: iscsi_write_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_iscsi_write_ops¶
Number of block protocol write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
iscsi.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
iscsi_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_other_latency¶
Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.other_latencyUnit: microsec Type: average Base: nfs.other_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_other_latencyUnit: microsec Type: average Base: nfs_other_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_other_ops¶
Number of other NFS operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_read_data¶
Bytes read per second via NFS
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_read_latency¶
Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.read_latencyUnit: microsec Type: average Base: nfs.read_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_read_latencyUnit: microsec Type: average Base: nfs_read_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_read_ops¶
Number of NFS read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_write_data¶
Bytes written per second via NFS
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_write_latency¶
Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.write_latencyUnit: microsec Type: average Base: nfs.write_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_write_latencyUnit: microsec Type: average Base: nfs_write_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_nfs_write_ops¶
Number of NFS write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
nfs.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
nfs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_read_latency¶
Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
node_vol_write_latency¶
Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:node |
write_latencyUnit: microsec Type: average Base: total_write_ops |
conf/restperf/9.12.0/volume_node.yaml |
| ZAPI | perf-object-get-instances volume:node |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/volume_node.yaml |
The node_vol_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | Write Latency by Node |
node_volume_avg_latency¶
Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time. node_volume_avg_latency is volume_avg_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.totalUnit: microsec Type: average Base: volume_statistics.iops_raw.total |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources Clusters by Max Node Latency |
| ONTAP: Cluster | Highlights | timeseries | Top $TopResources Nodes by Latency |
| ONTAP: Node | Highlights | stat | Average Latency |
| ONTAP: Node | Highlights | timeseries | Latency |
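Since node_volume_avg_latency is volume_avg_latency rolled up per node, one way to picture the aggregation is an ops-weighted average of the per-volume latencies, so busy volumes dominate the node figure. The sketch below is illustrative only (Harvest performs the aggregation itself; the field names are assumptions):

```python
# Illustrative only. Ops-weighted average of per-volume latencies, which is the
# node-level figure you get when the raw latency and ops counters are summed per
# node before cooking. Field names and values are invented.
def node_avg_latency(volumes, node):
    rows = [v for v in volumes if v["node"] == node]
    total_ops = sum(v["ops"] for v in rows)
    if total_ops == 0:
        return 0.0
    return sum(v["latency"] * v["ops"] for v in rows) / total_ops


volumes = [
    {"node": "node-01", "volume": "vol1", "latency": 150.0, "ops": 900},
    {"node": "node-01", "volume": "vol2", "latency": 450.0, "ops": 100},
]
print(node_avg_latency(volumes, "node-01"))  # 180.0
```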
node_volume_nfs_access_latency¶
Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_access_latency is volume_nfs_access_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.access_latencyUnit: microsec Type: average Base: nfs.access_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_access_latencyUnit: microsec Type: average Base: nfs_access_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_access_ops¶
Number of NFS accesses per second to the volume. node_volume_nfs_access_ops is volume_nfs_access_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.access_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_access_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_getattr_latency¶
Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_getattr_latency is volume_nfs_getattr_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.getattr_latencyUnit: microsec Type: average Base: nfs.getattr_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_getattr_latencyUnit: microsec Type: average Base: nfs_getattr_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_getattr_ops¶
Number of NFS getattr per second to the volume. node_volume_nfs_getattr_ops is volume_nfs_getattr_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.getattr_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_getattr_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_lookup_latency¶
Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_lookup_latency is volume_nfs_lookup_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.lookup_latencyUnit: microsec Type: average Base: nfs.lookup_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_lookup_latencyUnit: microsec Type: average Base: nfs_lookup_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_lookup_ops¶
Number of NFS lookups per second to the volume. node_volume_nfs_lookup_ops is volume_nfs_lookup_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.lookup_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_lookup_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_other_latency¶
Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_other_latency is volume_nfs_other_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.other_latencyUnit: microsec Type: average Base: nfs.other_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_other_latencyUnit: microsec Type: average Base: nfs_other_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_other_ops¶
Number of other NFS operations per second to the volume. node_volume_nfs_other_ops is volume_nfs_other_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_punch_hole_latency¶
Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume. node_volume_nfs_punch_hole_latency is volume_nfs_punch_hole_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.punch_hole_latencyUnit: microsec Type: average Base: nfs.punch_hole_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_punch_hole_latencyUnit: microsec Type: average Base: nfs_punch_hole_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_punch_hole_ops¶
Number of NFS hole-punch requests per second to the volume. node_volume_nfs_punch_hole_ops is volume_nfs_punch_hole_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.punch_hole_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_punch_hole_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_read_latency¶
Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency. node_volume_nfs_read_latency is volume_nfs_read_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.read_latencyUnit: microsec Type: average Base: nfs.read_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_read_latencyUnit: microsec Type: average Base: nfs_read_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_read_ops¶
Number of NFS read operations per second from the volume. node_volume_nfs_read_ops is volume_nfs_read_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_setattr_latency¶
Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume. node_volume_nfs_setattr_latency is volume_nfs_setattr_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.setattr_latencyUnit: microsec Type: average Base: nfs.setattr_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_setattr_latencyUnit: microsec Type: average Base: nfs_setattr_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_setattr_ops¶
Number of NFS setattr requests per second to the volume. node_volume_nfs_setattr_ops is volume_nfs_setattr_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.setattr_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_setattr_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_total_ops¶
Number of total NFS operations per second to the volume. node_volume_nfs_total_ops is volume_nfs_total_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_write_latency¶
Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency. node_volume_nfs_write_latency is volume_nfs_write_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.write_latencyUnit: microsec Type: average Base: nfs.write_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_write_latencyUnit: microsec Type: average Base: nfs_write_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_nfs_write_ops¶
Number of NFS write operations per second to the volume. node_volume_nfs_write_ops is volume_nfs_write_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
node_volume_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. node_volume_other_data is volume_other_data aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
node_volume_other_latency¶
Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time. node_volume_other_latency is volume_other_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
other_latencyUnit: microsec Type: average Base: total_other_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.otherUnit: microsec Type: average Base: volume_statistics.iops_raw.other |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | Average Latency |
node_volume_other_ops¶
Number of other operations per second to the volume. node_volume_other_ops is volume_other_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | IOPs |
node_volume_read_data¶
Bytes read per second. node_volume_read_data is volume_read_data aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
bytes_readUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Throughput |
| ONTAP: Cluster | Highlights | timeseries | Top $TopResources Nodes by Throughput |
| ONTAP: Node | Highlights | timeseries | Throughput |
| ONTAP: Node | Backend | timeseries | Throughput |
node_volume_read_latency¶
Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time. node_volume_read_latency is volume_read_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.readUnit: microsec Type: average Base: volume_statistics.iops_raw.read |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Read Latency |
| ONTAP: Node | Backend | timeseries | Average Latency |
node_volume_read_ops¶
Number of read operations per second from the volume. node_volume_read_ops is volume_read_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | IOPs |
node_volume_total_data¶
Represents the aggregated total data throughput in bytes per second across all volumes on a node. Harvest calculates this metric by summing the data read from and written to each volume on the node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_total_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Highlights | stat | Throughput |
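As described above, node_volume_total_data sums the read and write throughput of every volume on the node. A minimal sketch of that arithmetic (field names and values are invented):

```python
# Illustrative only. node_volume_total_data = sum over the node's volumes of
# (bytes read/sec + bytes written/sec). Field names and values are invented.
def node_total_data(volumes, node):
    return sum(v["read_data"] + v["write_data"]
               for v in volumes if v["node"] == node)


volumes = [
    {"node": "node-01", "read_data": 50_000_000, "write_data": 20_000_000},
    {"node": "node-01", "read_data": 10_000_000, "write_data": 5_000_000},
]
print(node_total_data(volumes, "node-01"))  # 85000000 (bytes per second)
```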
node_volume_total_ops¶
Number of operations per second serviced by the volume. node_volume_total_ops is volume_total_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by IOPs |
| ONTAP: Cluster | Highlights | timeseries | Top $TopResources Nodes by IOPs |
| ONTAP: Node | Highlights | stat | IOPs |
| ONTAP: Node | Highlights | timeseries | Top Average IOPs |
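The "Top $TopResources Nodes by IOPs" panels listed above reduce to a top-N selection over the per-node series. A rough sketch of that selection (names and values are invented):

```python
# Illustrative only. Pick the n busiest nodes from a {node_name: iops} mapping.
def top_n(series, n):
    return sorted(series.items(), key=lambda kv: kv[1], reverse=True)[:n]


print(top_n({"node-01": 12_500, "node-02": 8_300, "node-03": 20_100}, 2))
# [('node-03', 20100), ('node-01', 12500)]
```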
node_volume_write_data¶
Bytes written per second. node_volume_write_data is volume_write_data aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
bytes_writtenUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Throughput |
| ONTAP: Cluster | Highlights | timeseries | Top $TopResources Nodes by Throughput |
| ONTAP: Node | Highlights | timeseries | Throughput |
| ONTAP: Node | Backend | timeseries | Throughput |
node_volume_write_latency¶
Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time. node_volume_write_latency is volume_write_latency aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
write_latencyUnit: microsec Type: average Base: total_write_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.writeUnit: microsec Type: average Base: volume_statistics.iops_raw.write |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Highlights | table | Top $TopResources Nodes by Write Latency |
| ONTAP: Node | Backend | timeseries | Average Latency |
node_volume_write_ops¶
Number of write operations per second to the volume. node_volume_write_ops is volume_write_ops aggregated by node.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The node_volume_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | IOPs |
node_write_data¶
Write throughput
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
write_dataUnit: b_per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_write_latency¶
Average latency for all write operations in the system in microseconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
write_latencyUnit: microsec Type: Base: write_ops |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
node_write_ops¶
Write operations per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/system:node |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/system_node.yaml |
| StatPerf | system:node |
write_opsUnit: per_sec Type: Base: |
conf/statperf/9.8.0/system_node.yaml |
| ZAPI | perf-object-get-instances system:node |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/system_node.yaml |
ntpserver_labels¶
This metric provides information about NtpServer
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/ntp/servers |
Harvest generated |
conf/rest/9.12.0/ntpserver.yaml |
| ZAPI | ntp-server-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/ntpserver.yaml |
The ntpserver_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
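Like the other *_labels metrics, ntpserver_labels carries its information in Prometheus labels and exports a constant value. The snippet below only illustrates the shape of such a series; the label names are assumptions, not the exact set Harvest emits:

```python
# Illustrative only. A *_labels series keeps its information in the labels and
# exports a constant value of 1. Label names below are assumptions.
sample = {
    "datacenter": "dc-01",
    "cluster": "cluster-01",
    "server": "time.example.com",
}
labels = ",".join(f'{k}="{v}"' for k, v in sample.items())
print(f"ntpserver_labels{{{labels}}} 1")
# ntpserver_labels{datacenter="dc-01",cluster="cluster-01",server="time.example.com"} 1
```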
nvme_lif_avg_latency¶
Average latency for NVMF operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NVMe/FC Frontend | stat | NVMe/FC Latency |
| ONTAP: Node | NVMe/FC Frontend | timeseries | NVMe/FC Average Latency by Port / LIF |
| ONTAP: SVM | NVMe/FC | stat | SVM NVMe/FC Average Latency |
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Average Latency |
nvme_lif_avg_other_latency¶
Average latency for operations other than read, write, compare or compare-and-write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_avg_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Average Latency |
nvme_lif_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_avg_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Average Latency |
nvme_lif_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_avg_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Average Latency |
nvme_lif_other_ops¶
Number of operations that are not read, write, compare or compare-and-write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC IOPs |
nvme_lif_read_data¶
Amount of data read from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | LIF | timeseries | Top $TopResources NVMe/FC LIFs by Send Throughput |
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Throughput |
| ONTAP: SVM | NVMe/FC | timeseries | Top $TopResources SVM NVMe/FC LIFs by Send Throughput |
nvme_lif_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC IOPs |
nvme_lif_total_ops¶
Total number of operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NVMe/FC Frontend | stat | NVMe/FC IOPs |
| ONTAP: Node | NVMe/FC Frontend | timeseries | NVMe/FC IOPs by Port / LIF |
| ONTAP: SVM | NVMe/FC | stat | SVM NVMe/FC IOPs |
nvme_lif_write_data¶
Amount of data written to the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | NVMe/FC Frontend | timeseries | NVMe/FC Throughput by Port / LIF |
| ONTAP: SVM | LIF | timeseries | Top $TopResources NVMe/FC LIFs by Receive Throughput |
| ONTAP: SVM | NVMe/FC | stat | SVM NVMe/FC Throughput |
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC Throughput |
| ONTAP: SVM | NVMe/FC | timeseries | Top $TopResources SVM NVMe/FC LIFs by Receive Throughput |
nvme_lif_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_lif |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nvmf_lif.yaml |
| ZAPI | perf-object-get-instances nvmf_fc_lif |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.10.1/nvmf_lif.yaml |
The nvme_lif_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NVMe/FC | timeseries | SVM NVMe/FC IOPs |
nvmf_rdma_port_avg_latency¶
Average latency for NVMF operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_avg_other_latency¶
Average latency for operations other than read, write, compare or compare-and-write
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_other_ops¶
Number of operations that are not read, write, compare or compare-and-write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_read_data¶
Amount of data read from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_total_data¶
Amount of NVMF traffic to and from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_total_ops¶
Total number of operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_write_data¶
Amount of data written to the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_rdma_port_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_rdma_port |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_rdma_port.yaml |
| ZAPI | perf-object-get-instances nvmf_rdma_port |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_rdma_port.yaml |
nvmf_tcp_port_avg_latency¶
Average latency for NVMF operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_avg_other_latency¶
Average latency for operations other than read, write, compare or compare-and-write
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
average_other_latencyUnit: microsec Type: average Base: other_ops |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
avg_other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_avg_read_latency¶
Average latency for read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
average_read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
avg_read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_avg_write_latency¶
Average latency for write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
average_write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
avg_write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_other_ops¶
Number of operations that are not read, write, compare or compare-and-write.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_read_data¶
Amount of data read from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_read_ops¶
Number of read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_total_data¶
Amount of NVMF traffic to and from the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_total_ops¶
Total number of operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_write_data¶
Amount of data written to the storage system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
nvmf_tcp_port_write_ops¶
Number of write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/nvmf_tcp_port |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/nvmf_tcp_port.yaml |
| ZAPI | perf-object-get-instances nvmf_tcp_port |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nvmf_tcp_port.yaml |
ontaps3_labels¶
This metric provides information about OntapS3
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/s3/buckets |
Harvest generated |
conf/rest/9.7.0/ontap_s3.yaml |
The ontaps3_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Bucket protection | stat | Total Buckets |
| ONTAP: Data Protection | Bucket protection | stat | Unprotected Buckets |
| ONTAP: Data Protection | Bucket protection | stat | Not Backed up to Cloud |
| ONTAP: Data Protection | Bucket protection | table | Buckets |
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: S3 Object Storage | Highlights | table | Bucket Overview |
ontaps3_logical_used_size¶
Specifies the bucket logical used size up to this point. This field cannot be specified using a POST or PATCH method.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/s3/buckets |
logical_used_size |
conf/rest/9.7.0/ontap_s3.yaml |
The ontaps3_logical_used_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Bucket protection | table | Buckets |
| ONTAP: S3 Object Storage | Highlights | table | Bucket Overview |
| ONTAP: S3 Object Storage | Highlights | timeseries | Top $TopResources Buckets by Used Size |
ontaps3_object_count¶
Number of objects in the bucket.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/vserver/object-store-server/bucket |
object_count |
conf/rest/9.7.0/ontap_s3.yaml |
The ontaps3_object_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | Highlights | table | Bucket Overview |
ontaps3_policy_labels¶
This metric provides information about OntapS3Policy
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/vserver/object-store-server/bucket/policy |
Harvest generated |
conf/rest/9.7.0/ontap_s3_policy.yaml |
The ontaps3_policy_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | Highlights | table | Bucket Permission |
ontaps3_size¶
Specifies the bucket size in bytes; ranges from 190MB to 62PB.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/s3/buckets |
size |
conf/rest/9.7.0/ontap_s3.yaml |
The ontaps3_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Bucket protection | table | Buckets |
| ONTAP: S3 Object Storage | Highlights | table | Bucket Overview |
ontaps3_svm_abort_multipart_upload_failed¶
Number of failed Abort Multipart Upload operations. ontaps3_svm_abort_multipart_upload_failed is ontaps3_svm_abort_multipart_upload_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
abort_multipart_upload_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
abort_multipart_upload_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
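This counter is Type: delta, so the cooked value is simply the difference between two consecutive raw samples, i.e. a count per poll interval rather than a per-second rate. A minimal sketch with invented numbers:

```python
# Illustrative only. A Type: delta counter is cooked as the difference between two
# consecutive raw samples -- a count per poll interval, not a per-second rate.
def cook_delta(raw_t0, raw_t1):
    return raw_t1 - raw_t0


print(cook_delta(42, 45))  # 3 failed Abort Multipart Upload operations this interval
```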
ontaps3_svm_abort_multipart_upload_failed_client_close¶
Number of times an Abort Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_abort_multipart_upload_failed_client_close is ontaps3_svm_abort_multipart_upload_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
abort_multipart_upload_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
abort_multipart_upload_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_abort_multipart_upload_latency¶
Average latency for Abort Multipart Upload operations. ontaps3_svm_abort_multipart_upload_latency is ontaps3_svm_abort_multipart_upload_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
abort_multipart_upload_latencyUnit: microsec Type: average Base: abort_multipart_upload_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
abort_multipart_upload_latencyUnit: microsec Type: average,no-zero-values Base: abort_multipart_upload_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_abort_multipart_upload_rate¶
Number of Abort Multipart Upload operations per second. ontaps3_svm_abort_multipart_upload_rate is ontaps3_svm_abort_multipart_upload_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
abort_multipart_upload_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
abort_multipart_upload_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_abort_multipart_upload_total¶
Number of Abort Multipart Upload operations. ontaps3_svm_abort_multipart_upload_total is ontaps3_svm_abort_multipart_upload_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
abort_multipart_upload_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
abort_multipart_upload_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_allow_access¶
Number of times access was allowed. ontaps3_svm_allow_access is ontaps3_svm_allow_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
allow_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
allow_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_anonymous_access¶
Number of times anonymous access was allowed. ontaps3_svm_anonymous_access is ontaps3_svm_anonymous_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
anonymous_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
anonymous_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_anonymous_deny_access¶
Number of times anonymous access was denied. ontaps3_svm_anonymous_deny_access is ontaps3_svm_anonymous_deny_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
anonymous_deny_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
anonymous_deny_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_authentication_failures¶
Number of authentication failures. ontaps3_svm_authentication_failures is ontaps3_svm_authentication_failures aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
authentication_failuresUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
authentication_failuresUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_chunked_upload_reqs¶
Total number of object store server chunked object upload requests. ontaps3_svm_chunked_upload_reqs is ontaps3_svm_chunked_upload_reqs aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
chunked_upload_requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
chunked_upload_reqsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_complete_multipart_upload_failed¶
Number of failed Complete Multipart Upload operations. ontaps3_svm_complete_multipart_upload_failed is ontaps3_svm_complete_multipart_upload_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
complete_multipart_upload_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
complete_multipart_upload_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_complete_multipart_upload_failed_client_close¶
Number of times a Complete Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_complete_multipart_upload_failed_client_close is ontaps3_svm_complete_multipart_upload_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
complete_multipart_upload_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
complete_multipart_upload_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_complete_multipart_upload_latency¶
Average latency for Complete Multipart Upload operations. ontaps3_svm_complete_multipart_upload_latency is ontaps3_svm_complete_multipart_upload_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
complete_multipart_upload_latencyUnit: microsec Type: average Base: complete_multipart_upload_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
complete_multipart_upload_latencyUnit: microsec Type: average,no-zero-values Base: complete_multipart_upload_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_complete_multipart_upload_rate¶
Number of Complete Multipart Upload operations per second. ontaps3_svm_complete_multipart_upload_rate is ontaps3_svm_complete_multipart_upload_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
complete_multipart_upload_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
complete_multipart_upload_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_complete_multipart_upload_total¶
Number of Complete Multipart Upload operations. ontaps3_svm_complete_multipart_upload_total is ontaps3_svm_complete_multipart_upload_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
complete_multipart_upload_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
complete_multipart_upload_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_connected_connections¶
Number of object store server connections currently established. ontaps3_svm_connected_connections is ontaps3_svm_connected_connections aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
connected_connectionsUnit: none Type: raw Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
connected_connectionsUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_connections¶
Total number of object store server connections. ontaps3_svm_connections is ontaps3_svm_connections aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
connectionsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
connectionsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_connections metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | table | Requests & Connections stats |
ontaps3_svm_create_bucket_failed¶
Number of failed Create Bucket operations. ontaps3_svm_create_bucket_failed is ontaps3_svm_create_bucket_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
create_bucket_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
create_bucket_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_create_bucket_failed_client_close¶
Number of times a Create Bucket operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_create_bucket_failed_client_close is ontaps3_svm_create_bucket_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
create_bucket_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
create_bucket_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_create_bucket_latency¶
Average latency for Create Bucket operations. ontaps3_svm_create_bucket_latency is ontaps3_svm_create_bucket_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
create_bucket_latencyUnit: microsec Type: average Base: create_bucket_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
create_bucket_latencyUnit: microsec Type: average,no-zero-values Base: create_bucket_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_create_bucket_rate¶
Number of Create Bucket operations per second. ontaps3_svm_create_bucket_rate is ontaps3_svm_create_bucket_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
create_bucket_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
create_bucket_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_create_bucket_total¶
Number of Create Bucket operations. ontaps3_svm_create_bucket_total is ontaps3_svm_create_bucket_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
create_bucket_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
create_bucket_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_default_deny_access¶
Number of times access was denied by default and not through any policy statement. ontaps3_svm_default_deny_access is ontaps3_svm_default_deny_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
default_deny_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
default_deny_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_bucket_failed¶
Number of failed Delete Bucket operations. ontaps3_svm_delete_bucket_failed is ontaps3_svm_delete_bucket_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_bucket_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_bucket_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_bucket_failed_client_close¶
Number of times a Delete Bucket operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_delete_bucket_failed_client_close is ontaps3_svm_delete_bucket_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_bucket_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_bucket_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_bucket_latency¶
Average latency for Delete Bucket operations. ontaps3_svm_delete_bucket_latency is ontaps3_svm_delete_bucket_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_bucket_latencyUnit: microsec Type: average Base: delete_bucket_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_bucket_latencyUnit: microsec Type: average,no-zero-values Base: delete_bucket_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_bucket_rate¶
Number of Delete Bucket operations per second. ontaps3_svm_delete_bucket_rate is ontaps3_svm_delete_bucket_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_bucket_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_bucket_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_bucket_total¶
Number of Delete Bucket operations. ontaps3_svm_delete_bucket_total is ontaps3_svm_delete_bucket_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_bucket_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_bucket_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_failed¶
Number of failed DELETE object operations. ontaps3_svm_delete_object_failed is ontaps3_svm_delete_object_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_failed_client_close¶
Number of times a DELETE object operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_delete_object_failed_client_close is ontaps3_svm_delete_object_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_latency¶
Average latency for DELETE object operations. ontaps3_svm_delete_object_latency is ontaps3_svm_delete_object_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_latencyUnit: microsec Type: average Base: delete_object_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_latencyUnit: microsec Type: average,no-zero-values Base: delete_object_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_delete_object_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Latency |
ontaps3_svm_delete_object_rate¶
Number of DELETE object operations per second. ontaps3_svm_delete_object_rate is ontaps3_svm_delete_object_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_delete_object_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Rate |
ontaps3_svm_delete_object_tagging_failed¶
Number of failed DELETE object tagging operations. ontaps3_svm_delete_object_tagging_failed is ontaps3_svm_delete_object_tagging_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_tagging_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_tagging_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_tagging_failed_client_close¶
Number of times a DELETE object tagging operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_delete_object_tagging_failed_client_close is ontaps3_svm_delete_object_tagging_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_tagging_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_tagging_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_tagging_latency¶
Average latency for DELETE object tagging operations. ontaps3_svm_delete_object_tagging_latency is ontaps3_svm_delete_object_tagging_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_tagging_latencyUnit: microsec Type: average Base: delete_object_tagging_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_tagging_latencyUnit: microsec Type: average,no-zero-values Base: delete_object_tagging_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_tagging_rate¶
Number of DELETE object tagging operations per second. ontaps3_svm_delete_object_tagging_rate is ontaps3_svm_delete_object_tagging_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_tagging_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_tagging_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_tagging_total¶
Number of DELETE object tagging operations. ontaps3_svm_delete_object_tagging_total is ontaps3_svm_delete_object_tagging_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_tagging_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_tagging_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_delete_object_total¶
Number of DELETE object operations. ontaps3_svm_delete_object_total is ontaps3_svm_delete_object_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
delete_object_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
delete_object_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_delete_object_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Operations |
ontaps3_svm_explicit_deny_access¶
Number of times access was denied explicitly by a policy statement. ontaps3_svm_explicit_deny_access is ontaps3_svm_explicit_deny_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
explicit_deny_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
explicit_deny_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_bucket_acl_failed¶
Number of failed GET Bucket ACL operations. ontaps3_svm_get_bucket_acl_failed is ontaps3_svm_get_bucket_acl_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_bucket_acl_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_bucket_acl_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_bucket_acl_total¶
Number of GET Bucket ACL operations. ontaps3_svm_get_bucket_acl_total is ontaps3_svm_get_bucket_acl_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_bucket_acl_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_bucket_acl_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_bucket_versioning_failed¶
Number of failed Get Bucket Versioning operations. ontaps3_svm_get_bucket_versioning_failed is ontaps3_svm_get_bucket_versioning_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_bucket_versioning_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_bucket_versioning_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_bucket_versioning_total¶
Number of Get Bucket Versioning operations. ontaps3_svm_get_bucket_versioning_total is ontaps3_svm_get_bucket_versioning_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_bucket_versioning_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_bucket_versioning_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_data¶
Rate of GET object data transfers per second. ontaps3_svm_get_data is ontaps3_svm_get_data aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_dataUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_get_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Data Transfer |
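If you want to inspect the raw values behind these per-SVM counters, you can query the REST counter table named in the Endpoint column directly. The sketch below is a hypothetical example using the `requests` library; the cluster address and credentials are placeholders, and the exact response fields may vary by ONTAP version, so treat it as a starting point rather than a verified recipe.

```python
# Illustrative sketch only: reading raw object_store_server counters from the
# ONTAP REST counter table referenced in the Endpoint column above.
import requests

CLUSTER = "cluster.example.com"      # placeholder cluster management LIF
AUTH = ("admin", "secret")           # placeholder credentials

resp = requests.get(
    f"https://{CLUSTER}/api/cluster/counter/tables/object_store_server/rows",
    params={"fields": "counters"},
    auth=AUTH,
    verify=False,  # lab convenience only; use a proper CA bundle in production
)
resp.raise_for_status()

for row in resp.json().get("records", []):
    counters = {c.get("name"): c.get("value") for c in row.get("counters", [])}
    # e.g. print the raw get_data counter for each object store server instance
    print(row.get("id"), counters.get("get_data"))
```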
ontaps3_svm_get_object_acl_failed¶
Number of failed GET Object ACL operations. ontaps3_svm_get_object_acl_failed is ontaps3_svm_get_object_acl_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_acl_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_acl_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_acl_total¶
Number of GET Object ACL operations. ontaps3_svm_get_object_acl_total is ontaps3_svm_get_object_acl_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_acl_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_acl_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_failed¶
Number of failed GET object operations. ontaps3_svm_get_object_failed is ontaps3_svm_get_object_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_failed_client_close¶
Number of times a GET object operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_get_object_failed_client_close is ontaps3_svm_get_object_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_lastbyte_latency¶
Average last-byte latency for GET object operations. ontaps3_svm_get_object_lastbyte_latency is ontaps3_svm_get_object_lastbyte_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_lastbyte_latencyUnit: microsec Type: average Base: get_object_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_lastbyte_latencyUnit: microsec Type: average,no-zero-values Base: get_object_lastbyte_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_latency¶
Average first-byte latency for GET object operations. ontaps3_svm_get_object_latency is ontaps3_svm_get_object_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_latencyUnit: microsec Type: average Base: get_object_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_latencyUnit: microsec Type: average,no-zero-values Base: get_object_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_get_object_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Latency |
ontaps3_svm_get_object_rate¶
Number of GET object operations per second. ontaps3_svm_get_object_rate is ontaps3_svm_get_object_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_get_object_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Rate |
ontaps3_svm_get_object_tagging_failed¶
Number of failed GET object tagging operations. ontaps3_svm_get_object_tagging_failed is ontaps3_svm_get_object_tagging_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_tagging_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_tagging_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_tagging_failed_client_close¶
Number of times a GET object tagging operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_get_object_tagging_failed_client_close is ontaps3_svm_get_object_tagging_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_tagging_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_tagging_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_tagging_latency¶
Average latency for GET object tagging operations. ontaps3_svm_get_object_tagging_latency is ontaps3_svm_get_object_tagging_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_tagging_latencyUnit: microsec Type: average Base: get_object_tagging_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_tagging_latencyUnit: microsec Type: average,no-zero-values Base: get_object_tagging_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_tagging_rate¶
Number of GET object tagging operations per second. ontaps3_svm_get_object_tagging_rate is ontaps3_svm_get_object_tagging_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_tagging_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_tagging_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_tagging_total¶
Number of GET object tagging operations. ontaps3_svm_get_object_tagging_total is ontaps3_svm_get_object_tagging_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_tagging_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_tagging_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_get_object_total¶
Number of GET object operations. ontaps3_svm_get_object_total is ontaps3_svm_get_object_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
get_object_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
get_object_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_get_object_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Operations |
ontaps3_svm_group_policy_evaluated¶
Number of times group policies were evaluated. ontaps3_svm_group_policy_evaluated is ontaps3_svm_group_policy_evaluated aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
group_policy_evaluatedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
group_policy_evaluatedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_bucket_failed¶
Number of failed HEAD bucket operations. ontaps3_svm_head_bucket_failed is ontaps3_svm_head_bucket_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_bucket_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_bucket_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_bucket_failed_client_close¶
Number of times a HEAD bucket operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_head_bucket_failed_client_close is ontaps3_svm_head_bucket_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_bucket_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_bucket_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_bucket_latency¶
Average latency for HEAD bucket operations. ontaps3_svm_head_bucket_latency is ontaps3_svm_head_bucket_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_bucket_latencyUnit: microsec Type: average Base: head_bucket_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_bucket_latencyUnit: microsec Type: average,no-zero-values Base: head_bucket_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_bucket_rate¶
Number of HEAD bucket operations per second. ontaps3_svm_head_bucket_rate is ontaps3_svm_head_bucket_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_bucket_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_bucket_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_bucket_total¶
Number of HEAD bucket operations. ontaps3_svm_head_bucket_total is ontaps3_svm_head_bucket_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_bucket_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_bucket_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_object_failed¶
Number of failed HEAD Object operations. ontaps3_svm_head_object_failed is ontaps3_svm_head_object_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_object_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_object_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_object_failed_client_close¶
Number of times a HEAD object operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_head_object_failed_client_close is ontaps3_svm_head_object_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_object_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_object_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_head_object_latency¶
Average latency for HEAD object operations. ontaps3_svm_head_object_latency is ontaps3_svm_head_object_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_object_latencyUnit: microsec Type: average Base: head_object_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_object_latencyUnit: microsec Type: average,no-zero-values Base: head_object_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_head_object_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Latency |
ontaps3_svm_head_object_rate¶
Number of HEAD Object operations per second. ontaps3_svm_head_object_rate is ontaps3_svm_head_object_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_object_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_object_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_head_object_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Rate |
ontaps3_svm_head_object_total¶
Number of HEAD Object operations. ontaps3_svm_head_object_total is ontaps3_svm_head_object_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
head_object_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
head_object_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_head_object_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Operations |
ontaps3_svm_initiate_multipart_upload_failed¶
Number of failed Initiate Multipart Upload operations. ontaps3_svm_initiate_multipart_upload_failed is ontaps3_svm_initiate_multipart_upload_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
initiate_multipart_upload_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
initiate_multipart_upload_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_initiate_multipart_upload_failed_client_close¶
Number of times an Initiate Multipart Upload operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_initiate_multipart_upload_failed_client_close is ontaps3_svm_initiate_multipart_upload_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
initiate_multipart_upload_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
initiate_multipart_upload_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_initiate_multipart_upload_latency¶
Average latency for Initiate Multipart Upload operations. ontaps3_svm_initiate_multipart_upload_latency is ontaps3_svm_initiate_multipart_upload_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
initiate_multipart_upload_latencyUnit: microsec Type: average Base: initiate_multipart_upload_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
initiate_multipart_upload_latencyUnit: microsec Type: average,no-zero-values Base: initiate_multipart_upload_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_initiate_multipart_upload_rate¶
Number of Initiate Multipart Upload operations per second. ontaps3_svm_initiate_multipart_upload_rate is ontaps3_svm_initiate_multipart_upload_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
initiate_multipart_upload_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
initiate_multipart_upload_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_initiate_multipart_upload_total¶
Number of Initiate Multipart Upload operations. ontaps3_svm_initiate_multipart_upload_total is ontaps3_svm_initiate_multipart_upload_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
initiate_multipart_upload_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
initiate_multipart_upload_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_input_flow_control_entry¶
Number of times input flow control was entered. ontaps3_svm_input_flow_control_entry is ontaps3_svm_input_flow_control_entry aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
input_flow_control_entryUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
input_flow_control_entryUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_input_flow_control_exit¶
Number of times input flow control was exited. ontaps3_svm_input_flow_control_exit is ontaps3_svm_input_flow_control_exit aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
input_flow_control_exitUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
input_flow_control_exitUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_buckets_failed¶
Number of failed LIST Buckets operations. ontaps3_svm_list_buckets_failed is ontaps3_svm_list_buckets_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_buckets_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_buckets_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_buckets_failed_client_close¶
Number of times a LIST Buckets operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_list_buckets_failed_client_close is ontaps3_svm_list_buckets_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_buckets_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_buckets_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_buckets_latency¶
Average latency for LIST Buckets operations. ontaps3_svm_list_buckets_latency is ontaps3_svm_list_buckets_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_buckets_latencyUnit: microsec Type: average Base: list_buckets_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_buckets_latencyUnit: microsec Type: average,no-zero-values Base: head_object_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_buckets_rate¶
Number of LIST Buckets operations per second. ontaps3_svm_list_buckets_rate is ontaps3_svm_list_buckets_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_buckets_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_buckets_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_buckets_total¶
Number of LIST Buckets operations. ontaps3_svm_list_buckets_total is ontaps3_svm_list_buckets_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_buckets_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_buckets_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_object_versions_failed¶
Number of failed LIST object versions operations. ontaps3_svm_list_object_versions_failed is ontaps3_svm_list_object_versions_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_object_versions_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_object_versions_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_object_versions_failed_client_close¶
Number of times a LIST object versions operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_list_object_versions_failed_client_close is ontaps3_svm_list_object_versions_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_object_versions_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_object_versions_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_object_versions_latency¶
Average latency for LIST Object versions operations. ontaps3_svm_list_object_versions_latency is ontaps3_svm_list_object_versions_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_object_versions_latencyUnit: microsec Type: average Base: list_object_versions_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_object_versions_latencyUnit: microsec Type: average,no-zero-values Base: list_object_versions_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_object_versions_rate¶
Number of LIST Object Versions operations per second. ontaps3_svm_list_object_versions_rate is ontaps3_svm_list_object_versions_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_object_versions_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_object_versions_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_object_versions_total¶
Number of LIST Object Versions operations. ontaps3_svm_list_object_versions_total is ontaps3_svm_list_object_versions_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_object_versions_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_object_versions_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_objects_failed¶
Number of failed LIST objects operations. ontaps3_svm_list_objects_failed is ontaps3_svm_list_objects_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_objects_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_objects_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_objects_failed_client_close¶
Number of times a LIST objects operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_list_objects_failed_client_close is ontaps3_svm_list_objects_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_objects_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_objects_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_objects_latency¶
Average latency for LIST Objects operations. ontaps3_svm_list_objects_latency is ontaps3_svm_list_objects_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_objects_latencyUnit: microsec Type: average Base: list_objects_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_objects_latencyUnit: microsec Type: average,no-zero-values Base: list_objects_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_objects_rate¶
Number of LIST Objects operations per second. ontaps3_svm_list_objects_rate is ontaps3_svm_list_objects_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_objects_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_objects_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_objects_total¶
Number of LIST Objects operations. ontaps3_svm_list_objects_total is ontaps3_svm_list_objects_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_objects_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_objects_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_uploads_failed¶
Number of failed LIST Upload operations. ontaps3_svm_list_uploads_failed is ontaps3_svm_list_uploads_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_uploads_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_uploads_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_uploads_failed_client_close¶
Number of times a LIST Upload operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_list_uploads_failed_client_close is ontaps3_svm_list_uploads_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_uploads_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_uploads_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_uploads_latency¶
Average latency for LIST Upload operations. ontaps3_svm_list_uploads_latency is ontaps3_svm_list_uploads_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_uploads_latencyUnit: microsec Type: average Base: list_uploads_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_uploads_latencyUnit: microsec Type: average,no-zero-values Base: list_uploads_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_uploads_rate¶
Number of LIST Upload operations per second. ontaps3_svm_list_uploads_rate is ontaps3_svm_list_uploads_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_uploads_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_uploads_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_list_uploads_total¶
Number of LIST Upload operations. ontaps3_svm_list_uploads_total is ontaps3_svm_list_uploads_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
list_uploads_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
list_uploads_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_max_cmds_per_connection¶
Maximum number of commands pipelined at any instant on a connection. ontaps3_svm_max_cmds_per_connection is ontaps3_svm_max_cmds_per_connection aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
maximum_commands_per_connectionUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
max_cmds_per_connectionUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_max_cmds_per_connection metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | table | Requests & Connections stats |
ontaps3_svm_max_connected_connections¶
Maximum number of object store server connections established at one time. ontaps3_svm_max_connected_connections is ontaps3_svm_max_connected_connections aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
maximum_connected_connectionsUnit: none Type: raw Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
max_connected_connectionsUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_max_connected_connections metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | table | Requests & Connections stats |
ontaps3_svm_max_requests_outstanding¶
Maximum number of object store server requests in process at one time. ontaps3_svm_max_requests_outstanding is ontaps3_svm_max_requests_outstanding aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
maximum_requests_outstandingUnit: none Type: raw Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
max_requests_outstandingUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_max_requests_outstanding metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | table | Requests & Connections stats |
ontaps3_svm_multi_delete_reqs¶
Total number of object store server multiple object delete requests. ontaps3_svm_multi_delete_reqs is ontaps3_svm_multi_delete_reqs aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
multiple_delete_requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
multi_delete_reqsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_output_flow_control_entry¶
Number of times output flow control was entered. ontaps3_svm_output_flow_control_entry is ontaps3_svm_output_flow_control_entry aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
output_flow_control_entryUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
output_flow_control_entryUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_output_flow_control_exit¶
Number of times output flow control was exited. ontaps3_svm_output_flow_control_exit is ontaps3_svm_output_flow_control_exit aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
output_flow_control_exitUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
output_flow_control_exitUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_presigned_url_reqs¶
Total number of presigned object store server URL requests. ontaps3_svm_presigned_url_reqs is ontaps3_svm_presigned_url_reqs aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
presigned_url_requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
presigned_url_reqsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_bucket_versioning_failed¶
Number of failed Put Bucket Versioning operations. ontaps3_svm_put_bucket_versioning_failed is ontaps3_svm_put_bucket_versioning_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_bucket_versioning_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_bucket_versioning_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_bucket_versioning_total¶
Number of Put Bucket Versioning operations. ontaps3_svm_put_bucket_versioning_total is ontaps3_svm_put_bucket_versioning_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_bucket_versioning_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_bucket_versioning_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_data¶
Rate of PUT object data transfers per second. ontaps3_svm_put_data is ontaps3_svm_put_data aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_dataUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_put_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Data Transfer |
ontaps3_svm_put_object_failed¶
Number of failed PUT object operations. ontaps3_svm_put_object_failed is ontaps3_svm_put_object_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_failed_client_close¶
Number of times a PUT object operation failed because the client closed the connection while the operation was still pending on the server. ontaps3_svm_put_object_failed_client_close is ontaps3_svm_put_object_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_latency¶
Average latency for PUT object operations. ontaps3_svm_put_object_latency is ontaps3_svm_put_object_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_latencyUnit: microsec Type: average Base: put_object_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_latencyUnit: microsec Type: average,no-zero-values Base: put_object_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_put_object_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Latency |
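The tables above mark `put_object_latency` as an average-type counter with base `put_object_total`, i.e. the cooked latency is derived from two consecutive raw samples of the latency counter and its base. Below is a minimal, hypothetical Python sketch of that delta division (the actual post-processing happens inside the Harvest collectors):

```python
def cooked_average(raw_prev: float, raw_curr: float,
                   base_prev: float, base_curr: float) -> float:
    """Average-type counter: delta of the raw counter divided by the delta
    of its base counter (here put_object_latency over put_object_total).
    Returns 0.0 when no new operations occurred between the two polls."""
    ops = base_curr - base_prev
    if ops <= 0:
        return 0.0
    return (raw_curr - raw_prev) / ops


# Example: 1,200,000 additional microseconds of latency over 300 new
# PUT object operations -> 4,000 microseconds per operation on average.
print(cooked_average(5_000_000, 6_200_000, 1_000, 1_300))  # 4000.0
```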
ontaps3_svm_put_object_rate¶
Number of PUT object operations per second. ontaps3_svm_put_object_rate is ontaps3_svm_put_object_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_put_object_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Rate |
ontaps3_svm_put_object_tagging_failed¶
Number of failed PUT object tagging operations. ontaps3_svm_put_object_tagging_failed is ontaps3_svm_put_object_tagging_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_tagging_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_tagging_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_tagging_failed_client_close¶
Number of times a PUT object tagging operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_put_object_tagging_failed_client_close is ontaps3_svm_put_object_tagging_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_tagging_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_tagging_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_tagging_latency¶
Average latency for PUT object tagging operations. ontaps3_svm_put_object_tagging_latency is ontaps3_svm_put_object_tagging_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_tagging_latencyUnit: microsec Type: average Base: put_object_tagging_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_tagging_latencyUnit: microsec Type: average,no-zero-values Base: put_object_tagging_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_tagging_rate¶
Number of PUT object tagging operations per second. ontaps3_svm_put_object_tagging_rate is ontaps3_svm_put_object_tagging_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_tagging_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_tagging_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_tagging_total¶
Number of PUT object tagging operations. ontaps3_svm_put_object_tagging_total is ontaps3_svm_put_object_tagging_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_tagging_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_tagging_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_put_object_total¶
Number of PUT object operations. ontaps3_svm_put_object_total is ontaps3_svm_put_object_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
put_object_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
put_object_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_put_object_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | timeseries | Top $TopResources SVMs by Operations |
ontaps3_svm_request_parse_errors¶
Number of request parser errors due to malformed requests. ontaps3_svm_request_parse_errors is ontaps3_svm_request_parse_errors aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
request_parse_errorsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
request_parse_errorsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_requests¶
Total number of object store server requests. ontaps3_svm_requests is ontaps3_svm_requests aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
requestsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
The ontaps3_svm_requests metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | S3 Object Storage SVM | table | Requests & Connections stats |
ontaps3_svm_requests_outstanding¶
Number of object store server requests in process. ontaps3_svm_requests_outstanding is ontaps3_svm_requests_outstanding aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
requests_outstandingUnit: none Type: raw Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
requests_outstandingUnit: none Type: raw,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_root_user_access¶
Number of times access was performed by the root user. ontaps3_svm_root_user_access is ontaps3_svm_root_user_access aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
root_user_accessUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
root_user_accessUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_server_connection_close¶
Number of connection closes triggered by server due to fatal errors. ontaps3_svm_server_connection_close is ontaps3_svm_server_connection_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
server_connection_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
server_connection_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_signature_v2_reqs¶
Total number of object store server signature V2 requests. ontaps3_svm_signature_v2_reqs is ontaps3_svm_signature_v2_reqs aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
signature_v2_requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
signature_v2_reqsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_signature_v4_reqs¶
Total number of object store server signature V4 requests. ontaps3_svm_signature_v4_reqs is ontaps3_svm_signature_v4_reqs aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
signature_v4_requestsUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
signature_v4_reqsUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_tagging¶
Number of requests with tagging specified. ontaps3_svm_tagging is ontaps3_svm_tagging aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
taggingUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
taggingUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_upload_part_failed¶
Number of failed Upload Part operations. ontaps3_svm_upload_part_failed is ontaps3_svm_upload_part_failed aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
upload_part_failedUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
upload_part_failedUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_upload_part_failed_client_close¶
Number of times an Upload Part operation failed because the client terminated the connection while the operation was still pending on the server. ontaps3_svm_upload_part_failed_client_close is ontaps3_svm_upload_part_failed_client_close aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
upload_part_failed_client_closeUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
upload_part_failed_client_closeUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_upload_part_latency¶
Average latency for Upload Part operations. ontaps3_svm_upload_part_latency is ontaps3_svm_upload_part_latency aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
upload_part_latencyUnit: microsec Type: average Base: upload_part_total |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
upload_part_latencyUnit: microsec Type: average,no-zero-values Base: upload_part_latency_base |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_upload_part_rate¶
Number of Upload Part operations per second. ontaps3_svm_upload_part_rate is ontaps3_svm_upload_part_rate aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
upload_part_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
upload_part_rateUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_svm_upload_part_total¶
Number of Upload Part operations. ontaps3_svm_upload_part_total is ontaps3_svm_upload_part_total aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/object_store_server |
upload_part_totalUnit: none Type: delta Base: |
conf/restperf/9.14.1/ontap_s3_svm.yaml |
| ZAPI | perf-object-get-instances object_store_server |
upload_part_totalUnit: none Type: delta,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/ontap_s3_svm.yaml |
ontaps3_used_percent¶
The used_percent metric is the percentage of a bucket's total capacity that is currently being used.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/s3/buckets |
logical_used_size, size |
conf/rest/9.7.0/ontap_s3.yaml |
The ontaps3_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: S3 Object Storage | Highlights | table | Bucket Overview |
| ONTAP: S3 Object Storage | Highlights | timeseries | Top $TopResources Buckets by Used Size Percent |
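The REST endpoint above returns `logical_used_size` and `size`, so the exported percentage is presumably a simple ratio of those two fields. A minimal Python sketch of that assumed derivation (not Harvest's actual plugin code):

```python
def used_percent(logical_used_size: int, size: int) -> float:
    """Assumed derivation: percentage of a bucket's capacity in use,
    computed from the logical_used_size and size fields returned by
    api/protocols/s3/buckets."""
    if size == 0:
        return 0.0  # avoid division by zero for size-less buckets
    return logical_used_size / size * 100.0


# Example: 256 GiB used out of a 1 TiB bucket -> 25.0 percent.
print(used_percent(256 * 2**30, 1024 * 2**30))  # 25.0
```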
path_read_data¶
The average read throughput in kilobytes per second read from the indicated target port by the controller.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
read_dataUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
read_dataUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Read Data from FibreBridge/Array WWPN |
path_read_iops¶
The number of I/O read operations sent from the initiator port to the indicated target port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
read_iopsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
read_iopsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_read_iops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Read IOPs from FibreBridge/Array WWPN |
path_read_latency¶
The average latency of I/O read operations sent from this controller to the indicated target port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
read_latencyUnit: microsec Type: average Base: read_iops |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
read_latencyUnit: microsec Type: average Base: read_iops |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Read Latency from FibreBridge/Array WWPN |
path_total_data¶
The average throughput in kilobytes per second read and written from/to the indicated target port by the controller.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
total_dataUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
total_dataUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
path_total_iops¶
The number of total read/write I/O operations sent from the initiator port to the indicated target port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
total_iopsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
total_iopsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
path_write_data¶
The average write throughput in kilobytes per second written to the indicated target port by the controller.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
write_dataUnit: kb_per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
write_dataUnit: kb_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Write Data to FibreBridge/Array WWPN |
path_write_iops¶
The number of I/O write operations sent from the initiator port to the indicated target port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
write_iopsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
write_iopsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_write_iops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Write IOPs to FibreBridge/Array WWPN |
path_write_latency¶
The average latency of I/O write operations sent from this controller to the indicated target port.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/path |
write_latencyUnit: microsec Type: average Base: write_iops |
conf/restperf/9.12.0/path.yaml |
| ZAPI | perf-object-get-instances path |
write_latencyUnit: microsec Type: average Base: write_iops |
conf/zapiperf/cdot/9.8.0/path.yaml |
The path_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster FibreBridge/Array | timeseries | Write Latency to FibreBridge/Array WWPN |
plex_disk_busy¶
The utilization percent of the disk. plex_disk_busy is disk_busy aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
disk_busy_percentUnit: percent Type: percent Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_busyUnit: percent Type: percent Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The plex_disk_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Disk | timeseries | Top $TopResources Plexes by Disk Utilization |
| ONTAP: MetroCluster | MetroCluster Disk | table | Top $TopResources Plexes by Disk Utilization |
plex_disk_capacity¶
Disk capacity in MB. plex_disk_capacity is disk_capacity aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
capacityUnit: mb Type: raw Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_capacityUnit: mb Type: raw Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. plex_disk_cp_read_chain is disk_cp_read_chain aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. plex_disk_cp_read_latency is disk_cp_read_latency aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. plex_disk_cp_reads is disk_cp_reads aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. plex_disk_io_pending is disk_io_pending aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_io_queued¶
Number of I/Os queued to the disk but not yet issued. plex_disk_io_queued is disk_io_queued aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_total_data¶
Total throughput for user operations per second. plex_disk_total_data is disk_total_data aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_total_transfers¶
Total number of disk operations involving data transfer initiated per second. plex_disk_total_transfers is disk_total_transfers aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_transfer_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_transfersUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_user_read_blocks¶
Number of blocks transferred for user read operations per second. plex_disk_user_read_blocks is disk_user_read_blocks aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_user_read_chain¶
Average number of blocks transferred in each user read operation. plex_disk_user_read_chain is disk_user_read_chain aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_chainUnit: none Type: average Base: user_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_chainUnit: none Type: average Base: user_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_user_read_latency¶
Average latency per block in microseconds for user read operations. plex_disk_user_read_latency is disk_user_read_latency aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The plex_disk_user_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Disk | timeseries | Top $TopResources Plexes by User Read Latency per 4KB IO |
plex_disk_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. plex_disk_user_reads is disk_user_reads aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The plex_disk_user_reads metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Disk | timeseries | Top $TopResources Plexes by User Reads |
plex_disk_user_write_blocks¶
Number of blocks transferred for user write operations per second. plex_disk_user_write_blocks is disk_user_write_blocks aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_user_write_chain¶
Average number of blocks transferred in each user write operation. plex_disk_user_write_chain is disk_user_write_chain aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_chainUnit: none Type: average Base: user_write_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_chainUnit: none Type: average Base: user_writes |
conf/zapiperf/cdot/9.8.0/disk.yaml |
plex_disk_user_write_latency¶
Average latency per block in microseconds for user write operations. plex_disk_user_write_latency is disk_user_write_latency aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The plex_disk_user_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Disk | timeseries | Top $TopResources Plexes by User Write Latency per 4KB IO |
plex_disk_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. plex_disk_user_writes is disk_user_writes aggregated by plex.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_writesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The plex_disk_user_writes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: MetroCluster | MetroCluster Disk | timeseries | Top $TopResources Plexes by User Writes |
poller_memory¶
Tracks the memory usage of the poller process, including Resident Set Size (RSS), swap memory, and Virtual Memory Size (VMS).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: bytes |
NA |
| ZAPI | NA |
Harvest generatedUnit: bytes |
NA |
The poller_memory metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | table | Pollers |
| Harvest Metadata | Highlights | timeseries | Poller RSS Memory |
poller_memory_percent¶
Indicates the percentage of memory used by the poller process relative to the total available memory.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: percent |
NA |
| ZAPI | NA |
Harvest generatedUnit: percent |
NA |
The poller_memory_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | timeseries | % Memory Used |
poller_status¶
Indicates the operational status of the poller process, where 1 means operational and 0 means not operational.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
NA |
| ZAPI | NA |
Harvest generated |
NA |
The poller_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| Harvest Metadata | Highlights | stat | Pollers |
| Harvest Metadata | Highlights | table | Pollers |
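Because the metric is a simple 1/0 gauge, non-operational pollers can be spotted directly from the Prometheus exposition text that Harvest serves. Below is a minimal Python sketch; the exporter address is an assumption to replace with your own, and the parsing assumes samples without trailing timestamps:

```python
import urllib.request

# Hypothetical exporter address; substitute the host/port configured for
# your Harvest Prometheus exporter.
EXPORTER_URL = "http://localhost:12990/metrics"


def down_pollers(url: str = EXPORTER_URL) -> list[str]:
    """Return the poller_status samples whose value is 0 (not operational)."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")

    down = []
    for line in text.splitlines():
        # Keep only poller_status samples; skip # HELP / # TYPE comments.
        if not line.startswith("poller_status"):
            continue
        # Exposition format: metric{labels} value  (no timestamp assumed)
        name_and_labels, _, value = line.rpartition(" ")
        if float(value) == 0:
            down.append(name_and_labels)
    return down


if __name__ == "__main__":
    for sample in down_pollers():
        print("not operational:", sample)
```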
qos_concurrency¶
This is the average number of concurrent requests for the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
concurrencyUnit: none Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
concurrencyUnit: none Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_concurrency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Concurrency |
qos_latency¶
This is the average response time for requests that were initiated by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
latencyUnit: microsec Type: average Base: ops |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
latencyUnit: microsec Type: average,no-zero-values Base: ops |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | Average QoS Latency |
| ONTAP: Volume | QoS | stat | Top $TopResources QoS Volumes Latency |
| ONTAP: Volume | QoS | timeseries | Top $TopResources QoS Volumes by Latency |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Average Latency |
qos_ops¶
This field is the workload's rate of operations that completed during the measurement interval; measured per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | QoS IOPs |
| ONTAP: Volume | QoS | stat | Top $TopResources QoS Volumes by Total IOPs |
| ONTAP: Volume | QoS | timeseries | Top $TopResources QoS Volumes by Total IOPs |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Total IOPS |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | timeseries | Top $TopResources Fixed QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | table | Fixed QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | timeseries | Top $TopResources Adaptive QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload IOPs Utilization (%) |
qos_other_ops¶
This is the rate of this workload's other operations that completed during the measurement interval.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos |
other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/workload.yaml |
| ZAPI | perf-object-get-instances workload_volume |
other_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Other IOPS |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Other IOPS |
qos_policy_adaptive_absolute_min_iops¶
Specifies the absolute minimum IOPS that is used as an override when the expected_iops is less than this value.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_adaptive.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_adaptive.yaml |
qos_policy_adaptive_expected_iops¶
Specifies the size to be used to calculate expected IOPS per TB.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_adaptive.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_adaptive.yaml |
qos_policy_adaptive_labels¶
This metric provides information about QosPolicyAdaptive
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/qos/adaptive-policy-group |
Harvest generated |
conf/rest/9.12.0/qos_policy_adaptive.yaml |
| ZAPI | qos-adaptive-policy-group-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_adaptive.yaml |
qos_policy_adaptive_peak_iops¶
Specifies the maximum possible IOPS per TB allocated based on the storage object allocated size or the storage object used size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_adaptive.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_adaptive.yaml |
qos_policy_fixed_labels¶
This metric provides information about QosPolicyFixed
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/qos/policy-group |
Harvest generated |
conf/rest/9.12.0/qos_policy_fixed.yaml |
| ZAPI | qos-policy-group-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_fixed.yaml |
The qos_policy_fixed_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | table | Fixed QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | table | Fixed QoS Workload Bandwidth Utilization (%) |
qos_policy_fixed_max_throughput_iops¶
Maximum throughput defined by this policy. It is specified in terms of IOPS. 0 means no maximum throughput is enforced.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_fixed.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_fixed.yaml |
The qos_policy_fixed_max_throughput_iops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy IOPs Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | timeseries | Top $TopResources Fixed QoS Workload IOPs Utilization (%) |
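The utilization panels listed above presumably relate observed workload IOPS (`qos_ops`) to this policy ceiling. A hedged Python sketch of one plausible way to express such a percentage (not the dashboards' actual query):

```python
import math


def iops_utilization_percent(qos_ops: float, max_throughput_iops: float) -> float:
    """Hypothetical 'Fixed QoS IOPs Utilization (%)'-style calculation:
    observed workload IOPS divided by the policy's IOPS ceiling. A ceiling
    of 0 means no maximum is enforced, so utilization is undefined here."""
    if max_throughput_iops <= 0:
        return math.nan
    return qos_ops / max_throughput_iops * 100.0


# Example: a workload driving 750 IOPS against a 1000 IOPS ceiling -> 75%.
print(iops_utilization_percent(750, 1000))  # 75.0
```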
qos_policy_fixed_max_throughput_mbps¶
Maximum throughput defined by this policy. It is specified in terms of Mbps. 0 means no maximum throughput is enforced.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_fixed.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_fixed.yaml |
The qos_policy_fixed_max_throughput_mbps metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | timeseries | Top $TopResources Fixed QoS Workload Bandwidth Utilization (%) |
qos_policy_fixed_min_throughput_iops¶
Minimum throughput defined by this policy. It is specified in terms of IOPS. 0 means no minimum throughput is enforced. These floors are not guaranteed on non-AFF platforms or when FabricPool tiering policies are set.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_fixed.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_fixed.yaml |
qos_policy_fixed_min_throughput_mbps¶
Minimum throughput defined by this policy. It is specified in terms of Mbps. 0 means no minimum throughput is enforced.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_policy_fixed.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_policy_fixed.yaml |
qos_read_data¶
This is the amount of data read per second from the filer by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
read_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
read_dataUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | QoS Read Throughput |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS Throughput |
| ONTAP: Volume | QoS | stat | Top $TopResources Qos Volumes Total Throughput |
| ONTAP: Volume | QoS | timeseries | Top $TopResources QoS Volumes by Average Throughput |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Read Throughput |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Read Throughput |
qos_read_io_type¶
This is the percentage of read requests served from various components (such as buffer cache, ext_cache, disk, etc.).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
read_io_type_percentUnit: percent Type: percent Base: read_io_type_base |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
read_io_typeUnit: percent Type: percent Base: read_io_type_base |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_read_io_type metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type bamboo_ssd |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type cache |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type cloud |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type cloud_s2c |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type disk |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type ext_cache |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type fc_miss |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type hya_cache |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type hya_hdd |
| ONTAP: Workload | Read IO Type | timeseries | Top $TopResources Workloads by Read IO Type hya_non_cache |
qos_read_latency¶
This is the average response time for read requests that were initiated by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
read_latencyUnit: microsec Type: average,no-zero-values Base: read_ops |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | Average QoS Read Latency |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS Latency |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Read Latency |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Average Read Latency |
qos_read_ops¶
This is the rate of this workload's read operations that completed during the measurement interval.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
read_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | QoS Read IOPs |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS IOPs |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Read IOPS |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Read IOPS |
qos_sequential_reads¶
This is the percentage of reads, performed on behalf of the workload, that were sequential.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
sequential_reads_percentUnit: percent Type: percent Base: sequential_reads_base |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
sequential_readsUnit: percent Type: percent,no-zero-values Base: sequential_reads_base |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_sequential_reads metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources Workloads by Sequential Reads (%) |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Sequential Reads |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Sequential Reads (%) |
qos_sequential_writes¶
This is the percentage of writes, performed on behalf of the workload, that were sequential. This counter is only available on platforms with more than 4GB of NVRAM.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
sequential_writes_percentUnit: percent Type: percent Base: sequential_writes_base |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
sequential_writesUnit: percent Type: percent,no-zero-values Base: sequential_writes_base |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_sequential_writes metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources Workloads by Sequential Writes (%) |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Sequential Writes |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Sequential Writes (%) |
qos_total_data¶
This is the total amount of data read/written per second from/to the filer by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
total_dataUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_total_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | timeseries | Top $TopResources Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Shared Policy Utilization | table | Fixed QoS Shared Policy Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | timeseries | Top $TopResources Fixed QoS Workload Bandwidth Utilization (%) |
| ONTAP: Workload | Fixed QoS Workload Utilization | table | Fixed QoS Workload Bandwidth Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | timeseries | Top $TopResources Adaptive QoS Workload Bandwidth Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload Bandwidth Utilization (%) |
qos_workload_labels¶
This metric provides information about QosWorkload
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/qos/workload |
Harvest generated |
conf/rest/9.12.0/qos_workload.yaml |
| ZAPI | qos-workload-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_workload.yaml |
The qos_workload_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload Bandwidth Utilization (%) |
qos_workload_max_throughput_iops¶
Maximum throughput IOPs allowed for the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_workload.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_workload.yaml |
The qos_workload_max_throughput_iops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Adaptive QoS Workload Utilization | timeseries | Top $TopResources Adaptive QoS Workload IOPs Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload IOPs Utilization (%) |
qos_workload_max_throughput_mbps¶
Maximum throughput Mbps allowed for the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/qos_workload.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/qos_workload.yaml |
The qos_workload_max_throughput_mbps metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Workload | Adaptive QoS Workload Utilization | timeseries | Top $TopResources Adaptive QoS Workload Bandwidth Utilization (%) |
| ONTAP: Workload | Adaptive QoS Workload Utilization | table | Adaptive QoS Workload Bandwidth Utilization (%) |
qos_write_data¶
This is the amount of data written per second to the filer by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
write_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
write_dataUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | QoS Throughput |
| ONTAP: SVM | QoS Policy Group | stat | QoS Write Throughput |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS Throughput |
| ONTAP: Volume | QoS | stat | Top $TopResources Qos Volumes Total Throughput |
| ONTAP: Volume | QoS | timeseries | Top $TopResources QoS Volumes by Average Throughput |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Write Throughput |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Write Throughput |
qos_write_latency¶
This is the average response time for write requests that were initiated by the workload.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
write_latencyUnit: microsec Type: average,no-zero-values Base: write_ops |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | Average QoS Write Latency |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS Latency |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Write Latency |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Average Write Latency |
qos_write_ops¶
This is the workload's write operations that completed during the measurement interval; measured per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qos_volume |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/workload_volume.yaml |
| ZAPI | perf-object-get-instances workload_volume |
write_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/workload_volume.yaml |
The qos_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | QoS Policy Group | stat | QoS Write IOPs |
| ONTAP: SVM | QoS Policy Group | timeseries | Top $TopResources SVMs by QoS IOPs |
| ONTAP: Volume | QoS | timeseries | Top $TopResources Volumes by QoS Volume Write IOPS |
| ONTAP: Workload | Highlights | timeseries | Top $TopResources Workloads by Write IOPS |
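Because qos_write_data is exported in bytes per second, throughput panels typically just rescale it. A hedged PromQL sketch:

```promql
# Top 5 workloads by write throughput, converted from bytes/s to MiB/s
topk(5, qos_write_data / 1024 / 1024)
```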
qtree_cifs_ops¶
Number of CIFS operations per second to the qtree
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qtree |
cifs_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/qtree.yaml |
| ZAPI | perf-object-get-instances qtree |
cifs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/qtree.yaml |
The qtree_cifs_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Highlights | timeseries | Top $TopResources CIFS by IOPs |
qtree_id¶
The identifier for the qtree, unique within the qtree's volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/qtrees |
id |
conf/rest/9.12.0/qtree.yaml |
qtree_internal_ops¶
Number of internal operations per second to the qtree, generated by activities such as SnapMirror and backup
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qtree |
internal_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/qtree.yaml |
| ZAPI | perf-object-get-instances qtree |
internal_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/qtree.yaml |
The qtree_internal_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Highlights | timeseries | Top $TopResources Qtrees by Internal IOPs |
qtree_labels¶
This metric provides information about Qtree
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/qtrees |
Harvest generated |
conf/rest/9.12.0/qtree.yaml |
| ZAPI | qtree-list-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/qtree.yaml |
The qtree_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
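As with other `*_labels` metrics, this series normally carries a constant value of 1 and exists to expose labels, so counting it yields the object count shown on the Datacenter dashboard. A sketch, assuming the usual `cluster` label:

```promql
# Qtree count per cluster, counting the constant-valued label series
count by (cluster) (qtree_labels)
```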
qtree_nfs_ops¶
Number of NFS operations per second to the qtree
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qtree |
nfs_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/qtree.yaml |
| ZAPI | perf-object-get-instances qtree |
nfs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/qtree.yaml |
The qtree_nfs_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Highlights | timeseries | Top $TopResources NFSs by IOPs |
qtree_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_other_ops¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_read_data¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_read_ops¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_total_data¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_total_ops¶
Summation of NFS ops, CIFS ops, CSS ops and internal ops
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/qtree |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/qtree.yaml |
| KeyPerf | api/storage/qtrees |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
| ZAPI | perf-object-get-instances qtree |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/qtree.yaml |
The qtree_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Highlights | timeseries | Top $TopResources Qtrees by IOPs |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Qtrees by IOPs |
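Since total ops is defined as the sum of the per-protocol counters, the same panel can be reconstructed from the components. A hedged sketch (the `svm`, `volume`, and `qtree` label names are assumptions; CSS ops are not exported separately here):

```promql
# Top 10 qtrees by total IOPS
topk(10, qtree_total_ops)
# Approximate reconstruction from the per-protocol series
sum by (svm, volume, qtree) (qtree_nfs_ops + qtree_cifs_ops + qtree_internal_ops)
```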
qtree_write_data¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
qtree_write_ops¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/qtrees |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/qtree.yaml |
quota_disk_limit¶
Maximum amount of disk space, in kilobytes, allowed for the quota target (hard disk space limit). The value is -1 if the limit is unlimited.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
space.hard_limit |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
disk-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_disk_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Quota | Highlights | table | Reports |
quota_disk_used¶
Current amount of disk space, in kilobytes, used by the quota target.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
space.used.total |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
disk-used |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_disk_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Usage | timeseries | Top $TopResources Qtrees by Disk Used |
| ONTAP: Qtree | Usage | timeseries | Top $TopResources Qtrees by Disk Used Growth |
| ONTAP: Quota | Highlights | table | Reports |
| ONTAP: Quota | Space Usage | timeseries | Top $TopResources Quotas by Space Used |
quota_disk_used_pct_disk_limit¶
Current disk space used expressed as a percentage of hard disk limit.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
space.used.hard_limit_percent |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
disk-used-pct-disk-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_disk_used_pct_disk_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Quota | Highlights | table | Reports |
| ONTAP: Quota | Space Usage | timeseries | Top $TopResources Quotas by Space Used % |
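The same percentage can be derived from the raw used/limit series, which also makes it easy to skip unlimited quotas. A hedged sketch, assuming the two quota series share labels:

```promql
# Space used as a percentage of the hard limit, excluding unlimited targets (-1)
100 * quota_disk_used / (quota_disk_limit > 0)
```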
quota_disk_used_pct_soft_disk_limit¶
Current disk space used expressed as a percentage of soft disk limit.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
space.used.soft_limit_percent |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
disk-used-pct-soft-disk-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
quota_disk_used_pct_threshold¶
Current disk space used expressed as a percentage of threshold.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | quota-report-iter |
disk-used-pct-threshold |
conf/zapi/cdot/9.8.0/qtree.yaml |
quota_file_limit¶
Maximum number of files allowed for the quota target (hard files limit). The value is -1 if the limit is unlimited.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
files.hard_limit |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
file-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_file_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Quota | Highlights | table | Reports |
quota_files_used¶
Current number of files used by the quota target.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
files.used.total |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
files-used |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_files_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Qtree | Usage | timeseries | Top $TopResources Qtrees by Files Used |
| ONTAP: Quota | Highlights | table | Reports |
| ONTAP: Quota | Space Usage | timeseries | Top $TopResources Quotas by Files Used |
quota_files_used_pct_file_limit¶
Current number of files used expressed as a percentage of hard file limit.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
files.used.hard_limit_percent |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
files-used-pct-file-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_files_used_pct_file_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Quota | Highlights | table | Reports |
| ONTAP: Quota | Space Usage | timeseries | Top $TopResources Quotas by Files Used % |
quota_files_used_pct_soft_file_limit¶
Current number of files used expressed as a percentage of soft file limit.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
files.used.soft_limit_percent |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
files-used-pct-soft-file-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
quota_soft_disk_limit¶
Soft disk space limit, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
space.soft_limit |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
soft-disk-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_soft_disk_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Quota | Highlights | table | Reports |
quota_soft_file_limit¶
Soft file limit, in number of files, for the quota target. The value is -1 if the limit is unlimited.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/quota/reports |
files.soft_limit |
conf/rest/9.12.0/quota.yaml |
| ZAPI | quota-report-iter |
soft-file-limit |
conf/zapi/cdot/9.8.0/qtree.yaml |
The quota_soft_file_limit metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Quota | Highlights | table | Reports |
quota_threshold¶
Disk space threshold, in kilobytes, for the quota target. The value is -1 if the limit is unlimited.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | quota-report-iter |
threshold |
conf/zapi/cdot/9.8.0/qtree.yaml |
| REST | NA |
Harvest generated |
conf/rest/9.12.0/quota.yaml |
raid_disk_busy¶
The utilization percent of the disk. raid_disk_busy is disk_busy aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
disk_busy_percentUnit: percent Type: percent Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_busyUnit: percent Type: percent Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The raid_disk_busy metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Top Disks: Raid-level Overview | timeseries | Top $TopResources Disks by Disk Busy |
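A hedged PromQL sketch for the kind of top-N panel listed above:

```promql
# Top 10 busiest disks at raid-group scope (percent), smoothed over 5 minutes
topk(10, avg_over_time(raid_disk_busy[5m]))
```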
raid_disk_capacity¶
Disk capacity in MB. raid_disk_capacity is disk_capacity aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
capacityUnit: mb Type: raw Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
disk_capacityUnit: mb Type: raw Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_cp_read_chain¶
Average number of blocks transferred in each consistency point read operation during a CP. raid_disk_cp_read_chain is disk_cp_read_chain aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_chainUnit: none Type: average Base: cp_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_cp_read_latency¶
Average latency per block in microseconds for consistency point read operations. raid_disk_cp_read_latency is disk_cp_read_latency aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_read_latencyUnit: microsec Type: average Base: cp_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_cp_reads¶
Number of disk read operations initiated each second for consistency point processing. raid_disk_cp_reads is disk_cp_reads aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
cp_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
cp_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_io_pending¶
Average number of I/Os issued to the disk for which we have not yet received the response. raid_disk_io_pending is disk_io_pending aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_pendingUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_io_queued¶
Number of I/Os queued to the disk but not yet issued. raid_disk_io_queued is disk_io_queued aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
io_queuedUnit: none Type: average Base: base_for_disk_busy |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_total_data¶
Total throughput for user operations per second. raid_disk_total_data is disk_total_data aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_total_transfers¶
Total number of disk operations involving data transfer initiated per second. raid_disk_total_transfers is disk_total_transfers aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
total_transfer_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
total_transfersUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The raid_disk_total_transfers metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Top Disks: Raid-level Overview | timeseries | Top $TopResources Disks by Total Transfers |
raid_disk_user_read_blocks¶
Number of blocks transferred for user read operations per second. raid_disk_user_read_blocks is disk_user_read_blocks aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_user_read_chain¶
Average number of blocks transferred in each user read operation. raid_disk_user_read_chain is disk_user_read_chain aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_chainUnit: none Type: average Base: user_read_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_chainUnit: none Type: average Base: user_reads |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_user_read_latency¶
Average latency per block in microseconds for user read operations. raid_disk_user_read_latency is disk_user_read_latency aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_read_latencyUnit: microsec Type: average Base: user_read_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The raid_disk_user_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Top Disks: Raid-level Overview | timeseries | Top $TopResources Disks by User Read Latency |
raid_disk_user_reads¶
Number of disk read operations initiated each second for retrieving data or metadata associated with user requests. raid_disk_user_reads is disk_user_reads aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_read_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_readsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_user_write_blocks¶
Number of blocks transferred for user write operations per second. raid_disk_user_write_blocks is disk_user_write_blocks aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_block_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_blocksUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_user_write_chain¶
Average number of blocks transferred in each user write operation. raid_disk_user_write_chain is disk_user_write_chain aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_chainUnit: none Type: average Base: user_write_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_chainUnit: none Type: average Base: user_writes |
conf/zapiperf/cdot/9.8.0/disk.yaml |
raid_disk_user_write_latency¶
Average latency per block in microseconds for user write operations. raid_disk_user_write_latency is disk_user_write_latency aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_block_count |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_write_latencyUnit: microsec Type: average Base: user_write_blocks |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The raid_disk_user_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Top Disks: Raid-level Overview | timeseries | Top $TopResources Disks by User Write Latency |
raid_disk_user_writes¶
Number of disk write operations initiated each second for storing data or metadata associated with user requests. raid_disk_user_writes is disk_user_writes aggregated by raid.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/disk:constituent |
user_write_countUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | perf-object-get-instances disk:constituent |
user_writesUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
rw_ctx_cifs_giveups¶
Array of the number of CIFS ops given up because they rewound more than a certain threshold, categorized by rewind reason.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/rewind_context |
cifs_give_upsUnit: none Type: delta Base: |
conf/restperf/9.16.0/rwctx.yaml |
| ZAPI | perf-object-get-instances rw_ctx |
cifs_giveupsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_cifs_giveups metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Metric by CIFS Giveups |
rw_ctx_cifs_rewinds¶
Array of the number of rewinds for CIFS ops, categorized by rewind reason.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/rewind_context |
cifs_rewindsUnit: none Type: delta Base: |
conf/restperf/9.16.0/rwctx.yaml |
| ZAPI | perf-object-get-instances rw_ctx |
cifs_rewindsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_cifs_rewinds metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Metric by CIFS Rewinds |
rw_ctx_nfs_giveups¶
Array of the number of NFS ops given up because they rewound more than a certain threshold, categorized by rewind reason.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/rewind_context |
nfs_give_upsUnit: none Type: delta Base: |
conf/restperf/9.16.0/rwctx.yaml |
| ZAPI | perf-object-get-instances rw_ctx |
nfs_giveupsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_nfs_giveups metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Metric by NFS Giveups |
rw_ctx_nfs_rewinds¶
Array of the number of rewinds for NFS ops, categorized by rewind reason.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/rewind_context |
nfs_rewindsUnit: none Type: delta Base: |
conf/restperf/9.16.0/rwctx.yaml |
| ZAPI | perf-object-get-instances rw_ctx |
nfs_rewindsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_nfs_rewinds metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Metric by NFS Rewinds |
rw_ctx_qos_flowcontrol¶
The number of times QoS limiting has enabled stream flowcontrol.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances rw_ctx |
qos_flowcontrolUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_qos_flowcontrol metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Node by QoS Flowcontrol |
rw_ctx_qos_rewinds¶
The number of restarts after a rewind because of QoS limiting.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances rw_ctx |
qos_rewindsUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/rwctx.yaml |
The rw_ctx_qos_rewinds metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Rewind View | timeseries | Top $TopResources Node by QoS Rewinds |
security_account_labels¶
This metric provides information about SecurityAccount
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/accounts |
Harvest generated |
conf/rest/9.12.0/security_account.yaml |
| ZAPI | security-login-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/security_account.yaml |
The security_account_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
security_audit_destination_port¶
The destination port used to forward the message.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | cluster-log-forward-get-iter |
cluster-log-forward-info.port |
conf/zapi/cdot/9.8.0/security_audit_dest.yaml |
security_certificate_expiry_time¶
Expiration time of the security certificate.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/security/certificate |
expiration |
conf/rest/9.12.0/security_certificate.yaml |
| ZAPI | security-certificate-get-iter |
certificate-info.expiration-date |
conf/zapi/cdot/9.8.0/security_certificate.yaml |
The security_certificate_expiry_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | table | SSL Certificates Expiration |
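If the exported value is a Unix timestamp in seconds (an assumption worth verifying against your data), the expiry views reduce to simple time arithmetic:

```promql
# Certificates expiring within the next 60 days (value assumed to be epoch seconds;
# already-expired certificates also match)
(security_certificate_expiry_time - time()) / 86400 < 60
```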
security_certificate_labels¶
This metric provides information about SecurityCert
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/security/certificate |
Harvest generated |
conf/rest/9.12.0/security_certificate.yaml |
| ZAPI | security-certificate-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/security_certificate.yaml |
The security_certificate_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Expiring in < 60 days |
| ONTAP: Security | Highlights | stat | Expired |
| ONTAP: Security | Highlights | table | SSL Certificates Expiration |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
security_labels¶
This metric provides information about Security
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security |
Harvest generated |
conf/rest/9.12.0/security.yaml |
| ZAPI | cluster-identity-get |
Harvest generated |
conf/zapi/cdot/9.8.0/security.yaml |
The security_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
security_login_labels¶
This metric provides information about SecurityLogin
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/login/messages |
Harvest generated |
conf/rest/9.12.0/security_login.yaml |
| ZAPI | vserver-login-banner-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/security_login.yaml |
The security_login_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | stat | SVM Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Highlights | piechart | SVM Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
| ONTAP: Security | SVM Compliance | table | SVM Compliance |
security_ssh_labels¶
This metric provides information about SecuritySsh
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/ssh |
Harvest generated |
conf/rest/9.12.0/security_ssh.yaml |
The security_ssh_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
security_ssh_max_instances¶
Maximum possible simultaneous connections.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/security/ssh |
max_instances |
conf/rest/9.12.0/security_ssh.yaml |
shelf_average_ambient_temperature¶
Average temperature of all ambient sensors for shelf in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_average_ambient_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_average_fan_speed¶
Average fan speed for shelf in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_average_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_average_temperature¶
Average temperature of all non-ambient sensors for shelf in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_average_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Highlights | timeseries | Top $TopResources Shelves by Average Temperature |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | timeseries | Top $TopResources Shelves by Average Temperature |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
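A hedged alerting-style sketch on this gauge; the 45 °C threshold is illustrative only:

```promql
# Shelves whose average non-ambient temperature exceeds an example 45 °C threshold
shelf_average_temperature > 45
```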
shelf_disk_count¶
Disk count in a shelf.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/shelves |
disk_count |
conf/rest/9.12.0/shelf.yaml |
| ZAPI | storage-shelf-info-get-iter |
storage-shelf-info.disk-count |
conf/zapi/cdot/9.8.0/shelf.yaml |
The shelf_disk_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_fan_labels¶
This metric provides information about shelf fans.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_fan_rpm¶
Fan Rotation Per Minute.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_fan_rpm metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | Highlights | stat | Fan RPM Avg |
| ONTAP: Shelf | Temperature Sensors | bargauge | Cooling Sensors |
shelf_fan_status¶
Fan Operational Status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_labels¶
This metric provides information about Shelf
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/shelves |
Harvest generated |
conf/rest/9.12.0/shelf.yaml |
| ZAPI | storage-shelf-info-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/shelf.yaml |
The shelf_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Datacenter | Power and Temperature | stat | Total Power |
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Highlights | stat | Total Power |
| ONTAP: Power | Highlights | stat | Average Power/Used_TB |
| ONTAP: Power | Highlights | stat | Average IOPs/Watt |
| ONTAP: Power | Highlights | timeseries | Total Power Consumed |
| ONTAP: Power | Highlights | timeseries | Average Power Consumption (kWh) Over Last Hour |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | stat | Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_max_fan_speed¶
Maximum fan speed for shelf in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_max_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Max Shelf Fan Speed |
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Highlights | stat | Max Shelf Fan Speed |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | stat | Max Shelf Fan Speed |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_max_temperature¶
Maximum temperature of all non-ambient sensors for shelf in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_max_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Max Shelf Temp |
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Highlights | stat | Max Shelf Temp |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | stat | Max Shelf temp |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_min_ambient_temperature¶
Minimum temperature of all ambient sensors for shelf in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_min_ambient_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_min_fan_speed¶
Minimum fan speed for shelf in rpm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_min_fan_speed metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_min_temperature¶
Minimum temperature of all non-ambient sensors for shelf in Celsius.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_min_temperature metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
shelf_module_labels¶
This metric provides information about shelf module.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_module_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | Module | table | Storage Shelf Modules |
shelf_module_status¶
Displays the shelf module labels with their status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_new_status¶
This metric indicates a value of 1 if the shelf state is online or ok (indicating the shelf is operational) and a value of 0 for any other state (indicating the shelf is not operational).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/shelf.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/shelf.yaml |
The shelf_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
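Because the value collapses shelf state to 1 (operational) or 0 (not operational), unhealthy shelves can be surfaced with a one-line query:

```promql
# Shelves currently reported as not operational
shelf_new_status == 0
```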
shelf_power¶
Power consumed by shelf in Watts.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_power metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Power and Temperature | stat | Total Power |
| ONTAP: Datacenter | Power and Temperature | timeseries | Total Power Consumed |
| ONTAP: Health | Shelves | table | Storage Shelf Issues |
| ONTAP: Power | Highlights | stat | Total Power |
| ONTAP: Power | Highlights | stat | Average Power/Used_TB |
| ONTAP: Power | Highlights | stat | Average IOPs/Watt |
| ONTAP: Power | Highlights | timeseries | Total Power Consumed |
| ONTAP: Power | Highlights | timeseries | Average Power Consumption (kWh) Over Last Hour |
| ONTAP: Power | Highlights | timeseries | Top $TopResources Shelves by Power Consumed |
| ONTAP: Power | Shelves | table | Storage Shelves |
| ONTAP: Shelf | Highlights | stat | Total Power (Shelf) |
| ONTAP: Shelf | Highlights | timeseries | Top $TopResources Shelves by Power Consumed |
| ONTAP: Shelf | Highlights | table | Storage Shelves |
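The power panels above are aggregations of this gauge. A sketch, assuming a `cluster` label; since the metric is in watts, the hourly average divided by 1000 approximates kWh consumed over that hour:

```promql
# Total shelf power per cluster, in watts
sum by (cluster) (shelf_power)
# Approximate energy consumed by all shelves over the last hour, in kWh
sum(avg_over_time(shelf_power[1h])) / 1000
```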
shelf_psu_labels¶
This metric provides information about shelf psu.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_psu_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | PSU | table | Storage Shelf PSUs |
shelf_psu_power_drawn¶
Power Drawn From PSU In Watts.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_psu_power_drawn metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | PSU | table | Storage Shelf PSUs |
| ONTAP: Shelf | Voltage & PSU Sensors | bargauge | Energy Drawn from PSUs |
shelf_psu_power_rating¶
Power Supply Power Ratings In Watts.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_psu_power_rating metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | PSU | table | Storage Shelf PSUs |
| ONTAP: Shelf | Voltage & PSU Sensors | bargauge | Power Rating from PSUs |
shelf_psu_status¶
Operational Status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_sensor_labels¶
This metric provides information about shelf sensor.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_sensor_reading¶
Current Sensor Reading.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_sensor_reading metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | Highlights | stat | Custom Sensor Avg |
| ONTAP: Shelf | Custom Sensors | bargauge | Custom Sensors - Shelf |
shelf_sensor_status¶
Operational Status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_temperature_labels¶
This metric provides information about shelf temperature.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_temperature_reading¶
Temperature Reading.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_temperature_reading metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | Highlights | stat | Temperature Avg |
| ONTAP: Shelf | Temperature Sensors | bargauge | Temperature Sensors |
shelf_temperature_status¶
Operational Status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_voltage_labels¶
This metric provides information about shelf voltage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
shelf_voltage_reading¶
Current voltage sensor reading.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
The shelf_voltage_reading metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Shelf | Voltage & PSU Sensors | bargauge | Voltage Sensors |
shelf_voltage_status¶
Operational Status.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generatedUnit: Type: Base: |
conf/restperf/9.12.0/disk.yaml |
| ZAPI | NA |
Harvest generatedUnit: Type: Base: |
conf/zapiperf/cdot/9.8.0/disk.yaml |
smb2_close_latency¶
Average latency for SMB2_COM_CLOSE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
close_latencyUnit: microsec Type: average Base: close_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
close_latencyUnit: microsec Type: average Base: close_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_close_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_close_ops¶
Number of SMB2_COM_CLOSE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
close_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
close_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_close_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_create_latency¶
Average latency for SMB2_COM_CREATE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
create_latencyUnit: microsec Type: average Base: create_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
create_latencyUnit: microsec Type: average Base: create_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_create_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_create_ops¶
Number of SMB2_COM_CREATE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
create_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
create_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_create_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_lock_latency¶
Average latency for SMB2_COM_LOCK operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
lock_latencyUnit: microsec Type: average Base: lock_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
lock_latencyUnit: microsec Type: average Base: lock_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_lock_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_lock_ops¶
Number of SMB2_COM_LOCK operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
lock_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
lock_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_lock_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_negotiate_latency¶
Average latency for SMB2_COM_NEGOTIATE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
negotiate_latencyUnit: microsec Type: average Base: negotiate_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
negotiate_latencyUnit: microsec Type: average Base: negotiate_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_negotiate_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_negotiate_ops¶
Number of SMB2_COM_NEGOTIATE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
negotiate_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
negotiate_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_negotiate_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_oplock_break_latency¶
Average latency for SMB2_COM_OPLOCK_BREAK operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
oplock_break_latencyUnit: microsec Type: average Base: oplock_break_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
oplock_break_latencyUnit: microsec Type: average Base: oplock_break_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_oplock_break_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_oplock_break_ops¶
Number of SMB2_COM_OPLOCK_BREAK operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
oplock_break_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
oplock_break_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_oplock_break_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_query_directory_latency¶
Average latency for SMB2_COM_QUERY_DIRECTORY operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
query_directory_latencyUnit: microsec Type: average Base: query_directory_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
query_directory_latencyUnit: microsec Type: average Base: query_directory_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_query_directory_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_query_directory_ops¶
Number of SMB2_COM_QUERY_DIRECTORY operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
query_directory_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
query_directory_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_query_directory_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_query_info_latency¶
Average latency for SMB2_COM_QUERY_INFO operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
query_info_latencyUnit: microsec Type: average Base: query_info_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
query_info_latencyUnit: microsec Type: average Base: query_info_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_query_info_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_query_info_ops¶
Number of SMB2_COM_QUERY_INFO operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
query_info_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
query_info_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_query_info_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_read_latency¶
Average latency for SMB2_COM_READ operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Read / Write Latency |
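Latency counters in this family are exported in microseconds, so dashboards typically rescale them. A hedged sketch converting to milliseconds:

```promql
# SMB2 read latency in milliseconds (exported unit is microseconds)
smb2_read_latency / 1000
```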
smb2_read_ops¶
Number of SMB2_COM_READ operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Read / Write IOPS |
smb2_session_setup_latency¶
Average latency for SMB2_COM_SESSION_SETUP operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
session_setup_latencyUnit: microsec Type: average Base: session_setup_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
session_setup_latencyUnit: microsec Type: average Base: session_setup_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_session_setup_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_session_setup_ops¶
Number of SMB2_COM_SESSION_SETUP operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
session_setup_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
session_setup_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_session_setup_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_set_info_latency¶
Average latency for SMB2_COM_SET_INFO operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
set_info_latencyUnit: microsec Type: average Base: set_info_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
set_info_latencyUnit: microsec Type: average Base: set_info_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_set_info_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_set_info_ops¶
Number of SMB2_COM_SET_INFO operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
set_info_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
set_info_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_set_info_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_tree_connect_latency¶
Average latency for SMB2_COM_TREE_CONNECT operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
tree_connect_latencyUnit: microsec Type: average Base: tree_connect_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
tree_connect_latencyUnit: microsec Type: average Base: tree_connect_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_tree_connect_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other Latency |
smb2_tree_connect_ops¶
Number of SMB2_COM_TREE_CONNECT operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
tree_connect_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
tree_connect_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_tree_connect_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Other IOPS |
smb2_write_latency¶
Average latency for SMB2_COM_WRITE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
write_latencyUnit: microsec Type: average Base: write_latency_base |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Read / Write Latency |
smb2_write_ops¶
Number of SMB2_COM_WRITE operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/smb2 |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.14.1/smb2.yaml |
| ZAPI | perf-object-get-instances smb2 |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/smb2.yaml |
The smb2_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SMB | SMB Performance | timeseries | Read / Write IOPS |
snapmirror_break_failed_count¶
The number of failed SnapMirror break operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
break_failed_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.break-failed-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_break_failed_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Failed SnapMirror Transfers |
snapmirror_break_successful_count¶
The number of successful SnapMirror break operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
break_successful_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.break-successful-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_break_successful_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Successful SnapMirror Transfers |
snapmirror_labels¶
This metric provides information about SnapMirror
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
Harvest generated |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: SnapMirror Sources | Highlights | stat | Unhealthy Relationships |
| ONTAP: SnapMirror Sources | Highlights | table | Relationships |
| ONTAP: SnapMirror Sources | Policy and Lag details | piechart | Relationships by Protection Policy |
| ONTAP: SnapMirror Sources | Policy and Lag details | table | Protection Policy and Lag detail |
| ONTAP: SnapMirror Destinations | Highlights | stat | Unhealthy |
| ONTAP: SnapMirror Destinations | Highlights | piechart | Relationships by Protection Policy |
| ONTAP: SnapMirror Destinations | Highlights | stat | Healthy |
| ONTAP: SnapMirror Destinations | Highlights | table | Relationships |
| ONTAP: SnapMirror Destinations | Consistency Group Data Protection | stat | Unhealthy |
| ONTAP: SnapMirror Destinations | Consistency Group Data Protection | piechart | Consistency Group relationships by relationship type |
| ONTAP: SnapMirror Destinations | Consistency Group Data Protection | stat | Healthy |
| ONTAP: SnapMirror Destinations | Consistency Group Data Protection | table | Consistency Group Relationships |
snapmirror_lag_time¶
Amount of time since the last SnapMirror transfer in seconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
lag_time |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.lag-time |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_lag_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | table | Relationships |
| ONTAP: SnapMirror Sources | Highlights | timeseries | Top $TopResources Relationships by Lag Time |
| ONTAP: SnapMirror Sources | Policy and Lag details | piechart | Relationships by Lag time |
| ONTAP: SnapMirror Sources | Policy and Lag details | table | Protection Policy and Lag detail |
| ONTAP: SnapMirror Destinations | Highlights | table | Relationships |
| ONTAP: SnapMirror Destinations | Highlights | timeseries | Top $TopResources Relationships by Lag Time |
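Because `snapmirror_lag_time` is reported in seconds, it is straightforward to flag relationships that have fallen behind a recovery-point objective. The sketch below assumes you have already collected per-relationship lag values (for example, from the Relationships table above or from a Prometheus query); the relationship names and the one-hour RPO are made up for illustration.

```python
# Hypothetical lag values in seconds, keyed by destination path; in practice
# these would come from the snapmirror_lag_time series.
lag_seconds = {
    "svm1:dst_vol1": 900,      # 15 minutes
    "svm2:dst_vol2": 14_400,   # 4 hours
}

RPO_SECONDS = 3600  # example recovery-point objective of one hour

# Relationships whose last transfer is older than the RPO.
behind = {dst: lag for dst, lag in lag_seconds.items() if lag > RPO_SECONDS}
for dst, lag in sorted(behind.items(), key=lambda item: item[1], reverse=True):
    print(f"{dst}: lag {lag / 3600:.1f} h exceeds RPO of {RPO_SECONDS / 3600:.1f} h")
```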
snapmirror_last_transfer_duration¶
Duration of the last SnapMirror transfer in seconds
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
last_transfer_duration |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.last-transfer-duration |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_last_transfer_duration metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | table | Relationships |
| ONTAP: SnapMirror Sources | Highlights | timeseries | Top $TopResources Relationships by Transfer Duration |
| ONTAP: SnapMirror Destinations | Highlights | table | Relationships |
| ONTAP: SnapMirror Destinations | Highlights | timeseries | Top $TopResources Relationships by Transfer Duration |
snapmirror_last_transfer_end_timestamp¶
The timestamp of the end of the last transfer
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
last_transfer_end_timestamp |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.last-transfer-end-timestamp |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
snapmirror_last_transfer_size¶
Size in kilobytes (1024 bytes) of the last transfer
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
last_transfer_size |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.last-transfer-size |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_last_transfer_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | table | Relationships |
| ONTAP: SnapMirror Sources | Highlights | timeseries | Top $TopResources Relationships by Transfer Data |
| ONTAP: SnapMirror Destinations | Highlights | table | Relationships |
| ONTAP: SnapMirror Destinations | Highlights | timeseries | Top $TopResources Relationships by Transfer Data |
snapmirror_newest_snapshot_timestamp¶
The timestamp of the newest Snapshot copy on the destination volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
newest_snapshot_timestamp |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.newest-snapshot-timestamp |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
snapmirror_policy_labels¶
This metric provides information about SnapMirrorPolicy
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/snapmirror/policies |
Harvest generated |
conf/rest/9.6.0/snapmirrorpolicy.yaml |
The snapmirror_policy_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Local Policy | table | Protection policies |
snapmirror_resync_failed_count¶
The number of failed SnapMirror resync operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
resync_failed_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.resync-failed-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_resync_failed_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Failed SnapMirror Transfers |
snapmirror_resync_successful_count¶
The number of successful SnapMirror resync operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
resync_successful_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.resync-successful-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_resync_successful_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Successful SnapMirror Transfers |
snapmirror_total_transfer_bytes¶
Cumulative bytes transferred for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
total_transfer_bytes |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.total-transfer-bytes |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
snapmirror_total_transfer_time_secs¶
Cumulative total transfer time in seconds for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
total_transfer_time_secs |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.total-transfer-time-secs |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
snapmirror_update_failed_count¶
The number of failed SnapMirror update operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
update_failed_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.update-failed-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_update_failed_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Failed SnapMirror Transfers |
snapmirror_update_successful_count¶
The number of successful SnapMirror update operations for the relationship
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/snapmirror |
update_successful_count |
conf/rest/9.12.0/snapmirror.yaml |
| ZAPI | snapmirror-get-iter |
snapmirror-info.update-successful-count |
conf/zapi/cdot/9.8.0/snapmirror.yaml |
The snapmirror_update_successful_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SnapMirror Sources | Highlights | stat | Number of Successful SnapMirror Transfers |
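The break, resync, and update counters above are cumulative per relationship, so a rough success ratio can be derived by summing the failed and successful counts. This is only an illustration of how the counters relate to each other, not a statement of how the dashboard stat panels are computed; the sample numbers are invented.

```python
# Cumulative SnapMirror operation counts for one relationship; in practice
# these come from the snapmirror_*_successful_count / *_failed_count series.
counts = {
    "break":  {"successful": 2,   "failed": 0},
    "resync": {"successful": 5,   "failed": 1},
    "update": {"successful": 480, "failed": 3},
}

failed = sum(c["failed"] for c in counts.values())
successful = sum(c["successful"] for c in counts.values())
total = failed + successful

# Guard against a relationship with no recorded operations.
success_pct = 100.0 * successful / total if total else 100.0
print(f"{successful} successful / {failed} failed ({success_pct:.1f}% success)")
```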
snapshot_policy_labels¶
This metric provides information about SnapshotPolicy
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/snapshot-policies |
Harvest generated |
conf/rest/9.12.0/snapshotpolicy.yaml |
| ZAPI | snapshot-policy-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/snapshotpolicy.yaml |
The snapshot_policy_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Snapshot Copies | stat | <10 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | 10-100 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | 101-500 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | >500 Copies |
| ONTAP: Data Protection | Snapshot Copies | table | Volume count by the number of Snapshot copies |
| ONTAP: Data Protection | Local Policy | table | Snapshot policies |
| ONTAP: Datacenter | Snapshots | piechart | Snapshot Copies |
support_auto_update_labels¶
This metric provides information about SupportAutoUpdate
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/support/auto-update |
Harvest generated |
conf/rest/9.12.0/support_auto_update.yaml |
The support_auto_update_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
support_labels¶
This metric provides information about Support
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/support/autosupport |
Harvest generated |
conf/rest/9.12.0/support.yaml |
| ZAPI | autosupport-config-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/support.yaml |
The support_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
svm_cifs_connections¶
Number of connections
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
connectionsUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
connectionsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_connections metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | SVM CIFS Connections and Open Files |
svm_cifs_established_sessions¶
Number of established SMB and SMB2 sessions
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
established_sessionsUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
established_sessionsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
svm_cifs_latency¶
Average latency for CIFS operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
latencyUnit: microsec Type: average Base: latency_base |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_latencyUnit: microsec Type: average Base: cifs_latency_base |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS Average Latency |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS Latency |
svm_cifs_op_count¶
Array of select CIFS operation counts
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
op_countUnit: none Type: rate Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_op_countUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_op_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS IOPs |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS IOPs |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS IOP by Type |
svm_cifs_open_files¶
Number of open files over SMB and SMB2
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
open_filesUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
open_filesUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_open_files metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | SVM CIFS Connections and Open Files |
svm_cifs_ops¶
Total number of CIFS operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
svm_cifs_other_latency¶
Performance metric for other I/O operations. Other I/O operations include metadata operations such as directory lookups.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.latency_raw.otherUnit: microsec Type: average Base: svm_cifs_statistics.iops_raw.other |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_other_ops¶
Performance metric for other I/O operations. Other I/O operations include metadata operations such as directory lookups.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_read_data¶
Performance metric for read I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_read_latency¶
Average latency for CIFS read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
average_read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/cifs_vserver.yaml |
| KeyPerf | api/protocols/cifs/services |
statistics.latency_raw.readUnit: microsec Type: average Base: svm_cifs_statistics.iops_raw.read |
conf/keyperf/9.15.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_read_latencyUnit: microsec Type: average Base: cifs_read_ops |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS Average Read Latency |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS Latency |
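For counters of Type: average, the cooked value is the change in the raw counter divided by the change in its Base counter between two consecutive polls. The sketch below illustrates that calculation for the KeyPerf pair listed above (`statistics.latency_raw.read` with base `statistics.iops_raw.read`); the poll values are made up, and treating the raw counters as cumulative microseconds and cumulative operations is an assumption consistent with the Unit/Type/Base columns.

```python
# Two consecutive polls of the raw counters (values are made up).
# latency_raw is treated as cumulative microseconds and its base iops_raw as
# the cumulative operation count.
poll_1 = {"latency_raw_read": 48_000_000, "iops_raw_read": 120_000}
poll_2 = {"latency_raw_read": 54_200_000, "iops_raw_read": 135_500}

delta_latency = poll_2["latency_raw_read"] - poll_1["latency_raw_read"]
delta_ops = poll_2["iops_raw_read"] - poll_1["iops_raw_read"]

# Cooked average read latency over the interval, in microseconds per operation.
avg_read_latency = delta_latency / delta_ops if delta_ops else 0.0
print(f"average read latency: {avg_read_latency:.1f} microsec")
```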
svm_cifs_read_ops¶
Total number of CIFS read operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
total_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| KeyPerf | api/protocols/cifs/services |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS Read IOPs |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS IOPs |
svm_cifs_signed_sessions¶
Number of signed SMB and SMB2 sessions.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
signed_sessionsUnit: none Type: raw Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
signed_sessionsUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
svm_cifs_total_latency¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.latency_raw.totalUnit: microsec Type: average Base: svm_cifs_statistics.iops_raw.total |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_total_ops¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_write_data¶
Performance metric for write I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/cifs/services |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
svm_cifs_write_latency¶
Average latency for CIFS write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
average_write_latencyUnit: microsec Type: average Base: total_write_ops |
conf/restperf/9.12.0/cifs_vserver.yaml |
| KeyPerf | api/protocols/cifs/services |
statistics.latency_raw.writeUnit: microsec Type: average Base: svm_cifs_statistics.iops_raw.write |
conf/keyperf/9.15.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_write_latencyUnit: microsec Type: average Base: cifs_write_ops |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS Average Write Latency |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS Latency |
svm_cifs_write_ops¶
Total number of CIFS write operations
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_cifs |
total_write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/cifs_vserver.yaml |
| KeyPerf | api/protocols/cifs/services |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/cifs_vserver.yaml |
| ZAPI | perf-object-get-instances cifs:vserver |
cifs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/cifs_vserver.yaml |
The svm_cifs_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | stat | SVM CIFS Write IOPs |
| ONTAP: SVM | CIFS | timeseries | SVM CIFS IOPs |
svm_labels¶
This metric provides information about SVM
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/vserver |
Harvest generated |
conf/rest/9.9.0/svm.yaml |
| ZAPI | vserver-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/svm.yaml |
The svm_labels metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | SVM Metrics | timeseries | Top $TopResources Average Throughput by SVMs |
| ONTAP: Datacenter | Highlights | table | Object Count |
| ONTAP: Security | Highlights | stat | Cluster Compliant % |
| ONTAP: Security | Highlights | stat | SVM Compliant % |
| ONTAP: Security | Highlights | stat | SVM Anti-ransomware Status % |
| ONTAP: Security | Highlights | piechart | Cluster Compliant |
| ONTAP: Security | Highlights | piechart | SVM Compliant |
| ONTAP: Security | Highlights | piechart | SVM Anti-ransomware Status |
| ONTAP: Security | Cluster Compliance | table | Cluster Compliance |
| ONTAP: Security | SVM Compliance | table | SVM Compliance |
svm_new_status¶
This metric has a value of 1 when the SVM state is online (that is, the SVM is operational) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.10.0/svm.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/svm.yaml |
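Because `svm_new_status` is a 0/1 gauge, finding SVMs that are not online is just a matter of filtering for the value 0. The sketch below scans a few lines of Prometheus exposition-format output for such series; the label names (`cluster`, `svm`) are assumptions about how a poller might label the metric.

```python
# Sample lines in Prometheus exposition format, as a Harvest exporter might
# publish them; the exact label names are assumptions.
exposition = """\
svm_new_status{cluster="cluster1",svm="svm_finance"} 1
svm_new_status{cluster="cluster1",svm="svm_archive"} 0
svm_new_status{cluster="cluster2",svm="svm_web"} 1
"""

offline = []
for line in exposition.splitlines():
    if not line.startswith("svm_new_status"):
        continue
    labels, _, value = line.rpartition(" ")
    if float(value) == 0:
        offline.append(labels)

# Any series listed here reports an SVM state other than online.
print("offline SVMs:", offline or "none")
```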
svm_nfs_access_avg_latency¶
Average latency of Access procedure requests. The counter keeps track of the average response time of Access requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
access.average_latencyUnit: microsec Type: average Base: access.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
access_avg_latencyUnit: microsec Type: average,no-zero-values Base: access_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_access_total¶
Total number of Access procedure requests. It is the total number of access success and access error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
access.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
access_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
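The Access counters above are collected separately for NFSv3, NFSv4.0, NFSv4.1, and NFSv4.2 (note the distinct endpoints and templates per row), so a per-SVM total across protocols generally means summing the per-version series. How the versions are distinguished in the exported series depends on the templates, so the sketch below is only an illustration of the aggregation, with made-up rates.

```python
# Access request rates per NFS version for one SVM (made-up numbers).
access_total = {"nfsv3": 1250.0, "nfsv4": 310.0, "nfsv4_1": 75.0, "nfsv4_2": 0.0}

# Combined Access rate across every NFS version served by the SVM.
combined = sum(access_total.values())
print(f"combined access rate: {combined:.0f} ops/sec")
```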
svm_nfs_backchannel_ctl_avg_latency¶
Average latency of BACKCHANNEL_CTL operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
backchannel_ctl.average_latencyUnit: microsec Type: average Base: backchannel_ctl.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
backchannel_ctl.average_latencyUnit: microsec Type: average Base: backchannel_ctl.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
backchannel_ctl_avg_latencyUnit: microsec Type: average,no-zero-values Base: backchannel_ctl_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
backchannel_ctl_avg_latencyUnit: microsec Type: average,no-zero-values Base: backchannel_ctl_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_backchannel_ctl_total¶
Total number of BACKCHANNEL_CTL operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
backchannel_ctl.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
backchannel_ctl.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
backchannel_ctl_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
backchannel_ctl_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_bind_conn_to_session_avg_latency¶
Average latency of BIND_CONN_TO_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
bind_connections_to_session.average_latencyUnit: microsec Type: average Base: bind_connections_to_session.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
bind_conn_to_session.average_latencyUnit: microsec Type: average Base: bind_conn_to_session.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
bind_conn_to_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: bind_conn_to_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
bind_conn_to_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: bind_conn_to_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_bind_conn_to_session_total¶
Total number of BIND_CONN_TO_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
bind_connections_to_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
bind_conn_to_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
bind_conn_to_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
bind_conn_to_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_close_avg_latency¶
Average latency of CLOSE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
close.average_latencyUnit: microsec Type: average Base: close.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
close_avg_latencyUnit: microsec Type: average,no-zero-values Base: close_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_close_total¶
Total number of CLOSE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
close.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
close_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_commit_avg_latency¶
Average latency of Commit procedure requests. The counter keeps track of the average response time of Commit requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
commit.average_latencyUnit: microsec Type: average Base: commit.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
commit_avg_latencyUnit: microsec Type: average,no-zero-values Base: commit_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_commit_total¶
Total number of Commit procedure requests. It is the total number of Commit success and Commit error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
commit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
commit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_create_avg_latency¶
Average latency of Create procedure requests. The counter keeps track of the average response time of Create requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
create.average_latencyUnit: microsec Type: average Base: create.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
create_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_create_session_avg_latency¶
Average latency of CREATE_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
create_session.average_latencyUnit: microsec Type: average Base: create_session.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
create_session.average_latencyUnit: microsec Type: average Base: create_session.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
create_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
create_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: create_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_create_session_total¶
Total number of CREATE_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
create_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
create_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
create_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
create_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_create_total¶
Total number of Create procedure requests. It is the total number of create success and create error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
create.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
create_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_delegpurge_avg_latency¶
Average latency of DELEGPURGE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
delegpurge.average_latencyUnit: microsec Type: average Base: delegpurge.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
delegpurge_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegpurge_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_delegpurge_total¶
Total number of DELEGPURGE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
delegpurge.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
delegpurge_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_delegreturn_avg_latency¶
Average latency of DELEGRETURN procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
delegreturn.average_latencyUnit: microsec Type: average Base: delegreturn.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
delegreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: delegreturn_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_delegreturn_total¶
Total number of DELEGRETURN procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
delegreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
delegreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_destroy_clientid_avg_latency¶
Average latency of DESTROY_CLIENTID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
destroy_clientid.average_latencyUnit: microsec Type: average Base: destroy_clientid.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
destroy_clientid.average_latencyUnit: microsec Type: average Base: destroy_clientid.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
destroy_clientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_clientid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
destroy_clientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_clientid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_destroy_clientid_total¶
Total number of DESTROY_CLIENTID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
destroy_clientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
destroy_clientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
destroy_clientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
destroy_clientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_destroy_session_avg_latency¶
Average latency of DESTROY_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
destroy_session.average_latencyUnit: microsec Type: average Base: destroy_session.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
destroy_session.average_latencyUnit: microsec Type: average Base: destroy_session.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
destroy_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_session_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
destroy_session_avg_latencyUnit: microsec Type: average,no-zero-values Base: destroy_session_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_destroy_session_total¶
Total number of DESTROY_SESSION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
destroy_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
destroy_session.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
destroy_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
destroy_session_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_exchange_id_avg_latency¶
Average latency of EXCHANGE_ID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
exchange_id.average_latencyUnit: microsec Type: average Base: exchange_id.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
exchange_id.average_latencyUnit: microsec Type: average Base: exchange_id.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
exchange_id_avg_latencyUnit: microsec Type: average,no-zero-values Base: exchange_id_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
exchange_id_avg_latencyUnit: microsec Type: average,no-zero-values Base: exchange_id_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_exchange_id_total¶
Total number of EXCHANGE_ID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
exchange_id.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
exchange_id.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
exchange_id_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
exchange_id_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_free_stateid_avg_latency¶
Average latency of FREE_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
free_stateid.average_latencyUnit: microsec Type: average Base: free_stateid.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
free_stateid.average_latencyUnit: microsec Type: average Base: free_stateid.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
free_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: free_stateid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
free_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: free_stateid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_free_stateid_total¶
Total number of FREE_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
free_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
free_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
free_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
free_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_fsinfo_avg_latency¶
Average latency of FSInfo procedure requests. The counter keeps track of the average response time of FSInfo requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
fsinfo.average_latencyUnit: microsec Type: average Base: fsinfo.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
fsinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: fsinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_fsinfo_total¶
Total number of FSInfo procedure requests. It is the total number of FSInfo success and FSInfo error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
fsinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
fsinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_fsstat_avg_latency¶
Average latency of FSStat procedure requests. The counter keeps track of the average response time of FSStat requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
fsstat.average_latencyUnit: microsec Type: average Base: fsstat.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
fsstat_avg_latencyUnit: microsec Type: average,no-zero-values Base: fsstat_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_fsstat_total¶
Total number of FSStat procedure requests. It is the total number of FSStat success and FSStat error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
fsstat.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
fsstat_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_get_dir_delegation_avg_latency¶
Average latency of GET_DIR_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
get_dir_delegation.average_latencyUnit: microsec Type: average Base: get_dir_delegation.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
get_dir_delegation.average_latencyUnit: microsec Type: average Base: get_dir_delegation.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
get_dir_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: get_dir_delegation_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
get_dir_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: get_dir_delegation_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_get_dir_delegation_total¶
Total number of GET_DIR_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
get_dir_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
get_dir_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
get_dir_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
get_dir_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getattr_avg_latency¶
Average latency of GetAttr procedure requests. This counter keeps track of the average response time of GetAttr requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getattr.average_latencyUnit: microsec Type: average Base: getattr.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: getattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getattr_total¶
Total number of Getattr procedure requests. It is the total number of getattr success and getattr error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getdeviceinfo_avg_latency¶
Average latency of GETDEVICEINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getdeviceinfo.average_latencyUnit: microsec Type: average Base: getdeviceinfo.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getdeviceinfo.average_latencyUnit: microsec Type: average Base: getdeviceinfo.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getdeviceinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdeviceinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getdeviceinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdeviceinfo_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getdeviceinfo_total¶
Total number of GETDEVICEINFO operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getdeviceinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getdeviceinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getdeviceinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getdeviceinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getdevicelist_avg_latency¶
Average latency of GETDEVICELIST operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getdevicelist.average_latencyUnit: microsec Type: average Base: getdevicelist.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getdevicelist.average_latencyUnit: microsec Type: average Base: getdevicelist.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getdevicelist_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdevicelist_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getdevicelist_avg_latencyUnit: microsec Type: average,no-zero-values Base: getdevicelist_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getdevicelist_total¶
Total number of GETDEVICELIST operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getdevicelist.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getdevicelist.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getdevicelist_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getdevicelist_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getfh_avg_latency¶
Average latency of GETFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getfh.average_latencyUnit: microsec Type: average Base: getfh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: getfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_getfh_total¶
Total number of GETFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
getfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
getfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_latency¶
Average latency of NFS requests. This counter keeps track of the average response time of NFS requests and is collected separately for each protocol version (NFSv3, NFSv4, NFSv4.1, and NFSv4.2) listed below.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.latency_raw.totalUnit: microsec Type: average Base: svm_nfs_statistics.v3.iops_raw.total |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.latency_raw.totalUnit: microsec Type: average Base: svm_nfs_statistics.v4.iops_raw.total |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.latency_raw.totalUnit: microsec Type: average Base: svm_nfs_statistics.v41.iops_raw.total |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
latencyUnit: microsec Type: average,no-zero-values Base: total_ops |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Avg Latency |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Avg Latency |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Avg Latency |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Avg Latency |
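The latency panels above can be reproduced with plain Prometheus queries against this metric. The sketch below is illustrative only: the `svm` label is an assumption about Harvest's usual SVM-level labels, and the literal `5` stands in for the dashboard's `$TopResources` variable.

```promql
# Top 5 SVMs by average NFS latency (Harvest exports this metric in microseconds);
# replace 5 with the dashboard's $TopResources variable when used inside Grafana.
topk(5, avg by (svm) (svm_nfs_latency))
```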
svm_nfs_layoutcommit_avg_latency¶
Average latency of LAYOUTCOMMIT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutcommit.average_latencyUnit: microsec Type: average Base: layoutcommit.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutcommit.average_latencyUnit: microsec Type: average Base: layoutcommit.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutcommit_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutcommit_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutcommit_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutcommit_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_layoutcommit_total¶
Total number of LAYOUTCOMMIT operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutcommit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutcommit.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutcommit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutcommit_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_layoutget_avg_latency¶
Average latency of LAYOUTGET operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutget.average_latencyUnit: microsec Type: average Base: layoutget.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutget.average_latencyUnit: microsec Type: average Base: layoutget.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutget_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutget_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutget_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutget_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_layoutget_total¶
Total number of LAYOUTGET operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutget.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutget.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutget_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutget_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_layoutreturn_avg_latency¶
Average latency of LAYOUTRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutreturn.average_latencyUnit: microsec Type: average Base: layoutreturn.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutreturn.average_latencyUnit: microsec Type: average Base: layoutreturn.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutreturn_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutreturn_avg_latencyUnit: microsec Type: average,no-zero-values Base: layoutreturn_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_layoutreturn_total¶
Total number of LAYOUTRETURN operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
layoutreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
layoutreturn.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
layoutreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
layoutreturn_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_link_avg_latency¶
Average latency of Link procedure requests. The counter keeps track of the average response time of Link requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
link.average_latencyUnit: microsec Type: average Base: link.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
link_avg_latencyUnit: microsec Type: average,no-zero-values Base: link_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_link_total¶
Total number of Link procedure requests. It is the total number of Link success and Link error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
link.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
link_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lock_avg_latency¶
Average latency of LOCK procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lock.average_latencyUnit: microsec Type: average Base: lock.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lock_avg_latencyUnit: microsec Type: average,no-zero-values Base: lock_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lock_total¶
Total number of LOCK procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lock.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lock_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lockt_avg_latency¶
Average latency of LOCKT procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lockt.average_latencyUnit: microsec Type: average Base: lockt.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lockt_avg_latencyUnit: microsec Type: average,no-zero-values Base: lockt_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lockt_total¶
Total number of LOCKT procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lockt.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lockt_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_locku_avg_latency¶
Average latency of LOCKU procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
locku.average_latencyUnit: microsec Type: average Base: locku.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
locku_avg_latencyUnit: microsec Type: average,no-zero-values Base: locku_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_locku_total¶
Total number of LOCKU procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
locku.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
locku_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lookup_avg_latency¶
Average latency of LookUp procedure requests. This shows the average time it takes for the LookUp operation to reply to the request.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lookup.average_latencyUnit: microsec Type: average Base: lookup.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lookup_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookup_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lookup_total¶
Total number of Lookup procedure requests. It is the total number of lookup success and lookup error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lookup.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lookup_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lookupp_avg_latency¶
Average latency of LOOKUPP procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lookupp.average_latencyUnit: microsec Type: average Base: lookupp.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lookupp_avg_latencyUnit: microsec Type: average,no-zero-values Base: lookupp_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_lookupp_total¶
Total number of LOOKUPP procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
lookupp.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
lookupp_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_mkdir_avg_latency¶
Average latency of MkDir procedure requests. The counter keeps track of the average response time of MkDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
mkdir.average_latencyUnit: microsec Type: average Base: mkdir.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
mkdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: mkdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_mkdir_total¶
Total number of MkDir procedure requests. It is the total number of MkDir success and MkDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
mkdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
mkdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_mknod_avg_latency¶
Average latency of MkNod procedure requests. The counter keeps track of the average response time of MkNod requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
mknod.average_latencyUnit: microsec Type: average Base: mknod.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
mknod_avg_latencyUnit: microsec Type: average,no-zero-values Base: mknod_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_mknod_total¶
Total number of MkNod procedure requests. It is the total number of MkNod success and MkNod error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
mknod.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
mknod_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_null_avg_latency¶
Average latency of Null procedure requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
null.average_latencyUnit: microsec Type: average Base: null.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
null_avg_latencyUnit: microsec Type: average,no-zero-values Base: null_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_null_total¶
Total number of Null procedure requests. It is the total of null success and null error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
null.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
null_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_nverify_avg_latency¶
Average latency of NVERIFY procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
nverify.average_latencyUnit: microsec Type: average Base: nverify.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
nverify_avg_latencyUnit: microsec Type: average,no-zero-values Base: nverify_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_nverify_total¶
Total number of NVERIFY procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
nverify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
nverify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_open_avg_latency¶
Average latency of OPEN procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
open.average_latencyUnit: microsec Type: average Base: open.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
open_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_open_confirm_avg_latency¶
Average latency of OPEN_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open_confirm.average_latencyUnit: microsec Type: average Base: open_confirm.total |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_confirm_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_confirm_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_open_confirm_total¶
Total number of OPEN_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open_confirm.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_confirm_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_open_downgrade_avg_latency¶
Average latency of OPEN_DOWNGRADE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
open_downgrade.average_latencyUnit: microsec Type: average Base: open_downgrade.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
open_downgrade_avg_latencyUnit: microsec Type: average,no-zero-values Base: open_downgrade_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_open_downgrade_total¶
Total number of OPEN_DOWNGRADE procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
open_downgrade.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
open_downgrade_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_open_total¶
Total number of OPEN procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
open.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
open_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_openattr_avg_latency¶
Average latency of OPENATTR procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
openattr.average_latencyUnit: microsec Type: average Base: openattr.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
openattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: openattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_openattr_total¶
Total number of OPENATTR procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
openattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
openattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_ops¶
Total number of NFS procedure requests per second, collected separately for each protocol version (NFSv3, NFSv4, NFSv4.1, and NFSv4.2) listed below.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
total_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 IOPs |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by IOPs |
| ONTAP: SVM | NFSv4 | stat | NFSv4 IOPs |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by IOPs |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 IOPs |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by IOPs |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 IOPs |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by IOPs |
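A minimal query sketch for the IOPs panels above; as with the latency example, the `svm` label is assumed and `5` stands in for `$TopResources`.

```promql
# Top 5 SVMs by total NFS operations per second
topk(5, sum by (svm) (svm_nfs_ops))
```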
svm_nfs_other_latency¶
Average latency of other (metadata) NFS I/O operations, such as directory lookups.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/nfs/services |
statistics.v3.latency_raw.otherUnit: microsec Type: average Base: svm_nfs_statistics.v3.iops_raw.other |
conf/keyperf/9.15.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.latency_raw.otherUnit: microsec Type: average Base: svm_nfs_statistics.v4.iops_raw.other |
conf/keyperf/9.15.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.latency_raw.otherUnit: microsec Type: average Base: svm_nfs_statistics.v41.iops_raw.other |
conf/keyperf/9.15.0/nfsv4_1.yaml |
svm_nfs_other_ops¶
Rate of other (metadata) NFS I/O operations per second, such as directory lookups.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/nfs/services |
statistics.v3.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
svm_nfs_pathconf_avg_latency¶
Average latency of PathConf procedure requests. The counter keeps track of the average response time of PathConf requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
pathconf.average_latencyUnit: microsec Type: average Base: pathconf.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
pathconf_avg_latencyUnit: microsec Type: average,no-zero-values Base: pathconf_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_pathconf_total¶
Total number of PathConf procedure requests. It is the total number of PathConf success and PathConf error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
pathconf.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
pathconf_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_putfh_avg_latency¶
Average latency of PUTFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putfh.average_latencyUnit: microsec Type: average Base: putfh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putfh.average_latencyUnit: none Type: delta Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putfh.average_latencyUnit: microsec Type: average Base: putfh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_putfh_total¶
Total number of PUTFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_putpubfh_avg_latency¶
Average latency of PUTPUBFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putpubfh.average_latencyUnit: microsec Type: average Base: putpubfh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putpubfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putpubfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_putpubfh_total¶
Total number of PUTPUBFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putpubfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putpubfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_putrootfh_avg_latency¶
Average latency of PUTROOTFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putrootfh.average_latencyUnit: microsec Type: average Base: putrootfh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putrootfh_avg_latencyUnit: microsec Type: average,no-zero-values Base: putrootfh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_putrootfh_total¶
Total number of PUTROOTFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
putrootfh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
putrootfh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_read_avg_latency¶
Average latency of Read procedure requests. The counter keeps track of the average response time of Read requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.latency_raw.readUnit: microsec Type: average Base: svm_nfs_statistics.v3.iops_raw.read |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.latency_raw.readUnit: microsec Type: average Base: svm_nfs_statistics.v4.iops_raw.read |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.latency_raw.readUnit: microsec Type: average Base: svm_nfs_statistics.v41.iops_raw.read |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
read.average_latencyUnit: microsec Type: average Base: read.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
read_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_read_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Read Latency |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Read Latency |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Read Latency |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Read Latency |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by Read and Write Latency |
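Because the counter's Unit is microsec, dashboards typically rescale it before display. A hedged sketch, again assuming an `svm` label:

```promql
# Per-SVM NFS read latency, converted from microseconds to milliseconds
avg by (svm) (svm_nfs_read_avg_latency) / 1000
```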
svm_nfs_read_ops¶
Total observed NFSv3 read operations per second; with the KeyPerf collector, NFSv4 and NFSv4.1 read operations are also reported under this metric.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_read_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
The svm_nfs_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Read IOPs |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by IOPs |
svm_nfs_read_symlink_avg_latency¶
Average latency of ReadSymLink procedure requests. The counter keeps track of the average response time of ReadSymLink requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read_symlink.average_latencyUnit: microsec Type: average Base: read_symlink.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
read_symlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: read_symlink_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_read_symlink_total¶
Total number of ReadSymLink procedure requests. It is the total number of read symlink success and read symlink error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read_symlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
read_symlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_read_throughput¶
Rate of NFS read data transfers per second, collected separately for each protocol version listed below.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
total.read_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
nfs4_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
nfs41_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
nfs42_read_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_read_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Read Throughput |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by Throughput |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Read Throughput |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by Throughput |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Read Throughput |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by Throughput |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Read Throughput |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by Throughput |
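Since the counter's Unit is b_per_sec, throughput panels usually convert to a larger unit for readability. A sketch under the same label assumptions as the earlier examples:

```promql
# Top 5 SVMs by NFS read throughput, shown in MiB/s instead of bytes/s
topk(5, sum by (svm) (svm_nfs_read_throughput)) / 1024 / 1024
```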
svm_nfs_read_total¶
Total number of Read procedure requests. It is the total number of read success and read error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
read.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
read_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_read_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv4 | stat | NFSv4 Read IOPs |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by IOPs |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Read IOPs |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by IOPs |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Read IOPs |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by IOPs |
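The read.total counters in the table above are raw, monotonically increasing totals collected with Type: rate. Below is a minimal sketch, assuming a 60-second poll interval and made-up sample values, of how a per-second rate can be derived from two consecutive raw samples of such a counter.
```python
# Illustrative only: cook a per-second rate from two raw "total" counter samples.
# The sample values and the 60 s poll interval below are made-up numbers.
def cook_rate(prev_raw: float, curr_raw: float, interval_seconds: float) -> float:
    """Return the per-second rate between two consecutive raw samples."""
    delta = curr_raw - prev_raw
    if delta < 0 or interval_seconds <= 0:
        return 0.0          # counter reset or bad interval: skip this sample
    return delta / interval_seconds

prev_total, curr_total = 120_000, 126_000     # hypothetical read.total samples
print(cook_rate(prev_total, curr_total, 60))  # 100.0 ops/s
```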
svm_nfs_readdir_avg_latency¶
Average latency of ReadDir procedure requests. The counter keeps track of the average response time of ReadDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
readdir.average_latencyUnit: microsec Type: average Base: readdir.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
readdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdir_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
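The latency counters above are Type: average and carry a Base counter such as readdir.total. Below is a minimal sketch, assuming the raw counter accumulates total microseconds and the base counter counts completed operations, of how a cooked average latency over one poll interval could be derived; the sample values are made up.
```python
# Illustrative only: cook an average latency from a raw latency counter and its base counter.
# Assumption: the raw counter accumulates microseconds, the base counter counts operations.
def cook_average(prev_lat: float, curr_lat: float,
                 prev_base: float, curr_base: float) -> float:
    """Average latency per operation over one poll interval (microseconds)."""
    ops = curr_base - prev_base
    if ops <= 0:
        return 0.0                      # no operations in this interval
    return (curr_lat - prev_lat) / ops

# Hypothetical consecutive samples
print(cook_average(prev_lat=5_000_000, curr_lat=5_600_000,
                   prev_base=10_000, curr_base=10_400))   # 1500.0 microsec per op
```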
svm_nfs_readdir_total¶
Total number of ReadDir procedure requests. It is the total number of ReadDir success and ReadDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
readdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
readdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_readdirplus_avg_latency¶
Average latency of ReadDirPlus procedure requests. The counter keeps track of the average response time of ReadDirPlus requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
readdirplus.average_latencyUnit: microsec Type: average Base: readdirplus.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
readdirplus_avg_latencyUnit: microsec Type: average,no-zero-values Base: readdirplus_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_readdirplus_total¶
Total number of ReadDirPlus procedure requests. It is the total number of ReadDirPlus success and ReadDirPlus error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
readdirplus.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
readdirplus_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_readlink_avg_latency¶
Average latency of READLINK procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
readlink.average_latencyUnit: microsec Type: average Base: readlink.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
readlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: readlink_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_readlink_total¶
Total number of READLINK procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
readlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
readlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_reclaim_complete_avg_latency¶
Average latency of RECLAIM_COMPLETE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
reclaim_complete.average_latencyUnit: microsec Type: average Base: reclaim_complete.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
reclaim_complete.average_latencyUnit: microsec Type: average Base: reclaim_complete.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
reclaim_complete_avg_latencyUnit: microsec Type: average,no-zero-values Base: reclaim_complete_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
reclaim_complete_avg_latencyUnit: microsec Type: average,no-zero-values Base: reclaim_complete_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_reclaim_complete_total¶
Total number of RECLAIM_COMPLETE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
reclaim_complete.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
reclaim_complete.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
reclaim_complete_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
reclaim_complete_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_release_lock_owner_avg_latency¶
Average latency of RELEASE_LOCKOWNER procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
release_lock_owner.average_latencyUnit: microsec Type: average Base: release_lock_owner.total |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
release_lock_owner_avg_latencyUnit: microsec Type: average,no-zero-values Base: release_lock_owner_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_release_lock_owner_total¶
Total number of RELEASE_LOCKOWNER procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
release_lock_owner.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
release_lock_owner_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_remove_avg_latency¶
Average latency of Remove procedure requests. The counter keeps track of the average response time of Remove requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
remove.average_latencyUnit: microsec Type: average Base: remove.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
remove_avg_latencyUnit: microsec Type: average,no-zero-values Base: remove_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_remove_total¶
Total number of Remove procedure requests. It is the total number of Remove success and Remove error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
remove.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
remove_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_rename_avg_latency¶
Average latency of Rename procedure requests. The counter keeps track of the average response time of Rename requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
rename.average_latencyUnit: microsec Type: average Base: rename.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
rename_avg_latencyUnit: microsec Type: average,no-zero-values Base: rename_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_rename_total¶
Total number of Rename procedure requests. It is the total number of Rename success and Rename error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
rename.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
rename_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_renew_avg_latency¶
Average latency of RENEW procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
renew.average_latencyUnit: microsec Type: average Base: renew.total |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
renew_avg_latencyUnit: microsec Type: average,no-zero-values Base: renew_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_renew_total¶
Total number of RENEW procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
renew.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
renew_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_restorefh_avg_latency¶
Average latency of RESTOREFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
restorefh.average_latencyUnit: microsec Type: average Base: restorefh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
restorefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: restorefh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_restorefh_total¶
Total number of RESTOREFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
restorefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
restorefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_rmdir_avg_latency¶
Average latency of RmDir procedure requests. The counter keeps track of the average response time of RmDir requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
rmdir.average_latencyUnit: microsec Type: average Base: rmdir.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
rmdir_avg_latencyUnit: microsec Type: average,no-zero-values Base: rmdir_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_rmdir_total¶
Total number of RmDir procedure requests. It is the total number of RmDir success and RmDir error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
rmdir.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
rmdir_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_savefh_avg_latency¶
Average latency of SAVEFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
savefh.average_latencyUnit: microsec Type: average Base: savefh.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
savefh_avg_latencyUnit: microsec Type: average,no-zero-values Base: savefh_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_savefh_total¶
Total number of SAVEFH procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
savefh.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
savefh_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_secinfo_avg_latency¶
Average latency of SECINFO procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
secinfo.average_latencyUnit: microsec Type: average Base: secinfo.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
secinfo_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_secinfo_no_name_avg_latency¶
Average latency of SECINFO_NO_NAME operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
secinfo_no_name.average_latencyUnit: microsec Type: average Base: secinfo_no_name.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
secinfo_no_name.average_latencyUnit: microsec Type: average Base: secinfo_no_name.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
secinfo_no_name_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_no_name_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
secinfo_no_name_avg_latencyUnit: microsec Type: average,no-zero-values Base: secinfo_no_name_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_secinfo_no_name_total¶
Total number of SECINFO_NO_NAME operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
secinfo_no_name.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
secinfo_no_name.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
secinfo_no_name_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
secinfo_no_name_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_secinfo_total¶
Total number of SECINFO procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
secinfo.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
secinfo_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_sequence_avg_latency¶
Average latency of SEQUENCE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
sequence.average_latencyUnit: microsec Type: average Base: sequence.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
sequence.average_latencyUnit: microsec Type: average Base: sequence.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
sequence_avg_latencyUnit: microsec Type: average,no-zero-values Base: sequence_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
sequence_avg_latencyUnit: microsec Type: average,no-zero-values Base: sequence_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_sequence_total¶
Total number of SEQUENCE operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
sequence.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
sequence.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
sequence_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
sequence_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_set_ssv_avg_latency¶
Average latency of SET_SSV operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
set_ssv.average_latencyUnit: microsec Type: average Base: set_ssv.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
set_ssv.average_latencyUnit: microsec Type: average Base: set_ssv.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
set_ssv_avg_latencyUnit: microsec Type: average,no-zero-values Base: set_ssv_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
set_ssv_avg_latencyUnit: microsec Type: average,no-zero-values Base: set_ssv_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_set_ssv_total¶
Total number of SET_SSV operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
set_ssv.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
set_ssv.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
set_ssv_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
set_ssv_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_setattr_avg_latency¶
Average latency of SetAttr procedure requests. The counter keeps track of the average response time of SetAttr requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
setattr.average_latencyUnit: microsec Type: average Base: setattr.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
setattr_avg_latencyUnit: microsec Type: average,no-zero-values Base: setattr_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_setattr_total¶
Total number of Setattr procedure requests. It is the total number of Setattr success and Setattr error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
setattr.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
setattr_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_setclientid_avg_latency¶
Average latency of SETCLIENTID procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setclientid.average_latencyUnit: microsec Type: average Base: setclientid.total |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setclientid_avg_latencyUnit: microsec Type: average,no-zero-values Base: setclientid_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_setclientid_confirm_avg_latency¶
Average latency of SETCLIENTID_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setclientid_confirm.average_latencyUnit: microsec Type: average Base: setclientid_confirm.total |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setclientid_confirm_avg_latencyUnit: microsec Type: average,no-zero-values Base: setclientid_confirm_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_setclientid_confirm_total¶
Total number of SETCLIENTID_CONFIRM procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setclientid_confirm.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setclientid_confirm_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_setclientid_total¶
Total number of SETCLIENTID procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
setclientid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
setclientid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
svm_nfs_symlink_avg_latency¶
Average latency of SymLink procedure requests. The counter keeps track of the average response time of SymLink requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
symlink.average_latencyUnit: microsec Type: average Base: symlink.total |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
symlink_avg_latencyUnit: microsec Type: average,no-zero-values Base: symlink_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_symlink_total¶
Total number of SymLink procedure requests. It is the total number of SymLink success and SymLink error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
symlink.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
symlink_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
svm_nfs_test_stateid_avg_latency¶
Average latency of TEST_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
test_stateid.average_latencyUnit: microsec Type: average Base: test_stateid.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
test_stateid.average_latencyUnit: microsec Type: average Base: test_stateid.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
test_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: test_stateid_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
test_stateid_avg_latencyUnit: microsec Type: average,no-zero-values Base: test_stateid_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_test_stateid_total¶
Total number of TEST_STATEID operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
test_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
test_stateid.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
test_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
test_stateid_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_throughput¶
Rate of NFSv3 data transfers per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
nfs4_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
nfs41_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
nfs42_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Throughput |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by Throughput |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Throughput |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Throughput |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Throughput |
svm_nfs_total_throughput¶
Performance metric aggregated over all types of I/O operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/protocols/nfs/services |
statistics.v3.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
svm_nfs_verify_avg_latency¶
Average latency of VERIFY procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
verify.average_latencyUnit: microsec Type: average Base: verify.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
verify_avg_latencyUnit: microsec Type: average,no-zero-values Base: verify_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_verify_total¶
Total number of VERIFY procedures
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v4 |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
verify.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
verify_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_want_delegation_avg_latency¶
Average latency of WANT_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
want_delegation.average_latencyUnit: microsec Type: average Base: want_delegation.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
want_delegation.average_latencyUnit: microsec Type: average Base: want_delegation.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
want_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: want_delegation_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
want_delegation_avg_latencyUnit: microsec Type: average,no-zero-values Base: want_delegation_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_want_delegation_total¶
Total number of WANT_DELEGATION operations.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v41 |
want_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
want_delegation.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
want_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
want_delegation_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
svm_nfs_write_avg_latency¶
Average latency of Write procedure requests. The counter keeps track of the average response time of Write requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.latency_raw.writeUnit: microsec Type: average Base: svm_nfs_statistics.v3.iops_raw.write |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.latency_raw.writeUnit: microsec Type: average Base: svm_nfs_statistics.v4.iops_raw.write |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.latency_raw.writeUnit: microsec Type: average Base: svm_nfs_statistics.v41.iops_raw.write |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
write.average_latencyUnit: microsec Type: average Base: write.total |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
write_avg_latencyUnit: microsec Type: average,no-zero-values Base: write_total |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_write_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Write Latency |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Write Latency |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Write Latency |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by Read and Write Latency |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Write Latency |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by Read and Write Latency |
svm_nfs_write_ops¶
Total observed NFSv3 write operations per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_write_opsUnit: per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
The svm_nfs_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Write IOPs |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by IOPs |
svm_nfs_write_throughput¶
Rate of NFSv3 write data transfers per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v3.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
total.write_throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v4.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| KeyPerf | api/protocols/nfs/services |
statistics.v41.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
total.throughputUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
nfsv3_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
nfs4_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
nfs41_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
nfs42_write_throughputUnit: b_per_sec Type: rate,no-zero-values Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_write_throughput metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv3 | stat | NFSv3 Write Throughput |
| ONTAP: SVM | NFSv3 | timeseries | Top $TopResources NFSv3 SVMs by Throughput |
| ONTAP: SVM | NFSv4 | stat | NFSv4 Write Throughput |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by Throughput |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Write Throughput |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by Throughput |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Write Throughput |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by Throughput |
svm_nfs_write_total¶
Total number of Write procedure requests. It is the total number of write success and write error requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_nfs_v3 |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv3.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v4 |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v41 |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_1.yaml |
| REST | api/cluster/counter/tables/svm_nfs_v42 |
write.totalUnit: none Type: rate Base: |
conf/restperf/9.12.0/nfsv4_2.yaml |
| ZAPI | perf-object-get-instances nfsv3 |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv3.yaml |
| ZAPI | perf-object-get-instances nfsv4 |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4.yaml |
| ZAPI | perf-object-get-instances nfsv4_1 |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/nfsv4_1.yaml |
| ZAPI | perf-object-get-instances nfsv4_2 |
write_totalUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.11.0/nfsv4_2.yaml |
The svm_nfs_write_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | NFSv4 | stat | NFSv4 Write IOPs |
| ONTAP: SVM | NFSv4 | timeseries | Top $TopResources NFSv4 SVMs by IOPs |
| ONTAP: SVM | NFSv4.1 | stat | NFSv4.1 Write IOPs |
| ONTAP: SVM | NFSv4.1 | timeseries | Top $TopResources NFSv4.1 SVMs by IOPs |
| ONTAP: SVM | NFSv4.2 | stat | NFSv4.2 Write IOPs |
| ONTAP: SVM | NFSv4.2 | timeseries | Top $TopResources NFSv4.2 SVMs by IOPs |
svm_vol_avg_latency¶
Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.totalUnit: microsec Type: average Base: volume_statistics.iops_raw.total |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | SVM Metrics | timeseries | Top $TopResources Average Latency by SVMs |
| ONTAP: Cluster | SVM Performance | timeseries | Top $TopResources Latency |
| ONTAP: SVM | Highlights | stat | SVM Average Latency |
svm_vol_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on. svm_vol_other_data is volume_other_data aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
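As noted above, svm_vol_other_data is volume_other_data rolled up by svm. Below is a minimal sketch of that aggregation step; the per-volume bytes-per-second samples and the SVM/volume names are hypothetical.
```python
# Illustrative only: roll per-volume samples up to one value per SVM,
# the way volume_other_data is aggregated into svm_vol_other_data.
from collections import defaultdict

# Hypothetical (svm, volume) -> bytes/sec samples
volume_other_data = {
    ("svm1", "vol_a"): 1_200.0,
    ("svm1", "vol_b"): 800.0,
    ("svm2", "vol_c"): 450.0,
}

svm_vol_other_data = defaultdict(float)
for (svm, _volume), value in volume_other_data.items():
    svm_vol_other_data[svm] += value        # sum across volumes of the same SVM

print(dict(svm_vol_other_data))             # {'svm1': 2000.0, 'svm2': 450.0}
```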
svm_vol_other_latency¶
Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
other_latencyUnit: microsec Type: average Base: total_other_ops |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.otherUnit: microsec Type: average Base: volume_statistics.iops_raw.other |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Highlights | timeseries | SVM Average Latency |
svm_vol_other_ops¶
Number of other operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
total_other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | timeseries | SVM IOPs |
svm_vol_read_data¶
Bytes read per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
bytes_readUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | SVM Metrics | timeseries | Top $TopResources Average Throughput by SVMs |
| ONTAP: Cluster | SVM Performance | timeseries | Top $TopResources Throughput |
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | stat | SVM Read Throughput |
| ONTAP: SVM | Highlights | timeseries | SVM Throughput |
svm_vol_read_latency¶
Average latency in microseconds for the WAFL filesystem to process read request to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.readUnit: microsec Type: average Base: volume_statistics.iops_raw.read |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Highlights | stat | SVM Average Read Latency |
| ONTAP: SVM | Highlights | timeseries | SVM Average Latency |
svm_vol_read_ops¶
Number of read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
total_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | stat | SVM Read IOPs |
| ONTAP: SVM | Highlights | timeseries | SVM IOPs |
svm_vol_total_data¶
Performance metric aggregated over all types of I/O operations. svm_vol_total_data is volume_total_data aggregated by svm.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.totalUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
svm_vol_total_ops¶
Number of operations per second serviced by the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.totalUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | SVM Metrics | timeseries | Top $TopResources IOPs by SVMs |
| ONTAP: Cluster | SVM Performance | timeseries | Top $TopResources IOPs |
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | stat | SVM IOPs |
svm_vol_write_data¶
Bytes written per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
bytes_writtenUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.writeUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
write_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | SVM Metrics | timeseries | Top $TopResources Average Throughput by SVMs |
| ONTAP: Cluster | SVM Performance | timeseries | Top $TopResources Throughput |
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | stat | SVM Throughput |
| ONTAP: SVM | Highlights | stat | SVM Write Throughput |
| ONTAP: SVM | Highlights | timeseries | SVM Throughput |
svm_vol_write_latency¶
Average latency in microseconds for the WAFL filesystem to process write request to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
write_latencyUnit: microsec Type: average Base: total_write_ops |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.writeUnit: microsec Type: average Base: volume_statistics.iops_raw.write |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
write_latencyUnit: microsec Type: average Base: write_ops |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Highlights | stat | SVM Average Write Latency |
| ONTAP: SVM | Highlights | timeseries | SVM Average Latency |
svm_vol_write_ops¶
Number of write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume:svm |
total_write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume_svm.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.writeUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume:vserver |
write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume_svm.yaml |
The svm_vol_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: NFS Troubleshooting | Highlights | table | SVM Performance Table |
| ONTAP: SVM | Highlights | stat | SVM Write IOPs |
| ONTAP: SVM | Highlights | timeseries | SVM IOPs |
svm_vscan_connections_active¶
Total number of current active connections
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_vscan |
connections_activeUnit: none Type: raw Base: |
conf/restperf/9.13.0/vscan_svm.yaml |
| ZAPI | perf-object-get-instances offbox_vscan |
connections_activeUnit: none Type: raw Base: |
conf/zapiperf/cdot/9.8.0/vscan_svm.yaml |
The svm_vscan_connections_active metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | Virus Scan Connections Active |
| ONTAP: Vscan | Highlights | timeseries | Active Connections |
svm_vscan_dispatch_latency¶
Average dispatch latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_vscan |
dispatch.latencyUnit: microsec Type: average Base: dispatch.requests |
conf/restperf/9.13.0/vscan_svm.yaml |
| ZAPI | perf-object-get-instances offbox_vscan |
dispatch_latencyUnit: microsec Type: average Base: dispatch_latency_base |
conf/zapiperf/cdot/9.8.0/vscan_svm.yaml |
The svm_vscan_dispatch_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | Virus Scan Latency |
| ONTAP: Vscan | Highlights | timeseries | Top $TopResources SVM by Dispatch Latency |
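Because this is an average-type counter with dispatch.requests as its base, the exported value is, roughly, the change in the raw latency counter divided by the change in its base counter between two consecutive polls. A sketch of that relationship (t1 and t2 are consecutive samples):

$$
\text{dispatch latency} \approx \frac{\text{dispatch.latency}(t_2) - \text{dispatch.latency}(t_1)}{\text{dispatch.requests}(t_2) - \text{dispatch.requests}(t_1)}
$$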
svm_vscan_scan_latency¶
Average scan latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_vscan |
scan.latencyUnit: microsec Type: average Base: scan.requests |
conf/restperf/9.13.0/vscan_svm.yaml |
| ZAPI | perf-object-get-instances offbox_vscan |
scan_latencyUnit: microsec Type: average Base: scan_latency_base |
conf/zapiperf/cdot/9.8.0/vscan_svm.yaml |
The svm_vscan_scan_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | Virus Scan Latency |
| ONTAP: Vscan | Highlights | timeseries | Top $TopResources SVMs by Scan Latency |
svm_vscan_scan_noti_received_rate¶
Total number of scan notifications received by the dispatcher per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_vscan |
scan.notification_received_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.13.0/vscan_svm.yaml |
| ZAPI | perf-object-get-instances offbox_vscan |
scan_noti_received_rateUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/vscan_svm.yaml |
The svm_vscan_scan_noti_received_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | Virus Scan Requests |
| ONTAP: Vscan | Highlights | timeseries | Top $TopResources SVMs by Scan Notifications Received Throughput |
svm_vscan_scan_request_dispatched_rate¶
Total number of scan requests sent to the Vscanner per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/svm_vscan |
scan.request_dispatched_rateUnit: per_sec Type: rate Base: |
conf/restperf/9.13.0/vscan_svm.yaml |
| ZAPI | perf-object-get-instances offbox_vscan |
scan_request_dispatched_rateUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/vscan_svm.yaml |
The svm_vscan_scan_request_dispatched_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | CIFS | timeseries | Virus Scan Requests |
| ONTAP: Vscan | Highlights | timeseries | Top $TopResources SVMs by Scan Requests Sent to Vscanner Throughput |
token_copy_bytes¶
Total number of bytes copied.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_copy.bytesUnit: none Type: rate Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_copy_bytesUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_copy_failure¶
Number of failed token copy requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_copy.failuresUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_copy_failureUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_copy_success¶
Number of successful token copy requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_copy.successesUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_copy_successUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_create_bytes¶
Total number of bytes for which tokens are created.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_create.bytesUnit: none Type: rate Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_create_bytesUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_create_failure¶
Number of failed token create requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_create.failuresUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_create_failureUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_create_success¶
Number of successful token create requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_create.successesUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_create_successUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_zero_bytes¶
Total number of bytes zeroed.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_zero.bytesUnit: none Type: rate Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_zero_bytesUnit: none Type: rate Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_zero_failure¶
Number of failed token zero requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_zero.failuresUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_zero_failureUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
token_zero_success¶
Number of successful token zero requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/token_manager |
token_zero.successesUnit: none Type: delta Base: |
conf/restperf/9.12.0/token_manager.yaml |
| ZAPI | perf-object-get-instances token_manager |
token_zero_successUnit: none Type: delta Base: |
conf/zapiperf/cdot/9.8.0/token_manager.yaml |
volume_analytics_bytes_used_by_accessed_time¶
Number of bytes used on-disk, broken down by date of last access.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.by_accessed_time.bytes_used.values |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_bytes_used_by_accessed_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Access ($Activity) History |
volume_analytics_bytes_used_by_modified_time¶
Number of bytes used on-disk, broken down by date of last modification.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.by_modified_time.bytes_used.values |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_bytes_used_by_modified_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Modify ($Activity) History |
volume_analytics_bytes_used_percent_by_accessed_time¶
Percent used on-disk, broken down by date of last access.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.by_accessed_time.bytes_used.percentages |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_bytes_used_percent_by_accessed_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Access ($Activity) History By Percent |
volume_analytics_bytes_used_percent_by_modified_time¶
Percent used on-disk, broken down by date of last modification.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.by_modified_time.bytes_used.percentages |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_bytes_used_percent_by_modified_time metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Modify ($Activity) History By Percent |
volume_analytics_dir_bytes_used¶
The actual number of bytes used on disk by this file.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.bytes_used |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_dir_bytes_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Highlights | timeseries | Top $TopResources Volumes by Directory Growth |
| ONTAP: File System Analytics (FSA) | Highlights | table | Top $TopResources Volumes by Directory Growth |
volume_analytics_dir_file_count¶
Number of files in a directory.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.file_count |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_dir_file_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Highlights | stat | Files |
| ONTAP: File System Analytics (FSA) | Highlights | table | Top $TopResources Volumes by Directory Growth |
volume_analytics_dir_subdir_count¶
Number of subdirectories in a directory.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/{volume.uuid}/files |
analytics.subdir_count |
conf/rest/9.12.0/volume_analytics.yaml |
The volume_analytics_dir_subdir_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Highlights | stat | Directories |
| ONTAP: File System Analytics (FSA) | Highlights | table | Top $TopResources Volumes by Directory Growth |
volume_autosize_grow_threshold_percent¶
Used space threshold which triggers autogrow. When the size-used is greater than this percent of size-total, the volume will be grown. The computed value is rounded down. The default value of this element varies from 85% to 98%, depending on the volume size. It is an error for the grow threshold to be less than or equal to the shrink threshold.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
autosize_grow_threshold_percent |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-autosize-attributes.grow-threshold-percent |
conf/zapi/cdot/9.8.0/volume.yaml |
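As an illustration only (not part of Harvest), the sketch below combines this metric with volume_size_used_percent to list volumes within five points of their autogrow threshold. It assumes a Prometheus instance at http://localhost:9090 scraping Harvest, and that both series share the cluster, svm, and volume labels:

```python
# Hypothetical helper: find volumes approaching their autogrow threshold.
# Assumes Prometheus at localhost:9090 and matching cluster/svm/volume labels.
import requests

QUERY = (
    "volume_size_used_percent "
    ">= on(cluster, svm, volume) (volume_autosize_grow_threshold_percent - 5)"
)

resp = requests.get(
    "http://localhost:9090/api/v1/query", params={"query": QUERY}, timeout=10
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    labels = series["metric"]
    used_percent = float(series["value"][1])
    print(f"{labels.get('cluster')}/{labels.get('svm')}/{labels.get('volume')}: "
          f"{used_percent:.1f}% used")
```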
volume_autosize_maximum_size¶
The maximum size (in bytes) to which the volume would be grown automatically. The default value is 20% greater than the volume size. It is an error for the maximum volume size to be less than the current volume size. It is also an error for the maximum size to be less than or equal to the minimum size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
max_autosize |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-autosize-attributes.maximum-size |
conf/zapi/cdot/9.8.0/volume.yaml |
volume_avg_latency¶
Average latency in microseconds for the WAFL filesystem to process all the operations on the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
average_latencyUnit: microsec Type: average Base: total_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.totalUnit: microsec Type: average Base: volume_statistics.iops_raw.total |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
avg_latencyUnit: microsec Type: average Base: total_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_avg_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Average Latency |
| ONTAP: cDOT | Volume Metrics | timeseries | Top $TopResources Volumes by Average Latency |
| ONTAP: Cluster | Throughput | timeseries | Max Latency |
| ONTAP: Datacenter | Performance | timeseries | Top $TopResources Latency by Cluster |
| ONTAP: FlexGroup | Highlights | timeseries | Top $TopResources Constituents by Average Latency |
| ONTAP: MetroCluster | Highlights | stat | Volume Average Latency |
| ONTAP: Node | Volume Performance | timeseries | Top $TopResources Volumes by Average Latency |
| ONTAP: Volume | Highlights | stat | Volume Average Latency |
| ONTAP: Volume | Highlights | timeseries | Top $TopResources Volumes by Average Latency |
| ONTAP: Volume Deep Dive | Highlights | stat | Avg Latency |
| ONTAP: Volume Deep Dive | Highlights | stat | Max Latency |
volume_capacity_tier_footprint¶
This field represents the footprint of blocks written to the volume in bytes for the capacity tier (bin 1).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
volume_blocks_footprint_bin1 |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
volume-blocks-footprint-bin1 |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_capacity_tier_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Capacity Tier Footprint |
| ONTAP: FlexGroup | Top Volume FabricPool | timeseries | Top $TopResources Volumes by Capacity Tier Footprint |
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Capacity Tier Footprint |
volume_capacity_tier_footprint_percent¶
This field represents the footprint of blocks written to the volume in the capacity tier (bin 1) as a percentage of aggregate size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
volume_blocks_footprint_bin1_percent |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
volume-blocks-footprint-bin1-percent |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_capacity_tier_footprint_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Capacity Tier Footprint % |
| ONTAP: FlexGroup | Top Volume FabricPool | timeseries | Top $TopResources Volumes by Capacity Tier Footprint % |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Capacity Tier Footprint % |
volume_clone_split_estimate¶
Display an estimate of additional storage required in the underlying aggregate to perform a volume clone split operation.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes |
clone.split_estimate |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-clone-get-iter |
split-estimate |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_clone_split_estimate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
volume_delayed_free_footprint¶
This field represents the delayed free blocks footprint in bytes. The delayed free mechanism improves delete performance by batching delete requests.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
delayed_free_footprint |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
delayed-free-footprint |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_delayed_free_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Delayed Free Footprint |
volume_filesystem_size¶
Filesystem size (in bytes) of the volume. This is the total usable size of the volume, not including WAFL reserve. This value is the same as Size except for certain SnapMirror destination volumes. It is possible for destination volumes to have a different filesystem-size because the filesystem-size is sent across from the source volume. This field is valid only when the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
filesystem_size |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.filesystem-size |
conf/zapi/cdot/9.8.0/volume.yaml |
volume_guarantee_footprint¶
This field represents the volume guarantee footprint in bytes; in other words, the space reserved for future writes in the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
volume_guarantee_footprint |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
volume-guarantee-footprint |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_guarantee_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Capacity Tier Footprint |
volume_inode_files_total¶
Total user-visible file (inode) count, i.e., the maximum number of user-visible files (inodes) that this volume can currently hold.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
files |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-inode-attributes.files-total |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_inode_files_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Inode | timeseries | Top $TopResources Volumes by Inode Files Total |
| ONTAP: Volume Deep Dive | Inodes | timeseries | Inode Files Total |
volume_inode_files_used¶
Number of user-visible files (inodes) used. This field is valid only when the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
files_used |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-inode-attributes.files-used |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_inode_files_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Inode | timeseries | Top $TopResources Volumes by Inode Files Used |
| ONTAP: Volume Deep Dive | Inodes | timeseries | Inode Files Used |
volume_inode_used_percent¶
volume_inode_files_used / volume_inode_files_total, expressed as a percentage
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
inode_files_used, inode_files_total |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
inode_files_used, inode_files_total |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_inode_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Inode | timeseries | Top $TopResources Volumes by Inode Files Used Percentage |
| ONTAP: Volume Deep Dive | Inodes | timeseries | Inode Files Used Percentage |
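As a worked equation (assuming the ratio is reported as a percentage):

$$
\text{volume\_inode\_used\_percent} = \frac{\text{volume\_inode\_files\_used}}{\text{volume\_inode\_files\_total}} \times 100
$$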
volume_labels¶
This metric provides information about the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
Harvest generated |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
Harvest generated |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_labels metric is visualized in the following Grafana dashboards:
volume_metadata_footprint¶
This field represents flexible volume (FlexVol) or FlexGroup metadata in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
flexvol_metadata_footprint |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
flexvol-metadata-footprint |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_metadata_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Metadata Footprint |
volume_new_status¶
This metric reports a value of 1 when the volume state is online (i.e., the volume is operational) and a value of 0 for any other state.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA |
Harvest generated |
conf/rest/9.12.0/volume.yaml |
| ZAPI | NA |
Harvest generated |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_new_status metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
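A minimal sketch of the mapping described above; the function below is illustrative and not part of Harvest:

```python
# Illustrative only: derive a status metric from the volume state string,
# as described for volume_new_status above.
def volume_new_status(state: str) -> int:
    """Return 1 when the volume is online, 0 for any other state."""
    return 1 if state == "online" else 0

assert volume_new_status("online") == 1
assert volume_new_status("offline") == 0
```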
volume_nfs_access_latency¶
Average time for the WAFL filesystem to process NFS protocol access requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.access_latencyUnit: microsec Type: average Base: nfs.access_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_access_latencyUnit: microsec Type: average Base: nfs_access_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_access_ops¶
Number of NFS accesses per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.access_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_access_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_getattr_latency¶
Average time for the WAFL filesystem to process NFS protocol getattr requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.getattr_latencyUnit: microsec Type: average Base: nfs.getattr_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_getattr_latencyUnit: microsec Type: average Base: nfs_getattr_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_getattr_ops¶
Number of NFS getattr per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.getattr_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_getattr_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_lookup_latency¶
Average time for the WAFL filesystem to process NFS protocol lookup requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.lookup_latencyUnit: microsec Type: average Base: nfs.lookup_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_lookup_latencyUnit: microsec Type: average Base: nfs_lookup_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_lookup_ops¶
Number of NFS lookups per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.lookup_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_lookup_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_other_latency¶
Average time for the WAFL filesystem to process other NFS operations to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.other_latencyUnit: microsec Type: average Base: nfs.other_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_other_latencyUnit: microsec Type: average Base: nfs_other_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_other_ops¶
Number of other NFS operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_punch_hole_latency¶
Average time for the WAFL filesystem to process NFS protocol hole-punch requests to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.punch_hole_latencyUnit: microsec Type: average Base: nfs.punch_hole_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_punch_hole_latencyUnit: microsec Type: average Base: nfs_punch_hole_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_punch_hole_ops¶
Number of NFS hole-punch requests per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.punch_hole_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_punch_hole_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_read_latency¶
Average time for the WAFL filesystem to process NFS protocol read requests to the volume; not including NFS protocol request processing or network communication time which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.read_latencyUnit: microsec Type: average Base: nfs.read_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_read_latencyUnit: microsec Type: average Base: nfs_read_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_read_ops¶
Number of NFS read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_setattr_latency¶
Average time for the WAFL filesystem to process NFS protocol setattr requests to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.setattr_latencyUnit: microsec Type: average Base: nfs.setattr_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_setattr_latencyUnit: microsec Type: average Base: nfs_setattr_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_setattr_ops¶
Number of NFS setattr requests per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.setattr_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_setattr_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_total_ops¶
Number of total NFS operations per second to the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.total_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_total_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_write_latency¶
Average time for the WAFL filesystem to process NFS protocol write requests to the volume; not including NFS protocol request processing or network communication time, which will also be included in client observed NFS request latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.write_latencyUnit: microsec Type: average Base: nfs.write_ops |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_write_latencyUnit: microsec Type: average Base: nfs_write_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_nfs_write_ops¶
Number of NFS write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
nfs.write_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
nfs_write_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
volume_num_compress_attempts¶
Number of compression attempts on the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/efficiency/stat |
num_compress_attempts |
conf/rest/9.14.0/volume.yaml |
The volume_num_compress_attempts metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Sis Stat | timeseries | Top $TopResources Volumes by Number of Compress Fail % |
| ONTAP: Volume | Sis Stat | timeseries | Top $TopResources Volumes by Number of Compress Attempts |
volume_num_compress_fail¶
Number of failed compression attempts on the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/efficiency/stat |
num_compress_fail |
conf/rest/9.14.0/volume.yaml |
The volume_num_compress_fail metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Sis Stat | timeseries | Top $TopResources Volumes by Number of Compress Fail % |
| ONTAP: Volume | Sis Stat | timeseries | Top $TopResources Volumes by Number of Compress Fail |
volume_other_data¶
Performance metric for other I/O operations. Other I/O operations can be metadata operations, such as directory lookups and so on.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.otherUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
volume_other_latency¶
Average latency in microseconds for the WAFL filesystem to process other operations to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
other_latencyUnit: microsec Type: average Base: total_other_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.otherUnit: microsec Type: average Base: volume_statistics.iops_raw.other |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
other_latencyUnit: microsec Type: average Base: other_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_other_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Other Latency |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Other Latency |
volume_other_ops¶
Number of other operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_other_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.otherUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
other_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_other_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Other IOPs |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Other IOPs |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Other IOPs |
volume_overwrite_reserve_available¶
The amount of storage space that is currently available for overwrites, calculated by subtracting the overwrite reserve space that has already been used from the total overwrite reserve.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
overwrite_reserve_total, overwrite_reserve_used |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
overwrite_reserve_total, overwrite_reserve_used |
conf/zapi/cdot/9.8.0/volume.yaml |
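As a worked equation, based on the two source fields listed above:

$$
\text{volume\_overwrite\_reserve\_available} = \text{overwrite\_reserve\_total} - \text{overwrite\_reserve\_used}
$$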
volume_overwrite_reserve_total¶
The size (in bytes) that is reserved for overwriting snapshotted data in an otherwise full volume. This space is usable only by space-reserved LUNs and files, and then only when the volume is full. This field is valid only when the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
overwrite_reserve |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.overwrite-reserve |
conf/zapi/cdot/9.8.0/volume.yaml |
volume_overwrite_reserve_used¶
The reserved size (in bytes) that is not available for new overwrites. The number includes both the reserved size which has actually been used for overwrites as well as the size which was never allocated in the first place. This field is valid only when the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
overwrite_reserve_used |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.overwrite-reserve-used |
conf/zapi/cdot/9.8.0/volume.yaml |
volume_performance_tier_footprint¶
This field represents the footprint of blocks written to the volume in bytes for the performance tier (bin 0).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
volume_blocks_footprint_bin0 |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
volume-blocks-footprint-bin0 |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_performance_tier_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Performance Tier Footprint |
| ONTAP: FlexGroup | Top Volume FabricPool | timeseries | Top $TopResources Volumes by Performance Tier Footprint |
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Performance Tier Footprint |
volume_performance_tier_footprint_percent¶
This field represents the footprint of blocks written to the volume in bin 0 as a percentage of aggregate size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint |
volume_blocks_footprint_bin0_percent |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter |
volume-blocks-footprint-bin0-percent |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_performance_tier_footprint_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Performance Tier Footprint % |
| ONTAP: FlexGroup | Top Volume FabricPool | timeseries | Top $TopResources Volumes by Performance Tier Footprint % |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Performance Tier Footprint % |
volume_read_data¶
Bytes read per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
bytes_readUnit: b_per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.throughput_raw.readUnit: b_per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_dataUnit: b_per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Read Throughput |
| ONTAP: cDOT | Volume Metrics | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: FlexGroup | Highlights | timeseries | Top $TopResources Constituents by Average Throughput |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Read Throughput |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Read Throughput |
| ONTAP: Node | Volume Performance | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Read Throughput |
| ONTAP: Volume | Highlights | stat | Top $TopResources Volumes Total Throughput |
| ONTAP: Volume | Highlights | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Read Throughput |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Read Throughput |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
| ONTAP: Volume Deep Dive | Highlights | stat | Max Read Op Size |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Read Throughput |
volume_read_latency¶
Average latency in microseconds for the WAFL filesystem to process read requests to the volume; not including request processing or network communication time
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
read_latencyUnit: microsec Type: average Base: total_read_ops |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.latency_raw.readUnit: microsec Type: average Base: volume_statistics.iops_raw.read |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_latencyUnit: microsec Type: average Base: read_ops |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_read_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Read Latency |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Read Latency |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Read Latency |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Read Latency |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Read Latency |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Read Latency |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Read Latency |
volume_read_ops¶
Number of read operations per second from the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume |
total_read_opsUnit: per_sec Type: rate Base: |
conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes |
statistics.iops_raw.readUnit: per_sec Type: rate Base: |
conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume |
read_opsUnit: per_sec Type: rate Base: |
conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Read IOPs |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Read IOPS |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Read IOPs |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Read IOPs |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Read IOPS |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Read IOPs |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
| ONTAP: Volume Deep Dive | Highlights | stat | Max Read Op Size |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Read IOPs |
volume_sis_compress_saved¶
The total disk space (in bytes) that is saved by compressing blocks on the referenced file system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
compression_space_saved |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.compression-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_sis_compress_saved metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes by Compression Savings |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
volume_sis_compress_saved_percent¶
Percentage of the total disk space that is saved by compressing blocks on the referenced file system
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
compression_space_saved_percent |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.percentage-compression-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_sis_compress_saved_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources Volumes by Compression Percent Saved |
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes by Compression Saved % |
volume_sis_dedup_saved¶
The total disk space (in bytes) that is saved by deduplication and file cloning.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
dedupe_space_saved |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.deduplication-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_sis_dedup_saved metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes by Deduplication Savings |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
volume_sis_dedup_saved_percent¶
Percentage of the total disk space that is saved by deduplication and file cloning.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
dedupe_space_saved_percent |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.percentage-deduplication-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_sis_dedup_saved_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources Volumes by Deduplication Percent Saved |
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes by Deduplication Saved % |
volume_sis_total_saved¶
Total space saved (in bytes) in the volume due to deduplication, compression, and file cloning.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
sis_space_saved |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.total-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_sis_total_saved metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes by Total Efficiency Savings |
volume_sis_total_saved_percent¶
Percentage of total disk space that is saved by compressing blocks, deduplication and file cloning.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
sis_space_saved_percent |
conf/rest/9.12.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-sis-attributes.percentage-total-space-saved |
conf/zapi/cdot/9.8.0/volume.yaml |
volume_size¶
Physical size of the volume, in bytes. The minimum size for a FlexVol volume is 20MB and the minimum size for a FlexGroup volume is 200MB per constituent. The recommended size for a FlexGroup volume is a minimum of 100GB per constituent. For all volumes, the default size is equal to the minimum size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
size |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.size |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Access ($Activity) History By Percent |
| ONTAP: File System Analytics (FSA) | Volume Activity | barchart | Volume Modify ($Activity) History By Percent |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Volume Total Size |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Volume Total Size |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used |
volume_size_available¶
The size (in bytes) that is still available in the volume. This field is valid only when the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
available |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.size-available |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_size_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: File System Analytics (FSA) | Highlights | stat | Available |
volume_size_total¶
Total usable size (in bytes) of the volume, not including WAFL reserve or volume snapshot reserve. If the volume is restricted or offline, a value of 0 is returned.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
total |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.size-total |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_size_total metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources Volumes by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources Volumes by Capacity Used % |
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: File System Analytics (FSA) | Highlights | stat | Size |
| ONTAP: File System Analytics (FSA) | Highlights | bargauge | Used Percentage |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
volume_size_used¶
Number of bytes used in the volume. If the volume is restricted or offline, a value of 0 is returned.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
used |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.size-used |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_size_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Space Used by Aggregate |
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | table | Top $TopResources Volumes by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources SVMs by Capacity Used % |
| ONTAP: cDOT | Capacity Metrics | timeseries | Top $TopResources Volumes by Capacity Used % |
| ONTAP: File System Analytics (FSA) | Highlights | stat | Used |
| ONTAP: File System Analytics (FSA) | Highlights | bargauge | Used Percentage |
| ONTAP: SVM | Capacity | timeseries | Top $TopResources SVMs by Volume Space Usage |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Volume Size Used |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Volume Size Used |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used |
volume_size_used_percent¶
Percentage of utilized storage space in a volume relative to its total capacity.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
percent_used |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.percentage-size-used |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_size_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Space Used % |
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources Volumes by Used % |
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes Per Volume Size Used |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes Per Volume Size Used |
| ONTAP: Volume | Forecast Volume Capacity | table | Top $TopResources Volumes Per Size Used Percentage Trend |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used Percent |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used Percent |
volume_snapshot_count¶
Number of snapshots in the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
snapshot_count |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-snapshot-attributes.snapshot-count |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshot_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Snapshot Copies | stat | <10 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | 10-100 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | 101-500 Copies |
| ONTAP: Data Protection | Snapshot Copies | stat | >500 Copies |
| ONTAP: Data Protection | Snapshot Copies | table | Volume count by the number of Snapshot copies |
| ONTAP: Datacenter | Snapshots | piechart | Snapshot Copies |
volume_snapshot_reserve_available¶
The size (in bytes) that is available for Snapshot copies inside the Snapshot reserve. This value is zero if Snapshot spill is present. For 'none' guaranteed volumes, this may get reduced due to less available space in the aggregate. This parameter is not supported on Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume |
snapshot_reserve_available |
conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter |
volume-attributes.volume-space-attributes.snapshot-reserve-available |
conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshot_reserve_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Snapshot Reserve Available |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Snapshot Reserve Available |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used |
volume_snapshot_reserve_percent¶
The percentage of volume disk space that has been set aside as reserve for snapshot usage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | percent_snapshot_space | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.percentage-snapshot-reserve | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshot_reserve_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes Per Snapshot Reserve |
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes Per Snapshot Reserve |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used Percent |
volume_snapshot_reserve_size¶
The size (in bytes) in the volume that has been set aside as reserve for snapshot usage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | snapshot_reserve_size | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.snapshot-reserve-size | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshot_reserve_size metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Data Protection | Snapshot Copies | stat | Volumes breached |
| ONTAP: Data Protection | Snapshot Copies | stat | Volumes not breached |
| ONTAP: Data Protection | Snapshot Copies | table | Volumes Breaching Snapshot Copy Reserve Space |
| ONTAP: Datacenter | Snapshots | piechart | Breached Status |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Snapshot Reserve Size |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Snapshot Reserve Size |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used |
volume_snapshot_reserve_used¶
The amount of storage space currently used by a volume's snapshot reserve, calculated by subtracting the snapshot reserve available space from the snapshot reserve size.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | snapshot_reserve_size, snapshot_reserve_available | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | snapshot_reserve_size, snapshot_reserve_available | conf/zapi/cdot/9.8.0/volume.yaml |
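Because this metric is derived from the two source counters listed above, the same value can be approximated directly from the exported metrics. The PromQL below is a sketch only and assumes both source metrics are collected with matching labels:

```promql
# snapshot reserve used = reserve size - reserve available
volume_snapshot_reserve_size - volume_snapshot_reserve_available
```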
volume_snapshot_reserve_used_percent¶
Percentage of the volume's snapshot reserve that has been used. Note that in some scenarios it is possible to exceed 100% of the allocated space.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | snapshot_space_used | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.percentage-snapshot-reserve-used | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshot_reserve_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Snapshot Space Used % |
| ONTAP: LUN | Top Volume and LUN Capacity | timeseries | Top $TopResources Volumes by Snapshot Used % |
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes Per Snapshot Reserve Used |
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes Per Snapshot Reserve Used |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used Percent |
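Since snapshot spill can push this metric past 100%, a simple over-reserve check is often useful. An illustrative PromQL sketch, assuming the metric is exported under this name:

```promql
# Volumes whose snapshot usage has spilled beyond the configured reserve
volume_snapshot_reserve_used_percent > 100
```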
volume_snapshots_size_available¶
Total free space (in bytes) available in the volume and the snapshot reserve. If this value is 0 or negative, a new snapshot cannot be created.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | size_available_for_snapshots | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.size-available-for-snapshots | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshots_size_available metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Snapshot Size Available |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Snapshot Size Available |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used |
volume_snapshots_size_used¶
The size (in bytes) that is used by snapshots in the volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | size_used_by_snapshots | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.size-used-by-snapshots | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_snapshots_size_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Capacity | timeseries | Top $TopResources Volumes by Snapshot Space Used |
| ONTAP: Data Protection | Snapshot Copies | stat | Volumes breached |
| ONTAP: Data Protection | Snapshot Copies | stat | Volumes not breached |
| ONTAP: Data Protection | Snapshot Copies | table | Volumes Breaching Snapshot Copy Reserve Space |
| ONTAP: Datacenter | Snapshots | piechart | Breached Status |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Snapshot Size Used |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Snapshot Size Used |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Snapshot Space Used |
volume_space_expected_available¶
The size (in bytes) that should be available for the volume irrespective of available size in the aggregate. This is the same as size-available for 'volume' guaranteed volumes. For 'none' guaranteed volumes, this value is calculated as if the aggregate has enough backing disk space to fully support the volume's size. Similar to the size-available property, this does not include Snapshot reserve. This count gets reduced if snapshots consume space above the Snapshot reserve threshold. This parameter is not supported on Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | expected_available | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.expected-available | conf/zapi/cdot/9.8.0/volume.yaml |
volume_space_logical_available¶
The size (in bytes) that is logically available in the volume. This is the amount of free space available, considering space saved by the storage efficiency features as being used. This does not include Snapshot reserve. This parameter is not supported on FlexGroups or Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | logical_available | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.logical-available | conf/zapi/cdot/9.8.0/volume.yaml |
volume_space_logical_used¶
The size (in bytes) that is logically used in the volume. This value includes all the space saved by the storage efficiency features along with the physically used space. This does not include Snapshot reserve but does consider Snapshot spill. This parameter is not supported on FlexGroups or Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | logical_used | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.logical-used | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_logical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: SVM | Capacity | timeseries | Top $TopResources SVMs by Logical Space Usage Across Volumes |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Logical Space Used |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume | I/O Density | timeseries | Top $TopResources Volumes by IO Density (IOPs/TiB) |
| ONTAP: Volume | I/O Density | timeseries | Bottom $TopResources Volumes by IO Density (IOPs/TiB) |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Logical Space Used |
| ONTAP: Volume | Growth Rate | timeseries | Top $TopResources Volumes Per Growth Rate of Logical Used |
| ONTAP: Volume | Growth Rate | table | Top $TopResources Volumes by Logical Usage: Delta |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used |
volume_space_logical_used_by_afs¶
The size (in bytes) that is logically used by the active filesystem of the volume. This value differs from 'logical-used' by the amount of Snapshot spill that exceeds the Snapshot reserve. This parameter is not supported on FlexGroups or Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | logical_used_by_afs | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.logical-used-by-afs | conf/zapi/cdot/9.8.0/volume.yaml |
volume_space_logical_used_by_snapshots¶
The size (in bytes) that is logically used across all Snapshot copies in the volume. This value differs from 'size-used-by-snapshots' by the space saved by the storage efficiency features across the Snapshot copies. This parameter is not supported on FlexGroups or Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | logical_used_by_snapshots | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.logical-used-by-snapshots | conf/zapi/cdot/9.8.0/volume.yaml |
volume_space_logical_used_percent¶
Percentage of the logical used size of the volume. This parameter is not supported on FlexGroups or Infinite Volumes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | logical_used_percent | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.logical-used-percent | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_logical_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes Per Logical Space Used |
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes Per Logical Space Used |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used Percent |
volume_space_performance_tier_inactive_user_data¶
The size that is physically used in the performance tier of the volume and has a cold temperature. This parameter is only supported if the volume is in an aggregate that is either attached to an object store or could be attached to one.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | performance_tier_inactive_user_data | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.performance-tier-inactive-user-data | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_performance_tier_inactive_user_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes by Inactive Data |
volume_space_performance_tier_inactive_user_data_percent¶
The size (in percent) that is physically used in the performance tier of the volume and has a cold temperature. This parameter is only supported if the volume is in an aggregate that is either attached to an object store or could be attached to one.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | performance_tier_inactive_user_data_percent | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.performance-tier-inactive-user-data-percent | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_performance_tier_inactive_user_data_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes by Inactive Data |
volume_space_physical_used¶
The size (in bytes) that is physically used in the volume. This differs from 'total-used' space by the space that is reserved for future writes. The value includes blocks in use by Snapshot copies. This field is valid only if the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | virtual_used | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.physical-used | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_physical_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: FlexGroup | Volume Table | table | FlexGroup Constituents in Cluster |
| ONTAP: Health | Volume | table | Volumes with Ransomware Issues (9.10+ Only) |
| ONTAP: Health | Volume | table | Volumes Move Issues |
| ONTAP: SVM | Volume Capacity | timeseries | Top $TopResources Volumes Per Physical Space Used |
| ONTAP: Volume | Volume Table | table | Volumes in Cluster |
| ONTAP: Volume | Capacity | timeseries | Top $TopResources Volumes Per Physical Space Used |
| ONTAP: Volume | Growth Rate | timeseries | Top $TopResources Volumes Per Growth Rate of Physical Used |
| ONTAP: Volume | Growth Rate | table | Top $TopResources Volumes by Physical Usage: Delta |
| ONTAP: Volume Deep Dive | Volume Capacity: $Volume | table | Volumes in Cluster |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used |
volume_space_physical_used_percent¶
The size (in percent) that is physically used in the volume. The percentage is based on the volume size, including the space reserved for Snapshot copies. This field is valid only if the volume is online.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume | virtual_used_percent | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-get-iter | volume-attributes.volume-space-attributes.physical-used-percent | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_space_physical_used_percent metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: SVM | Volume Capacity % | timeseries | Top $TopResources Volumes Per Physical Space Used |
| ONTAP: Volume | Capacity % | timeseries | Top $TopResources Volumes Per Physical Space Used |
| ONTAP: Volume Deep Dive | Per Volume Statistics | timeseries | Per Volume Space Used Percent |
volume_tags¶
Displays tags at the volume level.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | NA | Harvest generated | conf/rest/9.12.0/volume.yaml |
volume_top_clients_read_data¶
This metric measures the amount of data read by the top clients from a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/clients | throughput.read | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/clients | throughput.read Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_clients_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Clients | timeseries | Top $TopResources Volumes Clients by Read Throughput |
volume_top_clients_read_ops¶
This metric tracks the number of read operations performed by the top clients on a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/clients | iops.read | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/clients | iops.read Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_clients_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Clients | timeseries | Top $TopResources Volumes Clients by Read IOPs |
volume_top_clients_write_data¶
This metric measures the amount of data written by the top clients to a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/clients | throughput.write | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/clients | throughput.write Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_clients_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Clients | timeseries | Top $TopResources Volumes Clients by Write Throughput |
volume_top_clients_write_ops¶
This metric tracks the number of write operations performed by the top clients on a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/clients | iops.write | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/clients | iops.write Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_clients_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Clients | timeseries | Top $TopResources Volumes Clients by Write IOPs |
volume_top_files_read_data¶
This metric measures the amount of data read from the top files of a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/files | throughput.read | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/files | throughput.read Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_files_read_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Files | timeseries | Top $TopResources Volumes Files by Read Throughput |
volume_top_files_read_ops¶
This metric tracks the number of read operations performed on the files of a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/files | iops.read | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/files | iops.read Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_files_read_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Files | timeseries | Top $TopResources Volumes Files by Read IOPs |
volume_top_files_write_data¶
This metric measures the amount of data written to the top files of a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/files | throughput.write | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/files | throughput.write Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_files_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Files | timeseries | Top $TopResources Volumes Files by Write Throughput |
volume_top_files_write_ops¶
This metric tracks the number of write operations performed on the files of a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/storage/volumes/*/top-metrics/files | iops.write | conf/rest/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes/*/top-metrics/files | iops.write Unit: Type: Base: | conf/keyperf/9.15.0/volume.yaml |
The volume_top_files_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | Files | timeseries | Top $TopResources Volumes Files by Write IOPs |
volume_total_data¶
This metric represents the total amount of data that has been read from and written to a specific volume.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume | bytes_read, bytes_written Unit: Type: Base: | conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes | statistics.throughput_raw.total Unit: b_per_sec Type: rate Base: | conf/keyperf/9.15.0/volume.yaml |
| ZAPI | volume | read_data, write_data Unit: Type: Base: | conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_total_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Cluster | Throughput | timeseries | Data |
| ONTAP: Datacenter | Performance | timeseries | Top $TopResources Throughput by Cluster |
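As the description notes, this metric combines read and write throughput, so it should roughly track the sum of the per-direction series. The PromQL below is an illustrative sketch and assumes the companion volume_read_data metric is also collected:

```promql
# Total throughput reconstructed from its read and write components
volume_read_data + volume_write_data

# The ten busiest volumes by combined throughput
topk(10, volume_total_data)
```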
volume_total_footprint¶
This field represents the total footprint in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint | total_footprint | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter | total-footprint | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_total_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Total Footprint |
volume_total_metadata_footprint¶
This field represents the total metadata footprint in bytes.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/private/cli/volume/footprint | total_metadata_footprint | conf/rest/9.14.0/volume.yaml |
| ZAPI | volume-footprint-get-iter | volume_total_metadata_footprint | conf/zapi/cdot/9.8.0/volume.yaml |
The volume_total_metadata_footprint metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Volume | FabricPool | table | Volumes Footprint |
| ONTAP: Volume | FabricPool | timeseries | Top $TopResources Volumes by Total Metadata Footprint |
volume_total_ops¶
Number of operations per second serviced by the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume | total_ops Unit: per_sec Type: rate Base: | conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes | statistics.iops_raw.total Unit: per_sec Type: rate Base: | conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume | total_ops Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_total_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by IOPs |
| ONTAP: cDOT | Cluster Metrics | timeseries | Top $TopResources Total IOPs by Cluster |
| ONTAP: cDOT | Volume Metrics | timeseries | Top $TopResources Volumes by IOPs |
| ONTAP: Cluster | Throughput | timeseries | IOPs |
| ONTAP: Datacenter | Performance | timeseries | Top $TopResources IOPs by Cluster |
| ONTAP: FlexGroup | Highlights | timeseries | Top $TopResources Constituents by Total IOPs |
| ONTAP: Node | Volume Performance | timeseries | Top $TopResources Volumes by IOPs |
| ONTAP: Volume | Highlights | stat | Top $TopResources Volumes by Total IOPs |
| ONTAP: Volume | Highlights | timeseries | Top $TopResources Volumes by Total IOPs |
| ONTAP: Volume | I/O Density | timeseries | Top $TopResources Volumes by IO Density (IOPs/TiB) |
| ONTAP: Volume | I/O Density | timeseries | Bottom $TopResources Volumes by IO Density (IOPs/TiB) |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
volume_write_data¶
Bytes written per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume | bytes_written Unit: b_per_sec Type: rate Base: | conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes | statistics.throughput_raw.write Unit: b_per_sec Type: rate Base: | conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume | write_data Unit: b_per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_write_data metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Write Throughput |
| ONTAP: cDOT | Volume Metrics | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: FlexGroup | Highlights | timeseries | Top $TopResources Constituents by Average Throughput |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Write Throughput |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Write Throughput |
| ONTAP: Node | Volume Performance | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Write Throughput |
| ONTAP: Volume | Highlights | stat | Top $TopResources Volumes Total Throughput |
| ONTAP: Volume | Highlights | timeseries | Top $TopResources Volumes by Average Throughput |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Write Throughput |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Write Throughput |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
| ONTAP: Volume Deep Dive | Highlights | stat | Max Write Op Size |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Write Throughput |
volume_write_latency¶
Average latency in microseconds for the WAFL filesystem to process write requests to the volume; not including request processing or network communication time.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume | write_latency Unit: microsec Type: average Base: total_write_ops | conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes | statistics.latency_raw.write Unit: microsec Type: average Base: volume_statistics.iops_raw.write | conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume | write_latency Unit: microsec Type: average Base: write_ops | conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_write_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Write Latency |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Write Latency |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Write Latency |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Write Latency |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Write Latency |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Write Latency |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Write Latency |
volume_write_ops¶
Number of write operations per second to the volume
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/volume | total_write_ops Unit: per_sec Type: rate Base: | conf/restperf/9.12.0/volume.yaml |
| KeyPerf | api/storage/volumes | statistics.iops_raw.write Unit: per_sec Type: rate Base: | conf/keyperf/9.15.0/volume.yaml |
| ZAPI | perf-object-get-instances volume | write_ops Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/volume.yaml |
The volume_write_ops metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Aggregate | Volume Performance | timeseries | Top $TopResources Volumes by Write IOPs |
| ONTAP: FlexGroup | Volume Table | table | Top $TopResources Volumes by Write IOPS |
| ONTAP: FlexGroup | Volume WAFL Layer | timeseries | Top $TopResources Volumes by Write IOPs |
| ONTAP: SVM | Volume Performance | timeseries | Top $TopResources Volumes by Write IOPs |
| ONTAP: Volume | Volume Table | table | Top $TopResources Volumes by Write IOPS |
| ONTAP: Volume | Performance | timeseries | Top $TopResources Volumes by Write IOPs |
| ONTAP: Volume by SVM | Highlights | table | Volume Performance for $SVM (Click volume for detailed drill-down) |
| ONTAP: Volume Deep Dive | Highlights | table | Volume Performance |
| ONTAP: Volume Deep Dive | Highlights | stat | Max Write Op Size |
| ONTAP: Volume Deep Dive | Highlights | timeseries | Write IOPs |
vscan_labels¶
This metric provides information about Vscan
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/protocols/vscan | Harvest generated | conf/rest/9.12.0/vscan.yaml |
vscan_scan_latency¶
Average scan latency
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/vscan | scan.latency Unit: microsec Type: average Base: scan.requests | conf/restperf/9.13.0/vscan.yaml |
| ZAPI | perf-object-get-instances offbox_vscan_server | scan_latency Unit: microsec Type: average Base: scan_latency_base | conf/zapiperf/cdot/9.8.0/vscan.yaml |
The vscan_scan_latency metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Vscan | Connection Status Counters | timeseries | Top $TopResources Scanners by Scanner Latency |
vscan_scan_request_dispatched_rate¶
Total number of scan requests sent to the scanner per second
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/vscan | scan.request_dispatched_rate Unit: per_sec Type: rate Base: | conf/restperf/9.13.0/vscan.yaml |
| ZAPI | perf-object-get-instances offbox_vscan_server | scan_request_dispatched_rate Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/vscan.yaml |
The vscan_scan_request_dispatched_rate metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Vscan | Connection Status Counters | timeseries | Top $TopResources Scanners by Scanner Requests Throughput |
vscan_scanner_stats_pct_cpu_used¶
Percentage CPU utilization on scanner calculated over the last 15 seconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/vscan | scanner.stats_percent_cpu_used Unit: none Type: raw Base: | conf/restperf/9.13.0/vscan.yaml |
| ZAPI | perf-object-get-instances offbox_vscan_server | scanner_stats_pct_cpu_used Unit: none Type: raw Base: | conf/zapiperf/cdot/9.8.0/vscan.yaml |
The vscan_scanner_stats_pct_cpu_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Vscan | Scanner utilization | timeseries | Scanner CPU Utilization |
vscan_scanner_stats_pct_mem_used¶
Percentage RAM utilization on scanner calculated over the last 15 seconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/vscan | scanner.stats_percent_mem_used Unit: none Type: raw Base: | conf/restperf/9.13.0/vscan.yaml |
| ZAPI | perf-object-get-instances offbox_vscan_server | scanner_stats_pct_mem_used Unit: none Type: raw Base: | conf/zapiperf/cdot/9.8.0/vscan.yaml |
The vscan_scanner_stats_pct_mem_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Vscan | Scanner utilization | timeseries | Scanner Mem Utilization |
vscan_scanner_stats_pct_network_used¶
Percentage network utilization on scanner calculated for the last 15 seconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/vscan | scanner.stats_percent_network_used Unit: none Type: raw Base: | conf/restperf/9.13.0/vscan.yaml |
| ZAPI | perf-object-get-instances offbox_vscan_server | scanner_stats_pct_network_used Unit: none Type: raw Base: | conf/zapiperf/cdot/9.8.0/vscan.yaml |
The vscan_scanner_stats_pct_network_used metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Vscan | Scanner utilization | timeseries | Scanner Network Utilization |
wafl_avg_msg_latency¶
Average turnaround time for WAFL messages in milliseconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | average_msg_latency Unit: millisec Type: average Base: msg_total | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | avg_wafl_msg_latency Unit: millisec Type: average Base: wafl_msg_total | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_avg_non_wafl_msg_latency¶
Average turnaround time for non-WAFL messages in milliseconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | average_non_wafl_msg_latency Unit: millisec Type: average Base: non_wafl_msg_total | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | avg_non_wafl_msg_latency Unit: millisec Type: average Base: non_wafl_msg_total | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_avg_repl_msg_latency¶
Average turnaround time for replication WAFL messages in milliseconds.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | average_replication_msg_latency Unit: millisec Type: average Base: replication_msg_total | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | avg_wafl_repl_msg_latency Unit: millisec Type: average Base: wafl_repl_msg_total | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_cp_count¶
Array of counts of different types of Consistency Points (CP).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | cp_count Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | cp_count Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
The wafl_cp_count metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Disk | Disk Utilization | timeseries | CP (Consistency Points) Counts |
wafl_cp_phase_times¶
Array of percentage time spent in different phases of Consistency Point (CP).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | cp_phase_times Unit: percent Type: percent Base: total_cp_msecs | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | cp_phase_times Unit: percent Type: percent Base: total_cp_msecs | conf/zapiperf/cdot/9.8.0/wafl.yaml |
The wafl_cp_phase_times metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | System Utilization |
wafl_memory_free¶
The current WAFL memory available in the system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | memory_free Unit: mb Type: raw Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_memory_free Unit: mb Type: raw Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_memory_used¶
The current WAFL memory used in the system.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | memory_used Unit: mb Type: raw Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_memory_used Unit: mb Type: raw Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_msg_total¶
Total number of WAFL messages per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | msg_total Unit: per_sec Type: rate Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_msg_total Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_non_wafl_msg_total¶
Total number of non-WAFL messages per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | non_wafl_msg_total Unit: per_sec Type: rate Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | non_wafl_msg_total Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_read_io_type¶
Percentage of reads served from buffer cache, external cache, or disk.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | read_io_type Unit: percent Type: percent Base: read_io_type_base | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | read_io_type Unit: percent Type: percent Base: read_io_type_base | conf/zapiperf/cdot/9.8.0/wafl.yaml |
The wafl_read_io_type metric is visualized in the following Grafana dashboards:
| Dashboard | Row | Type | Panel |
|---|---|---|---|
| ONTAP: Node | Backend | timeseries | Reads From |
wafl_reads_from_cache¶
WAFL reads from cache.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_cache Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_cache Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_cloud¶
WAFL reads from cloud storage.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_cloud Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_cloud Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_cloud_s2c_bin¶
WAFL reads from cloud storage via s2c bin.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_cloud_s2c_bin Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_cloud_s2c_bin Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_disk¶
WAFL reads from disk.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_disk Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_disk Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_ext_cache¶
WAFL reads from external cache.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_external_cache Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_ext_cache Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_fc_miss¶
WAFL reads from remote volume for fc_miss.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_fc_miss Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_fc_miss Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_pmem¶
WAFL reads from persistent memory.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_pmem Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_reads_from_ssd¶
WAFL reads from SSD.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | reads_from_ssd Unit: none Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_reads_from_ssd Unit: none Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_repl_msg_total¶
Total number of replication WAFL messages per second.
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | replication_msg_total Unit: per_sec Type: rate Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | wafl_repl_msg_total Unit: per_sec Type: rate Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_total_cp_msecs¶
Milliseconds spent in Consistency Point (CP).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | total_cp_msecs Unit: millisec Type: delta Base: | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | total_cp_msecs Unit: millisec Type: delta Base: | conf/zapiperf/cdot/9.8.0/wafl.yaml |
wafl_total_cp_util¶
Percentage of time spent in a Consistency Point (CP).
| API | Endpoint | Metric | Template |
|---|---|---|---|
| REST | api/cluster/counter/tables/wafl | total_cp_util Unit: percent Type: percent Base: cpu_elapsed_time | conf/restperf/9.12.0/wafl.yaml |
| ZAPI | perf-object-get-instances wafl | total_cp_util Unit: percent Type: percent Base: cpu_elapsed_time | conf/zapiperf/cdot/9.8.0/wafl.yaml |