FAQ

How do I migrate from Harvest 1.6 to 2.0?

There is currently no tool to migrate data from Harvest 1.6 to 2.0. The most common workaround is to run 1.6 and 2.0 in parallel until the 1.6 data expires due to the normal retention policy, and then fully cut over to 2.0.

Technically, it’s possible to extract the data from a Graphite DB and import it into Prometheus, but it’s not an area we’ve invested in. If you want to explore that option, check out promtool, which supports importing data, though it’s probably not worth the effort.
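For the curious, promtool can backfill a Prometheus TSDB from samples in the OpenMetrics text format; you would first need to export your Graphite data into that format yourself. A minimal sketch follows, where the metric name, values, and paths are illustrative, not anything Harvest or Graphite produces:

```
# samples.om — OpenMetrics text format with epoch timestamps (illustrative)
# HELP volume_size_total Total volume size in bytes
# TYPE volume_size_total gauge
volume_size_total{volume="vol1"} 1073741824 1700000000
# EOF

# Convert the samples into TSDB blocks, then move the blocks
# into your Prometheus data directory:
#   promtool tsdb create-blocks-from openmetrics samples.om ./blocks
```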

How do I share sensitive log files with NetApp?

Email them to ng-harvest-files@netapp.com. This mailbox is accessible to NetApp Harvest employees only.

Multi-tenancy

Question

Is there a way to allow per-SVM user views? I need to offer one tenant per SVM. Can I limit visibility to specific SVMs? Is there an SVM dashboard available?

Answer

You can do this with Grafana. Harvest can provide the labels for SVMs. The pieces are there but need to be put together.

Grafana templates support the $__user variable, which you can use for pre-selections and decisions. Combine it with metadata that maps each user to their SVMs, and you can build SVM-specific dashboards.
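One possible wiring is sketched below. Grafana's built-in ${__user.login} variable identifies the logged-in user, and label_values is the Grafana Prometheus data source's variable-query function; however, svm_labels, the tenant label, and volume_size_used are assumptions for illustration, not necessarily the names your deployment emits (check your poller's /metrics output for the real ones):

```
# Grafana dashboard variable "svm" (Prometheus query type, illustrative):
label_values(svm_labels{tenant="${__user.login}"}, svm)

# Panel queries then filter on that variable:
volume_size_used{svm=~"$svm"}
```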

A German service provider is doing this: their service managers are responsible for a set of customers and should only see the data/dashboards of their corresponding customers.

Harvest Authentication and Permissions

Question

What permissions does Harvest need to talk to ONTAP?

Answer

Permissions, authentication, role based security, and creating a Harvest user are covered here.

ONTAP counters are missing

Question

How do I make Harvest collect additional ONTAP counters?

Answer

Instead of modifying the out-of-the-box templates in the conf/ directory, it is better to create your own custom templates following these instructions.
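As a hedged sketch of what this can look like (the object and file names are illustrative, and the exact layout may differ between Harvest versions, so follow the linked instructions), a custom.yaml placed next to a collector's default.yaml adds objects without modifying the shipped templates:

```
# conf/zapi/custom.yaml (illustrative)
objects:
  MyCounters: my_counters.yaml
```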

Capacity Metrics

Question

How are capacity and other metrics calculated by Harvest?

Answer

Each collector has its own way of collecting and post-processing metrics. Check the documentation of each individual collector (usually under the #Metrics section). Capacity and hardware-related metrics are collected by the Zapi collector, which emits metrics as-is, without additional calculation. Performance metrics are collected by the ZapiPerf collector, and the final values are calculated from the delta of two consecutive polls.

Tagging Volumes

Question

How do I tag ONTAP volumes with metadata and surface that data in Harvest?

Answer

See the volume tagging issue and volume tagging via sub-templates.

REST and Zapi Documentation

Question

How do I relate ONTAP REST endpoints to ZAPI APIs and attributes?

Answer

Please refer to the ONTAPI to REST API mapping document.

Sizing

How much disk space is required by Prometheus?

This depends on the collectors you've added, the number of nodes monitored, the cardinality of labels, the number of instances, retention, ingest rate, etc. A good approximation is to curl your Harvest exporter, count the number of samples it publishes, and then feed that number into a Prometheus sizing formula.

Prometheus stores an average of 1-2 bytes per sample. To plan the capacity of a Prometheus server, you can use the rough formula: needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample

A rough approximation is outlined at https://devops.stackexchange.com/questions/9298/how-to-calculate-disk-space-required-by-prometheus-v2-2
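Plugging illustrative numbers into the formula above (the retention, ingest rate, and promPort here are assumptions, not recommendations):

```shell
# Count the samples one poller publishes (12990 is an assumed promPort):
#   curl -s http://localhost:12990/metrics | grep -cv '^#'

# Rough sizing: 30 days retention, 10,000 samples/s ingest, 2 bytes/sample
retention_seconds=$((30 * 24 * 60 * 60))
samples_per_second=10000
bytes_per_sample=2
needed_bytes=$((retention_seconds * samples_per_second * bytes_per_sample))
echo "$((needed_bytes / 1024 / 1024 / 1024)) GiB"   # prints: 48 GiB
```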

Topk usage in Grafana

Question

In Grafana, why do I see more results from topk than I asked for?

Answer

Topk is one of Prometheus's out-of-the-box aggregation operators, and is used to calculate the largest k elements by sample value.

Depending on the time range you select, Prometheus will often return more results than you asked for. That's because Prometheus picks the topk at each timestamp in the graph. In other words, different time series are in the topk at different times, and over a large duration that adds up to many time series.

This is a limitation of Prometheus and can be mitigated by:

  • reducing the time range to a smaller duration that includes fewer topk results - something like a five to ten minute range works well for most of Harvest's charts
  • consulting the panel's table, which shows the current topk rows; use that data to interpret the additional series shown in the charts
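As an illustration (the metric name here is an assumption, not necessarily one Harvest emits), the same expression behaves differently in an instant query versus a range query:

```
# Instant query: returns exactly 5 series — the current top 5
topk(5, volume_read_data)

# Over a graph's time range, the same expression is evaluated at every
# step, so any series that enters the top 5 at any step appears in the
# chart — often more than 5 lines in total
```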

Additional details: here, here, and here

Where are Harvest container images published?

Harvest images are published to both NetApp's (cr.netapp.io) and Docker's (hub.docker.com) image registries. By default, cr.netapp.io is used.

How do I switch from DockerHub to NetApp's image registry (cr.netapp.io) or vice-versa?

Answer

Replace all instances of rahulguptajss/harvest:latest with cr.netapp.io/harvest:latest:

  • Edit your docker-compose file and make those replacements, or regenerate the compose file using the --image cr.netapp.io/harvest:latest option

  • Update any shell or Ansible scripts you have that are also using those images

  • After making these changes, you should stop your containers, pull new images, and restart.
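The replacement itself can be scripted. A sketch, assuming a compose file like the one created below (the file name and service name are illustrative; on macOS use sed -i '' instead of sed -i):

```shell
# Illustrative compose fragment that still references the DockerHub image
cat > harvest-compose.yml <<'EOF'
services:
  poller:
    image: rahulguptajss/harvest:latest
EOF

# Swap DockerHub references for cr.netapp.io in place
sed -i 's|rahulguptajss/harvest:latest|cr.netapp.io/harvest:latest|g' harvest-compose.yml

grep image harvest-compose.yml   # now shows the cr.netapp.io image
```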

You can verify that you're using the cr.netapp.io images like so:

Before

docker image ls -a
REPOSITORY              TAG       IMAGE ID       CREATED        SIZE
rahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB <=== no prefix in the repository 
prom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB       column means from DockerHub
grafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB

Pull image from cr.netapp.io

docker pull cr.netapp.io/harvest
Using default tag: latest
latest: Pulling from harvest
Digest: sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae
Status: Image is up to date for cr.netapp.io/harvest:latest
cr.netapp.io/harvest:latest

Notice that the IMAGE ID for both images is identical since the images are the same.

docker image ls -a
REPOSITORY              TAG       IMAGE ID       CREATED        SIZE
cr.netapp.io/harvest    latest    80061bbe1c2c   10 days ago    85.4MB  <== Harvest image from cr.netapp.io
rahulguptajss/harvest   latest    80061bbe1c2c   10 days ago    85.4MB
prom/prometheus         v2.33.1   e528f02c45a6   3 weeks ago    204MB
grafana/grafana         8.3.4     4a34578e4374   5 weeks ago    274MB
grafana/grafana         latest    1d60b4b996ad   2 months ago   275MB
prom/prometheus         latest    c10e9cbf22cd   3 months ago   194MB

We can now remove the image pulled from DockerHub:

docker image rm rahulguptajss/harvest
Untagged: rahulguptajss/harvest:latest
Untagged: rahulguptajss/harvest@sha256:6ff88153812ebb61e9dd176182bf8a792cde847748c5654d65f4630e61b1f3ae

docker image ls -a
REPOSITORY             TAG       IMAGE ID       CREATED        SIZE
cr.netapp.io/harvest   latest    80061bbe1c2c   10 days ago    85.4MB
prom/prometheus        v2.33.1   e528f02c45a6   3 weeks ago    204MB
grafana/grafana        8.3.4     4a34578e4374   5 weeks ago    274MB

Ports

What ports does Harvest use?

Answer

The default ports are listed below.

  • Harvest's pollers use ZAPI or REST to communicate with ONTAP on port 443
  • Each poller exposes the Prometheus port defined in your harvest.yml file
  • Prometheus scrapes each poller-exposed Prometheus port (promPort1, promPort2, promPort3)
  • Prometheus's default port is 9090
  • Grafana's default port is 3000

Snapmirror_labels

Why does my snapmirror_labels metric have an empty source_node?

Answer

SnapMirror relationships have a source and a destination node. However, ONTAP does not expose the source side of that relationship; only the destination side is returned via the ZAPI/REST APIs. Because of that, the Prometheus metric named snapmirror_labels will have an empty source_node label.

The dashboards show the correct value for source_node since we join multiple metrics in the Grafana panels to synthesize that information.

In short: don't rely on snapmirror_labels for the source_node label. If you need source_node, you will need to do a join similar to the one the Snapmirror dashboard does.

See https://github.com/NetApp/harvest/issues/1192 for more information and linked pull requests for REST and ZAPI.

NFS Clients Dashboard

Why does my NFS Clients dashboard have no data?

Answer

The NFS Clients dashboard is only available through the REST collector; this information is not available through ZAPI. You must enable the REST collector in your harvest.yml config and uncomment the nfs_clients.yaml section in your default.yaml file.
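The two changes look roughly like this. Both excerpts are illustrative and the exact file contents may differ between Harvest versions, so compare against your own harvest.yml and conf/rest/default.yaml:

```
# harvest.yml — enable the REST collector for the poller
Pollers:
  cluster-01:
    collectors:
      - Rest

# conf/rest/default.yaml — uncomment the NFSClients object
objects:
  ...
  NFSClients: nfs_clients.yaml
```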

Note: Enabling nfs_clients.yaml may slow down data collection.

File Analytics Dashboard

Why does my File Analytics dashboard have no data?

Answer

This dashboard requires ONTAP 9.8+ and the APIs are only available via REST. Please enable the REST collector in your harvest config. To collect and display usage data such as capacity analytics, you need to enable File System Analytics on a volume. Please see https://docs.netapp.com/us-en/ontap/task_nas_file_system_analytics_enable.html for more details.

Why is the Volume Sis Stat panel empty in the Volume dashboard?

Answer

This panel requires ONTAP 9.12+ and the APIs are only available via REST. Enable the REST collector in your harvest.yml config.