Pace is an open-source library of ready-to-run automation examples for NetApp, starting with ONTAP. Workflows come in three styles: imperative scripts that execute step by step, declarative playbooks that describe the desired outcome, and stateful blueprints that track and enforce infrastructure over time. Having all three makes it easier to pick the right approach for each job.
Picking an automation style up front, without seeing each one in action, is hard. Pace places the three styles side by side, with worked examples, to make the trade-offs concrete.
Tasks are built in three styles, side by side. Underneath, they all talk to the same storage system — so the real differences are obvious: how much code you write, how safe it is to re-run, how easy it is to undo, and how much the tool handles for you versus how much you handle yourself.
Some styles give you full control over every step. Others let you describe what you want and work out the rest for you. Same task, same outcome; what changes is readability, flexibility, and maintenance.
Copy one folder, fill in your cluster details, run it. Nothing extra to install.
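The cluster details typically land in environment variables, which is how the script example below reads them (`OntapClient.from_env()`). The variable names here are an assumption for illustration; each example folder's own docs state the exact ones:

```shell
# Hypothetical variable names -- check the example folder's docs for the real ones.
export ONTAP_HOST="cluster1.example.com"
export ONTAP_USER="admin"
export ONTAP_PASS="********"
```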
Here's one task, printing cluster info, written in all three styles. Each example uses one representative tool for its style. Across the wider library, coverage varies: not every example exists in every style.
```python
# Connect to ONTAP and print cluster version + nodes
from ontap_client import OntapClient

client = OntapClient.from_env()
cluster = client.get("/cluster")
nodes = client.get("/cluster/nodes")

print(f"Cluster: {cluster['name']} (ONTAP {cluster['version']['full']})")
for node in nodes["records"]:
    print(f" - {node['name']:30} state={node['state']}")
```
```yaml
---
# Print ONTAP cluster info using netapp.ontap collection
- hosts: ontap
  gather_facts: false
  tasks:
    - name: Get cluster info
      netapp.ontap.na_ontap_rest_info:
        hostname: "{{ ontap_host }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"
        gather_subset:
          - cluster
          - cluster/nodes
      register: result

    - name: Show summary
      ansible.builtin.debug:
        msg: "{{ result.ontap_info.cluster.name }} ({{ result.ontap_info.cluster.version.full }})"
```
```hcl
terraform {
  required_providers {
    netapp-ontap = {
      source  = "NetApp/netapp-ontap"
      version = "~> 1.1"
    }
  }
}

data "netapp-ontap_cluster_info" "this" {
  cx_profile_name = "primary"
}

output "cluster_summary" {
  value = "${data.netapp-ontap_cluster_info.this.name} (ONTAP ${data.netapp-ontap_cluster_info.this.version.full})"
}
```
The repo includes reusable prompts that cover the full loop — planning a new use case, generating a first draft, and reviewing the result. Paste them into your AI assistant of choice; the output is a starting point that still needs human review and testing.
Break a use case into the resources, dependencies, and edge cases worth handling — before any code is written.
Scaffold a first draft in the style you choose, following the project's structure, helpers, and conventions.
Surface issues with idempotency, error handling, naming, and convention drift: a second pair of eyes before opening a PR.
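Idempotency is the classic thing a review catches in imperative scripts: a bare create call errors out or duplicates on re-run. A minimal sketch of the check-before-create pattern a review looks for, using an in-memory stand-in client (not the library's real API):

```python
# Sketch of an idempotent "ensure" step -- the pattern a review should look for.
# FakeClient is an in-memory stand-in for a storage API client, not a real one.

class FakeClient:
    def __init__(self):
        self.volumes = {}

    def get_volume(self, name):
        return self.volumes.get(name)

    def create_volume(self, name, size_gb):
        if name in self.volumes:
            raise RuntimeError(f"volume {name!r} already exists")
        self.volumes[name] = {"name": name, "size_gb": size_gb}
        return self.volumes[name]


def ensure_volume(client, name, size_gb):
    """Create the volume only if it does not already exist.

    Safe to re-run: the second call is a no-op instead of an error.
    """
    existing = client.get_volume(name)
    if existing is not None:
        return existing  # already there, nothing to do
    return client.create_volume(name, size_gb)


client = FakeClient()
ensure_volume(client, "vol_demo", 10)
ensure_volume(client, "vol_demo", 10)  # re-run: no error, no duplicate
print(len(client.volumes))  # → 1
```

Playbooks and blueprints get this behavior from the tool; in plain scripts it has to be written by hand, which is exactly why a review pass checks for it.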
Use clear commit messages, get one approving review, make sure the checks pass. That's the whole protocol — no matter which style you contribute to.
Make your own copy of the repo and branch off the main line.
Install the tools for the style you'll touch — the contribution guide walks through it.
One style is enough — script, playbook, or blueprint. Add more if you'd like.
Run the same checks locally that CI runs on every PR — lint, tests, formatting.
Clear commit message, one approving review, you're in.
Adding examples in another tool, or a use case we haven't covered yet? Welcome here.