Cheat Sheet NetApp ONTAP – Commands used in day-to-day operations
Working with NetApp ONTAP often means jumping between clusters, SVMs, LIFs, and shares while keeping an eye on performance and troubleshooting access.
Over time, certain commands become part of the daily routine, whether you’re checking CIFS configurations, verifying volume security, or confirming network failover groups.
To save time and avoid digging through documentation, I’ve pulled together a compact cheat sheet of ONTAP commands I use most frequently in day-to-day operations.
I will update this post on a regular basis.
- ONTAP Cluster
- Storage Virtual Machines (SVMs)
- CIFS (SMB) Shares
- List all CIFS Servers (SVMs enabled for CIFS) and their NetBIOS Name
- List all CIFS Shares for a specific SVM
- Determine Owner and Permissions for a given CIFS Share (ONTAP share-level access)
- Determine Owner and Permissions for a given CIFS Share (File/Folder-level NTFS Permissions on the Volume itself set by the OS e.g. Windows Explorer or icacls)
- Displays the Security Style configured for a specific Volume (UNIX, NTFS or Mixed)
- Creating a new Volume and hidden CIFS Share
- Reverting a LIF to Its Home Port (here Management Interface LIF)
- From Share to Volume: Check Usage and Expand Capacity in ONTAP
- List CIFS shares on the SVM
- Determine which volume is mounted for your Share
- Check current Volume Usage
- Verify the aggregate has enough free space before extending Volume
- Extend the volume
- Check volume size and available space again
- Optional (Recommended): Enable autogrow
- CIFS and NFS Share Capacity = Volume Capacity
- Troubleshooting
- Links
ONTAP Cluster
The graceful shutdown of an ONTAP cluster shown below is usually, and hopefully, not a day-to-day operation. In my case, however, it is: my vSphere lab environment runs on three notebooks (my ESXi hosts) on which I run ONTAP as a simulator, and I shut the notebooks down whenever they are not needed for testing.
More about setting up your own NetApp ONTAP 9 lab can be found in my following post.
More about installing the ESXi image on notebooks that only provide USB NICs, as is usual these days, can be found in my following post.
Graceful Shutdown Procedure for NetApp ONTAP Cluster
Must be done from the Cluster Admin CLI (SSH or console)
!! Note !!
Simply shutting down or powering off the simulator virtual machines by using the vSphere Client can crash them, so first perform a graceful shutdown by using the cluster admin CLI as shown below.
Use this command for each node:
When using SSH, first check which node owns the cluster management LIF (shown below) and shut down the node that does not own it first; otherwise you will get disconnected from the SSH session before you are able to shut down the second node.
cluster01::> system node halt -node <node-name> -inhibit-takeover true -ignore-quorum-warnings true
cluster01::> system node halt -node cluster01-01 -inhibit-takeover true -ignore-quorum-warnings true
cluster01::> system node halt -node cluster01-02 -inhibit-takeover true -ignore-quorum-warnings true
If you’re shutting down a 2-node ONTAP simulator, you still must halt both nodes via CLI before powering off the VM in vSphere or your hypervisor.
We can now finally power off the VMs.

To reboot instead of shut down:
system node reboot -node * -inhibit-takeover true -ignore-quorum-warnings true
Determine which Node owns the Cluster Management LIF (IP Address)
Show Cluster Management LIF and Owner Node.
cluster01::> network interface show -role cluster-mgmt

To verify the node management LIFs.
cluster01::> network interface show -role node-mgmt

Check current Date and Time
cluster01::> cluster date show

# Show NTP configuration
cluster01::> cluster time-service ntp server show
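To also verify that the cluster is actually synchronized with the configured NTP servers, the NTP status can be checked as well. A minimal sketch, assuming ONTAP 9.5 or later where this command is available:

cluster01::> cluster time-service ntp status show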
Determine on which Node a specific SVM is currently hosted and running
In NetApp ONTAP, an SVM (Storage Virtual Machine) can be hosted on any node in the cluster, but its data LIFs (logical interfaces) determine which node (and port) client traffic actually flows through.
We can check on which node a specific SVM is currently running (i.e., where its LIFs are active).
Output will list all LIFs (data, management, intercluster, etc.) with Home Node (the node where the LIF is supposed to live), Current Node (the node where the LIF is actually running now) and Status (up/down).
The Current Node tells you on which node the SVM’s traffic is running.
cluster01::> network interface show -vserver svm_data

Restarting a Cluster Node
In NetApp ONTAP, restarting a cluster node is a sensitive operation because it affects the HA pair and potentially your storage availability. There are a couple of ways to restart a node safely.
Recommended for maintenance is a soft restart.
This triggers a graceful reboot, and HA will handle failover if necessary.
-node node1 → specify the node you want to reboot.
-reason → optional description that shows in logs.
# cluster01::> system node reboot -node node1 -reason "Maintenance reboot"
cluster01::> system node show
cluster01::> system node reboot -node cluster01-02 -reason "Maintenance reboot"

We can also force a reboot by running the following.
Only use if the node is unresponsive and normal reboot fails.
-force skips some checks and forces an immediate reboot.
cluster01::> system node reboot -node node1 -force -reason "Emergency reboot"
cluster01::> system node reboot -node cluster01-02 -force -reason "Emergency reboot"
If we are logged into the node directly (console), we can also run
Equivalent to the command system node reboot but only works if you have direct access.
HA failover still works automatically.
cluster01::> reboot
Display detailed Information about each Node in the Cluster
The following command will display detailed information about each node in the cluster.
cluster01::> system node show -instance

Show Cluster Uptime (Node Uptime)
In NetApp ONTAP, to see the uptime of a node or the cluster, you use the following commands.
cluster01::> system node show

cluster01::> system node show -instance

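To narrow the output down to just the uptime per node, the uptime field can also be queried directly. A minimal sketch, assuming the field name uptime as exposed by system node show:

cluster01::> system node show -fields uptime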
Storage Virtual Machines (SVMs)
Display the CIFS/SMB Server configuration for a specific SVM
The following command shows the CIFS/SMB server configuration for the SVM named svm_data.
cluster01::> vserver cifs show -vserver svm_data

The CIFS server was joined to my AD (into the default Computers container) while enabling CIFS on the SVM by running the following command. More about this in Part 4 under publishing CIFS/SMB shares.
cluster01::> vserver cifs create -vserver svm_data -cifs-server cifs-data02 -domain matrixpost-lab.net

For the CIFS/SMB server (its NetBIOS name/hostname) I created a DNS record in my lab environment which points to the IP address of the SVM we enabled CIFS on.

List all LIFs in the Cluster and show to which SVM they belong
This command will list all LIFs in the cluster but only display two pieces of information for each:
- Which SVM they belong to
- Their IP address
cluster01::> net int show -fields vserver, address

CIFS (SMB) Shares
CIFS (Common Internet File System) is actually Microsoft’s early SMB dialect (SMB 1.0).
When NetApp first integrated Windows file sharing (back in the Data ONTAP 7G / early Clustered ONTAP days), SMB was generally referred to as CIFS across the industry.
NetApp adopted the term CIFS server in its CLI, API, and documentation and still sticks to this term; renaming it would have meant adjusting thousands of scripts, tools, and habits that already use commands like vserver cifs show.
Even though modern ONTAP supports SMB 2.x and SMB 3.x (and you almost never want SMB 1/CIFS anymore), the command set hasn’t been renamed.
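To check which SMB dialects are currently enabled on a CIFS server, the CIFS options of the SVM can be queried. A minimal sketch, assuming the option field names smb1-enabled, smb2-enabled and smb3-enabled:

cluster01::> vserver cifs options show -vserver svm_data -fields smb1-enabled,smb2-enabled,smb3-enabled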
List all CIFS Servers (SVMs enabled for CIFS) and their NetBIOS Name
This command gives you a quick list of all CIFS server names in your cluster, showing the SVM they belong to and their configured CIFS/NetBIOS identity.
cluster01::> vserver cifs server show -fields cifs-server

List all CIFS Shares for a specific SVM
We can run the following command to show the currently existing CIFS shares on a specific SVM.
cluster01::> vserver cifs share show -vserver <svm>
cluster01::> vserver cifs share show -vserver svm_data
The c$ and ipc$ shares here are default administrative shares, very similar to those found on a Windows server.
They point to the root of the SVM namespace, which here is the volume root_vol_svm_data.

Determine Owner and Permissions for a given CIFS Share (ONTAP share-level access)
This command tells you how ONTAP controls who can even open the share in the first place, regardless of the underlying NTFS/UNIX permissions.
So users are either allowed or denied access to the share even before the native underlying NTFS/UNIX permissions set directly on the file system are checked.
!! Note !!
Finally, a user or group must be granted access both on the ONTAP share level (ONTAP config) and directly in the NTFS ACLs/UNIX permissions (metadata stored on the volume/disk, shown in the section below) in order to access the CIFS share successfully.
It shows the ONTAP share-level ACLs (the permissions applied to the SMB share object itself, ONTAP’s own configuration regardless of the native NTFS/UNIX permissions).
cluster01::> vserver cifs share access-control show -vserver svm_data -share share-01
By default, the share-level ACL gives full control to the standard group named Everyone.
Full control in the ACL means that all users in the domain and all trusted domains have full access to the share. Below for example, I have already changed this default share-level ACL to grant full control on share-level access only to my enterprise admin and the matrix-cifs-share01 AD security group.

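As a sketch of how such a change could be made (the domain and group names below are just my lab values used for illustration), a new share-level ACL entry is created and the default Everyone entry removed afterwards:

# grant full control on share-level to an AD security group
cluster01::> vserver cifs share access-control create -vserver svm_data -share share-01 -user-or-group "matrixpost-lab\matrix-cifs-share01" -permission Full_Control

# remove the default Everyone entry from the share-level ACL
cluster01::> vserver cifs share access-control delete -vserver svm_data -share share-01 -user-or-group Everyone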
Determine Owner and Permissions for a given CIFS Share (File/Folder-level NTFS Permissions on the Volume itself set by the OS e.g. Windows Explorer or icacls)
This command shows the NTFS/UNIX ACLs for the volume backing the CIFS share, which is mounted in the SVM namespace at the junction path /cifs-data.
Reflects the file/folder-level NTFS (or UNIX) permissions on the volume itself. Those are the same permissions you’d see or set in Windows Explorer → Security tab or via icacls.
Below, for example, I first list all CIFS shares provided by the SVM named svm_data and then check the owner and permissions for the CIFS share named share-01 by using its junction path "/cifs-data".
cluster01::> vserver security file-directory show -vserver svm_data -path "/cifs-data"

Below we will see our CIFS shares mounted on a Windows host. The UNC path \\10.0.0.226 (the IP address of the SVM named svm_data) returns the CIFS shares that are mounted on the SVM.
In ONTAP, each SVM (Storage Virtual Machine) has its own namespace, which is essentially a virtual file system tree made up of Mounted volumes (like mount points) and Junction paths (like folders or subdirectories).
This allows ONTAP to present multiple volumes as a unified file system to clients.
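To get a quick overview of the whole SVM namespace, i.e. which volumes are mounted at which junction paths, you can for example run the following. A minimal sketch for my lab SVM svm_data:

cluster01::> volume show -vserver svm_data -fields junction-path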

Displays the Security Style configured for a specific Volume (UNIX, NTFS or Mixed)
The following command displays the security style configured for a specific volume (vol_data03 on SVM svm_data).
The security style determines how ONTAP applies file and directory permissions within that volume:
- NTFS → Uses Windows NTFS ACLs
- UNIX → Uses UNIX mode bits and NFS semantics
- Mixed → Accepts both NTFS and UNIX permissions, applies the most recent change
- Unified (for FlexGroup) → Newer style combining NTFS/UNIX semantics
cluster01::> volume show -vserver svm_data -volume vol_data03 -fields security-style

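If the security style of a data volume ever needs to be changed, e.g. to NTFS for a pure CIFS/SMB volume, this could look like the sketch below, assuming the -security-style parameter of volume modify. Changing the security style can affect existing permissions, so use with care:

cluster01::> volume modify -vserver svm_data -volume vol_data03 -security-style ntfs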
Creating a new Volume and hidden CIFS Share
Below I will create a new volume and a hidden CIFS share by running the following commands.
The CIFS share will be created as a hidden share by appending the $ character to the share name. The rest is the same as for normal shares.
More about NTFS share and file permissions can be found here: https://learn.microsoft.com/en-us/iis/web-hosting/configuring-servers-in-the-windows-web-platform/configuring-share-and-ntfs-permissions.
Share names ending with the $ character are hidden shares.
For ONTAP 9.7 and earlier, the admin$, ipc$, and c$ administrative shares are automatically created on every CIFS server and are reserved share names. Beginning with ONTAP 9.8, the admin$ share is no longer automatically created.
Source: https://docs.netapp.com/us-en/ontap/smb-admin/share-naming-requirements-concept.html
# I will first create a 500MB data volume named vol_data04 on aggr_data02
cluster01::> volume create -vserver svm_data -volume vol_data04 -aggregate aggr_data02 -size 500MB

# Next I will need to mount this volume in the namespace of NetApp ONTAP (SVM)
cluster01::> volume mount -vserver svm_data -volume vol_data04 -junction-path /cifs-data04

# I can now create a new CIFS/SMB share by using the new volume and its junction-path
# !! Note !! Using the $ character at the end of the share name like admin-tools$ makes the share hidden from normal browse lists (e.g., it won't show up in \\cifs-server\ when users browse shares).
cluster01::> vserver cifs share create -vserver svm_data -share-name admin-tools$ -path /cifs-data04

# Finally I will check the creation of the new CIFS share
cluster01::> vserver cifs share show -vserver svm_data

Below, to show the difference in behavior when using hidden shares, I created a further volume and CIFS share with the same name as previously, but this time without the $ suffix at the end of the share name, so just admin-tools.
cluster01::> vserver cifs share create -vserver svm_data -share-name admin-tools -path /cifs-data05

When browsing to \\cifs-server\, only my last created CIFS share (the one not marked as hidden, but also named admin-tools) will show up under the root path of the SVM.
Users can still connect to hidden CIFS shares directly by using the full UNC path (e.g. \\cifs-data02\admin-tools$), provided they have the right share + file system permissions.

To browse to a hidden share, here also named admin-tools (but with the $ character as suffix), we just need to enter the full path including the $ suffix of the share name as shown below, access of course provided we have the right share + file system permissions as mentioned.
\\10.0.0.226\admin-tools$

Reverting a LIF to Its Home Port (here Management Interface LIF)
When a Logical Interface (LIF) runs on a node or port other than its home, for example after a takeover, giveback, or manual migration, ONTAP marks it as "is-home = false."
Using the network interface revert command, you can quickly return the LIF to its designated home node and home port.
This is especially important for the management interface, ensuring it always uses the intended and stable network path.

The command below provides a quick view of where the management LIF is currently running. It highlights whether the LIF is on its home node and port, making it easy to identify misplacements after failover or manual migrations.
In my case below, the management LIF is currently running on its home node and port shown in column is-home = true.
cluster01::> network interface show -lif cluster_mgmt -fields curr-node,is-home,curr-port

For testing purposes, to manually move the cluster_mgmt LIF from cluster01-01 to cluster01-02, you can simply use the network interface migrate command and specify the target node and port:
Migrating the cluster_mgmt LIF will immediately disconnect your SSH or System Manager session, because your connection is tied to that very LIF.
That’s why we saw: client_loop: send disconnect: Connection reset
cluster01::> network interface migrate -vserver cluster01 -lif cluster_mgmt -destination-node cluster01-02 -destination-port e0c

After the migration, verify the new location:
This will show the LIF now running on cluster01-02, with is-home = false, demonstrating a non-home placement.
cluster01::> network interface show -lif cluster_mgmt -fields curr-node,is-home,curr-port

After demonstrating a manual failover, you can return the cluster_mgmt LIF to its original node/port with:
cluster01::> network interface revert -vserver cluster01 -lif cluster_mgmt

You can verify the state with:
cluster01::> network interface show -lif cluster_mgmt -fields curr-node,curr-port,is-home

How Auto-Revert Interacts With Manual LIF Migrations
Auto-revert determines whether a LIF automatically returns to its home node and port once they become healthy again.
If auto-revert is enabled, any manual migration is temporary and ONTAP will move the LIF back to its home location automatically.
If auto-revert is disabled, the LIF remains on the manually selected node until it is moved again by an administrator.
You can check or change the setting with:
cluster01::> network interface show -lif cluster_mgmt -fields auto-revert

To enable or disable it run:
# enable auto-revert
cluster01::> network interface modify -vserver cluster01 -lif cluster_mgmt -auto-revert true

# disable auto-revert
cluster01::> network interface modify -vserver cluster01 -lif cluster_mgmt -auto-revert false
From Share to Volume: Check Usage and Expand Capacity in ONTAP
Below we will see how to trace a CIFS/SMB share back to its backing volume, verify current capacity, and grow the volume if more space is required.
Since CIFS shares don’t have their own size limits, available space always depends on the mounted volume.
The steps below cover identifying the share path, locating the corresponding volume, checking free space, expanding it online, and validating the result afterwards.
List CIFS shares on the SVM
Identify the relevant share; in my case it's share-01.
cluster01::> vserver cifs share show -vserver svm_data

Determine which volume is mounted for your Share
In my case I want to know which volume is mounted for share-01 at its mount path /cifs-data.
/cifs-data is a NAS namespace mount point (junction path) inside the SVM.
It is not a Linux directory on the controller, and it is not a physical path on disk.
cluster01::> volume show -vserver svm_data -junction-path /cifs-data

Check current Volume Usage
cluster01::> volume show -vserver svm_data -volume vol_data03 -fields size,available,percent-used

or

cluster01::> df -h -vserver svm_data


Verify the aggregate has enough free space before extending Volume
cluster01::> storage aggregate show -aggregate aggr_data02 -fields size,available

Extend the volume
Add 500 MB more:
cluster01::> volume size -vserver svm_data -volume vol_data03 +500MB

or using absolute size instead of increment:

cluster01::> volume size -vserver svm_data -volume vol_data03 1GB

Check volume size and available space again
cluster01::> volume show -vserver svm_data -volume vol_data03 -fields size,available,percent-used


Optional (Recommended): Enable autogrow
ONTAP volume autogrow automatically increases the size of a volume when it becomes nearly full, preventing CIFS/NFS write failures without manual intervention.
With a defined maximum size, the volume can grow safely up to a limit without consuming the entire aggregate. This is ideal for unpredictable file share growth and keeps storage expansion online and seamless.
cluster01::> volume autosize -vserver svm_data -volume vol_data03 -mode grow -maximum-size 100GB
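To verify the autogrow configuration afterwards, the autosize settings can be queried on the volume. A minimal sketch, assuming the field names autosize-mode and max-autosize:

cluster01::> volume show -vserver svm_data -volume vol_data03 -fields autosize-mode,max-autosize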
CIFS and NFS Share Capacity = Volume Capacity
Both SMB/CIFS and NFS shares do not have their own size limits. They simply expose the capacity of the underlying ONTAP volume mounted at the junction path.
If the volume grows (manually or via autogrow), the available space automatically increases for both SMB and NFS clients without any share changes or outages.
Troubleshooting
When issues arise, having quick access to the right commands can save valuable time. This section serves as a troubleshooting cheat sheet, providing commonly used commands and their purpose in a concise format.
It’s designed as a practical reference to help identify problems faster, validate configurations, and verify system health without having to dig through extensive documentation.
Switching between admin mode and advanced mode
Advanced mode in ONTAP is a special CLI privilege level that exposes commands that are hidden or restricted in standard admin mode.
# enter advanced mode
cluster01::> set -privilege advanced

# exit advanced mode and switch back to admin mode
cluster01::*> set -privilege admin
Checking and Troubleshooting DNS Configuration in NetApp ONTAP
DNS is a critical component in NetApp ONTAP environments, enabling name resolution for features such as Active Directory integration, LDAP authentication, AutoSupport, and secure certificate validation.
Each Storage Virtual Machine (SVM) in ONTAP maintains its own DNS configuration, and ensuring these settings are correct, and that the DNS servers are reachable, is essential for smooth operations.
Administrators can verify configured DNS servers, test name resolution, and check network connectivity directly from the ONTAP CLI using commands shown below.
This provides a straightforward way to confirm both configuration and reachability when troubleshooting connectivity issues.
List DNS Configuration for all SVMs
Each SVM has its own DNS configuration.
cluster01::> vserver services name-service dns show

Checking if DNS resolution is working
Checking if your ONTAP system can reach the configured DNS servers and can perform successful name resolution is an important step.
Use the built-in check command.
This tries to resolve a hostname (default example.<svm domain name>) using the configured DNS servers and shows success/failure.
cluster01::> vserver services name-service dns check -vserver svm_data

Below I captured the traffic on the DNS server while running the above built-in DNS check to see which hostname/FQDN it really tries to resolve.

By entering advanced mode we can also check whether DNS resolution works for a specific hostname.
Advanced mode in ONTAP is a special CLI privilege level that exposes commands that are hidden or restricted in standard admin mode.
# enter advanced mode
cluster01::> set -privilege advanced

# checking if the hostname cifs-data02 can be resolved successfully into an IPv4 address by using the DNS server with the IP address 10.0.0.70
cluster01::*> vserver services access-check dns forward-lookup -vserver svm_data -hostname cifs-data02 -lookup-type ipv4 -name-servers 10.0.0.70

# exit advanced mode and switch back to admin mode
cluster01::*> set -privilege admin

Verifying SVM Network Connectivity and DNS Reachability in ONTAP
To test whether the ONTAP system can reach a specific destination, such as a DNS server, we can use the network ping command below.
For this command we first need to determine the LIFs of the SVM by using the network interface show command; one of these LIFs will then be used to send the traffic through.
cluster01::> network interface show -vserver svm_data
cluster01::> network ping -vserver svm_data -lif lif_data01 -destination 10.0.0.70

List Event Logs
The event log show command lets you view the event management system (EMS) logs, basically the system event log for your cluster. It’s the go-to command for checking cluster activity, errors, warnings, and informational events.
Due to limits in the buffer that holds the EMS messages shown by the cluster shell command event log show, you can see no more than 2048 entries per node.
cluster01::> event log show

To show only error messages, use the -severity ERROR flag.
cluster01::> event log show -severity ERROR

To filter the number of events shown by the event log show command, we can e.g. use the -time flag as shown below. Here, for example, I show all events raised after (or before) a specific date and time.
# determine current date/time
cluster01::> cluster date show

# show events after a specific time
cluster01::> event log show -time >"09/11/2025 08:30:00"

# show events before a specific time
cluster01::> event log show -time <"09/11/2025 08:30:00"

List HA takeover/giveback related events
Check the logs for takeover and giveback related events.
Simulate ONTAP unfortunately does not support High Availability (CFO (controller failover)/SFO (storage failover))
# All HA-related messages
cluster01::> event log show -message-name *ha*

# Only notices
cluster01::> event log show -message-name *ha* -severity notice

To show the current or recent failover state, run the following.
As mentioned, Simulate ONTAP, which I am using here, unfortunately does not support High Availability (CFO/SFO).
cluster01::> storage failover show

On a system with working HA, it should actually look like this.
cluster01::> storage failover show
Takeover
Node Partner Possible State Description
-------------- -------------- -------- -------------------------------------
cluster01-01 cluster01-02 true Connected to cluster01-02
cluster01-02 cluster01-01 true Connected to cluster01-01
2 entries were displayed.
To manually initiate a takeover of the partner node’s storage we can run the following command.
This would cause the node named cluster01-02 to initiate a negotiated optimized takeover of its partner’s (cluster01-01) storage.
Source: https://docs.netapp.com/us-en/ontap-cli/storage-failover-takeover.html
In my lab environment, which uses Simulate ONTAP, this is as mentioned unfortunately not supported and the cluster is in non-HA mode.
cluster01::> storage failover takeover -bynode cluster01-02

# to initiate an immediate takeover of its partner's storage, can cause temporary I/O errors for clients, so use with caution.
cluster01::> storage failover takeover -bynode cluster01-02 -option immediate

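After a takeover, the partner's storage is returned with a giveback. As mentioned this is not possible on Simulate ONTAP, so the commands below are just a sketch for a real HA pair:

# return the storage to the partner node that was taken over
cluster01::> storage failover giveback -ofnode cluster01-01

# monitor the giveback progress
cluster01::> storage failover show-giveback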
Links
Storage virtualization overview
https://docs.netapp.com/us-en/ontap/concepts/storage-virtualization-concept.html
Manage CIFS shares
https://docs.netapp.com/us-en/ontap-restapi/ontap/protocols_cifs_shares_endpoint_overview.html
Learn about automatic takeover and giveback in ONTAP clusters
https://docs.netapp.com/us-en/ontap/high-availability/ha_how_automatic_takeover_and_giveback_works.html
storage failover giveback
https://docs.netapp.com/us-en/ontap-cli/storage-failover-giveback.html
cluster ha modify
https://docs.netapp.com/us-en/ontap-cli/cluster-ha-modify.html
