Deploying NetApp Cloud Volumes ONTAP (CVO) in Azure using NetApp Console (formerly BlueXP) – Part 3
In Part 3 of this series, the focus shifts to configuring the actual data services on the Cloud Volumes ONTAP system.
This includes creating Storage Virtual Machines (SVMs), provisioning volumes, and enabling access via NFS and CIFS/SMB.
In addition, this part also covers how the Azure networking side fits into the picture, including VNet peering with an existing hub network (which is connected to my on-prem lab), as well as the Internal Load Balancer (ILB) setup.
Since the NetApp Console (BlueXP) does not automatically configure Azure networking for newly created LIFs (Logical Interfaces), we’ll also look at what needs to be done manually.
For each newly created LIF, this includes adding the required frontend IP configuration, health probe, and load-balancing rule in Azure.
By the end of this part, the system will be fully prepared to serve storage to workloads in Azure or on-prem, when connected through a VPN as shown in this part.
In Part 4 of the series, we shift the focus to security and take a closer look at how antivirus protection is implemented in Cloud Volumes ONTAP using ONTAP VSCAN.
- First Steps after Deploying Cloud Volumes ONTAP
- Verify System Status and Health
- Understanding the Aggregate Layout
- Create a Storage Virtual Machine (SVM)
- Create a Volume
- Publishing CIFS/SMB shares
- VNet Peering and Site-to-Site VPN Connectivity to the On-Prem Network
- Behind the Scenes of ONTAP HA: How SVMs and LIFs Survive a Node Failure
- Links
First Steps after Deploying Cloud Volumes ONTAP
Once the Cloud Volumes ONTAP HA system is successfully deployed, the environment is technically ready but does not yet provide any usable storage.
Before workloads can access the system, several foundational configuration steps need to be completed.
These steps focus on preparing the storage layer, network access, and data services such as NFS or SMB.
Verify System Status and Health
After the Cloud Volumes ONTAP system has started, the first step is to verify that the cluster is healthy and fully operational.
This can be done from the ONTAP CLI, the NetApp Console or directly in ONTAP System Manager, where both nodes should report a healthy state and the HA relationship should be active.
It’s important to ensure that no warnings are present and that all cluster services are running before proceeding with volume creation or data services configuration.
matrixcvo::> cluster show
matrixcvo::> node show
matrixcvo::> storage failover show
matrixcvo::> system node show -fields health,uptime

By using the NetApp Console we can navigate to Management -> Systems, select the highlighted Cloud Volumes ONTAP system and click on Enter System as shown below.

Within the Information tab we will see the HA health status as shown below.

Or finally by using the ONTAP System Manager within Cluster -> Overview as shown below.

Understanding the Aggregate Layout
After deployment, the Cloud Volumes ONTAP system creates several aggregates automatically. As shown in the output below, each node has its own root aggregate (aggr0), which contains the ONTAP system volume and is used exclusively for internal operations.
These aggregates are not intended for user data.
In addition, a data aggregate (aggr1) is created, which is used for hosting user volumes. This aggregate is where NFS or SMB volumes will later be provisioned and is the primary storage pool for workloads.
matrixcvo::> storage aggregate show

After the deployment, it’s a good idea to verify the available storage aggregates to ensure the system is ready for volume creation.
Aggregates can also be reviewed in the NetApp Console and in ONTAP System Manager, where they are displayed as Local Tiers.
This provides a quick overview of available capacity, ownership, and overall health before proceeding with further configuration.
By using the NetApp Console we need to navigate to Storage -> Management -> Systems, select our system here and click on Enter System in the right bar.



To use the ONTAP System Manager we also need to navigate within the NetApp Console to Storage -> Management -> Systems, select our system here and click on Open for the System Manager in the right bar.

In the ONTAP System Manager the aggregates can be found under Storage -> Tiers as shown below.

Create a Storage Virtual Machine (SVM)
A Storage Virtual Machine (SVM) is a core concept in NetApp ONTAP and is key to its multi-tenant, multi-protocol, and software-defined design.
A Storage Virtual Machine (SVM) is a secure, isolated namespace and management domain that provides access to storage resources (volumes, LIFs, etc.) within a NetApp cluster.
You can think of an SVM as a “virtual NAS/SAN controller” inside a physical ONTAP cluster.
Before creating any volumes, we must first create a Storage Virtual Machine (SVM). The SVM acts as the logical container for storage and defines which protocols (NFS, SMB, iSCSI) are available, along with networking, security, and access settings.
Only after an SVM exists can you create volumes, as volumes always belong to an SVM and inherit its protocol and access configuration.
Before creating any storage, I’ll first take a look at the freshly deployed Cloud Volumes ONTAP system to see whether a data SVM is already in place by running the following command.
matrixcvo::> vserver show

After deployment, the Cloud Volumes ONTAP system already contains a preconfigured data SVM shown below and named svm_matrixcvo.
This SVM is used to host storage volumes and provide access via NFS or SMB.
Since the SVM is already running and assigned to the data aggregate, no additional SVM creation is required before creating the first volume.
The preconfigured data SVM can also be seen in the ONTAP System Manager under Cluster -> Storage VMs.

To ensure clean separation of protocols, security contexts, and authentication domains, we need to create several additional SVMs, for example a dedicated SVM for CIFS/SMB shares and a dedicated SVM for NFS shares (exports).
We will now create a Storage Virtual Machine (SVM), then a volume on our new aggregate, and finally enable access via CIFS (SMB) and/or NFS.
To create a new storage VM, execute the following command:
matrixcvo::> vserver create -vserver svm_matrix_cifs -aggregate aggr1 -rootvolume root_vol_svm_cifs -rootvolume-security-style ntfs

matrixcvo::> vserver show

The vserver show command provides a short overview of the SVM, while vserver show -instance displays the complete configuration including enabled protocols, security settings, and operational details. The instance view is especially useful for validation and troubleshooting.
matrixcvo::> vserver show -vserver svm_matrixcvo -fields allowed-protocols
matrixcvo::> vserver show -vserver svm_matrixcvo -instance


Create a Management LIF for the SVM
After creating the SVM, it’s also best practice to create a Management LIF (Logical Interface). This LIF is used for administrative access to the SVM, including management via CLI, System Manager, and protocol configuration.
In ONTAP, an SVM does not technically require a dedicated management LIF in order to function; CIFS and other protocols can run entirely on data LIFs, and many on-prem environments work this way without issues.
However, separating management and data LIFs remains a best practice, especially in cloud deployments like Azure CVO, because it isolates administrative traffic, improves manageability, and aligns with network design patterns.
By default, for the SVM preconfigured by the NetApp Console (in my case svm_matrixcvo), an SVM management LIF/IP and dedicated data LIFs were already configured.
Source: https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/reference-networking-azure.html
The management LIF must be placed in the correct Azure subnet and associated with the proper network settings to ensure reliable connectivity, especially in HA deployments where networking and routing need to be precise from the start.
The Management LIF is created using the following command, which assigns a dedicated IP address (in my case I will use here 172.18.10.52) to the SVM, binds it to the correct node and port, and configures it without a data protocol since it is used purely for management purposes:
matrixcvo::> network interface create -vserver svm_matrix_cifs -lif svm_matrix_cifs_mgmt -role data -data-protocol none -home-node matrixcvo-01 -home-port e0a -address 172.18.10.52 -netmask 255.255.255.0

In ONTAP, only cluster and node interfaces have a dedicated management role. At the SVM level, a management IP is implemented as a data LIF with data-protocol none, since ONTAP does not provide a separate management role for SVMs. Functionally, this allows the LIF to be used for administration and connectivity without serving client protocols such as CIFS or NFS.
matrixcvo::> network interface show -fields role,data-protocol

Before continuing, it’s important to verify whether a default route already exists for the new SVM.
If no default route is present, one must be created so the SVM can reach external services such as DNS servers and the Active Directory domain controllers, which is required for a successful CIFS/SMB configuration.
In Azure, the first usable IP address of every subnet is always reserved as the default gateway, implemented by Azure’s software-defined router (SDN). This gateway is not a VM, not visible, and not configurable, it’s Azure’s internal routing fabric.
Reserved by Azure:
- 172.18.10.0 → network address
- 172.18.10.1 → Azure default gateway
- 172.18.10.2–3 → reserved for Azure internal use
Usable for VMs: starting at 172.18.10.4
matrixcvo::> network route show -vserver svm_matrix_cifs
matrixcvo::> network route create -vserver svm_matrix_cifs -destination 0.0.0.0/0 -gateway 172.18.10.1

Azure Internal Load Balancer Configuration for the Management LIF
After creating the management LIF on the SVM, it is not reachable automatically, not even from within the same subnet. This is also true in general for LIFs created for the Cloud Volumes ONTAP nodes in Azure.
Unlike on physical ONTAP systems or in a vSphere-based lab environment (where a newly created LIF immediately appears as an IP address on the virtual NIC), in Azure it behaves differently.
In Azure, the IP address of the LIF is not automatically assigned to the virtual network interface of the VM (node), which means the LIF is unreachable until the Azure Internal Load Balancer is configured accordingly.
While the NetApp console (formerly BlueXP) allocates IP addresses to the Azure NICs during deployment, the mapping of LIFs to these IPs is managed by ONTAP software, not by Azure networking API calls.
This is why, after creating a LIF, you must manually update the Azure Internal Load Balancer (ILB) by adding:
- a frontend IP configuration
- a health probe
- a load-balancing rule
Only after these components are in place will the LIF become reachable, even from within the same virtual network and subnet.
In a vSphere-based ONTAP deployment, this behavior is more transparent, as the LIF IP becomes visible directly on the VM’s network interface. In Azure, however, this mapping must be handled explicitly through the ILB.
The reason our vSphere Lab and physical NetApps “expose” LIFs directly, while Azure CVO does not, comes down to Layer 2 vs. Layer 3 networking control.
On-premises and vSphere environments operate on Layer 2, allowing ONTAP to dynamically “claim” any IP via ARP broadcasts that the underlying network simply trusts and passes through.
In contrast, Azure uses a Layer 3 Software Defined Network that ignores ARP and strictly drops traffic unless the IP is explicitly whitelisted and routed via the Azure fabric API. Consequently, CVO uses floating IPs and Route Tables to move traffic between nodes rather than relying on the vNIC to “expose” every individual address.

Below we will see the first VM’s (node’s) virtual network interface and its primary private IP address of 172.18.10.16.

When clicking above on the network interface, we will see all directly assigned private or public IP addresses.

Our previously created Management LIF (command below) with the IP address 172.18.10.52 is not assigned here.
matrixcvo::> network interface create -vserver svm_matrix_cifs -lif svm_matrix_cifs_mgmt -role data -data-protocol none -home-node matrixcvo-01 -home-port e0a -address 172.18.10.52 -netmask 255.255.255.0
Instead, the node management interface IP 172.18.10.16 (shown above on the network interface of the first node) is assigned to the backend pool of the internal load balancer (ILB) in Azure as shown below.
Behind the scenes, the Azure Internal Load Balancer backend pool contains the node management interfaces of both CVO nodes.
All frontend IP configurations that you create for individual LIFs are ultimately routed through this backend pool.
Therefore, ONTAP LIFs are not reachable until they are mapped through an Azure Internal Load Balancer. Azure itself has no awareness of a LIF until the corresponding ILB configuration is created.

When deploying Cloud Volumes ONTAP through the NetApp Console, the Azure Internal Load Balancer (ILB) is created automatically as part of the deployment and shown below in the dedicated resource group for CVO.
However, this initial configuration only covers the default system interfaces.
For every newly created LIF, the ILB must be manually extended with the corresponding frontend IP configuration, health probe, and load-balancing rule to make the LIF reachable as mentioned.

So first I will add the newly created management LIF and its assigned IP address 172.18.10.52 as a new frontend IP configuration on the ILB within Settings -> Frontend IP configuration, and click on + Add.
Enter a name for the new frontend IP configuration, the IP version, the virtual network and subnet the IP is attached to, for the assignment select Static.
Finally enter the IP address of our new management LIF and for the availability zone leave Zone-redundant.

Next we need to create a new health probe for our new frontend IP configuration.
On the ILB within Settings -> Health Probes, click on + Add.

TCP 63004 is the NetApp CVO HA health-probe port.
This is how Azure knows which node owns the floating IP.
It is pure HA signaling. The probe checks whether the active CVO node is reachable on the expected port.
It is required so Azure knows which backend node is currently serving the LIF.

And finally we need to create a new load-balancing rule.
On the ILB within Settings -> Load Balancing rules, click on + Add.
Select the IP version, our previously created frontend IP address and our backend pool, check High availability ports, select our previously created health probe, for session persistence select None, leave the Idle timeout at 4 minutes and finally check Enable Floating IP.
High Availability Ports must be enabled so Azure forwards all traffic to the active CVO node. This is required because ONTAP uses multiple ports and the LIF is not bound to a single service port.
ONTAP handles session management itself, and persistence at the load balancer level is not required.
Enable Floating IP is required for ONTAP because the service IP (LIF) is not bound directly to the VM’s NIC and must be preserved end-to-end for correct routing and failover behavior.
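If you prefer scripting over the portal, the same three ILB objects (frontend IP configuration, health probe, and HA-ports rule with floating IP) can also be created with the Azure CLI. Below is only a minimal sketch; the resource group, load balancer, and backend pool names are placeholders you need to replace with the ones from your CVO resource group, and the object names are just examples:
PS> az network lb frontend-ip create -g <CVO Resource Group> --lb-name <CVO Load Balancer> -n svm-matrix-cifs-mgmt-fip --vnet-name VNet-CVO --subnet default --private-ip-address 172.18.10.52
PS> az network lb probe create -g <CVO Resource Group> --lb-name <CVO Load Balancer> -n svm-matrix-cifs-mgmt-probe --protocol Tcp --port 63004
PS> az network lb rule create -g <CVO Resource Group> --lb-name <CVO Load Balancer> -n svm-matrix-cifs-mgmt-rule --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name svm-matrix-cifs-mgmt-fip --backend-pool-name <Backend Pool Name> --probe-name svm-matrix-cifs-mgmt-probe --floating-ip true --idle-timeout 4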

Why we can now already ping 172.18.10.52, for example from the connector VM below, which runs in the same subnet:
- The SVM management IP is bound to a real NetApp HA-managed interface
- Azure now considers the backend healthy
- Traffic is forwarded correctly
- ICMP reaches ONTAP

I can also reach the management IP from my on-prem DC.

Create the Data LIFs for the new SVM
Creating an SVM alone is not sufficient to provide data access. Each SVM requires at least one data LIF, which represents the network endpoint used by clients to access storage services such as NFS or SMB. The LIF defines the IP address, subnet, and network path used by the SVM.
Before creating a new data LIF, it is useful to check the existing LIF configuration of the preconfigured SVM (svm_matrixcvo) below, to identify the correct network, port, and addressing scheme that should be reused for the new SVM.
So first, check which port is used by existing data LIFs:
In most CVO deployments this will be e0a.
In an ONTAP HA configuration, each SVM should have at least two data LIFs, one hosted on each node. This ensures high availability, as LIFs automatically fail over to the partner node in case of a node failure. Clients continue accessing data using the same IP addresses, providing seamless failover without manual intervention.
Although ONTAP can fail over data LIFs between nodes, this mechanism is intended for fault tolerance, not normal operation. Best practice is to configure at least one data LIF per node so that each node actively serves client traffic. This ensures optimal performance, predictable failover behavior, and full utilization of the HA architecture.
From a pure availability standpoint, a single data LIF is sufficient, as ONTAP will fail it over to the partner node in case of a node failure. However, using one data LIF per node is considered best practice because it enables load distribution, improves performance, and fully utilizes the HA architecture.
matrixcvo::> network interface show -vserver svm_matrixcvo

Create a new data LIF for the SVM using a free IP address in the same subnet.
We can first check which IP addresses are already in use by a specific subnet in our connected subscription by running:
PS> az login
PS> $SUBNET_ID = az network vnet subnet show -g <Resource Group> --vnet-name <VNet Name> -n default --query id -o tsv
PS> $SUBNET_ID = az network vnet subnet show -g cvo --vnet-name VNet-CVO -n default --query id -o tsv
PS> az network nic list --query "[?ipConfigurations[?subnet.id=='$SUBNET_ID']].ipConfigurations[].privateIPAddress" -o table

So for my new data LIFs I will use 172.18.10.50 and 172.18.10.51, which both are not in use so far.
matrixcvo::> network interface create -vserver svm_matrix_cifs -lif cifs_data_1 -role data -data-protocol cifs -home-node matrixcvo-01 -home-port e0a -address 172.18.10.50 -netmask 255.255.255.0

In HA setups, it is recommended to create one data LIF per node.
We should create two LIFs, one per node, with different names and IPs.
matrixcvo::> network interface create -vserver svm_matrix_cifs -lif cifs_data_2 -role data -data-protocol cifs -home-node matrixcvo-02 -home-port e0a -address 172.18.10.51 -netmask 255.255.255.0

After creating the data LIFs, the SVM now has one active data interface per node. This setup follows NetApp best practices for high availability.
matrixcvo::> network interface show -vserver svm_matrix_cifs

So far the new data LIFs will remain unreachable until their IP addresses are added to the load balancer configuration, the same as previously for our new management LIF and IP address.
Here tested directly from the connector VM which runs in the same subnet.

Below we will see the default frontend IP configurations on the ILB directly after deploying CVO with the NetApp Console and before we added our new management LIF.
- Cluster management IP → 172.18.10.5: Used for ONTAP cluster management. Accessed via ONTAP CLI and System Manager. Not used for data traffic.
- dataAFIP → 172.18.10.6: Data LIF for node A. Used for NFS / SMB / iSCSI (depending on config). Client access, automatically failed over by the load balancer.
- dataBFIP → 172.18.10.7: Data LIF for node B. Used for NFS / SMB / iSCSI (depending on config). Client access, automatically failed over by the load balancer.
- svmFIP → 172.18.10.8: SVM management LIF, the management IP for svm_matrixcvo. Used for SVM-scoped administration, CIFS/NFS configuration and some API calls. This is not a data LIF, but an SVM-level endpoint.

So we also need to add both of our new data LIF IP addresses to the ILB configuration.
matrixcvo::> network interface show -vserver *

After creating the data LIFs in ONTAP, the next step is to add the corresponding IP addresses to the Azure internal load balancer.

The configuration of the health probe for both data LIFs is the same as previously for the management LIF.

The same applies to the configuration of the load-balancing rules for each data LIF.

About creating a new storage virtual machine (SVM) in ONTAP you can also read my following post.
Create a Volume
After creating and configuring an SVM, the next step is to create a new volume that will be used to store and present data to clients.
The volume must be created on the correct SVM and mounted into its namespace so it can later be shared via SMB and/or NFS.
A volume in NetApp ONTAP is a logical, mountable unit of storage that resides inside an aggregate, is served by a Storage Virtual Machine (SVM), and is accessible to clients via NFS, SMB, iSCSI, or FC.
Below we will create a 10 GB data volume called vol_cifs_data01 on aggr1.
matrixcvo::> volume create -vserver svm_matrix_cifs -volume vol_cifs_data01 -aggregate aggr1 -size 10GB

We can check the creation by running:
matrixcvo::> volume show -vserver svm_matrix_cifs
matrixcvo::> volume show -volume vol_cifs_data01 -instance

Or by using the ONTAP System Manager, here within Storage -> Volumes.

And also by using the NetApp Console under Storage -> Management -> Systems, selecting our system here and clicking on Enter System in the right bar.

Within the Volumes tab we will find our newly created volume named vol_cifs_data01.

Publishing CIFS/SMB shares
By default, Cloud Volumes ONTAP creates a data SVM with NFS and iSCSI enabled. While it is technically possible to enable CIFS/SMB on this same SVM and join it to an Active Directory domain, this approach is mainly recommended just for lab or test environments.
In production scenarios, best practice is to create a separate SVM for SMB workloads to ensure clean separation of protocols, security contexts, and authentication domains.
Therefore I created a dedicated SVM just for CIFS/SMB shares further above, named svm_matrix_cifs.
First we check whether CIFS is already enabled on the SVM by running the following command:
matrixcvo::> vserver cifs show -vserver svm_matrix_cifs

So far SMB/CIFS is not configured on this SVM.

So if it isn’t configured yet, we can enable it by running the command shown below.
This requires the SVM to join an Active Directory domain.
- vserver cifs create → Enables the CIFS (SMB) protocol service on the SVM
- -vserver svm_matrix_cifs → The SVM where CIFS should be activated
- -cifs-server cifs-matrix-cvo → The NetBIOS/hostname that Windows clients will see when they connect (\\cifs-matrix-cvo\share)
- -domain matrixpost-lab.net → Joins this CIFS server to the specified Active Directory domain
The cifs-server named here, cifs-matrix-cvo, finally is the CIFS/SMB server object of the SVM. Think of it as the Active Directory computer account that represents the SVM for SMB access.
matrixcvo::> vserver cifs create -vserver svm_matrix_cifs -cifs-server cifs-matrix-cvo -domain matrixpost-lab.net
For the CIFS/SMB share (NetBIOS/hostname) I will create a DNS record in my lab environment which points to the IP address of the SVM we enabled CIFS on.
cifs-matrix-cvo with the IP address 172.18.10.8 is our SVM running on CVO in Azure. Below you can also see my SVM named cifs-data02, on which CIFS is enabled, running in my on-prem vSphere lab environment as shown in my following post: https://blog.matrixpost.net/step-by-step-guide-part-4-how-to-build-your-own-netapp-ontap-9-lab-day-to-day-operations/.
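Instead of the DNS Manager GUI, the A record could also be created with PowerShell directly on the on-prem Windows DNS server; a small sketch using the names from above:
PS> Add-DnsServerResourceRecordA -ZoneName "matrixpost-lab.net" -Name "cifs-matrix-cvo" -IPv4Address "172.18.10.8"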

I will also first need to set a DNS server in the vserver’s DNS configuration.



We can configure it by using the CLI or the System Manager GUI. Below we will do it by using the CLI.
matrixcvo::> vserver services dns show -vserver svm_matrix_cifs
matrixcvo::> vserver services dns create -vserver svm_matrix_cifs -domains matrixpost-lab.net -name-servers 10.0.0.70
For my lab environment just one DNS server is fine.

matrixcvo::> vserver services dns show -vserver svm_matrix_cifs

Let’s try it again.
matrixcvo::> vserver cifs create -vserver svm_matrix_cifs -cifs-server cifs-matrix-cvo -domain matrixpost-lab.net
For the domain join and to create an AD machine account for this CIFS server, we need to supply the name and password of a Windows account with sufficient privileges to add computers to the AD.

Our new CIFS server is joined to our AD and placed in the default Computers container.

Looks good.
matrixcvo::> vserver cifs show -vserver svm_matrix_cifs


How this SVM is finally able to reach and connect to my on-premises Active Directory domain controller, we will see in the section below about VNet peering and the site-to-site VPN connection with my on-prem network.
We can now mount our previously created new volume named vol_cifs_data01 in the namespace of our newly created SVM named svm_matrix_cifs.
Mounting a volume in the namespace in NetApp ONTAP is a key concept, especially when working with NAS protocols like NFS and SMB.
In ONTAP, each SVM (Storage Virtual Machine) has its own namespace, which is essentially a virtual file system tree made up of Mounted volumes (like mount points) and Junction paths (like folders or subdirectories).
This allows ONTAP to present multiple volumes as a unified file system to clients.
matrixcvo::> volume mount -vserver svm_matrix_cifs -volume vol_cifs_data01 -junction-path /data
Warning: The export-policy “default” has no rules in it. The volume will therefore be inaccessible over NFS and CIFS protocol.
As we later just want to create CIFS/SMB shares on this volume, we can ignore it and confirm with yes; although the CIFS protocol is mentioned, this is a bit misleading for default configurations.

By default, export-policy enforcement for CIFS/SMB is disabled in ONTAP, meaning export policies are not evaluated for CIFS access.
This also applies to the default export policy, so it can safely be ignored when using SMB shares.
Access control is handled exclusively through share permissions and NTFS ACLs unless export-policy enforcement is explicitly enabled.
ONTAP does provide the option to configure the export-policy check for CIFS. When the CIFS option is-exportpolicy-enabled is true, you do need to create export-policy rules for CIFS protocol access.
Source: https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/Are_export_policy_rules_necessary_for_CIFS_access
matrixcvo::> set advanced
matrixcvo::*> vserver cifs options show -fields is-exportpolicy-enabled
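Only if is-exportpolicy-enabled were set to true would export-policy rules for CIFS be needed. For completeness, a sketch of such a rule on the default policy; the open 0.0.0.0/0 client match is just an example and would normally be restricted:
matrixcvo::> vserver export-policy rule create -vserver svm_matrix_cifs -policyname default -clientmatch 0.0.0.0/0 -rorule any -rwrule any -protocol cifs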

We can verify the mounted volume by running:
matrixcvo::> volume show -vserver svm_matrix_cifs -fields volume,junction-path,state
The volume is now mounted under the SVM namespace and ready to be shared via SMB or NFS, completing the storage preparation for client access.


From now on SMB clients can access it via CIFS: \\<svm-lif-ip>\DataShare.
We can run the following command to show the currently existing CIFS shares on our SVM.
c$ and ipc$ shares here are default administrative shares, very similar to those found on a Windows server.
They are pointing to the root of the SVM namespace, which is the root_vol_svm_cifs volume, not our newly mounted volume vol_cifs_data01.
matrixcvo::> vserver cifs share show -vserver svm_matrix_cifs

Below we can see the root_vol_svm_cifs volume, which includes the above mentioned c$ and ipc$ shares (default administrative shares).

Every SVM in ONTAP requires a root volume, often named something like root_vol_svm_<custom name>. This root volume is mounted at / in the SVM’s namespace.
Our new volume vol_cifs_data01 is mounted at /data, but unless it’s explicitly shared, it’s not exposed via SMB.

We now need to create an SMB share for our newly created vol_cifs_data01 volume.
matrixcvo::> vserver cifs share create -vserver svm_matrix_cifs -share-name data01 -path /data

By running the command to list all CIFS shares on a specific SVM (Storage Virtual Machine) in NetApp again, we will now see our newly created data01 share.
matrixcvo::> vserver cifs share show -vserver svm_matrix_cifs

In the System Manager we will see the mount path of our new SMB share.

And finally we need to set the share permissions.
For my lab environment I will assign here just my enterprise admin with full control.
matrixcvo::> vserver cifs share access-control create -vserver svm_matrix_cifs -share data01 -user-or-group "MATRIXPOST\superuser" -permission Full_Control

We can verify them by running:
matrixcvo::> vserver cifs share show -vserver svm_matrix_cifs
By default, when ONTAP creates an SMB share, it assigns Everyone / Full Control at the share level. Actual security is enforced by NTFS permissions, not the share ACL.
By default, the share-level ACL gives full control to the standard group named Everyone.
Full control in the ACL means that all users in the domain and all trusted domains have full access to the share. You can control the level of access for a share-level ACL by using the Microsoft Management Console (MMC) on a Windows client or the ONTAP command line.
Source: https://docs.netapp.com/us-en/ontap/smb-admin/manage-smb-level-acls-concept.html
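If you want to tighten this up, the default Everyone / Full Control entry can also be removed from the share-level ACL via the ONTAP CLI once your own entries are in place; just a sketch:
matrixcvo::> vserver cifs share access-control delete -vserver svm_matrix_cifs -share data01 -user-or-group Everyone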

In the System Manager GUI we will also see our newly added share permissions.
Here we also see the full SMB path to finally mount the share on our clients.
Unfortunately, in the ONTAP CLI, we do have to manually piece together the full SMB share path information that the System Manager GUI shows automatically in a nice, human-readable way.
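Once the full UNC path is known, the share can also be mapped from a command prompt instead of Windows Explorer; a sketch assuming the DNS record cifs-matrix-cvo created earlier resolves to a reachable LIF of this SVM (the drive letter Z: is arbitrary):
PS> net use Z: \\cifs-matrix-cvo\data01 /user:MATRIXPOST\superuser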

We can now mount our new SMB share when using an account we have authorized previously, in my case this is the enterprise admin named superuser.

For the folder we need to enter the mount path of our new SMB share shown above.

Looks good!

I am also able to create a folder and file on the new share.

Creating a CIFS share only controls access to the share itself. NTFS permissions are applied at the filesystem level and are independent of share permissions.
By default, ONTAP assigns full control to the local Administrators group, which is expected behavior for newly created NTFS volumes.
After creating the SMB share, the NTFS permissions still need to be adjusted. This is done by mounting the share from a Windows system and modifying the security settings directly via Windows Explorer, as NTFS permissions are managed at the filesystem level and not through ONTAP CLI. This approach ensures proper ownership, inheritance, and access control for users and groups.
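As an alternative to Windows Explorer, the NTFS ACL could also be adjusted from an elevated prompt with icacls; a sketch, where MATRIXPOST\FileUsers is a purely hypothetical group:
PS> icacls \\cifs-matrix-cvo\data01 /grant "MATRIXPOST\FileUsers:(OI)(CI)M"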

VNet Peering and Site-to-Site VPN Connectivity to the On-Prem Network
In this section, I’ll show how the CVO virtual network is peered with an existing hub VNet, which is already connected to my on-premises environment via a site-to-site IPsec VPN.
This setup allows seamless connectivity between Azure and the on-prem vSphere lab, enabling direct access to CVO services from on-prem systems without exposing them publicly.
On the hub VNet I will add a new peering as shown below.

For the remote virtual network select our dedicated CVO virtual network and check Allow ‘VNet-HUB’ to access ‘VNet-CVO’ and Allow ‘VNet-HUB’ to receive forwarded traffic from ‘VNet-CVO’.

For the local virtual network check Allow ‘VNet-HUB’ to access ‘VNet-CVO’ and Allow ‘VNet-HUB’ to receive forwarded traffic from ‘VNet-CVO’.



Configuring pfSense for Site-to-Site Connectivity (Azure & On-Prem)
In this section, I’ll cover the network configuration required when using pfSense instead of Azure’s native VPN Gateway for site-to-site connectivity.
About deploying pfSense in Azure you can also read my following post.
Since pfSense does not automatically handle route propagation like the Azure VPN Gateway, a few manual steps are required.
This includes adjusting the FRR BGP configuration in Azure to advertise the new CVO virtual network, updating the on-prem pfSense firewall rules to allow traffic from the CVO subnet, and adding static routes on the Azure pfSense instance so traffic can be forwarded correctly.
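In raw FRR configuration terms, advertising the new CVO virtual network via BGP corresponds roughly to a fragment like the following; this is only a sketch, and the local AS number 65010 is a placeholder for the actual ASN used in my BGP setup:
router bgp 65010
 address-family ipv4 unicast
  network 172.18.10.0/24
 exit-address-family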
On the pfSense appliance running in Azure I will first add the Cloud Volumes ONTAP virtual network subnet where the Agent (Connector VM) and both ONTAP nodes (VMs) are running.


On my on-premises pfSense appliance I can see that the Cloud Volumes ONTAP subnet is already successfully advertised to my on-prem environment and pfSense.

So I will test from an on-prem virtual machine running in my vSphere lab environment if I can connect to the Agent (Connector VM).

So far I can’t reach the Agent (Connector VM) from my on-prem environment.

The NSG attached to the network interface of the connector VM already allows ICMP traffic.
Although the on-prem network is external to Azure, traffic arriving via a site-to-site VPN is treated as VirtualNetwork traffic by Azure. This means it matches the default AllowVNetInBound NSG rule, which allows all protocols, including ICMP. As a result, ICMP traffic such as ping is permitted unless explicitly blocked by a custom NSG rule or firewall.

So next I will check if the Agent (Connector VM) has the route to my on-prem network 10.0.0.0/24, which so far is not the case.


The pfSense appliance running in Azure has the following route table, where the on-prem network 10.0.0.0/24 is also listed.

This route is advertised by the on-prem pfSense appliance to the pfSense appliance running in Azure by using BGP.

I will also allow all inbound traffic on the Azure pfSense from the CVO subnet 172.18.10.0/24.

Nevertheless I can’t reach the pfSense appliance on its internal network interface with the IP 172.17.240.250 from the connector VM.

Because of the VNet peering, the Agent (Connector VM) already has the route to the virtual network where the Azure pfSense appliance is running.

But the pfSense appliance in Azure so far doesn’t have a route to the virtual network for the CVO deployment with its subnet 172.18.10.0/24.

So I will add this route on the Azure pfSense appliance.

Unfortunately I still can’t ping Azure pfSense’s internal network interface and its IP address 172.17.240.250.

The reason for this is that the route on the Agent (Connector VM) to the pfSense’s network 172.17.240.0/24 is missing.


For testing purposes I will add the route manually and temporarily on the connector VM.
In Azure, the first usable IP address of every subnet is always reserved as the default gateway, implemented by Azure’s software-defined router (SDN). This gateway is not a VM, not visible, and not configurable, it’s Azure’s internal routing fabric.
Reserved by Azure:
- 172.18.10.0 → network address
- 172.18.10.1 → Azure default gateway
- 172.18.10.2–3 → reserved for Azure internal use
Usable for VMs: starting at 172.18.10.4
$ sudo ip route add 172.17.240.0/24 via 172.18.10.1 dev eth0

Testing if the Agent (Connector VM) can now ping the pfSense appliance running in Azure. Looks good now!

And vice versa testing if pfSense can ping the connector VM.
Looks also good.

To provide these routes to all virtual machines without configuring them manually on each system in the CVO network, I will create a new route table, adding the required routes and associating it to the CVO subnet.

The Propagate gateway routes setting I can leave as it is in my case, because I am not using the Azure VPN Gateway to connect my on-premises environment and instead use my pfSense appliance, which by default will not advertise its routes to Azure’s internal routing fabric.



Adding a new route.

Below I will add a new route which uses the next hop 172.18.10.1 (the Azure default gateway) to route traffic to my hub virtual network, or more precisely its perimeter subnet (172.17.240.0/24), in which my pfSense appliance (with its internal network interface and IP 172.17.240.250) is running and establishing a site-to-site VPN tunnel with my on-prem network.
In Azure, the first usable IP address of every subnet is always reserved as the default gateway, implemented by Azure’s software-defined router (SDN). This gateway is not a VM, not visible, and not configurable, it’s Azure’s internal routing fabric.
Reserved by Azure:
- 172.18.10.0 → network address
- 172.18.10.1 → Azure default gateway
- 172.18.10.2–3 → reserved for Azure internal use
Usable for VMs: starting at 172.18.10.4


This new route table we now need to associate with our CVO virtual network and subnet, in which the CVO nodes (VMs) and the connector VM are running.

Selecting the CVO virtual network and subnet.


I will also have to add a further route to this new route table which points to my on-premises network 10.0.0.0/24.
The next hop for my on-prem network is the internal network interface and IP 172.17.240.250 of my pfSense appliance, which as mentioned is establishing a site-to-site VPN tunnel with my on-prem network. For the next hop type we need to select Virtual appliance.
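The route table, the route, and the subnet association can of course also be created with the Azure CLI instead of the portal; a minimal sketch with placeholder resource group, region, and object names (the route to the hub perimeter subnet 172.17.240.0/24 would be added in the same way with its respective next hop):
PS> az network route-table create -g <Resource Group> -n rt-cvo -l <Region>
PS> az network route-table route create -g <Resource Group> --route-table-name rt-cvo -n to-onprem --address-prefix 10.0.0.0/24 --next-hop-type VirtualAppliance --next-hop-ip-address 172.17.240.250
PS> az network vnet subnet update -g <Resource Group> --vnet-name VNet-CVO -n default --route-table rt-cvo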


Below we can see all routes the connector VM (OS) itself is aware of.
In Azure, every subnet automatically receives a default route pointing to the first usable IP address (e.g. 172.18.10.1), which represents Azure’s software-defined gateway. Virtual machines always forward traffic to this gateway, while actual routing decisions are handled by Azure using user-defined routes (UDRs).
In our setup, traffic destined for the on-prem network is forwarded via a UDR to the pfSense VM in the hub VNet, which then routes it through the site-to-site VPN.

But how do our Cloud Volumes ONTAP systems (nodes and connector) know where to send traffic for on-prem networks?
This is where our previously created User Defined Routes (UDRs) come into play.

- CVO does NOT know about pfSense directly
- CVO only knows: “send traffic to my default gateway”
- Azure routing decides what happens next
The route table associated with the CVO subnet tells Azure that traffic for 10.0.0.0/24 must be forwarded to 172.17.240.250 (internal NIC of pfSense). So the routing logic finally is: CVO sends traffic to its default gateway (172.18.10.1), Azure checks the UDR attached to the subnet, and Azure’s internal routing fabric forwards packets destined for my on-prem network 10.0.0.0/24 by sending them to the defined next hop: Virtual appliance → 172.17.240.250 (pfSense).
CVO (172.18.10.x)
↓ Azure default gateway (172.18.10.1)
↓ UDR attached to subnet: 10.0.0.0/24 → next hop: 172.17.240.250
↓ pfSense VM (Hub VNet)
↓ Site-to-Site VPN
↓ On-prem network
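Whether these UDRs are actually applied to a given NIC, for example the connector VM’s NIC, can be verified by listing the effective routes with the Azure CLI; a sketch with placeholder resource group and NIC name:
PS> az network nic show-effective-route-table -g <Resource Group> -n <Connector NIC Name> -o table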
Testing if the connector VM can already ping my on-prem virtual machine with the IP address 10.0.0.70.
The connector VM’s IP address is 172.18.10.4.

And vice versa, pinging the connector VM from my on-prem virtual machine (here my domain controller).

Next I will test if I can ping all IP addresses of the CVO cluster from on-prem.
To determine these IP addresses first, I will run the following command which lists all IP addresses (LIFs) in the cluster.
matrixcvo::> network interface show

Below I will check if I can ping the SVM on which I want to enable CIFS/SMB and which therefore needs to be joined to my on-prem Active Directory. Looks good.

On the CVO subnet there is no NSG assigned, as we previously checked, and therefore no traffic is filtered on the virtual network and subnet itself.

Instead an NSG is attached directly to the network interface of the nodes (VMs) as shown below, the same as for the connector VM but with different default inbound/outbound rules configured by the NetApp Console during deployment of the CVO.

Inbound traffic from VNet to VNet and from the Azure load balancer to Any is not filtered and is allowed in general.


Outbound traffic is not filtered and allowed in general.

Behind the Scenes of ONTAP HA: How SVMs and LIFs Survive a Node Failure
In an ONTAP HA pair, failover does not involve copying data or synchronizing configurations at the time of failure.
Both nodes already have access to the same storage and share a synchronized cluster configuration, including SVM definitions and network interfaces.
When a node fails, its partner immediately takes ownership of the affected aggregates and logically migrates the data LIFs to itself.
Because LIFs are logical objects and the data is already accessible on both nodes, the failover process is fast and does not require any data movement.
Clients simply reconnect to the same IP addresses, making the failover largely transparent. This design ensures high availability while avoiding the overhead of real-time replication during an outage.
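In a lab, this behavior can also be observed directly from the ONTAP CLI with a manual takeover and giveback; just a sketch, to be used with care and not on a system serving production workloads:
matrixcvo::> storage failover takeover -ofnode matrixcvo-02
matrixcvo::> network interface show -vserver svm_matrix_cifs -fields home-node,curr-node,is-home
matrixcvo::> storage failover giveback -ofnode matrixcvo-02
matrixcvo::> network interface revert -vserver svm_matrix_cifs -lif *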
The SVM preconfigured by the NetApp Console when deploying CVO in Azure also has two data LIFs (data_1 and data_2).
When a Cloud Volumes ONTAP system is deployed, NetApp automatically creates iSCSI LIFs on the default SVM, even if iSCSI is not actively used.
These LIFs are created to ensure protocol readiness and proper HA design, allowing block services to be enabled later without reconfiguration. The presence of iSCSI LIFs does not mean that iSCSI is active or exposed.
In ONTAP, all storage access, including block storage via iSCSI or FC, is provided through an SVM. Just like SMB or NFS, iSCSI and FC require an SVM with dedicated LIFs. The SVM acts as the protocol endpoint, while the underlying data remains accessible to both nodes in an HA pair. This design allows consistent management and high availability across all protocols.
matrixcvo::> network interface show -vserver svm_matrixcvo

The “Allowed Protocols” field in an SVM indicates which protocols the SVM is permitted to use, not which ones are actively running or supported by the platform.
In Cloud Volumes ONTAP, Fibre Channel appears as an allowed protocol because ONTAP uses a unified code base, but FC is not operationally supported due to the lack of physical FC hardware in cloud environments.


In Part 4 of the series, we shift the focus to security and take a closer look at how antivirus protection is implemented in Cloud Volumes ONTAP using ONTAP VSCAN.
Links
Plan your Cloud Volumes ONTAP configuration in Azure
https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/task-planning-your-config-azure.html
Launch Cloud Volumes ONTAP in Azure
https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/task-deploying-otc-azure.html
Launch a Cloud Volumes ONTAP HA pair in Azure
https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/task-deploying-otc-azure.html#launch-a-cloud-volumes-ontap-ha-pair-in-azure
Supported configurations for Cloud Volumes ONTAP in Azure
https://docs.netapp.com/us-en/cloud-volumes-ontap-relnotes/reference-configs-azure.html
Licensing for Cloud Volumes ONTAP
https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/concept-licensing.html
Azure region support
https://bluexp.netapp.com/cloud-volumes-global-regions
Learn about NetApp Console agents
https://docs.netapp.com/us-en/console-setup-admin/concept-agents.html
Learn about data tiering with Cloud Volumes ONTAP in AWS, Azure, or Google Cloud
https://docs.netapp.com/us-en/storage-management-cloud-volumes-ontap/concept-data-tiering.html
Learn about ONTAP LIF compatibility with port types
https://docs.netapp.com/us-en/ontap/networking/lif_compatibility_with_port_types.html
Learn about managing ONTAP SMB share-level ACLs
https://docs.netapp.com/us-en/ontap/smb-admin/manage-smb-level-acls-concept.html
