Deploying and Operating Azure Kubernetes Service (AKS) – A Practical Guide – Part 2 – AKS Architecture and Components in Azure
After covering the core Kubernetes concepts in Part 1, we now take a closer look at how Kubernetes is implemented in Microsoft Azure using Azure Kubernetes Service (AKS).
While Kubernetes itself is platform-agnostic, each cloud provider introduces its own architecture, integrations, and operational model. Understanding these Azure-specific components is essential for designing, deploying, and operating production-ready workloads in AKS.
In this part, we focus on the architecture of AKS, its core components, and how they integrate with Azure services such as networking, identity, and load balancing.
In Part 3, we will walk through the deployment of an AKS cluster using both the Azure Portal and the Azure CLI. The goal there is a practical, reproducible setup that serves as a foundation for the following parts of this series.
AKS Architecture Overview
Azure Kubernetes Service (AKS) provides a managed Kubernetes environment where the control plane is operated by Azure, while you manage the worker nodes and the workloads running on them.
An AKS cluster is split into two main layers:
- Managed Control Plane (Azure-managed)
- Node Pools (customer-managed worker nodes)
The control plane is hosted and fully managed by Azure. It includes core Kubernetes components such as the API server, scheduler, and controller manager. Azure ensures high availability, patching, and scaling of these components, removing a significant operational burden.
The worker nodes are organized into node pools, which are deployed into your Azure subscription. These nodes run as virtual machines and are responsible for hosting your containerized applications.
Each node pool can be configured independently, allowing you to:
- use different VM sizes
- separate workloads (e.g., system vs user workloads)
- optimize for performance or cost
AKS also integrates deeply with Azure networking. Each node receives an IP address within an Azure Virtual Network (VNet), and depending on the networking model, pods can also receive routable IP addresses within the same network.
In addition, AKS leverages Azure-native services such as:
- Azure Load Balancer for exposing services
- Azure Disks and Azure Files for persistent storage
- Microsoft Entra ID (Azure AD) for identity and access management
Understanding how AKS maps Kubernetes concepts onto Azure infrastructure is key to designing scalable, secure, and production-ready environments.
Control Plane in AKS (Managed by Azure)
In Azure Kubernetes Service (AKS), the control plane is fully managed by Microsoft Azure. This is one of the main advantages of using AKS compared to running Kubernetes manually on virtual machines.
The control plane includes the core Kubernetes components responsible for managing the cluster, such as:
- API Server – the central entry point for all cluster operations
- Scheduler – assigns pods to worker nodes
- Controller Manager – ensures the desired state is maintained
In a traditional self-managed Kubernetes setup, you would need to deploy, configure, secure, and maintain these components yourself. This includes handling tasks such as patching, scaling, backups, and ensuring high availability.
With AKS, Azure takes over these responsibilities. The control plane is deployed in a Microsoft-managed subscription and is not directly accessible by the user. Azure automatically handles:
- high availability and redundancy
- patching and upgrades of control plane components
- scaling and reliability of the Kubernetes API
From an operational perspective, this significantly reduces complexity and allows you to focus on your workloads rather than the underlying infrastructure.
However, even though the control plane is abstracted away, it is still important to understand how it behaves.
All interactions with the cluster, whether using kubectl, the Azure CLI, or the Azure Portal, are processed through the Kubernetes API server.
In short, AKS offloads the operational burden of managing the control plane, while still exposing the full Kubernetes API for managing your workloads.
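To make this concrete, here is a minimal sketch of how every management path ends up at the managed API server. It assumes the Azure CLI and kubectl are installed, and uses placeholder names (`myResourceGroup`, `myAKSCluster`) for an existing cluster:

```shell
# Merge the cluster's credentials into your local kubeconfig
# (resource group and cluster names are placeholders)
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Each of these commands is served by the Azure-managed API server,
# even though the control plane itself is not visible in your subscription
kubectl cluster-info
kubectl get nodes
```

Whether you run kubectl, click through the portal, or script against the Azure CLI, the request path is the same: the managed Kubernetes API server.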
Node Pools and VM Architecture
In Azure Kubernetes Service (AKS), the worker nodes of a cluster are organized into node pools. A node pool is a group of virtual machines that share the same configuration and are used to run containerized workloads.
Each node in a node pool is an Azure virtual machine, and together they form the compute layer of your AKS cluster. These nodes are deployed into your Azure subscription and are fully visible and manageable like standard VMs.
AKS distinguishes between two main types of node pools:
- System node pool – Used to host critical Kubernetes system components such as CoreDNS and metrics-server. At least one system node pool is required and must always be present.
- User node pool – Used to run your application workloads. You can create multiple user node pools to separate workloads based on requirements.
Node pools provide flexibility in how workloads are deployed and managed. For example, you can:
- use different VM sizes for different workloads
- separate production and non-production workloads
- run specialized workloads (e.g., GPU-based or high-memory applications)
- apply scaling policies independently per node pool
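As a sketch of this flexibility (all names and VM sizes below are placeholder values, assuming a cluster already exists), an additional user node pool with a larger VM size and a workload label could be added like this:

```shell
# Add a user node pool with a memory-optimized VM size
# (resource group, cluster, pool name, and size are placeholders)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name memorypool \
  --node-count 2 \
  --node-vm-size Standard_E4s_v5 \
  --mode User \
  --labels workload=memory-intensive
```

The `workload=memory-intensive` label can then be used in pod node selectors to steer the right workloads onto this pool.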
When a pod is scheduled, the Kubernetes scheduler selects a suitable node based on resource availability and constraints. These constraints can include node labels, taints, and tolerations, which allow fine-grained control over workload placement.
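A common pattern is to taint a dedicated node pool at creation time so that only pods explicitly tolerating the taint are scheduled there. The following is a hedged sketch with placeholder names; the `aks-helloworld` image is used purely as a stand-in workload:

```shell
# Create a tainted node pool; ordinary pods will not be scheduled onto it
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpupool \
  --node-count 1 \
  --node-taints sku=gpu:NoSchedule

# A pod that tolerates the taint and is therefore eligible for the tainted pool
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "sku"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"
  containers:
    - name: app
      image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
EOF
```

Combining the taint with a matching node selector or affinity rule ensures the pod not only *may* run on the pool, but actually lands there.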
From an Azure perspective, node pools are tightly integrated with Virtual Machine Scale Sets (VMSS). This enables features such as:
- automatic scaling of nodes
- rolling upgrades
- improved fault tolerance
Scaling in AKS can happen at two levels:
- Pod scaling (horizontal scaling via replicas or autoscaler)
- Node scaling (cluster autoscaler adjusts the number of VMs in a node pool)
This combination allows AKS to dynamically adapt to changing workloads while optimizing resource usage and cost.
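Both scaling levels can be sketched with a few commands (placeholder names again; the HPA example assumes a deployment called `my-app` with CPU requests set):

```shell
# Node-level scaling: enable the cluster autoscaler on a node pool
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name memorypool \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

# Pod-level scaling: scale replicas horizontally based on CPU utilization
kubectl autoscale deployment my-app --cpu-percent=70 --min=2 --max=10
```

When pods cannot be scheduled due to insufficient capacity, the cluster autoscaler adds VMs; when nodes sit idle, it removes them again.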
In short, node pools provide the flexible compute foundation of AKS, allowing you to tailor your cluster to different workloads and scaling requirements.
AKS Networking (Azure CNI vs Kubenet)
Networking is one of the most important aspects of running Kubernetes in Azure, as it directly impacts connectivity, scalability, and integration with existing infrastructure.
In AKS, there are two primary networking models:
- Azure CNI (Container Networking Interface)
- Kubenet
Each model defines how IP addresses are assigned to pods and how they communicate within and outside the cluster.
Azure CNI
Azure CNI supports two networking modes: a flat network model and a newer overlay model.
Azure CNI (Flat Model)
With the classic Azure CNI model, every pod receives an IP address from the Azure Virtual Network (VNet). This means that pods are directly routable within your network, just like virtual machines.
Key characteristics:
- Pods get IPs from the VNet subnet
- Full integration with Azure networking (NSGs, routing, peering)
- Direct communication between pods, VMs, and on-premises networks
This model is ideal for enterprise environments where:
- seamless integration with existing networks is required
- direct connectivity to on-premises systems (e.g., via VPN or ExpressRoute) is needed
- advanced network controls and security policies are important
However, Azure CNI requires careful IP planning, as each pod consumes an IP address from the subnet.
Azure CNI Overlay
With Azure CNI Overlay, pods are assigned IP addresses from a private overlay network that is separate from the Azure VNet.
Key characteristics:
- Pods use a private, non-VNet IP range
- Pod-to-pod communication is handled via an overlay network (encapsulation)
- NAT is used for communication outside the cluster (via the node IP)
- Significantly reduced IP consumption in the VNet
This model provides better scalability and simplifies IP planning, making it the preferred choice for many modern AKS deployments.
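A minimal sketch of creating an overlay-mode cluster with the Azure CLI (placeholder names; the pod CIDR is a private range that does not consume VNet addresses):

```shell
# Create a cluster using Azure CNI in overlay mode
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --node-count 2 \
  --generate-ssh-keys
```

Only the nodes draw addresses from the VNet subnet here; pods live entirely in the overlay range.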
Kubenet
With Kubenet, pods are assigned IP addresses from an internal Kubernetes-managed address space, separate from the Azure VNet.
Key characteristics:
- Pods use a private, non-VNet IP range
- Network Address Translation (NAT) is used for external communication
- Lower IP consumption within the VNet
This model is simpler and more IP-efficient, making it suitable for smaller environments or scenarios where VNet integration is less critical.
However, Kubenet has limitations:
- more complex routing for advanced scenarios
- less direct integration with Azure networking features
Azure CNI Overlay vs Kubenet
At first glance, Azure CNI Overlay and Kubenet appear similar, as both assign pods IP addresses from a private, non-VNet address space and use NAT for outbound communication.
However, the key difference lies in how pod-to-pod communication across nodes is handled. Kubenet relies on Azure route tables (UDR) to route traffic between nodes, which can introduce scalability limits and additional operational complexity. In contrast, Azure CNI Overlay uses encapsulated networking (overlay), removing the need for route tables and enabling better scalability and simpler operation.
Additionally, Azure CNI Overlay provides tighter integration with Azure networking features and is considered the more modern and preferred approach for new deployments.
Choosing the Right Model
In most production environments, Azure CNI is the preferred choice, especially when integrating AKS into existing enterprise networks.
Kubenet can still be useful for:
- development or test environments
- scenarios with limited IP address space
In short, the choice between Azure CNI and Kubenet determines how your workloads are connected, making it one of the most important design decisions when deploying AKS.
Integration with Azure Load Balancer
In Azure Kubernetes Service (AKS), exposing applications to external or internal clients is tightly integrated with the Azure Load Balancer.
When you create a Kubernetes service of type LoadBalancer, AKS automatically provisions and configures an Azure Load Balancer in your subscription. This abstracts the complexity of manually setting up load balancing and allows you to expose applications with minimal effort.
Depending on your configuration, AKS can create:
- Public Load Balancer – Exposes applications to the internet using a public IP address
- Internal Load Balancer (ILB) – Exposes applications only within the Azure Virtual Network or connected environments (e.g., via VPN or ExpressRoute)
The Azure Load Balancer distributes incoming traffic across multiple pod instances, ensuring high availability and scalability. It works in combination with Kubernetes services, which define the backend pods receiving the traffic.
From a technical perspective:
- The load balancer forwards traffic to node-level endpoints
- Kubernetes (via kube-proxy) ensures traffic is routed to the correct pods
- Health probes are automatically configured to monitor service availability
AKS also supports advanced scenarios such as:
- assigning static public IP addresses
- using internal load balancers for private applications
- integrating with ingress controllers for HTTP/HTTPS routing
This tight integration allows Kubernetes services to seamlessly leverage Azure-native load balancing capabilities without requiring manual configuration.
In short, AKS uses Azure Load Balancer to bridge Kubernetes services with the outside world, providing scalable and highly available access to your applications.
Storage Integration (Azure Disks / Azure Files)
While containers are typically stateless, many real-world applications require persistent storage, for example databases, file shares, or application state. Kubernetes addresses this requirement through persistent storage abstractions, which are tightly integrated with Azure services in AKS.
In AKS, persistent storage is primarily provided through:
- Azure Disks
- Azure Files
Azure Disks
Azure Disks provide block-level storage and are typically used for stateful workloads such as databases.
Key characteristics:
- Attached to a single node at a time (ReadWriteOnce)
- High performance and low latency
- Backed by managed disks (Standard SSD, Premium SSD, etc.)
When a pod uses an Azure Disk, Kubernetes ensures that the disk is attached to the node where the pod is running. If the pod is rescheduled to another node, the disk is detached and reattached automatically.
This makes Azure Disks ideal for:
- databases (e.g., SQL, PostgreSQL)
- applications requiring high IOPS and consistent performance
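As a sketch, a PersistentVolumeClaim using the built-in `managed-csi` storage class dynamically provisions an Azure managed disk (claim name and size are placeholders):

```shell
# ReadWriteOnce: the disk can be attached to one node at a time
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
EOF
```

A pod that mounts this claim triggers the actual disk creation and attachment; no Azure resources need to be created by hand.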
Azure Files
Azure Files provides shared file storage accessible via SMB or NFS, allowing multiple pods to access the same data simultaneously.
Key characteristics:
- Supports multi-node access (ReadWriteMany)
- Fully managed file share service
- Accessible from multiple pods across different nodes
Azure Files is well suited for:
- shared application data
- content repositories
- legacy applications requiring file shares
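The equivalent sketch for a shared file share uses the built-in `azurefile-csi` storage class with `ReadWriteMany` access (claim name and size are again placeholders):

```shell
# ReadWriteMany: multiple pods on different nodes can mount the share
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-csi
  resources:
    requests:
      storage: 100Gi
EOF
```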
Kubernetes Abstractions
Kubernetes uses objects such as:
- Persistent Volumes (PV)
- Persistent Volume Claims (PVC)
These abstractions decouple storage from the underlying infrastructure. In AKS, storage classes are used to dynamically provision Azure Disks or Azure Files based on application requirements.
Integration with Azure
AKS handles the provisioning and lifecycle of storage resources automatically. When a PVC is created, Azure resources (disk or file share) are dynamically provisioned and attached to the cluster.
This tight integration allows you to leverage Azure’s scalable and durable storage services without manually managing infrastructure.
In short, AKS integrates seamlessly with Azure storage services, enabling stateful workloads while maintaining the flexibility of Kubernetes abstractions.
Identity & Access (Microsoft Entra ID / RBAC)
Security and access control are critical aspects of operating Kubernetes in production. In AKS, identity and access management are tightly integrated with Microsoft Entra ID (formerly Azure Active Directory) and Kubernetes Role-Based Access Control (RBAC).
This integration allows you to manage access to your AKS cluster using centralized identities and familiar Azure security concepts.
Microsoft Entra ID Integration
AKS can be integrated with Microsoft Entra ID, enabling authentication using Azure identities such as users, groups, and service principals.
Key benefits:
- centralized identity management
- integration with existing enterprise identity systems
- support for multi-factor authentication (MFA)
- seamless integration with Azure CLI and portal
When a user accesses the cluster (e.g., via kubectl), authentication is handled through Entra ID. This ensures that only authorized users can interact with the Kubernetes API.
Kubernetes RBAC
Once authenticated, access is controlled using Kubernetes RBAC.
RBAC defines what actions a user or service is allowed to perform within the cluster. It uses roles and role bindings to grant permissions.
Key concepts:
- Role / ClusterRole – Defines a set of permissions
- RoleBinding / ClusterRoleBinding – Assigns permissions to users or groups
This allows fine-grained control, such as:
- restricting access to specific namespaces
- limiting actions (e.g., read-only vs full control)
- delegating responsibilities to different teams
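A sketch of namespace-scoped, read-only access for an Entra ID group ties these concepts together. The namespace name and the group object ID are placeholders; with Entra ID integration, group subjects are referenced by their object ID:

```shell
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-readers
  namespace: team-a
subjects:
  - kind: Group
    name: "00000000-0000-0000-0000-000000000000"  # Entra ID group object ID (placeholder)
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```

Members of the group can then list and inspect pods in `team-a`, but nothing more, and nothing outside that namespace.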
Managed Identities
AKS also supports managed identities, which allow pods and cluster components to securely access Azure resources without storing credentials.
For example, a pod can use a managed identity to:
- access Azure Storage
- retrieve secrets from Azure Key Vault
- interact with other Azure services
This improves security by eliminating the need for hardcoded credentials.
You will find more about Azure managed identities in an upcoming post.
Integration with Azure RBAC
In addition to Kubernetes RBAC, AKS can also integrate with Azure RBAC for Kubernetes, allowing you to control cluster access using Azure roles.
This provides a unified access model across Azure resources and Kubernetes.
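As a hedged sketch (placeholder names throughout, and assuming the cluster already has Entra ID integration enabled), Azure RBAC for Kubernetes can be switched on and used to grant namespace-scoped read access with a built-in Azure role:

```shell
# Enable Azure RBAC for Kubernetes authorization on an existing cluster
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-azure-rbac

# Grant a user read access to a single namespace via a built-in Azure role
# (user, subscription ID, and namespace are placeholders)
az role assignment create \
  --assignee user@contoso.com \
  --role "Azure Kubernetes Service RBAC Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/namespaces/team-a"
```

With this model, access reviews and role assignments live in Azure alongside the rest of your cloud permissions, instead of in cluster-local RoleBindings.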
In short, AKS combines Kubernetes RBAC with Microsoft Entra ID to provide a secure, scalable, and enterprise-ready access model.
Summary and What’s Next
In this part, we explored how Kubernetes is implemented in Azure through Azure Kubernetes Service (AKS), focusing on its architecture and integration with Azure-native services.
We covered the key building blocks of AKS, including the managed control plane, node pools and their underlying VM architecture, networking models, and how AKS integrates with Azure Load Balancer for exposing applications. In addition, we looked at persistent storage options using Azure Disks and Azure Files, as well as identity and access management through Microsoft Entra ID and Kubernetes RBAC.
Understanding these components and how they interact is essential for designing, deploying, and operating reliable Kubernetes workloads in Azure.
In Part 3, we will move from architecture to hands-on implementation by deploying an AKS cluster using both the Azure Portal and the Azure CLI, followed by connecting to the cluster and verifying its basic functionality.
Links
What is Azure Kubernetes Service (AKS)?
https://learn.microsoft.com/en-us/azure/aks/what-is-aks