Running pfSense on both ends of your hybrid network, in Azure and on-premises, gives you full control over IPSec connectivity without relying on Azure’s managed VPN Gateway service.

This approach is ideal for labs and cost-conscious setups, since you won’t pay for an always-on Azure VPN tunnel when testing or experimenting.

With pfSense handling both endpoints, you can securely link Azure VNets with your local infrastructure, fine-tune encryption, and even shut down the Azure side when not in use, saving costs while keeping your environment flexible and production-ready.

This post summarizes the key configuration steps required to set up a site-to-site IPSec VPN between pfSense in Azure and on-prem, replacing Azure’s VPN Gateway with a fully functional and cost-effective alternative.


Deploying pfSense in Azure

For details on how to deploy pfSense in Azure, you can read my following post.

If you are setting up pfSense in a home lab behind NAT, you can also read my following post.

Azure Network Security Groups (NSGs) – Required Rules

To ensure smooth IPSec tunnel operation and traffic flow through pfSense, we must configure NSGs for both the WAN and Perimeter (LAN) subnets.

The NSG on the WAN subnet (the subnet pfSense’s WAN NIC is attached to) needs inbound rules allowing UDP 500 (IKE), UDP 4500 (NAT-T), and optionally ESP (IP protocol 50) to permit VPN negotiation and data exchange with the on-prem pfSense.

Regarding the optional ESP (IP protocol 50) rule: in Azure, the public IP assigned to a NIC is not bound directly to the VM but is handled by Azure’s software-defined NAT layer. Because of this design, pfSense’s WAN interface only sees its private IP, and the public IP is transparently translated by Azure’s virtual network stack.

As a consequence, native ESP (IP protocol 50) traffic cannot pass through Azure’s NAT layer. Instead, IPSec always falls back to NAT-T encapsulation (UDP 4500), making UDP 500 and 4500 the only inbound ports that actually need to be allowed in the WAN NSG.
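For reference, here is a minimal sketch of those two inbound rules created with the Azure SDK for Python (azure-mgmt-network). The subscription ID, resource group, NSG name, and on-prem public IP are placeholders for illustration, not values from this setup:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Placeholder values -- replace with your own environment details
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-pfsense"            # assumption: resource group holding the NSG
WAN_NSG = "nsg-pfsense-wan"              # assumption: NSG attached to the WAN subnet
ONPREM_PUBLIC_IP = "<on-prem-public-ip>"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# UDP 500 (IKE) and UDP 4500 (NAT-T) from the on-prem pfSense public IP
for priority, port, name in [(100, "500", "Allow-IKE"), (110, "4500", "Allow-NAT-T")]:
    rule = SecurityRule(
        protocol="Udp",
        source_address_prefix=ONPREM_PUBLIC_IP,
        source_port_range="*",
        destination_address_prefix="*",
        destination_port_range=port,
        access="Allow",
        direction="Inbound",
        priority=priority,
    )
    client.security_rules.begin_create_or_update(
        RESOURCE_GROUP, WAN_NSG, name, rule
    ).result()
```

Scoping the source to the on-prem pfSense’s public IP keeps the WAN NSG as tight as possible; a broader source would also work, but it exposes the IKE ports to the whole internet.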


The NSG on the Perimeter subnet (the subnet pfSense’s LAN/Perimeter NIC is attached to), meanwhile, should allow inbound and outbound traffic between the Azure VNets and the on-prem networks, enabling routed LAN-to-LAN communication once the tunnel is up.
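A similar sketch for the Perimeter subnet’s NSG, again with placeholder names and address ranges; it simply permits all protocols between the on-prem range and the Azure address space in both directions, which you can of course tighten to specific subnets and ports:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
RESOURCE_GROUP = "rg-pfsense"              # assumption: resource group holding the NSG
PERIMETER_NSG = "nsg-pfsense-perimeter"    # assumption: NSG attached to the Perimeter subnet
ONPREM_RANGE = "192.168.0.0/16"            # assumption: on-prem networks
AZURE_RANGE = "10.0.0.0/8"                 # assumption: hub + spoke VNet address space

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# One inbound and one outbound rule for routed LAN-to-LAN traffic over the tunnel
for name, direction, src, dst in [
    ("Allow-OnPrem-To-VNets", "Inbound", ONPREM_RANGE, AZURE_RANGE),
    ("Allow-VNets-To-OnPrem", "Outbound", AZURE_RANGE, ONPREM_RANGE),
]:
    rule = SecurityRule(
        protocol="*",
        source_address_prefix=src,
        source_port_range="*",
        destination_address_prefix=dst,
        destination_port_range="*",
        access="Allow",
        direction=direction,
        priority=100,
    )
    client.security_rules.begin_create_or_update(
        RESOURCE_GROUP, PERIMETER_NSG, name, rule
    ).result()
```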

Allow IPSec and BGP Traffic

In pfSense, when using BGP for dynamic routing, make sure to allow BGP (TCP 179) and any other required traffic on the IPSec firewall rules tab.

By default, traffic arriving over IPSec is blocked until explicitly permitted, so to ensure dynamic routing and data flow between both networks, add rules that allow either all protocols or at least the specific subnets and ports that should traverse the tunnel.
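Once those rules exist, a quick way to sanity-check that traffic actually passes over the tunnel is a simple TCP probe from a host on one side toward a host on the other. The sketch below is only a reachability check with placeholder addresses and ports (TCP 179 is listed because of BGP, although the BGP session itself runs directly between the two pfSense boxes); it does not verify that the BGP session is established:

```python
import socket

# Placeholders -- a host on the far side of the tunnel and the ports the
# IPSec firewall rules are expected to allow (TCP 179 = BGP, shown as an example)
REMOTE_HOST = "192.168.10.10"
PORTS = [179, 443]

for port in PORTS:
    try:
        with socket.create_connection((REMOTE_HOST, port), timeout=5):
            print(f"TCP {port} to {REMOTE_HOST}: connected")
    except ConnectionRefusedError:
        # A refusal means the packet crossed the tunnel but nothing is listening
        print(f"TCP {port} to {REMOTE_HOST}: refused (path is open)")
    except OSError as exc:
        # A timeout usually points at a missing firewall rule or route
        print(f"TCP {port} to {REMOTE_HOST}: {exc}")
```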


For details on how to configure BGP on pfSense, you can read my following post.

Routing Configuration on pfSense (Azure Side)

Since VNet peering routes exist only in Azure’s fabric and are never pushed into the guest OS of a network virtual appliance, pfSense in Azure won’t automatically learn the spoke subnets.

To ensure proper connectivity within Azure, we must also manually add static routes on pfSense pointing to each spoke subnet via the LAN or Perimeter interface.

This allows pfSense to reach all Azure spoke networks locally, and later, once BGP is enabled, these routes can be dynamically advertised to the on-prem pfSense over the IPSec tunnel.
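To see exactly which prefixes need those static routes, you can enumerate the spoke address spaces directly from Azure. Below is a minimal sketch with the Azure SDK for Python, assuming (purely as a placeholder) that the spoke VNets live in a single resource group:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
SPOKE_RESOURCE_GROUP = "rg-spokes"      # assumption: resource group holding the spoke VNets

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Each prefix printed here needs a static route on pfSense via the
# LAN/Perimeter interface gateway (typically the first usable IP of that Azure subnet).
for vnet in client.virtual_networks.list(SPOKE_RESOURCE_GROUP):
    for prefix in vnet.address_space.address_prefixes:
        print(f"{vnet.name}: {prefix}")
```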

Links

How network security groups filter network traffic
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-group-how-it-works

Azure network security groups overview
https://learn.microsoft.com/en-us/azure/virtual-network/network-security-groups-overview