Deploying pfSense in Google Cloud – A Step-by-Step Guide to Your Own Cloud Firewall
Running pfSense in Google Cloud Platform (GCP) is a powerful way to build your own fully controllable network gateway, firewall, or NAT appliance, far beyond what GCP’s managed load balancers or Cloud NAT can offer.
Unlike commercial NGFW appliances such as Fortinet, Check Point, or Palo Alto, which are available in the GCP Marketplace but come with additional licensing costs, pfSense provides a free and open-source alternative that can still deliver advanced firewalling, routing, and VPN capabilities.
From a pure network security and routing perspective, pfSense (and its FreeBSD underpinnings) can absolutely hold its own against most commercial NGFWs.
The difference isn’t in firewall strength; it’s mainly in the enterprise polish, automation, and integrated cloud management that vendors charge for.
In this post, we’ll walk through how to deploy pfSense as a virtual machine in GCP, attach it to multiple VPC networks (trusted and untrusted zones), and configure it as a central gateway for routed and NATed traffic.
We will also learn how to handle GCP-specific quirks like MTU adjustments, ILB health checks, and outbound NAT behavior: all the small but essential details that make pfSense work reliably in a cloud environment.

- Download pfSense
- Uploading the re-compressed and renamed pfSense tar.gz File to Google Cloud (Bucket)
- Creating the pfSense Image in GCP
- Bootstrapping pfSense: Creating the Temporary Installer Instance (by using our previously created pfSense Image disk)
- Install pfSense
- Create a Snapshot of the Disk we installed pfSense on
- Spin up the Production pfSense VM instance
- Adjusting the MTU in pfSense to Match Google Cloud’s Network Fabric
- pfSense Configuration (Web UI)
- Configure pfSense to reply to ILB Health Checks
- Configuring Outbound NAT for VPC Workloads
- Allow Web UI Access from the Internet
- Troubleshooting
- Links
Download pfSense
We first need to download pfSense from one of the following links.
https://www.pfsense.org/download
https://atxfiles.netgate.com/mirror/downloads
We then need to extract the downloaded *.img.gz file to get the *.img file and re-compress it to *.tar.gz, the format supported by Google Cloud for creating an image.
After extracting the file we first need to rename the image file inside.

We need to rename the extracted image file to disk.raw (also removing the .img extension), as required to later create a new image in Google Cloud from it.

To compress the file again we can use the 7-Zip tool on Windows. Right-click on the renamed disk.raw file and select Add to archive.

We first compress the file to a tar archive, so for the Archive format select tar below.

Then add the created *.tar file to an archive again and this time select gzip for the Archive format.

We will now have our required *.tar.gz file, containing the image named disk.raw, as required to create a new image in Google Cloud.
Your image source must use the .tar.gz extension and the file inside the archive must be named disk.raw.
Source: https://cloud.google.com/compute/docs/import/import-existing-image#create_image_file
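On Linux or macOS we can do the extraction, renaming, and re-compression from the shell instead of using 7-Zip. A minimal sketch, assuming the 2.7.2 serial memstick installer (adjust the filename to your downloaded version); the tar flags follow Google’s import documentation linked above:
# decompress the downloaded installer image
$ gunzip pfSense-CE-memstick-serial-2.7.2-RELEASE-amd64.img.gz
# rename the raw image as required by GCP
$ mv pfSense-CE-memstick-serial-2.7.2-RELEASE-amd64.img disk.raw
# create a sparse, gzip-compressed tar in the oldgnu format
$ tar --format=oldgnu -Sczf pfsense-installer.tar.gz disk.raw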

Uploading the re-compressed and renamed pfSense tar.gz File to Google Cloud (Bucket)
To deploy pfSense in Google Cloud, we first need to upload the installer image (re-compressed and renamed to match GCP’s import format) into a Cloud Storage bucket.
The bucket acts as a staging location from which GCP’s image import service can create a custom bootable disk image.
Once uploaded, the pfSense image can be imported into Compute Engine → Custom Images and used to launch new firewall instances.
Upload the file to Google Cloud -> Cloud Storage -> Bucket.
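Alternatively, the upload can be done with the gcloud CLI; a sketch assuming a hypothetical bucket named pfsense-images and the archive name from above:
$ gcloud storage cp pfsense-installer.tar.gz gs://pfsense-images/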


Creating the pfSense Image in GCP
After uploading the pfSense installer image to our Cloud Storage bucket, we can create a new custom image in Compute Engine → Images by selecting “Create Image” and choosing Cloud Storage file as the source.
GCP will automatically import and convert the uploaded pfSense installer into a bootable disk image.
This image doesn’t contain a ready-to-run pfSense system; it simply acts as a bootable installer disk, much like mounting an ISO in a hypervisor.

Enter a name for the new image and select Cloud Storage file as the source.

Select the bucket to which we uploaded pfSense and select our uploaded *.tar.gz file.
Your image source must use the .tar.gz extension and the file inside the archive must be named disk.raw.
Source: https://cloud.google.com/compute/docs/import/import-existing-image#create_image_file

We need to select the correct region to be able to use this image later. Finally click on Create.
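The image creation can also be done with gcloud; a sketch assuming the bucket and file names from above and europe-west1 as the storage location:
$ gcloud compute images create pfsense-installer \
    --source-uri=gs://pfsense-images/pfsense-installer.tar.gz \
    --storage-location=europe-west1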

The new image will be created.


Finally successfully created.
!! Note !!
In case we didn’t previously rename the downloaded and extracted image file to disk.raw (also removing the .img extension) before re-compressing it to *.tar.gz, we will not be able to successfully create the new image, as shown below.

Bootstrapping pfSense: Creating the Temporary Installer Instance (by using our previously created pfSense Image disk)
Next, launch a temporary VM using this image as the boot disk and attach a second empty disk where pfSense will actually be installed.
Once the installation is complete, we will shut down the VM and take a snapshot of the second disk.
That snapshot is finally our real, deployable pfSense image, the one we will use to spin up production firewall instances in GCP, not the “installer-only” one that just boots into setup.

On the OS and storage tab click on Change.

Switch to the Custom images tab and search for our previously created pfSense image. Select the pfSense image. Leave the rest on its default settings and click Select.
This disk will just be our bootable installer disk as mentioned, much like mounting an ISO in a hypervisor. Next we add a second empty disk on which pfSense will finally be installed.

Click on Add new disk. This will be our disk we will install pfSense on as mentioned.

Enter a name for the disk. The recommended disk size for pfSense is 16 GB; I will choose 20 GB. The rest we can leave on the default settings. Click on Save.

Because this VM instance will just be used to create the disk with the pfSense installation (our final custom pfSense image), we can leave the rest of the settings at their defaults; they don’t really matter, including having just one network interface.
Later, when creating our final pfSense VM instances, we will assign at least two network interfaces (trusted Hub VPC and untrusted Hub VPC).

To perform the installation we need to access the serial console of the VM instance. So also enable the serial console on this temporary VM instance.
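For reference, the temporary installer VM can also be created with gcloud; a sketch with assumed names and zone, booting from our installer image, adding the empty 20 GB target disk, and enabling the serial console:
$ gcloud compute instances create pfsense-installer-vm \
    --zone=europe-west1-b \
    --machine-type=e2-medium \
    --image=pfsense-installer \
    --create-disk=name=pfsense-disk,size=20GB,auto-delete=no \
    --metadata=serial-port-enable=TRUE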

Install pfSense
Once the temporary VM is running, the pfSense installer will automatically boot from the custom image disk as the primary boot disk.
Connect to the serial console and press Enter.

Accept the copyright and distribution notice.

Select Install pfSense.

For the partitioning choose between the ZFS and UFS filesystem.
UFS is the traditional, lightweight option that’s simple and fast — ideal for most cloud or virtualized deployments.
ZFS, while more feature-rich with advanced integrity checks and snapshots, adds overhead and is generally only recommended if you need its specific features or plan to run pfSense on larger, persistent storage.
More about both filesystems in pfSense can be found in my following post: https://blog.matrixpost.net/pfsense-boot-loop-after-power-outage/.

Click on Install.

For the virtual device (disk) I will use stripe.
In our GCP setup, using RAID or mirrored ZFS pools inside pfSense makes no real sense and would just add unnecessary overhead.
GCP Persistent Disks (standard or SSD) are already replicated at the storage layer, Google automatically maintains redundancy and durability behind the scenes.
Creating a RAID or mirrored ZFS setup inside the VM would only add CPU and I/O overhead without improving reliability or performance.
Therefore, a simple ZFS stripe (single-disk vdev) or UFS on a single disk is fully sufficient for pfSense in GCP.

Select the second attached disk as the installation target; this is where pfSense will actually be installed.
Press the space key to select our newly attached disk on which we want to install pfSense.

Confirm by pressing yes.

The installation starts.

After the installation completes and pfSense has written its system files to the second disk, shut down the VM to prepare that disk for snapshotting and image creation.
So select Shell.


To shutdown pfSense and our VM instance, run:
# poweroff

Create a Snapshot of the Disk we installed pfSense on
We will now create a snapshot of the disk on which we installed pfSense previously. By doing this we will have a real, deployable pfSense image we can later use to spin up production pfSense VM instances in GCP.

For the location select a region you can access later.
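The snapshot can also be taken with gcloud; a sketch assuming the disk name and zone used for the installer VM above:
$ gcloud compute disks snapshot pfsense-disk \
    --zone=europe-west1-b \
    --snapshot-names=pfsense-installed \
    --storage-location=europe-west1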


Next we can create our final VM instance with pfSense running on.
Our temporary installer VM instance we can now delete.
Spin up the Production pfSense VM instance
With the final pfSense image created from our installed disk snapshot, we can now launch our production pfSense VM instance.
Simply create a new Compute Engine VM and select this snapshot as the source of the boot disk; this time it will boot directly into a fully installed pfSense system, not the installer.
From here, we can attach additional NICs for our trusted and untrusted VPCs, enable IP forwarding, and begin configuring pfSense as our central gateway or firewall in GCP.
For Machine configuration enter a name and region.

Under OS and storage click on Change.

Switch to the Snapshots tab and search for our previously created snapshot (taken of the added new disk we installed pfSense on) and select it.

Click on Select.


For the network I will attach two NICs, both associated with Hub VPCs (trusted and untrusted). Further, I will enable IP forwarding.

The NIC associated with the trusted Hub VPC will just have a custom internal private IPv4 address assigned to it.

The NIC associated with the untrusted Hub VPC will also have an external public IPv4 address assigned for internet access. For the internal private IPv4 address I will use a custom IP address.
By assigning an external public IPv4 address to the second NIC (connected to the untrusted Hub VPC), pfSense can now send outbound traffic directly to the Internet through GCP’s default Internet gateway of the untrusted Hub VPC subnet (10.0.100.1).
In GCP, every VPC subnet automatically includes a default Internet gateway that provides a route to 0.0.0.0/0.
However, only VM instances with an external public IP address assigned to one of their interfaces can actually use this gateway to reach the Internet; all others remain limited to internal traffic.

Allow access to the Cloud APIs and finally click on Create.
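For reference, the whole production instance can also be created with gcloud; a sketch with assumed subnet names, the LAN IP 172.28.10.150 used later in this post, and an ephemeral external address on the untrusted NIC:
$ gcloud compute instances create pfsense-prod \
    --zone=europe-west1-b \
    --machine-type=e2-standard-2 \
    --source-snapshot=pfsense-installed \
    --can-ip-forward \
    --network-interface=subnet=trusted-hub-subnet,private-network-ip=172.28.10.150,no-address \
    --network-interface=subnet=untrusted-hub-subnet,address='' \
    --metadata=serial-port-enable=TRUE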

Click on the VM instance to enable the serial console to perform the initial configuration of pfSense.

Connect to the serial console. We can already see pfSense’s configuration wizard. First we need to assign the network interfaces as usual.

Select the WAN interface, in my case this is nic1 (in pfSense/FreeBSD this is vtnet1).

The LAN interface then is vtnet0.

Confirm the network interface assignment.


Next enter option 2 to set the network interfaces’ IP addresses.

Select the interface you want to set the IP address for.

When asked to configure the IPv4 address of the LAN or WAN interface via DHCP, select Y for both.
For Google Cloud Platform (GCP) Virtual Machine (VM) instances, the best practice is not to manually assign a static IP address inside the operating system (OS), even if you are using a reserved static IP set via gcloud or the web console.
The network interface inside the VM’s OS should always be configured for DHCP.
GCP’s networking layer is responsible for assigning the internal IP address you specify (either ephemeral or a reserved static internal IP) to the VM’s network interface. The VM’s OS receives this IP address via the DHCP server run by Google Cloud’s VPC network.
Manually setting a static IP inside the guest OS can lead to an IP address conflict with the underlying GCP network management, especially if you change the network configuration (like the internal IP) in the GCP console or gcloud.

Finally both are configured via DHCP.

Adjusting the MTU in pfSense to Match Google Cloud’s Network Fabric
Google Cloud VPCs use an MTU of 1460 bytes by default, although higher values such as 1500 can be configured.
On most Linux distributions we can check the MTU set for the NICs by running the commands below. For FreeBSD we can use ifconfig.
$ ip addr show
$ ip link
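We can also check the MTU configured on the VPC network itself via gcloud; a sketch assuming a network named trusted-hub-vpc:
$ gcloud compute networks describe trusted-hub-vpc --format='value(mtu)'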

When pfSense (running on FreeBSD) keeps its default MTU of 1500 while the VPC operates at 1460, network traffic will fail entirely due to packet handling inconsistencies in the VirtIO (vtnet) driver.
Setting the pfSense interface MTU to 1460 ensures that packets conform to Google Cloud’s VPC limits and is required for network connectivity to function correctly in this environment.
So enter the pfSense shell to adjust the MTU to 1460.

We can first check the actual MTU by running the ifconfig command.

To adjust the MTU to 1460 for both interfaces (LAN + WAN), run:
ifconfig vtnet0 mtu 1460
ifconfig vtnet1 mtu 1460
To change these MTU values persistently, we will later use the Web UI, where we can set the MTU directly under Interfaces.

To test if we can already successfully ping our pfSense, we need to temporarily disable the firewall.
We can first check the current state by running:
pfctl -s info | grep Status

To disable it, run:
pfctl -d
To re-enable it, run:
pfctl -e

Looks good and I am able to ping the pfSense VM already on its LAN interface and IP.
We also need to allow inbound ICMP traffic on the corresponding VPC firewall (trusted Hub VPC in my case).
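A sketch of the corresponding VPC firewall rule via gcloud, with an assumed network name (trusted-hub-vpc) and the source restricted to internal RFC 1918 ranges:
$ gcloud compute firewall-rules create allow-icmp-trusted \
    --network=trusted-hub-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=icmp \
    --source-ranges=10.0.0.0/8,172.16.0.0/12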

Next we can do the final configuration of pfSense by using the web console.
pfSense Configuration (Web UI)
I will access the pfSense Web UI from a Windows VM running in my on-prem vSphere lab environment and connected to GCP through an HA VPN tunnel.
The default admin password is pfsense.

I will use Google’s DNS Server for name resolution.

As already set and mentioned, in GCP we should usually use DHCP inside the guest OS, even when using a reserved static IP set via gcloud or the web console.
GCP’s networking layer is responsible for assigning the internal IP address you specify (either ephemeral or a reserved static internal IP) to the VM’s network interface; the VM’s OS receives this IP address via the DHCP server run by Google Cloud’s VPC network.


Outbound internet access is also already working because we set DHCP for the WAN interface; the external public IPv4 address assigned in the GCP web console allows pfSense to use the default internet gateway of the untrusted Hub VPC, as mentioned further above.
We can see in pfSense that outbound internet access is working because it is able to determine the latest available version.

Further, we can check internet access either directly from the shell or with the diagnostic utilities in the Web UI.

By using the Web UI under Diagnostics -> Ping.

We will now first allow ICMP and web traffic for the subnet we will use to access the Web UI. So far the firewall is still disabled (via the pfctl -d command we ran previously), but when changing e.g. the MTU to make it persistent, the firewall will be automatically enabled again after saving the changes, and we may lose access depending on the source from which we access the Web UI.
By default, access from the LAN subnets is already allowed; in my case I need to add my on-prem network from which I will access the Web UI.

Next I will set the MTU for both network interfaces to 1460 (instead of the default 1500) as mentioned further above.
Select Interfaces -> WAN / LAN.


Do the same for the WAN interface.
Finally, we can check all routes configured on pfSense by running the netstat command in pfSense’s shell.
Below we can see the default gateway already set on the WAN interface to access the Internet.
netstat -rn


Configure pfSense to reply to ILB Health Checks
In the following post I showed in general how we can achieve outbound internet access for GCP VM instances.
I also showed how to set a custom route in a Spoke VPC subnet to point to a NGFW appliance running in the Hub VPC as the next hop.
We cannot directly route traffic from a spoke VPC to the internal IP of an appliance VM in a hub VPC via VPC Network Peering (or by using the Network Connectivity Center (NCC)).
To overcome this we can use an Internal Passthrough Network Load Balancer (ILB).
The Internal Passthrough Network Load Balancer (ILB) acting as a next hop is the standard and correct way to overcome this limitation and enable a centralized firewall appliance architecture in a hub and spoke topology.
The internal passthrough ILB in GCP delivers packets to the backend using the ILB frontend IP (VIP) as the destination.
pfSense initially drops or ignores this traffic because it doesn’t own that IP.
Adding the ILB VIP as an IP Alias on the LAN interface (via Interfaces → Virtual IPs) makes pfSense respond to those packets, allowing the ILB health checks to succeed.
In my case the ILB frontend IP is 172.28.10.100. In pfSense we can add the ILB frontend IP as Virtual IP under Firewall -> Virtual IPs. For the type select IP Alias.

Here we can select between different types, such as Proxy ARP.
In pfSense, an IP Alias adds an additional IP address that the firewall truly owns and responds to for ARP, ICMP, and TCP/UDP traffic.
A Proxy ARP entry, on the other hand, only makes pfSense answer ARP requests for that IP but does not actually bind or respond to packets destined to it.
Therefore, an IP Alias is required when pfSense itself must handle the traffic, as in the case of a GCP Internal Passthrough Load Balancer.

Below when running ifconfig we can see that our IP Alias (ILB frontend IP) with the IP 172.28.10.100 was added as additional IP address to the LAN interface.

In a Google Cloud Internal Passthrough Load Balancer (ILB), health checks are not terminated or proxied by the ILB itself. Instead, Google’s central health checker systems (with IP ranges 35.191.0.0/16 and 130.211.0.0/22) send probe packets directly to the backend VM.
The key detail is that these probes keep the destination IP address equal to the ILB’s frontend IP (VIP), not the backend’s actual interface IP (in my case the pfSense LAN IP 172.28.10.150).
So in pfSense (or any backend VM), the packet trace or firewall log will show entries like below.
Because the ILB operates in passthrough mode, the packet is routed directly to one of the backend instances in the same subnet, preserving both source and destination IPs.
That’s why the backend VM (pfSense) must explicitly own or handle the ILB frontend IP, otherwise, it will receive but never respond to those packets, causing the health check to fail.
So far the packets will also be dropped because they are not allowed by a firewall rule.

So we also need to allow inbound TCP 80 traffic from the IP address ranges of the ILB health checkers.
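Besides the pfSense rule, the probes must also be allowed at the VPC firewall level of the trusted Hub VPC; a gcloud sketch, again with the assumed network name:
$ gcloud compute firewall-rules create allow-ilb-health-checks \
    --network=trusted-hub-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=35.191.0.0/16,130.211.0.0/22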

Further, we need to make sure that pfSense can route reply traffic successfully back to Google’s health checker systems. Therefore we need to add static routes in pfSense using the LAN gateway to reach these ranges, for example as shown below.
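For a quick test, these routes can also be added temporarily in pfSense’s shell, using the LAN gateway 172.28.10.1 from my setup (persistent routes belong under System -> Routing -> Static Routes, see Troubleshooting below):
route add -net 35.191.0.0/16 172.28.10.1
route add -net 130.211.0.0/22 172.28.10.1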

After adding the ILB VIP as an IP Alias, adding the static routes, and allowing inbound traffic from Google’s health checker systems on pfSense, the health check in GCP should finally be successful.

For the health check I will use http.
Although pfSense’s WebGUI is configured to use HTTPS by default, it also keeps an HTTP listener active on port 80 for compatibility.
This allows GCP’s Internal Load Balancer health checks to successfully connect using a simple HTTP (TCP/80) probe, even when the main web interface is secured via HTTPS.

Configuring Outbound NAT for VPC Workloads
The default option is Automatic Outbound NAT which automatically performs NAT from internal interfaces, such as LAN, to external interfaces, such as WAN.
Switching to Hybrid Outbound NAT lets us extend NAT to additional networks, such as our spoke VPC subnets, while keeping the automatic rules for directly connected interfaces.
So, to enable successful outbound internet connections from my Spoke VPC (172.30.10.0/24), I need to add the mapping manually as highlighted below. For the Interface select the WAN interface, for the Source select the Spoke VPC address range, for the NAT Address select WAN address, and for the Destination select Any (to allow all outbound internet access). Leave the port empty to allow all ports.

In Google Cloud, outbound (egress) traffic is allowed by default at the VPC firewall level.
Therefore, pfSense’s WAN interface in the untrusted Hub VPC doesn’t need an explicit “allow egress” rule; it can freely initiate Internet connections and forward NATed traffic from the spoke networks.
To block outbound access, we can create an explicit egress deny rule with priority 0 and destination 0.0.0.0/0.
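A sketch of such an explicit deny rule via gcloud, assuming a network named untrusted-hub-vpc:
$ gcloud compute firewall-rules create deny-all-egress \
    --network=untrusted-hub-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=all \
    --destination-ranges=0.0.0.0/0 \
    --priority=0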
Allow Web UI Access from the Internet
To make the pfSense WebGUI accessible from the Internet via the external IP assigned to the WAN NIC in the untrusted VPC subnet, we must allow ingress TCP 443 on that subnet in GCP and permit HTTPS traffic on pfSense’s WAN interface to its WAN address.
This configuration enables remote administrative access directly over the Internet.
However, exposing the WebGUI publicly is highly insecure and strongly discouraged; it’s far safer to access it through a VPN, bastion host, or private management network.
Allowing ingress (inbound) HTTPS TCP 443 traffic on the untrusted Hub VPC.
For the target we can limit the scope to specific instances by using a specific network tag.
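A gcloud sketch for this ingress rule, with an assumed network name and network tag, and the source restricted to a single admin IP (203.0.113.10 as a placeholder) rather than 0.0.0.0/0:
$ gcloud compute firewall-rules create allow-pfsense-webui \
    --network=untrusted-hub-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --target-tags=pfsense \
    --source-ranges=203.0.113.10/32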

Allowing inbound HTTPS TCP 443 traffic on pfSense’s WAN interface and its WAN address.


Troubleshooting
Adding Routes temporarily
Show all routes in FreeBSD.
root: netstat -rn

Adding routes in the shell is temporary only.
route add -net <network> <gateway>
route add -net 10.0.0.0/24 172.28.10.1

We can set the routes persistently by using the Web UI within System -> Routing -> Static Routes.

Disabling the pfSense Firewall completely and temporarily
root: pfctl -d

More about pfSense you will also find here https://blog.matrixpost.net/?category=pfsense.
Links
Installing pfSense on Google Cloud Platform
https://blog.kylemanna.com/cloud/pfsense-on-google-cloud/
pfSense with multiple interfaces in GCP
https://www.youtube.com/watch?v=cMCqZ4nd6ls
pfSense – Outbound NAT
https://docs.netgate.com/pfsense/en/latest/nat/outbound.html