Google Cloud NAT is just 3 gcloud commands and a shell script
The original requirement: No public IP address
When you instantiate a virtual machine on Google Cloud, you have the option of giving it a public IP address in addition to its private IP address. It seems perfectly reasonable to not give the VM a public IP if it is never expected to be accessed from the open internet.
Eliminating that attack surface is A Good Thing (TM).
No IP address = No inbound traffic
Interestingly, any VM that has not been assigned a public IP address will also not be able to access the outside Internet.
This particular feature threw me off the first time I encountered it, but it makes sense if you think about the networking: Without a NAT-ing server, there’s no route to the outside.
This is not specific to GCP: Even AWS has the same behavior.
Logical solution: NAT server
It seems pretty obvious that the solution is to create a NAT server and set up routing to redirect all outgoing packets through that server.
In the past, Google had documentation explaining how to set up a virtual machine NAT gateway and documentation to set up the routing rules for any VM that did not have a public IP address.
Google had even provided the documentation to a very elegant solution: You could attach user-defined tags to any VM and set up a route that would apply to any VM that had that tag.
This would replace the need for running any route commands on any VM. Instead, just give the VM a tag when creating it, and GCP would take care of the rest.
Where’s the documentation?
We had set up a NAT gateway instance back in 2019 by referring to Google Cloud documentation. It definitely worked, because we have a working NAT gateway instance in our own GCP project.
Now in 2021 when looking for the same documentation to help a customer set this up for themselves, I got … “Cloud NAT”.
Results like these are not uncommon. My first assumption was that Cloud NAT is probably what Google created to package NAT servers into a user-friendly product.
As I’ve come to expect out of Google, the documentation for Cloud NAT is wonderful: Overview, design, architecture, examples, the works!
I should start using this immediately… but as I am prone to do with any cloud product, let’s check the pricing first.
What’s the price?
To simplify, the pricing page states:
total cost for running the gateway = hourly cost for the NAT gateway + cost per GB of data that is processed by the gateway + egress costs for any traffic leaving the network
In comparison, the old “NAT gateway VM” way would have been:
total cost of running the gateway = hourly cost of VM + egress costs for any traffic leaving the network
Let’s compare prices
The NAT instance that we have used since the beginning is an f1-micro instance that costs $0.0076 per hour in the us-west1 region. We have used it for routing external traffic for up to 300 VMs. To be fair, most of them are not downloading or uploading much.
To get the same out of Cloud NAT, with an average of 20 VMs each moving an average of 1 MB of data per hour, that would mean:

Hourly cost of NAT gateway = $0.0014 * 20 = $0.028
Cost per GB of data processed = $0.045 * (20 / 1024) = $0.00087890625
Total = $0.02887890625 per hour
I’m going to keep egress costs out of the equation, because we can consider them to be equal in both cases.
This makes the NAT gateway instance at least $0.02887890625 / $0.0076 ≈ 3.8 times cheaper than Cloud NAT.
A clear winner: NAT gateway instance
In fact, is it ever more cost effective to use Cloud NAT? If we consider the cost per GB processed as negligible, then Cloud NAT is worth it only when there are 5 or fewer VMs using it, since its per-VM charge overtakes the flat f1-micro price at $0.0076 / $0.0014 ≈ 5.4 VMs:
A simple NAT gateway instance is the clear winner by a wide margin: It becomes cheaper from the 6th VM without a public IP address onwards, and roughly 4 times cheaper by the 20th.
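The break-even point can be checked with a quick back-of-the-envelope calculation, using only the hourly prices quoted above and ignoring the per-GB data cost:

```shell
# Cloud NAT charges $0.0014/hour per VM; the f1-micro gateway costs a
# flat $0.0076/hour. Cloud NAT is cheaper only while
# vm_count * 0.0014 < 0.0076.
awk 'BEGIN {
  cloud_nat_per_vm = 0.0014
  f1_micro_hourly  = 0.0076
  printf "break-even: %.1f VMs\n", f1_micro_hourly / cloud_nat_per_vm
}'
# prints: break-even: 5.4 VMs
```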
Let’s just set up and use a NAT gateway instance.
So where’s the documentation for that NAT gateway instance?
NAT Gateway instance: The Hard Way
Luckily, we kept decent documentation on how we set up our own NAT Gateway.
If Google doesn’t provide, at least we can. Here’s how:
Step 1: Create a barebones Linux VM capable of routing
This doesn’t need to be anything fancy. A plain Debian Buster f1-micro instance with a 10 GB boot disk works just fine. Just make sure that you assign an ephemeral public IP address to it and note down the name you give the VM. For simplicity, I’ve named it nat-gateway.
All of this can be done with a single gcloud command:

gcloud beta compute instances create nat-gateway \
    --machine-type=f1-micro \
    --network default \
    --can-ip-forward \
    --zone us-west1-b \
    --image=debian-10-buster-v20210701 \
    --image-project=debian-cloud \
    --tags nat
Make sure to add your SSH keys to the VM after creation.
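It is worth double-checking that IP forwarding was actually enabled at creation time, because the canIpForward property of an instance cannot be changed afterwards. A quick way to inspect it:

```shell
# Should print "True". If it prints "False", the VM cannot route
# traffic for others and must be recreated with --can-ip-forward.
gcloud compute instances describe nat-gateway \
    --zone us-west1-b \
    --format="value(canIpForward)"
```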
Step 2: Set up the routes and IP forwarding in the VM
Boot up the nat-gateway and ssh into it. Create the file /etc/rc.local if it doesn’t exist and add the following code to it:
#!/bin/bash
set -x

# Turn on IP forwarding
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

# Turn on the route
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
This script is executed every time the machine boots up. Make sure the file is executable (chmod +x /etc/rc.local), then reboot the nat-gateway to enable its functionality.
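Once the gateway is back up, you can verify both settings from inside the VM over ssh:

```shell
# Prints 1 when the kernel will forward packets between interfaces
cat /proc/sys/net/ipv4/ip_forward

# The MASQUERADE rule should appear in the POSTROUTING chain
sudo iptables -t nat -L POSTROUTING -n
```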
Step 3: Set up the route in the GCP routing table
gcloud compute routes create no-ip-internet-route \
    --network default \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-west1-b \
    --tags no-ip \
    --priority 800
You can also do this from the UI.
Step 4: Set up tags for every VM that needs routing
gcloud compute instances add-tags <existing-instance> --tags no-ip
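New VMs can be private from birth: pass the tag together with --no-address at creation time, and they come up already routed through the gateway. The instance name here is just an example:

```shell
# Create a VM with no external IP that routes through the NAT gateway
gcloud compute instances create private-vm-1 \
    --machine-type=f1-micro \
    --network default \
    --zone us-west1-b \
    --no-address \
    --tags no-ip
```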
Step 5: There is no step 5
That’s pretty much it actually. It isn’t all that hard.
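A quick way to confirm the whole chain works is to get onto one of the tagged VMs (via a bastion or IAP tunnel, since it has no public IP) and fetch its apparent external address. This assumes curl is installed and relies on the third-party service ifconfig.me:

```shell
# Run from a VM tagged no-ip: the reported address should be the
# nat-gateway's ephemeral external IP, proving traffic is masqueraded.
curl -s ifconfig.me
```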
Google could have at least kept the documentation for creating and using a NAT gateway instance.
Cloud NAT has its place in the world I guess, but a script and 3 gcloud commands can replace it with a cheaper alternative.
NAT gateway VM: The lazy way
If you don’t like mucking about with IP tables and init scripts, send me a message at firstname.lastname@example.org and I’ll share a VM instance template that you can instantiate in your own Google Cloud project.
It’ll cost you a dollar in monopoly money.
(For legal reasons, that’s a joke.)