
In this series of blog posts we focus on some of the best practices we use within Merapar to evolve the DevOps practices we have built around the GCP platform. Why? Because we think it’s fun to share knowledge and to learn from others in the industry.


In some projects we still prefer the “old school” database: PostgreSQL still has its role to play, especially in the modern cloud world, where cloud providers like GCP make the management and configuration of these instances very simple.

We tend to follow security best practices that prescribe no manual access to resources. Every time we start a new project the rule is: no developer gets access to the database. When you need something done, use your database migration scripts.

This is a very good starting point. In the real world, however, we understand that you sometimes do need access to the database: to revert a failed migration script, to investigate unexpected database performance issues, or to verify data integrity. We have all been there.

So, how to get secure, auditable access to our Cloud SQL database?


Our solution is a secure connection to the database. Traditionally you would do this with a VPN connection, but that is costly from both an infrastructure and a maintenance perspective.

The main components in this solution are:

  • Identity Aware Proxy to connect securely to a compute instance using your IAM identity
  • Compute Instance which acts as a socks5 proxy / bastion (https://en.wikipedia.org/wiki/SOCKS)
  • Cloud SQL instance with IAM authentication enabled
  • Developer laptop with GCP tools (gcloud cli) installed, SSH and Cloud SQL Auth Proxy

This is depicted in the diagram below:

As described in a previous blog, “How to secure your ssh connections using an Identity Aware Proxy on GCP”, we have configured a bastion host with an Identity-Aware Proxy in front of it.

By using the following command we can setup an SSH connection to the bastion instance:
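The original snippet appears to have been lost in publishing; a minimal sketch, assuming a bastion instance named `bastion` in a hypothetical zone (substitute your own project, zone, and instance name):

```shell
# Open a plain SSH session to the bastion instance, tunnelled
# through the Identity-Aware Proxy instead of a public IP.
gcloud compute ssh bastion \
  --zone=europe-west1-b \
  --tunnel-through-iap
```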

Now, in this case we do not want a normal SSH session; we want a SOCKS proxy. To do this we use the following command:
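This snippet was also missing; a reconstruction under the same assumptions as above (instance name and zone are placeholders), matching the flags explained in the list that follows:

```shell
# Same IAP-tunnelled SSH connection, but instead of an interactive
# shell we ask the ssh client to open a local SOCKS5 proxy on port 8123.
gcloud compute ssh bastion \
  --zone=europe-west1-b \
  --tunnel-through-iap \
  -- -N -D 8123 -f
```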

This does:

  1. “--” tells the gcloud CLI to pass the remaining arguments on to the ssh client.
  2. “-N” tells ssh not to execute a remote command; we simply want to forward a port.
  3. “-D” binds a local port to act as the SOCKS proxy; in this example we use port 8123.
  4. (optionally) “-f” tells the ssh client to go to the background.

Any traffic that is now sent to local port 8123 is forwarded to our remote instance. (Note that, depending on your authentication, the SSH connection will time out after 1 hour of inactivity.)

For the next step we will need the Cloud SQL Auth Proxy; use the following commands to download it (for Mac) and to make sure it’s executable.
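The download commands are missing from this copy of the post; a sketch using the documented v1 download location for Intel Macs (pick the binary matching your platform):

```shell
# Download the v1 Cloud SQL Auth Proxy binary for macOS (amd64)
# and make it executable.
curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.amd64
chmod +x cloud_sql_proxy
```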

Now we can start gluing things together: we instruct cloud_sql_proxy to reach the bastion over the socks5 connection we set up.
While doing so we provide the database instance to connect to and which token to use (or alternatively which service account file to use).
Before starting the Cloud SQL Auth Proxy, we request an access token, as GCP might not be reachable through the socks5 proxy.
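The snippet for this step is missing; a sketch, assuming the v1 cloud_sql_proxy binary downloaded above, a hypothetical connection name `my-project:europe-west1:my-instance`, and the `ALL_PROXY` environment variable to route the proxy’s outbound traffic through the SOCKS tunnel:

```shell
# Fetch an access token first, before traffic goes through the tunnel.
TOKEN=$(gcloud auth print-access-token)

# Send the proxy's outbound traffic through the SOCKS5 tunnel on port 8123,
# listen locally on port 8124, and authenticate with our IAM identity.
ALL_PROXY=socks5://localhost:8123 ./cloud_sql_proxy \
  -instances=my-project:europe-west1:my-instance=tcp:8124 \
  -token="$TOKEN" \
  -enable_iam_login
```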

Note that the access token is only valid for an hour. It is good practice to stop the connection once you are done anyway, but after an hour you won’t be authenticated any more.

After this command, we can use our normal database clients to connect to our database using localhost and port 8124. We need to provide the database name and our IAM user account. This will only succeed if the user was added to the database; see https://cloud.google.com/sql/docs/postgres/authentication

For example:
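The example was lost in this copy; a sketch using psql, with a hypothetical database name and IAM user:

```shell
# Connect with psql through the local Cloud SQL Auth Proxy listener.
# "mydb" and the user are placeholders; with -enable_iam_login the
# proxy handles authentication, so no password prompt should appear.
psql "host=127.0.0.1 port=8124 dbname=mydb user=developer@example.com"
```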

Keep a look out for our next knowledge share blog - How to bootstrap your GCP organisation using terraform!