The following prerequisites must be met in order to follow along with Consuming Cloud SQL on Tanzu Application Platform (TAP) with Config Connector.

The gcloud CLI

You need to have the gcloud CLI installed and authenticated.

A Kubernetes cluster

In this example we use a standard GKE cluster with the Config Connector add-on pre-installed.

It is recommended to install the latest stable version of the Operator (1.71.0 is known to work with this specific use case).

GCP_PROJECT='<GCP project ID>'
CLUSTER_NAME='<GKE cluster name>'

# The Google Cloud Service Account to be used by the Config Connector
SA_NAME='<GCP service account name>'

# The cluster's node count
# We suggest starting at 6 nodes to host all the TAP components and to ensure
# the (automatically provisioned and managed) control plane is also scaled
# accordingly.
NODE_COUNT='6'

# Labels to apply to the cluster and its nodes
LABELS='<key=value[,key=value,...]>'

# The namespace you want to deploy the Config Connector / service instance
# objects into
SI_NAMESPACE='<namespace>'

# In this example we deploy a zonal cluster, thus you need to provide the
# zone you want your cluster to land in
ZONE='<zone>'

# For Cloud NAT we need to provide the region we want to deploy the router
# to; this needs to be the region the zonal cluster resides in
REGION='<region>'

# Will be used for the name of the Cloud NAT router and the NAT config we
# deploy on it
NAT_NAME='<NAT name>'

gcloud container --project "${GCP_PROJECT}" \
    clusters create "${CLUSTER_NAME}" \
    --zone "${ZONE}" \
    --release-channel "regular" \
    --machine-type "e2-standard-4" \
    --disk-type "pd-standard" \
    --disk-size "70" \
    --metadata disable-legacy-endpoints=true \
    --num-nodes "${NODE_COUNT}" \
    --node-labels "${LABELS}" \
    --logging=SYSTEM \
    --monitoring=SYSTEM \
    --enable-ip-alias \
    --enable-network-policy \
    --addons ConfigConnector,HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
    --workload-pool="${GCP_PROJECT}.svc.id.goog" \
    --labels "${LABELS}"

gcloud iam service-accounts create \
    "${SA_NAME}" \
    --description "${LABELS}"

gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
    --member="serviceAccount:${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com" \
    --role="roles/editor"

gcloud iam service-accounts add-iam-policy-binding \
    "${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com" \
    --member="serviceAccount:${GCP_PROJECT}.svc.id.goog[cnrm-system/cnrm-controller-manager]" \
    --role="roles/iam.workloadIdentityUser"

Configure a stable egress IP

By default, egress traffic from pods gets its source IP translated to the node’s public IP (SNAT) on the way out. Thus, when we need to configure allowed ingress networks for a Cloud SQL instance, we’d need to add each node of the cluster. Every time the cluster scales or nodes get repaved, their public IPs would change and we would need to keep the list of authorized networks up to date.

To make this easier we will:

- turn off SNAT on the nodes, so egress traffic is not translated to the node’s public IP
- deploy a Cloud NAT service, which then handles the source IP translation and gives us a stable egress IP

Configure the ip-masq-agent

Each cluster comes with a DaemonSet ip-masq-agent in the kube-system namespace. By deploying a configuration for this service and restarting the DaemonSet, we can turn off SNAT for egress traffic.

cat <<'EOF' | kubectl -n kube-system create cm ip-masq-agent --from-file=config=/dev/stdin
nonMasqueradeCIDRs:
  - 0.0.0.0/0
resyncInterval: 60s
EOF

kubectl -n kube-system rollout restart daemonset ip-masq-agent

With this config none of the outbound traffic is translated to the node’s public IP.

Note: You can also set specific destination network CIDRs in nonMasqueradeCIDRs for which the SNAT on the nodes should be turned off. In that case, any traffic’s source IP will still be translated to the node’s public IP, except if the destination is explicitly configured in that list.
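As a sketch of that variant: the destination range 203.0.113.0/24 below is a hypothetical example, not a value from this guide. With this config, only traffic to that range leaves the node untranslated; everything else keeps the default node SNAT.

```shell
# Hypothetical example: disable node SNAT only for traffic destined to
# 203.0.113.0/24; all other egress is still translated to the node's IP.
cat <<'EOF' | kubectl -n kube-system create cm ip-masq-agent --from-file=config=/dev/stdin
nonMasqueradeCIDRs:
  - 203.0.113.0/24
resyncInterval: 60s
EOF

# Restart the DaemonSet so it picks up the new config
kubectl -n kube-system rollout restart daemonset ip-masq-agent
```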

Set up a Cloud NAT service

After we’ve turned off SNAT on the nodes, we will employ a Cloud NAT service.

Conceptually this does the same thing as the SNAT on the nodes. The difference is that we don’t translate to a node’s public IP address, but rather to a reserved IP address that is explicitly used by the Cloud NAT router. This IP is therefore stable for as long as the Cloud NAT router exists, and all traffic originating from any pod, regardless of which node it resides on, will get its source IP translated to that stable IP.

gcloud compute routers create "${NAT_NAME}-router" --region "${REGION}" --network default
gcloud compute routers nats create "${NAT_NAME}-config" \
    --router-region "${REGION}" \
    --router "${NAT_NAME}-router" \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
A Tanzu Application Platform installation on the cluster (v1.2.0+).

Tanzu Application Platform (v1.2.0 or newer) and Cluster Essentials (v1.2.0 or newer) have to be installed on the Kubernetes cluster.

Note: To check if you have an appropriate version, please run the following:

kubectl api-resources | grep secrettemplate

This command should return the SecretTemplate API. If it does not, ensure Cluster Essentials for VMware Tanzu (v1.2.0 or newer) is installed.

Configure the Config Connector

cat <<EOF | kubectl apply -f -
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: "${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"
EOF

kubectl create namespace "${SI_NAMESPACE}"

kubectl annotate namespace "${SI_NAMESPACE}" "cnrm.cloud.google.com/project-id=${GCP_PROJECT}"

kubectl wait -n cnrm-system --for=condition=Ready pod --all

# Enable the Cloud SQL Admin API so Config Connector can manage SQL instances
gcloud services enable sqladmin.googleapis.com

Get the NAT IP(s) for egress from the cluster

gcloud compute routers get-status "${NAT_NAME}-router" --region "${REGION}" --format=json \
  | jq -r '.result.natStatus[].autoAllocatedNatIps[]'

These IP(s) will later be used to allow access to the Cloud SQL instance from the cluster.
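To sketch where this IP comes into play, a Config Connector SQLInstance can reference it in its authorized networks. The instance name, database version, and tier below are illustrative assumptions, not values from this guide:

```shell
# One of the NAT IPs printed by the get-status command above
NAT_IP='<NAT IP>'

cat <<EOF | kubectl -n "${SI_NAMESPACE}" apply -f -
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: tap-demo-instance   # hypothetical name
spec:
  databaseVersion: POSTGRES_13   # illustrative choice
  region: ${REGION}
  settings:
    tier: db-g1-small            # illustrative choice
    ipConfiguration:
      authorizedNetworks:
        # Allow connections from the cluster's stable egress IP
        - name: gke-egress
          value: ${NAT_IP}/32
EOF
```

Because all pod egress is translated to the stable NAT IP, this single authorized-network entry covers every node in the cluster.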
