Prerequisites

The following prerequisites must be met in order to follow along with Consuming Cloud SQL on Tanzu Application Platform (TAP) with Config Connector.

The gcloud CLI

You need to have the gcloud CLI installed and authenticated.
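If the CLI is not yet authenticated, the standard way to log in and point it at the project used throughout this guide (the placeholder matches the GCP_PROJECT variable defined below) is:

gcloud auth login
gcloud config set project '<GCP project ID>'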

A Kubernetes cluster

In this example we use a standard GKE cluster with the Config Connector add-on pre-installed.

It is recommended to install the latest stable version of the Config Connector operator (1.71.0 is known to work with this specific use case).

# The GCP project ID to deploy the cluster and Cloud SQL resources into
GCP_PROJECT='<GCP project ID>'

# Labels to attach to the created resources
LABELS='<label1=value1,label2=value2,...>'

# The name of the GKE cluster to create
CLUSTER_NAME='<GKE cluster name>'

# The Google Cloud Service Account to be used by the Config Connector
SA_NAME="${CLUSTER_NAME}-sa"

# The cluster's node count
# We suggest starting with 6 nodes to host all the TAP components and to
# ensure the (automatically provisioned and managed) control plane is also
# scaled accordingly.
NODE_COUNT=6

# The namespace you want to deploy the Config Connector / service instance
# objects into
SI_NAMESPACE="service-instances"

# In this example we deploy a zonal cluster, thus you need to provide the
# zone you want your cluster to land in
ZONE='europe-west6-b'

# For Cloud NAT we need to provide the region we want to deploy the router
# to; this needs to be the region the zonal cluster resides in
REGION='europe-west6'

# Will be used for the name of the Cloud NAT router and the NAT config we
# deploy on it
NAT_NAME="${REGION}-nat"

gcloud container --project "${GCP_PROJECT}" \
    clusters create "${CLUSTER_NAME}" \
    --zone "${ZONE}" \
    --release-channel "regular" \
    --machine-type "e2-standard-4" \
    --disk-type "pd-standard" \
    --disk-size "70" \
    --metadata disable-legacy-endpoints=true \
    --num-nodes "${NODE_COUNT}" \
    --node-labels "${LABELS}" \
    --logging=SYSTEM \
    --monitoring=SYSTEM \
    --enable-ip-alias \
    --enable-network-policy \
    --addons ConfigConnector,HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver \
    --workload-pool="${GCP_PROJECT}.svc.id.goog" \
    --labels "${LABELS}"
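Before running the kubectl commands further down, make sure your kubeconfig points at the new cluster; if it does not yet, fetching credentials is a single gcloud call:

gcloud container clusters get-credentials "${CLUSTER_NAME}" \
    --zone "${ZONE}" \
    --project "${GCP_PROJECT}"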

gcloud iam service-accounts create \
    "${SA_NAME}" \
    --description "${LABELS}"

gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
    --member="serviceAccount:${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com" \
    --role="roles/editor"

gcloud iam service-accounts add-iam-policy-binding \
    "${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com" \
    --member="serviceAccount:${GCP_PROJECT}.svc.id.goog[cnrm-system/cnrm-controller-manager]" \
    --role="roles/iam.workloadIdentityUser"

Configure a stable egress IP

By default, egress traffic from pods gets its source IP translated to the node’s public IP (SNAT) on the way out. Thus, when we need to configure allowed ingress networks for a Cloud SQL instance, we would have to add each node of the cluster. Every time the cluster scales or nodes get repaved, their public IPs change and we would have to keep the list of authorized networks up to date.

To make this easier we will:

- turn off SNAT on the nodes, so egress traffic is not translated to the node’s public IP
- deploy a Cloud NAT service, which then handles the source IP translation and gives us a stable egress IP

Configure the ip-masq-agent

Each GKE cluster comes with an ip-masq-agent DaemonSet in the kube-system namespace. By deploying a configuration for it and restarting the DaemonSet, we can turn off SNAT for egress traffic.

cat <<'EOF' | kubectl -n kube-system create cm ip-masq-agent --from-file=config=/dev/stdin
nonMasqueradeCIDRs:
- 0.0.0.0/0
EOF

kubectl -n kube-system rollout restart daemonset ip-masq-agent
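To confirm the restart has gone through before moving on, you can wait for the DaemonSet rollout to finish:

kubectl -n kube-system rollout status daemonset ip-masq-agent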

With this config none of the outbound traffic is translated to the node’s public IP.

Note: You can also set specific destination network CIDRs in nonMasqueradeCIDRs for which SNAT on the nodes should be turned off. In that case, any traffic’s source IP will still be translated to the node’s public IP, unless the destination is explicitly listed there.
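As an illustration, if you had deployed the following configuration instead of the one above (using an RFC 5737 documentation range as a stand-in for a real destination network), only traffic destined for 203.0.113.0/24 would bypass SNAT, while all other egress traffic would still be translated to the node’s public IP:

cat <<'EOF' | kubectl -n kube-system create cm ip-masq-agent --from-file=config=/dev/stdin
nonMasqueradeCIDRs:
- 203.0.113.0/24
EOF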

Set up a Cloud NAT service

After we’ve turned off SNAT on the nodes, we will employ a Cloud NAT service.

Conceptually this does the same thing as SNAT on the nodes. The difference is that the source IP is not translated to a node’s public IP address, but to a reserved IP address that is used exclusively by the Cloud NAT router. This IP is therefore stable for as long as the Cloud NAT router exists, and all traffic originating from any pod, regardless of which node it resides on, gets its source IP translated to that stable IP.

gcloud compute routers create "${NAT_NAME}-router" --region "${REGION}" --network default
gcloud compute routers nats create "${NAT_NAME}-config" \
    --router-region "${REGION}" \
    --router "${NAT_NAME}-router" \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
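With --auto-allocate-nat-external-ips, Google picks the external address for you. If you would rather control the address yourself (for example, to hand it to a network team up front), you could reserve a static address and attach it to the NAT config instead; the address name my-nat-ip below is just an illustrative example, not something used elsewhere in this guide:

gcloud compute addresses create my-nat-ip --region "${REGION}"
gcloud compute routers nats create "${NAT_NAME}-config" \
    --router-region "${REGION}" \
    --router "${NAT_NAME}-router" \
    --nat-external-ip-pool my-nat-ip \
    --nat-all-subnet-ip-ranges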

A Tanzu Application Platform installation on the cluster (v1.2.0+)

Tanzu Application Platform (v1.2.0 or newer) and Cluster Essentials (v1.2.0 or newer) have to be installed on the Kubernetes cluster.

Note: To check if you have an appropriate version, please run the following:

kubectl api-resources | grep secrettemplate

This command should return the SecretTemplate API. If it does not, ensure Cluster Essentials for VMware Tanzu (v1.2.0 or newer) is installed.
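If you also want to verify the TAP version itself, and assuming TAP was installed as a PackageInstall named tap in the tap-install namespace (the default used in the TAP installation documentation), listing that PackageInstall should report the installed package version:

kubectl -n tap-install get packageinstall tap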

Configure the Config Connector

cat <<EOF | kubectl apply -f -
apiVersion: core.cnrm.cloud.google.com/v1beta1
kind: ConfigConnector
metadata:
  name: configconnector.core.cnrm.cloud.google.com
spec:
  mode: cluster
  googleServiceAccount: "${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"
EOF

kubectl create namespace "${SI_NAMESPACE}"

kubectl annotate namespace "${SI_NAMESPACE}" "cnrm.cloud.google.com/project-id=${GCP_PROJECT}"

kubectl wait -n cnrm-system --for=condition=Ready pod --all

gcloud services enable serviceusage.googleapis.com
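Depending on what is already enabled in your project, you may also need the Cloud SQL Admin API so that Config Connector can manage Cloud SQL resources later on; enabling it follows the same pattern:

gcloud services enable sqladmin.googleapis.com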

Get the NAT IP(s) for egress from the cluster

gcloud compute routers get-status "${NAT_NAME}-router" --region "${REGION}" --format=json \
  | jq -r '.result.natStatus[].autoAllocatedNatIps[]'

These IPs will later be used to allow access to the Cloud SQL instance from the cluster.
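If you prefer to keep the address at hand for the later steps, you can capture it in a shell variable; NAT_IP here is just an illustrative name and not referenced elsewhere in this guide:

NAT_IP="$(gcloud compute routers get-status "${NAT_NAME}-router" --region "${REGION}" --format=json \
  | jq -r '.result.natStatus[].autoAllocatedNatIps[]')"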
