
This topic describes how to configure a Google Cloud Platform (GCP) load balancer for a Kubernetes cluster deployed by VMware Tanzu Kubernetes Grid Integrated Edition.

Overview

A load balancer is a third-party device that distributes network and application traffic across resources. You can use a load balancer to access a TKGI-deployed cluster from outside the network using the TKGI API and kubectl. Using a load balancer can also prevent individual network components from being overloaded by high traffic.

You can configure GCP load balancers only for TKGI clusters that are deployed on GCP.

To configure a GCP load balancer, follow the procedures below:

  1. Create a GCP Load Balancer
  2. Create a DNS Entry
  3. Create the Cluster
  4. Configure Load Balancer Back End
  5. Create a Network Tag
  6. Create Firewall Rules
  7. Access the Cluster

To reconfigure a cluster load balancer, follow the procedures in Reconfigure Load Balancer below.

Prerequisites

The procedures in this topic have the following prerequisites:

  • To complete these procedures, you must have already configured a load balancer to access the TKGI API. For more information, see Creating a GCP Load Balancer for the TKGI API.
  • The version of the TKGI CLI you are using must match the version of the Tanzu Kubernetes Grid Integrated Edition tile that you are installing.
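
A quick way to verify the match is to print the installed CLI version and compare it with the tile version shown in Ops Manager. This is a minimal check, and the exact output format of the command varies by TKGI release.

```
# Print the version of the TKGI CLI that is currently installed.
tkgi --version
```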

Configure GCP Load Balancer

Follow the procedures in this section to create and configure a load balancer for TKGI-deployed Kubernetes clusters using GCP. Modify the example commands in these procedures to match your Tanzu Kubernetes Grid Integrated Edition installation.

Create a GCP Load Balancer

To create a GCP load balancer for your TKGI clusters, do the following:

  1. Navigate to the Google Cloud Platform console.
  2. In the sidebar menu, select Network Services > Load balancing.
  3. Click Create a Load Balancer.
  4. In the TCP Load Balancing pane, click Start configuration.
  5. Click Continue. The New TCP load balancer menu opens.
  6. Give the load balancer a name. For example, my-cluster.
  7. Click Frontend configuration and configure the following settings:
    1. Click IP.
    2. Select Create IP address.
    3. Give the IP address a name. For example, my-cluster-ip.
    4. Click Reserve. GCP assigns an IP address.
    5. In the Port field, enter 8443.
    6. Click Done to complete front end configuration.
  8. Review your load balancer configuration and click Create.
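
If you prefer to script this step, the following gcloud CLI sketch creates an equivalent regional TCP load balancer: a reserved external IP address, a target pool that later holds the control plane VMs, and a forwarding rule on port 8443. The project, region, and resource names are placeholder assumptions; replace them with values from your environment.

```
# Reserve a regional external IP address for the cluster load balancer.
gcloud compute addresses create my-cluster-ip \
    --project my-gcp-project --region us-west1

# Create a target pool that will hold the cluster's control plane VMs.
gcloud compute target-pools create my-cluster \
    --project my-gcp-project --region us-west1

# Forward TCP traffic on port 8443 from the reserved address to the target pool.
gcloud compute forwarding-rules create my-cluster \
    --project my-gcp-project --region us-west1 \
    --ip-protocol TCP --ports 8443 \
    --address my-cluster-ip --target-pool my-cluster
```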

Create a DNS Entry

To create a DNS entry in GCP for your TKGI cluster, do the following:

  1. From the GCP console, navigate to Network Services > Cloud DNS.

  2. Select the DNS zone for your domain. Use the same zone that you used when you created the TKGI API DNS entry. See the Create a DNS Entry section in Creating a GCP Load Balancer for the TKGI API.

  3. Click Add record set.

  4. Under DNS Name, enter a subdomain for the load balancer. For example, if your domain is example.com, enter my-cluster in this field to use my-cluster.example.com as your TKGI cluster load balancer hostname.

  5. Under Resource Record Type, select A to create a DNS address record.

  6. Enter a value for TTL and select a TTL Unit.

  7. Enter the GCP-assigned IP address that you reserved in Create a GCP Load Balancer above.

  8. Click Create.
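
You can also add the record with the gcloud CLI. In this sketch, MY-ZONE is the Cloud DNS zone name for your domain and 203.0.113.10 stands in for the IP address that GCP assigned to your load balancer; substitute your own values.

```
# Add an A record that points the cluster hostname at the load balancer IP.
gcloud dns record-sets transaction start --zone MY-ZONE
gcloud dns record-sets transaction add 203.0.113.10 \
    --zone MY-ZONE --name my-cluster.example.com. \
    --type A --ttl 300
gcloud dns record-sets transaction execute --zone MY-ZONE
```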

Create the Cluster

To create a cluster, follow the steps in the Create a Kubernetes Cluster section of Creating Clusters. Use the TKGI cluster load balancer hostname that you configured in Create a DNS Entry above as the external hostname when you run the tkgi create-cluster command, as shown in the example below.
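
For example, assuming a plan named small and the my-cluster.example.com hostname configured above, the command looks similar to the following:

```
# Create the cluster with the load balancer hostname as its external hostname.
tkgi create-cluster my-cluster \
    --external-hostname my-cluster.example.com \
    --plan small
```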

Configure Load Balancer Back End

To configure the back end of the load balancer, do the following:

  1. Record the IDs of your control plane node VMs by doing one of the following:

    • Complete Identify Kubernetes Cluster Control Plane VMs in Creating Clusters
    • Complete the following procedure:

    1. Log in to TKGI by running the following command:

      ```
      tkgi login -a TKGI-API -u USERNAME -k
      ```
      
      Where:  
      
      * `TKGI-API` is the domain name for the TKGI API that you entered in **Ops Manager** > **Tanzu Kubernetes Grid Integrated Edition** > **TKGI API** > **API Hostname (FQDN)**. For example, `api.tkgi.example.com`.  
      * `USERNAME` is your user name.  
      

    Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

    2. Locate the control plane node IP addresses by running the following command:

      ```
      tkgi cluster CLUSTER-NAME
      ```
      Where `CLUSTER-NAME` is the unique name for your cluster.
      
      From the output of this command, record the value of **Kubernetes Master IP(s)**. This value lists the IP addresses of all control plane node VMs in the cluster.
      
    3. Navigate to the Google Cloud Platform console.

    4. From the sidebar menu, navigate to Compute Engine > VM instances.
    5. Filter the VMs using the network name you provided when you deployed Ops Manager on GCP.
    6. Record the IDs of the control plane node VMs associated with the IP addresses you recorded in the above step. These IP addresses appear in the Internal IP column.
  2. In the Google Cloud Platform console, from the sidebar menu, navigate to Network Services > Load balancing.
  3. Select the load balancer that you created for the cluster and click Edit.
  4. Click Backend configuration and configure the following settings:
    1. Select all the control plane node VMs for your cluster from the dropdown.

      Warning: If control plane VMs are recreated for any reason, such as a stemcell upgrade, you must reconfigure the load balancer to target the new control plane VMs. For more information, see the Reconfigure Load Balancer section below.

    2. Specify any other configuration options you require and click Update to complete back end configuration.

      Note: For clusters with multiple control plane node VMs, health checks on port 8443 are recommended.
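
If you created the load balancer with a target pool, as in the earlier gcloud sketch, you can attach the control plane VMs from the command line instead of the console. The region, zone, and instance names below are placeholders for the VM IDs you recorded above.

```
# Attach the cluster's control plane VMs to the load balancer's target pool.
gcloud compute target-pools add-instances my-cluster \
    --region us-west1 --instances-zone us-west1-a \
    --instances vm-1a2b3c4d,vm-5e6f7a8b
```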

Create a Network Tag

To create a network tag, do the following:

  1. In the Google Cloud Platform sidebar menu, select Compute Engine > VM instances.
  2. Filter to find the control plane instances of your cluster. Type master in the Filter VM Instances search box and press Enter.
  3. Click the name of the control plane instance. The VM instance details menu opens.
  4. Click Edit.
  5. Click in the Network tags field and type a human-readable name in lowercase letters. Press Enter to create the network tag.
  6. Scroll to the bottom of the screen and click Save.
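
The equivalent gcloud command adds the tag directly to an instance; repeat it for each control plane VM. The instance name, zone, and tag value below are examples only.

```
# Add the network tag to a control plane VM so that firewall rules can target it.
gcloud compute instances add-tags vm-1a2b3c4d \
    --zone us-west1-a --tags my-cluster-tag
```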

Create Firewall Rules

To create firewall rules, do the following:

  1. In the Google Cloud Platform sidebar menu, select VPC Network > Firewall Rules.
  2. Click Create Firewall Rule. The Create a firewall rule menu opens.
  3. Give your firewall rule a human-readable name in lowercase letters. For ease of use, you may want to align this name with the name of the load balancer you created in Create a GCP Load Balancer.
  4. In the Network menu, select the VPC network on which you have deployed the Tanzu Kubernetes Grid Integrated Edition tile.
  5. In the Direction of traffic field, select Ingress.
  6. In the Action on match field, select Allow.
  7. Confirm that the Targets menu is set to Specified target tags and enter the tag you made in Create a Network Tag in the Target tags field.
  8. In the Source filter field, choose an option to filter source traffic.
  9. Based on your choice in the Source filter field, specify IP addresses, Subnets, or Source tags to allow access to your cluster.
  10. In the Protocols and ports field, choose Specified protocols and ports and enter the port number you specified in Create a GCP Load Balancer, prepended by tcp:. For example: tcp:8443.
  11. Specify any other configuration options you require and click Done to complete the firewall rule configuration.
  12. Click Create.
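
As a scripted alternative, the following sketch creates an equivalent ingress rule. The rule name, network, tag, and source range are assumptions; replace them with the VPC network of your Tanzu Kubernetes Grid Integrated Edition deployment, the tag you created above, and the source addresses you actually want to allow. In particular, 0.0.0.0/0 permits traffic from any address, so narrow the source range if you only need access from known networks.

```
# Allow inbound TCP traffic on port 8443 to VMs that carry the network tag.
gcloud compute firewall-rules create my-cluster-fw \
    --network my-tkgi-network \
    --direction INGRESS --action ALLOW \
    --rules tcp:8443 \
    --target-tags my-cluster-tag \
    --source-ranges 0.0.0.0/0
```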

Access the Cluster

To complete cluster configuration, do the following:

  1. From your local workstation, run tkgi get-credentials CLUSTER-NAME.

    Where CLUSTER-NAME is the unique name for your cluster. For example:

    ```
    $ tkgi get-credentials tkgi-example-cluster
    Fetching credentials for cluster tkgi-example-cluster. Context set for cluster tkgi-example-cluster.
    ```

    The tkgi get-credentials command creates a local kubeconfig that enables you to manage the cluster. For more information about the tkgi get-credentials command, see Retrieving Cluster Credentials and Configuration.

    Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

  2. Run kubectl cluster-info to confirm you can access your cluster using the Kubernetes CLI.
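
    For example, successful output looks similar to the following, with the URL reflecting the load balancer hostname and port you configured:

    ```
    $ kubectl cluster-info
    Kubernetes control plane is running at https://my-cluster.example.com:8443
    ```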

See Managing Tanzu Kubernetes Grid Integrated Edition for information about checking cluster health and viewing cluster logs.

Reconfigure Load Balancer

If Kubernetes control plane node VMs are recreated for any reason, you must reconfigure your cluster load balancers to point to the new control plane VMs. For example, after a stemcell upgrade, BOSH recreates the VMs in your deployment.

To reconfigure your GCP cluster load balancer to use the new control plane VMs, do the following:

  1. Locate the VM IDs of the new control plane node VMs for the cluster. For information about locating the VM IDs, see Identify Kubernetes Cluster Control Plane VMs in Creating Clusters.
  2. Navigate to the GCP console.
  3. In the sidebar menu, select Network Services > Load balancing.
  4. Select your cluster load balancer and click Edit.
  5. Click Backend configuration.
  6. Click Select existing instances.
  7. Select the new control plane VM IDs from the dropdown. Use the VM IDs you located in the first step of this procedure.
  8. Click Update.
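
If the load balancer uses a target pool, the reconfiguration can also be scripted. The sketch below swaps the old control plane VM IDs for the new ones; the pool name, region, zone, and instance names are placeholders.

```
# Remove the old control plane VMs from the load balancer's target pool...
gcloud compute target-pools remove-instances my-cluster \
    --region us-west1 --instances-zone us-west1-a \
    --instances old-vm-1a2b3c4d

# ...then add the recreated control plane VMs.
gcloud compute target-pools add-instances my-cluster \
    --region us-west1 --instances-zone us-west1-a \
    --instances new-vm-5e6f7a8b
```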