This topic describes how to configure, deploy, and expose basic workloads in VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).



Overview

A load balancer is a third-party device that distributes network and application traffic across resources. Using a load balancer can prevent individual network components from being overloaded by high traffic.

Note: The procedures in this topic create a dedicated load balancer for each workload. If your cluster has many apps, a load balancer dedicated to each workload can be an inefficient use of resources. An ingress controller pattern is better suited for clusters with many workloads.

To create a dedicated load balancer for a workload:

  1. Review the Prerequisites.
  2. Deploy a workload.
  3. Expose the workload.

The sections below describe how to deploy and expose workloads in each supported environment.

Note: This topic references standard Kubernetes primitives. If you are unfamiliar with Kubernetes primitives, review the Kubernetes Workloads and Services, Load Balancing, and Networking documentation before following the procedures below.



Prerequisites

The prerequisites for using a load balancer with TKGI vary depending on your environment:


vSphere without NSX Prerequisites

If you use vSphere without NSX, you can configure your own external load balancer or expose static ports to access your workload without a load balancer. See Deploy Workloads for a Generic External Load Balancer or Deploy Workloads without a Load Balancer below.


GCP, AWS, Azure, and vSphere with NSX Prerequisites

If you use Google Cloud Platform (GCP), Amazon Web Services (AWS), Azure, or vSphere with NSX integration, your cloud provider can configure a public-cloud external load balancer for your workload. See either Deploy Workloads on vSphere with NSX or Deploy Workloads on GCP, AWS, or Azure, Using a Public-Cloud External Load Balancer below.


AWS Prerequisites

If you use AWS, you can also expose your workload using a public-cloud internal load balancer.

Perform the following steps before you create a load balancer:

  1. In the AWS Management Console, create or locate a public subnet for each availability zone (AZ) that you are deploying to. A public subnet has a route table that directs internet-bound traffic to the internet gateway.

  2. On the command line, run tkgi cluster CLUSTER-NAME, where CLUSTER-NAME is the name of your cluster.

  3. Record the unique identifier (UUID) for the cluster from the command output.

  4. In the AWS Management Console, tag each public subnet based on the table below, replacing CLUSTER-UUID with the unique identifier of the cluster. Leave the Value field empty.

    Key                                                    Value
    kubernetes.io/cluster/service-instance_CLUSTER-UUID    (empty)

    Note: AWS limits the number of tags on a subnet to 100.
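
If you prefer to tag from the command line instead of the AWS Management Console, a sketch using the AWS CLI might look like the following. The subnet ID is a hypothetical placeholder, and CLUSTER-UUID is the identifier recorded above:

    aws ec2 create-tags \
        --resources subnet-0123456789abcdef0 \
        --tags Key=kubernetes.io/cluster/service-instance_CLUSTER-UUID,Value=

The trailing Value= sets an empty tag value, matching the table above.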

After completing these steps, follow the steps below in Deploy AWS Workloads Using an Internal Load Balancer.



Deploy Workloads on vSphere with NSX

If you use vSphere with NSX, follow the steps below to deploy and expose basic workloads using the NSX load balancer:


Configure Your Workload

To expose a static port on your workload, perform the following steps:

  1. Open the Kubernetes service configuration file for your workload in a text editor.

  2. To expose the workload through a load balancer, confirm that the Service object is configured to be type: LoadBalancer.

  3. To deactivate load balancer SNAT mode, add the following annotation to the service's metadata section of the manifest:

    annotations:
      ncp/transparent-lb: "True"
    

    For example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginx
      name: nginx
      annotations:
        ncp/transparent-lb: "True"
    spec:
      ports:
        - port: 80
      selector:
        app: nginx
      type: LoadBalancer
    ---
    

    Note: You can deactivate load balancer SNAT only for Layer 4 cluster load balancers with auto-scaling deactivated in a single-tier Policy API topology. To deactivate auto-scaling, see cni_configurations Extensions Parameters in Creating and Managing Network Profiles (NSX Only).

  4. Confirm that the type property of the Kubernetes service for each workload is similarly configured.

Note: For an example of a fully configured Kubernetes service, see the type: LoadBalancer configuration for the nginx app example in the kubo-ci repository in GitHub.

For more information about configuring the LoadBalancer Service type, see Type LoadBalancer in the Service section of the Kubernetes documentation.
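
The Service above selects pods labeled app: nginx, but it does not create them. As a minimal sketch of what supplies those pods, a Deployment such as the following would run three matching replicas. This manifest is illustrative, not the kubo-ci example itself, and the image tag is an assumption:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.25
              ports:
                - containerPort: 80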


Deploy and Expose Your Workload

To deploy and expose your workload:

  1. To deploy the service configuration for your workload:

    kubectl apply -f SERVICE-CONFIG
    

    Where SERVICE-CONFIG is your workload’s Kubernetes service configuration.
    For example:

    $ kubectl apply -f nginx.yml
    

    In the nginx example, this manifest also defines a deployment, creating three pod replicas that span three worker nodes.

  2. Deploy your applications, deployments, ConfigMaps, persistent volumes, secrets, and any other configurations or objects necessary for your applications to run.

  3. Wait until your cloud provider has created and connected a dedicated load balancer to the worker nodes on a specific port.


Access Your Workload

To access a workload:

  1. To determine the load balancer IP address and port number of your exposed workload, run the following command:

    kubectl get svc SERVICE-NAME
    

    Where SERVICE-NAME is the specified service name of your workload configuration.
    For example:

    $ kubectl get svc nginx
    
  2. Retrieve the external IP address and port of the load balancer from the returned listing (see the sample output after these steps).

  3. To access the app, run the following command:

    curl http://EXTERNAL-IP:PORT
    

    Where:

    • EXTERNAL-IP is the IP address of the load balancer.
    • PORT is the port number.

    Note: Run this command on a server with network connectivity and visibility to the external IP address of the load balancer.
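
For reference, the listing might resemble the following. All values, including the 10.40.14.69 address, are illustrative and depend on your environment:

    NAME    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    nginx   LoadBalancer   10.100.200.99   10.40.14.69   80:30916/TCP   2m

Here the app is reachable at http://10.40.14.69:80.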



Deploy Workloads on GCP, AWS, or Azure, Using a Public-Cloud External Load Balancer

If you use GCP, AWS, or Azure, follow the steps below to deploy and expose basic workloads using a load balancer configured by your cloud provider:


Configure Your Workload

To expose a static port on your workload, perform the following steps:

  1. Open the Kubernetes service configuration file for your workload in a text editor.

  2. To expose the workload through a load balancer, confirm that the Service object is configured to be type: LoadBalancer.

    For example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginx
      name: nginx
    spec:
      ports:
        - port: 80
      selector:
        app: nginx
      type: LoadBalancer
    ---
    
  3. Confirm that the type property of the Kubernetes service for each workload is similarly configured.

Note: For an example of a fully configured Kubernetes service, see the type: LoadBalancer configuration for the nginx app example in the kubo-ci repository in GitHub.

For more information about configuring the LoadBalancer Service type, see Type LoadBalancer in the Service section of the Kubernetes documentation.


Deploy and Expose Your Workload

To deploy and expose your workload:

  1. To deploy the service configuration for your workload:

    kubectl apply -f SERVICE-CONFIG
    

    Where SERVICE-CONFIG is your workload’s Kubernetes service configuration.
    For example:

    $ kubectl apply -f nginx.yml
    

    In the nginx example, this manifest also defines a deployment, creating three pod replicas that span three worker nodes.

  2. Deploy your applications, deployments, ConfigMaps, persistent volumes, secrets, and any other configurations or objects necessary for your applications to run.

  3. Wait until your cloud provider has created and connected a dedicated load balancer to the worker nodes on a specific port.
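
While the load balancer is being provisioned, the EXTERNAL-IP column of kubectl get svc shows <pending>. One way to wait, assuming the example nginx service name, is to watch the service until an external IP or hostname appears:

    kubectl get svc nginx --watch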


Access Your Workload

To access a workload:

  1. To determine the load balancer IP address and port number of your exposed workload, run the following command:

    kubectl get svc SERVICE-NAME
    

    Where SERVICE-NAME is the specified service name of your workload configuration.
    For example:

    $ kubectl get svc nginx
    
  2. Retrieve the external IP address and port of the load balancer from the returned listing.

  3. To access the app, run the following command:

    curl http://EXTERNAL-IP:PORT
    

    Where:

    • EXTERNAL-IP is the IP address of the load balancer.
    • PORT is the port number.

    Note: Run this command on a server with network connectivity and visibility to the external IP address of the load balancer.
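
For example, if the listing reports an external IP of 203.0.113.10 (an illustrative address) on port 80:

    $ curl http://203.0.113.10:80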



Deploy AWS Workloads Using an Internal Load Balancer

If you use AWS, follow the steps below to deploy, expose, and access basic workloads using an internal load balancer configured by your cloud provider.


Configure Your Workload

To expose a static port on your workload, perform the following steps:

  1. Open the Kubernetes service configuration file for your workload in a text editor.

  2. To expose the workload through a load balancer, confirm that the Service object is configured to be type: LoadBalancer.

  3. In the service's metadata section of the manifest, add the following annotation:

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
    

    For example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginx
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
      name: nginx
    spec:
      ports:
        - port: 80
      selector:
        app: nginx
      type: LoadBalancer
    ---
    
  4. Confirm that the annotations and type properties of the Kubernetes service for each workload are similarly configured.

Note: For an example of a fully configured Kubernetes service, see the type: LoadBalancer configuration for the nginx app example in the kubo-ci repository in GitHub.

For more information about configuring the LoadBalancer Service type, see Type LoadBalancer in the Service section of the Kubernetes documentation.
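
As an optional sanity check before deploying, you can validate the manifest client-side without creating anything. This uses a generic kubectl flag rather than anything TKGI-specific, and nginx.yml is the example file name:

    kubectl apply --dry-run=client -f nginx.yml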


Deploy and Expose Your Workload

To deploy and expose your workload:

  1. To deploy the service configuration for your workload:

    kubectl apply -f SERVICE-CONFIG
    

    Where SERVICE-CONFIG is the Kubernetes service configuration of your workload.
    For example:

    $ kubectl apply -f nginx.yml
    

    In the nginx example, this manifest also defines a deployment, creating three pod replicas that span three worker nodes.

  2. Deploy your applications, deployments, ConfigMaps, persistent volumes, secrets, and any other configurations or objects necessary for your applications to run.

  3. Wait until your cloud provider has created and connected a dedicated load balancer to the worker nodes on a specific port.


Access Your Workload

To access a workload:

  1. To determine the load balancer IP address and port number of your exposed workload, run the following command:

    kubectl get svc SERVICE-NAME
    

    Where SERVICE-NAME is the specified service name of your workload configuration.

    For example:

    $ kubectl get svc nginx
    
  2. Retrieve the external address and port of the load balancer from the returned listing. On AWS, the EXTERNAL-IP column typically shows a DNS hostname rather than an IP address (see the sketch after these steps).

  3. To access the app, run the following command:

    curl http://EXTERNAL-IP:PORT
    

    Where:

    • EXTERNAL-IP is the IP address of the load balancer.
    • PORT is the port number.

    Note: Because the load balancer is internal, run this command on a server inside the same VPC, with network connectivity and visibility to the load balancer.
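
Assuming the example nginx service name, one way to extract the load balancer hostname directly is with a jsonpath query:

    kubectl get svc nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'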



Deploy Workloads for a Generic External Load Balancer

In this approach, you expose your workloads with a generic external load balancer, such as F5.

Using a generic external load balancer requires a static port in your Kubernetes cluster. To expose a static port, configure your workloads with a NodePort service.

Follow the steps below to deploy and access basic workloads using a generic external load balancer:


Configure Your Workload

To expose a static port on your workload, perform the following steps:

  1. Open the Kubernetes service configuration file for your workload in a text editor.

  2. To expose the workload without a load balancer, confirm that the Service object is configured to be type: NodePort.
    For example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginx
      name: nginx
    spec:
      ports:
        - port: 80
      selector:
        app: nginx
      type: NodePort
    ---
    
  3. Confirm that the type property of the Kubernetes service for each workload is similarly configured.

Note: For an example of a fully configured Kubernetes service, see the type: LoadBalancer configuration for the nginx app example in the kubo-ci repository in GitHub.

For more information about configuring the NodePort Service type, see Type NodePort in the Service section of the Kubernetes documentation.
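
Because the external load balancer is configured against a fixed port, you can optionally pin the node port instead of letting Kubernetes assign one. As a sketch, assuming port 30080 is unused in your cluster's node port range, the ports entry might read:

    ports:
      - port: 80
        nodePort: 30080

If you omit nodePort, Kubernetes assigns a port from the default 30000-32767 range, which you look up in the Access Your Workload steps below.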


Deploy and Expose Your Workload

To deploy and expose your workload:

  1. To deploy the service configuration for your workload:

    kubectl apply -f SERVICE-CONFIG
    

    Where SERVICE-CONFIG is the Kubernetes service configuration of your workload.

    For example:

    $ kubectl apply -f nginx.yml
    

    In the nginx example, this manifest also defines a deployment, creating three pod replicas that span three worker nodes.

  2. Deploy your applications, deployments, ConfigMaps, persistent volumes, secrets, and all other configurations or objects necessary for your applications to run.

  3. Wait until the app pods are running and the node port is open on your worker nodes.


Access Your Workload

To access the workload:

  1. Retrieve the IP address for a worker node with a running app pod.

    Note: Because the example deployment creates three replicas, clusters with more than three worker nodes might include nodes without a running app pod. Select a worker node that contains a running app pod.

    You can retrieve the IP address for a worker node with a running app pod in one of the following ways:

    • On the command line, run the following command:
    kubectl get nodes -L spec.ip
    
    • On the Ops Manager command line, run the following command to find the IP address:
    bosh vms
    

    This IP address will be used when configuring your external load balancer.

  2. To see a listing of port numbers, run the following command:

    kubectl get svc SERVICE-NAME
    

    Where SERVICE-NAME is the specified service name of your workload configuration.

    For example:

    $ kubectl get svc nginx
    
  3. Find the node port number, which falls in the 30000-32767 range by default. You use this port number when configuring your external load balancer.

  4. Configure your external load balancer to map your application URI to the IP address and port number that you collected above. See your load balancer documentation for instructions.
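
Before relying on the load balancer, you can optionally verify that a backend responds directly. This is a sanity check rather than a required step:

    curl http://NODE-IP:NODE-PORT

Where NODE-IP and NODE-PORT are the values collected in the previous steps.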



Deploy Workloads without a Load Balancer

If you do not use an external load balancer, you can configure your service to expose a static port on each worker node. The following steps configure your service to be reachable from outside the cluster at http://NODE-IP:NODE-PORT:


Configure Your Workload

To expose a static port on your workload, perform the following steps:

  1. Open the Kubernetes service configuration file for your workload in a text editor.

  2. To expose the workload without a load balancer, confirm that the Service object is configured to be type: NodePort.
    For example:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        name: nginx
      name: nginx
    spec:
      ports:
        - port: 80
      selector:
        app: nginx
      type: NodePort
    ---
    
  3. Confirm that the type property of the Kubernetes service for each workload is similarly configured.

Note: For an example of a fully configured Kubernetes service, see the type: LoadBalancer configuration for the nginx app example in the kubo-ci repository in GitHub.

For more information about configuring the NodePort Service type, see Type NodePort in the Service section of the Kubernetes documentation.


Deploy and Expose Your Workload

To deploy and expose your workload:

  1. To deploy the service configuration for your workload:

    kubectl apply -f SERVICE-CONFIG
    

    Where SERVICE-CONFIG is the Kubernetes service configuration of your workload.
    For example:

    $ kubectl apply -f nginx.yml
    

    In the nginx example, this manifest also defines a deployment, creating three pod replicas that span three worker nodes.

  2. Deploy your applications, deployments, ConfigMaps, persistent volumes, secrets, and any other configurations or objects necessary for your applications to run.

  3. Wait until the app pods are running and the node port is open on your worker nodes.


Access Your Workload

To access the workload:

  1. Retrieve the IP address for a worker node with a running app pod.

    Note: Because the example deployment creates three replicas, clusters with more than three worker nodes might include nodes without a running app pod. Select a worker node that contains a running app pod.

    You can retrieve the IP address for a worker node with a running app pod in one of the following ways:

    • On the command line, run the following command:
    kubectl get nodes -L spec.ip
    
    • On the Ops Manager command line, run the following command to find the IP address:
    bosh vms
    
  2. To see a listing of port numbers, run the following command:

    kubectl get svc SERVICE-NAME
    

    Where SERVICE-NAME is the specified service name of your workload configuration.

    For example:

    $ kubectl get svc nginx
    
  3. Find the node port number, which falls in the 30000-32767 range by default (or extract it with the sketch after these steps).

  4. To access the app, run the following command:

    curl http://NODE-IP:NODE-PORT
    

    Where:

    • NODE-IP is the IP address of the worker node.
    • NODE-PORT is the node port number.

    Note: Run this command on a server with network connectivity and visibility to the IP address of the worker node.
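
As a convenience, steps 3 and 4 can be combined in a small shell sketch. It assumes the example nginx service name; replace NODE-IP with the address collected in step 1:

    NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
    curl http://NODE-IP:${NODE_PORT}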
