Managing NSX Advanced Load Balancer (vSphere)

This topic contains procedures for managing NSX Advanced Load Balancer (ALB) for Tanzu Kubernetes Grid on vSphere, where it load-balances traffic to workload clusters and serves as a VIP endpoint for the management cluster. NSX Advanced Load Balancer was formerly known as Avi Vantage.

For the one-time procedure of installing NSX Advanced Load Balancer after upgrading to Tanzu Kubernetes Grid v1.6, see Install NSX Advanced Load Balancer After Tanzu Kubernetes Grid Upgrade (vSphere).

To configure L7 ingress on your workload clusters, see Configuring L7 Ingress with NSX Advanced Load Balancer.

NSX ALB in the Management Cluster

If NSX ALB is enabled in your Tanzu Kubernetes Grid deployment, the load balancer service is automatically enabled in the management cluster.
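
For example, you can confirm this by checking that the AKO pod is running and that the default AKODeploymentConfig objects exist. This is a minimal verification sketch; it assumes the default admin context name for the management cluster:

    # Switch to the management cluster context (default admin context name assumed).
    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER

    # The AKO pod runs in the avi-system namespace.
    kubectl get pod -n avi-system

    # List the AKODeploymentConfig (adc) objects, including the default install-ako-for-all.
    kubectl get adc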

About Configuring NSX Advanced Load Balancer in Workload Clusters

With NSX ALB networking, each workload cluster has an Avi Kubernetes Operator (AKO) configuration that sets its virtual IP networks, service engine (SE) groups, and L7 ingress. The default AKO configuration for each workload cluster is set in the management cluster, by an AKODeploymentConfig object named install-ako-for-all.

To customize the AKO configuration for a workload cluster, you follow the process outlined below. This is just a high-level description; see the other sections in this topic for complete, step-by-step examples.

  1. In the management cluster, create or edit an AKODeploymentConfig object definition to include:

    • Customization settings under spec, for example:

      • VIP network settings under spec.dataNetwork
      • A Service Engine Group setting under spec.serviceEngineGroup
      • L7 ingress settings under spec.extraConfigs
    • A spec.clusterSelector.matchLabels block with label-value pairs that determine which clusters the custom AKODeploymentConfig applies to.

    Note: Do not edit the default AKODeploymentConfig object, install-ako-for-all.

  2. Apply the custom configuration to the workload cluster by running kubectl label cluster to give the cluster a label-value pair that matches one of the spec.clusterSelector.matchLabels label-value pairs in the custom AKODeploymentConfig object.
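
For example, if a custom AKODeploymentConfig object contains the following selector (the label name and value here are hypothetical):

    spec:
      clusterSelector:
        matchLabels:
          my-custom-adc: "true"

you would apply that configuration to a workload cluster named my-workload-cluster by running:

    kubectl label cluster my-workload-cluster my-custom-adc="true"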

Example NSX Advanced Load Balancer Configuration File

You configure NSX Advanced Load Balancer for Tanzu Kubernetes Grid through an AKODeploymentConfig custom resource (CR).

Here is an example AKODeploymentConfig definition. You can customize all of the settings in this file.

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: ADC-OBJECT-NAME
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: Default-Cloud
  clusterSelector:
    matchLabels:
      LABEL-NAME: "LABEL-VALUE"
  controller: 10.186.39.152
  dataNetwork:
    cidr: NETWORK-CIDR
    name: NETWORK-NAME
  controlPlaneNetwork:
    cidr: NETWORK-CIDR
    name: NETWORK-NAME
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: true
    ingress:
      defaultIngressController: false
      disableIngressClass: true
  serviceEngineGroup: SEG-NAME

Where:

  • ADC-OBJECT-NAME is a unique name you choose for the AKODeploymentConfig object.
  • LABEL-NAME is the label you choose to match an AKODeploymentConfig to a workload cluster.
  • LABEL-VALUE is the value that you assign to LABEL-NAME in the AKODeploymentConfig object.
  • SEG-NAME is the name of the SE group you want to assign to a workload cluster.
  • NETWORK-CIDR is the CIDR range of the network you want to assign to your workload clusters’ load balancers.
  • NETWORK-NAME is the name of the network you want to assign to a workload cluster’s load balancers.

    Note: All settings under extraConfigs are required in an AKODeploymentConfig object that configures L7 ingress for workload clusters.

After you have the AKODeploymentConfig YAML, use it to create the configuration object in the management cluster, and then add a label to your workload cluster that matches AKODeploymentConfig.spec.clusterSelector.matchLabels:

For more information about the AKO, see Install NSX Advanced Load Balancer.

  1. Set the context of kubectl to your management cluster by running:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  2. Create a YAML file for your AKODeploymentConfig object, following the template below for L7 ingress settings.

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          test-node-network-list: "l7-net"
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      controlPlaneNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - cidrs:
                - 10.185.32.0/20
              networkName: VM Network
          serviceType: ClusterIP                                    # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    
  3. Apply the configuration in the YAML file to the management cluster:

    kubectl apply -f FILENAME.yaml
    

    Where FILENAME is the name of your AKODeploymentConfig file.

  4. Assign a label to workload clusters that matches one of the clusterSelector.matchLabels definitions, to specify the custom L7 configuration:

    kubectl label cluster WORKLOAD-CLUSTER LABEL-NAME="LABEL-VALUE"
    

    Where:

    • WORKLOAD-CLUSTER is the name of your workload cluster.
    • LABEL-NAME is a label name that you set under matchLabels in the AKO configuration.
    • LABEL-VALUE is a label value that you set in the AKO configuration.

    For example, to set workload cluster wc-1 to use the AKO configuration defined above:

    kubectl label cluster wc-1 test-node-network-list="l7-net"
    

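To confirm that the custom configuration reached the workload cluster, you can check the AKO ConfigMap and pod in the workload cluster. This is a quick verification sketch; it assumes the default admin context name for workload cluster wc-1:

    kubectl config use-context wc-1-admin@wc-1

    # The settings pushed by the AKODeploymentConfig appear in this ConfigMap.
    kubectl get cm avi-k8s-config -n avi-system -o yaml

    # Verify that the AKO pod is running.
    kubectl get pod -n avi-system
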
Create Multiple NSX Advanced Load Balancer Configurations for Different Workload Clusters

You can use AKODeploymentConfig objects to create multiple AKO configurations in your management cluster. By adding a label to a workload cluster that matches AKODeploymentConfig.spec.clusterSelector.matchLabels in a particular object, you choose which AKO configuration is deployed to that cluster.

For more information about AKO, see Install NSX Advanced Load Balancer After Tanzu Kubernetes Grid Upgrade (vSphere).

  1. Set the context of kubectl to your management cluster:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    
  2. Create a YAML file with multiple AKODeploymentConfig object specifications, one for each AKO configuration, separated by ---.

    The example below shows the AKODeploymentConfig file that has two SE groups, with the labels seg-1 and seg-2, ready to be assigned to different workload clusters:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: install-ako-use-seg-1
    spec:
      clusterSelector:
        matchLabels:
          service_engine_group: "seg-1"
      serviceEngineGroup: seg-1
    ---
    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: install-ako-use-seg-2
    spec:
      clusterSelector:
        matchLabels:
          service_engine_group: "seg-2"
      serviceEngineGroup: seg-2
    
    
  3. Apply the configuration in the YAML file to the management cluster:

    kubectl apply -f FILENAME.yaml
    

    Where FILENAME is the name of your AKODeploymentConfig file.

  4. Assign labels to workload clusters by running:

    kubectl label cluster WORKLOAD-CLUSTER LABEL-NAME="LABEL-VALUE"
    

    Where:

    • WORKLOAD-CLUSTER is the name of your workload cluster.
    • LABEL-NAME is a label name that you set under matchLabels in the AKO configuration, for example, service_engine_group, vip_network, or enable_l7.
    • LABEL-VALUE is a label value that you set in the AKO configuration.

    For example, the following commands deploy SE group seg-1 in workload clusters 1 and 2, and SE group seg-2 in workload clusters 3 and 4:

    kubectl label cluster wc-1 service_engine_group="seg-1"
    kubectl label cluster wc-2 service_engine_group="seg-1"
    kubectl label cluster wc-3 service_engine_group="seg-2"
    kubectl label cluster wc-4 service_engine_group="seg-2"
    
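To verify which AKO configuration each workload cluster matches, you can list the cluster labels and the available AKODeploymentConfig objects from the management cluster context, for example:

    kubectl get clusters --show-labels

    kubectl get adc
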

Modify the NSX ALB Configurations

To modify the AKO configurations in the management cluster:

  1. Set the context of kubectl to the context of your management cluster. For example:

    kubectl config use-context mgmt-cluster-admin@mgmt-cluster
    

    In this example, mgmt-cluster is the name of the management cluster.

  2. Modify the NSX ALB configuration by editing the AKODeploymentConfig object:

    kubectl edit adc install-ako-for-management-cluster
    
  3. In the text editor that pops up, update the configuration, and save the changes. Do not update the spec.clusterSelector value in the configuration file.

To modify the AKO configurations in the workload clusters:

  1. In the Tanzu CLI, run the following command to switch the context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Modify the NSX ALB configuration by editing the AKODeploymentConfig object:

    kubectl edit adc install-ako-for-all
    
  3. In the text editor that pops up, update the configuration, and save the changes.

Update the User Credentials of Avi Controller

Avi Controller user credentials expire periodically. Update the credentials by editing the avi-controller-credentials secret in the management cluster.

Before performing this task, ensure that you have the management cluster context and the new base64 encoded Avi Controller credentials. For more information on obtaining the management cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.
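
If you still need to produce the base64 encoded values, you can encode the new user name and password locally before editing the secret. This is a sketch that assumes a Linux or macOS shell; replace the placeholders with your actual credentials:

    echo -n 'NEW-AVI-USERNAME' | base64
    echo -n 'NEW-AVI-PASSWORD' | base64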

  1. In the Tanzu CLI, run the following command to switch the context to the management cluster:

    kubectl config use-context MANAGEMENT-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-controller-credentials value under tkg-system-networking namespace:

    kubectl edit secret avi-controller-credentials -n tkg-system-networking
    
  3. Within your default text editor that pops up, update the password and the username fields with your new base64 encoded credentials data.

  4. Save the changes.

Update the Avi Certificate

Tanzu Kubernetes Grid authenticates to the Avi Controller by using certificates. When these certificates near expiration, update them by using the Tanzu CLI. You can update the certificates in an existing workload cluster, or in a management cluster for use by new workload clusters. Newly-created workload clusters obtain their Avi certificate from their management cluster.

Update the Avi Certificate in an Existing Workload Cluster

Updating the Avi certificate in an existing workload cluster is performed through the workload cluster context in the Tanzu CLI. Before performing this task, ensure that you have the workload cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the workload cluster context, see Retrieve Workload Cluster kubeconfig.
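
If you still need to produce the base64 encoded certificate data, you can encode the new Avi Controller CA certificate locally before editing the secret. This is a sketch that assumes the certificate is saved in a local file named avi-ca.crt, which is a hypothetical file name:

    # If your base64 tool wraps lines, join the output into a single string.
    cat avi-ca.crt | base64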

  1. In the Tanzu CLI, run the following command to switch the context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-secret value under avi-system namespace:

    kubectl edit secret avi-secret -n avi-system
    

    Within your default text editor that pops up, update the certificateAuthorityData field with your new base64 encoded certificate data.

  3. Save the changes.

Update the Avi Certificate in a Management Cluster

Workload clusters obtain their Avi certificates from their management cluster. This procedure updates the Avi certificate in a management cluster. The management cluster then includes the updated certificate in any new workload clusters that it creates.

Before performing this task, ensure that you have the management cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the management cluster context, see Retrieve Management Cluster kubeconfig.

  1. In the Tanzu CLI, run the following command to switch the context to the management cluster:

    kubectl config use-context MANAGEMENT-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-controller-ca value under tkg-system-networking namespace:

    kubectl edit secret avi-controller-ca -n tkg-system-networking
    

    Within your default text editor that pops up, update the certificateAuthorityData field with your new base64 encoded certificate data.

  3. Save the changes.

Add a Service Engine Group for NSX Advanced Load Balancer

The NSX Advanced Load Balancer Essentials Tier has limited high-availability (HA) capabilities. To distribute the load balancer services to different service engine groups (SEGs), create additional SEGs on the Avi Controller, and create a new AKO configuration object (AKODeploymentConfig object) in a YAML file in the management cluster. Alternatively, you can update an existing AKODeploymentConfig object in the management cluster with the name of the new SEG.

  1. In the Avi Controller UI, go to Infrastructure > Service Engine Groups, and click CREATE to create the new SEG.

    Create Service Engine Group

  2. Create the service engine group as follows:

    Create Service Engine Group - Basic

    Note: The Essentials Tier only supports service engine groups with Active/Standby HA mode; it does not support Active/Active HA mode. If the default N+M HA mode is used, it is automatically changed to Active/Standby HA mode.

  3. In a terminal, do the following depending on whether you want to create a new AKODeploymentConfig object for the new SEG or update an existing AKODeploymentConfig object:

    • Create a new AKODeploymentConfig object:

      1. Run the following command to open the text editor.

        vi FILE_NAME
        

        Where FILE_NAME is the name of the AKODeploymentConfig YAML file that you want to create.

      2. Add the AKO configuration details in the file. The following is an example:

          apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
          kind: AKODeploymentConfig
          metadata:
            name: install-ako-for-all
          spec:
            adminCredentialRef:
              name: avi-controller-credentials
              namespace: tkg-system-networking
            certificateAuthorityRef:
              name: avi-controller-ca
              namespace: tkg-system-networking
            cloudName: Default-Cloud
            controller: 10.184.74.162
            dataNetwork:
              cidr: 10.184.64.0/20
              name: VM Network
            extraConfigs:
              cniPlugin: antrea
              disableStaticRouteSync: true
              ingress:
                defaultIngressController: false
                disableIngressClass: true
            serviceEngineGroup: SEG-1
        
      3. Save the file, and exit the text editor.

      4. Run the following command to apply the new configuration:

        kubectl apply -f FILE_NAME
        

        Where FILE_NAME is the name of the YAML file that you created.

    • Update an existing AKODeploymentConfig object:

      1. Run the following command to open the AKODeploymentConfig object:

        kubectl edit adc ADC_NAME
        

        Where ADC_NAME is the name of the AKODeploymentConfig object in the YAML file.

      2. Update the SEG name in the text editor that pops up.

      3. Save the file, and exit the text editor.

  4. Run the following command to verify that the new configuration is present in the management cluster:

    kubectl get adc ADC_NAME -o yaml
    

    Where ADC_NAME is the name of the AKODeploymentConfig object in the YAML file.

    In the file, verify that the adc.spec.serviceEngineGroup field displays the name of the new SE group.

  5. Switch the context to the workload cluster by using the kubectl utility.

  6. Run the following command to view the AKO deployment information:

    kubectl get cm avi-k8s-config -n avi-system -o yaml
    

    In the output, verify that the SE group has been updated.

  7. Run the following command to verify that AKO is running:

    kubectl get pod -n avi-system
    

Change a Cluster’s Control Plane HA Provider to NSX Advanced Load Balancer

In Tanzu Kubernetes Grid, you can change the control plane high availability (HA) provider in a cluster from Kube-VIP to NSX Advanced Load Balancer. The control plane HA provider must be the same in a management cluster as in its workload clusters. If a management cluster has NSX Advanced Load Balancer as its control plane HA provider, new workload clusters that it creates automatically use the same HA provider.
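
To check whether a cluster currently uses Kube-VIP as its control plane HA provider, you can look for Kube-VIP pods in the cluster. This is a quick check that you run in the cluster's own context; if it returns one pod per control plane node, the cluster uses Kube-VIP:

    kubectl get pods -A | grep "kube-vip"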

Prerequisites

Ensure that:

Procedure

  1. Add the control plane virtual IP address to the Avi Static IP Pool:

    1. In the Avi Controller UI, go to Infrastructure > Networks.
    2. Select the network that the cluster uses, and click Edit.
    3. Add the control plane virtual IP address to Static IP Address Pool, and click Save.
  2. In the Tanzu CLI, set the context of kubectl to your management cluster:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Verify that the cluster has the control plane endpoint annotation:

    kubectl annotate --overwrite cluster CLUSTER-NAME -n CLUSTER-NAMESPACE tkg.tanzu.vmware.com/cluster-controlplane-endpoint="VIP"
    

    Where:

    • CLUSTER-NAME is the name of the cluster.
    • CLUSTER-NAMESPACE is the namespace that you use for the cluster.
    • VIP is the virtual IP address of the control plane endpoint.
  4. (Management clusters only) In the AKO operator secret, set Avi as the control plane HA provider:

    1. Retrieve the configuration values string from the AKO operator secret, decode it, and dump it into a YAML file:

      kubectl get secret CLUSTER_NAME-ako-operator-addon -n tkg-system --template="{{index .data \"values.yaml\" | base64decode}}" > values.yaml
      
    2. Modify the configuration values file to set avi_control_plane_ha_provider to true:

      yq e -i '.akoOperator.config.avi_control_plane_ha_provider=true' values.yaml
      

      The following is an example of the modified configuration values file:

      #@data/values
      #@overlay/match-child-defaults missing_ok=True
      ---
      akoOperator:
        avi_enable: true
        namespace: tkg-system-networking
        cluster_name: ha-mc-1
        config:
          avi_disable_ingress_class: true
          avi_ingress_default_ingress_controller: false
          avi_ingress_shard_vs_size: ""
          avi_ingress_service_type: ""
          avi_ingress_node_network_list: '""'
          avi_admin_credential_name: avi-controller-credentials
          avi_ca_name: avi-controller-ca
          avi_controller: 10.161.107.63
          avi_username: admin
          avi_password: Admin!23
          avi_cloud_name: Default-Cloud
          avi_service_engine_group: Default-Group
          avi_data_network: VM Network
          avi_data_network_cidr: 10.161.96.0/19
          avi_ca_data_b64: LS0tLS1CRUd[...]BVEUtLS0tLQ==
          avi_labels: '""'
          avi_disable_static_route_sync: true
          avi_cni_plugin: antrea
          avi_management_cluster_vip_network_name: VM Network
          avi_management_cluster_vip_network_cidr: 10.161.96.0/19
          avi_control_plane_endpoint_port: 6443
          avi_control_plane_ha_provider: true
      
      
    3. Re-encode the configuration values file into a base64-encoded string:

      cat values.yaml | base64
      

      Record the base64-encoded string from the command output.

    4. Open the AKO operator secret specification in an editor:

      kubectl edit secret CLUSTER_NAME-ako-operator-addon -n tkg-system
      
    5. Replace the data.values.yaml field with the new base64-encoded configuration values string that you recorded. Save the file.

  5. Before proceeding, confirm both of the following:

    • A service of type LoadBalancer with a name of the form CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane exists in the cluster’s namespace.
    • In the Avi Controller UI, under Applications > Virtual Services, a virtual service named CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane is listed, and its health score is not red or grey.
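
    For example, a minimal check for the first condition, with the namespace and cluster name as placeholders:

      kubectl get svc -n CLUSTER-NAMESPACE CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane

    The TYPE column should show LoadBalancer, and the EXTERNAL-IP column should show the control plane VIP.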
  6. Delete the Kube-VIP pods on all of the cluster’s control plane VMs:

    1. Establish an SSH connection to the control plane VM:

      ssh -i PRIVATE-KEY capv@IP-ADDRESS
      

      Where:

      • PRIVATE-KEY is the private key that is paired with the public key, which is configured in the clusterconfig.yaml file.
      • IP-ADDRESS is the IP address of the control plane VM. You can see this listed in vCenter or retrieve it by running kubectl get vspheremachine -o yaml.
    2. Remove the kube-vip.yaml file:

      rm /etc/kubernetes/manifests/kube-vip.yaml
      
    3. Terminate the SSH connection to the control plane VM:

      exit
      
    4. Wait for a few minutes, and run the following command to ensure that all the Kube-VIP pods are deleted from the system:

      kubectl get po -A | grep "kube-vip"
      

      Ensure that the output of this command does not include any Kube-VIP pods.

  7. Open the KCP (Kubernetes Control Plane) object specification in an editor:

    kubectl edit kcp CLUSTER-NAME-control-plane -n CLUSTER-NAMESPACE
    

    Where:

    • CLUSTER-NAME is the name of the cluster.
    • CLUSTER-NAMESPACE is the namespace that you use for the cluster.
  8. In the KCP object specification, delete the entire files block under spec.kubeadmConfigSpec, and save the file.

    The following is an example of the KCP object specification after it is updated:

    spec:
      infrastructureTemplate:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: ha-mc-1-control-plane
        namespace: tkg-system
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            timeoutForControlPlane: 8m0s
          controllerManager:
            extraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
          dns:
            imageRepository: projects.registry.vmware.com/tkg
            imageTag: v1.8.0_vmware.5
            type: CoreDNS
          etcd:
            local:
              dataDir: /var/lib/etcd
              extraArgs:
                cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
              imageRepository: projects.registry.vmware.com/tkg
              imageTag: v3.4.13_vmware.15
          imageRepository: projects.registry.vmware.com/tkg
          networking: {}
          scheduler:
            extraArgs:
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        initConfiguration:
          localAPIEndpoint:
            advertiseAddress: ""
            bindPort: 0
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            name: '{{ ds.meta_data.hostname }}'
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            name: '{{ ds.meta_data.hostname }}'
        preKubeadmCommands:
        - hostname "{{ ds.meta_data.hostname }}"
        - echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
        - echo "127.0.0.1   localhost" >>/etc/hosts
        - echo "127.0.0.1   {{ ds.meta_data.hostname }}" >>/etc/hosts
        - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
        useExperimentalRetryJoin: true
        users:
        - name: capv
          sshAuthorizedKeys:
          - ssh-rsa AAAAB3NzaC1yc2[...]kx21vUu58cj
          sudo: ALL=(ALL) NOPASSWD:ALL
      replicas: 1
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate
      version: v1.23.8+vmware.1
    
    

    The system triggers a rolling update after the KCP object is edited. Wait until this update is completed. A new control plane VM that uses NSX ALB for the control plane endpoint is created, and the corresponding Kubernetes objects are updated to use the new VM.
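
    To monitor the rolling update, you can watch the KCP object and the cluster's machines from the management cluster context. This is a monitoring sketch; replace the placeholders with your cluster name and namespace:

      kubectl get kcp CLUSTER-NAME-control-plane -n CLUSTER-NAMESPACE -w

      kubectl get machines -n CLUSTER-NAMESPACE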

Configure Separate VIP Networks for Management and Workload Clusters Where NSX ALB Provides Control Plane HA

In previous releases of Tanzu Kubernetes Grid, the endpoint virtual IP addresses of all clusters were assigned from the same VIP network, and you could not customize this configuration. In Tanzu Kubernetes Grid v1.6, you can separate the endpoint VIP network of a cluster from the network that provides external IP addresses for the load balancer and ingress services in that cluster. This improves cluster security by letting you expose the endpoint of a management or workload cluster and the cluster's load balancer and ingress services on different networks.

You can configure this feature by:

  • Creating a management cluster configuration YAML file that contains the VIP network parameters.
  • Creating a new AkoDeploymentConfig custom resource (CR) that contains the VIP network parameters for the workload cluster.

The following diagram describes a sample network topology that separates:

  • The endpoint of the management cluster to the AVI MGMT Control Plane VIP network
  • The services (load balancer or ingress) in the management cluster to the AVI MGMT Data Plane VIP network
  • The endpoint of the workload clusters to the AVI Workload Control Plane VIP network
  • The services (load balancer or ingress) in the workload clusters to the AVI Workload Data Plane VIP network

VIP network separation

Prerequisites

Configure Separate VIP Networks and Service Engine Groups through the Management Cluster Configuration File

To expose the endpoints or the services in the management cluster and the workload clusters in different VIP networks, separate the VIP networks between the management cluster and the workload clusters.

Create a management cluster configuration YAML file, add the following parameters, fill in the fields with the relevant information, and then deploy the management cluster by using the configuration file.

  • For the workload cluster data plane VIP network:

    • AVI_DATA_NETWORK:
    • AVI_DATA_NETWORK_CIDR:
  • For workload cluster control plane VIP network:

    • AVI_CONTROL_PLANE_NETWORK:
    • AVI_CONTROL_PLANE_NETWORK_CIDR:
  • For management cluster data plane VIP network:

    • AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME:
    • AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR:
  • For management cluster control plane VIP network:

    • AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME:
    • AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR:
  • For the workload cluster SEG:

    • AVI_SERVICE_ENGINE_GROUP:
  • For the management cluster SEG:

    • AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP:

Note: In Tanzu Kubernetes Grid v1.6, the endpoint VIP virtual services of all the clusters are hosted on the same SEG. Currently, you cannot specify an SEG for the endpoint VIP virtual service of a cluster.
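
For example, the relevant portion of a management cluster configuration file might look like the following sketch. The network names, CIDR ranges, and SEG names shown here are hypothetical placeholders:

    AVI_DATA_NETWORK: avi-wc-dp-network
    AVI_DATA_NETWORK_CIDR: 10.10.10.0/24
    AVI_CONTROL_PLANE_NETWORK: avi-wc-cp-network
    AVI_CONTROL_PLANE_NETWORK_CIDR: 10.10.20.0/24
    AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: avi-mc-dp-network
    AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 10.10.30.0/24
    AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: avi-mc-cp-network
    AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 10.10.40.0/24
    AVI_SERVICE_ENGINE_GROUP: workload-seg
    AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: management-seg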

After you create a management cluster with this feature, you will find that:

  • The endpoint VIP of your management cluster is from AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME and AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR

  • The external IP address of the load balancer service and ingress service in your management cluster is from AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME and AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR

  • The endpoint VIP of your workload cluster is from AVI_CONTROL_PLANE_NETWORK and AVI_CONTROL_PLANE_NETWORK_CIDR

  • The external IP address of the load balancer service and ingress service in your workload cluster is from AVI_DATA_NETWORK and AVI_DATA_NETWORK_CIDR

Now, you can configure and control the exposed VIP networks of your clusters in NSX ALB-enabled Tanzu Kubernetes Grid.

Configure Separate VIP Networks and Service Engine Groups in Different Workload Clusters

To expose the endpoints or the services of different workload clusters on different VIP networks, separate the VIP networks between the workload clusters.

The following diagram describes a sample network topology that separates:

  • The endpoint of the first workload cluster to the AVI Workload Control Plane VIP network 1 network
  • The services (load balancer or ingress) in the first workload cluster to the AVI Workload Data Plane VIP network 1 network
  • The endpoint of the second workload cluster to the AVI Workload Control Plane VIP network 2 network
  • The services (load balancer or ingress) in the second workload cluster to the AVI Workload Data Plane VIP network 2 network

VIP network separation in different workload clusters

  1. In the management cluster, create AKODeploymentConfig CRs with a cluster selector, and specify the networks that are used for the workload cluster control plane and data plane. For example, create two AKODeploymentConfig CRs: one that matches the workload clusters used in the development environment, and one that matches the workload clusters used in the production environment:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: install-ako-for-dev-cluster
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      controller: 1.1.1.1
      cloudName: Default-Cloud
      controllerVersion: 20.1.7
      serviceEngineGroup: Default-Group
      clusterSelector:             # match workload clusters with dev-cluster: "true" label
        matchLabels:
          dev-cluster: "true"
      controlPlaneNetwork:         # clusters' endpoint VIP come from this VIP network 
        cidr: 10.10.0.0/16
        name: avi-dev-cp-network
      dataNetwork:                 # clusters' services external IP come from this VIP network
        cidr: 20.20.0.0/16
        name: avi-dev-dp-network
    
    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: install-ako-for-prod-cluster
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      controller: 1.1.1.1
      cloudName: Default-Cloud
      controllerVersion: 20.1.7
      serviceEngineGroup: Default-Group
      clusterSelector:             # match workload clusters with prod-cluster: "true" label
        matchLabels:
          prod-cluster: "true"
      controlPlaneNetwork:         # clusters' endpoint VIP come from this VIP network 
        cidr: 30.30.0.0/16
        name: avi-prod-cp-network
      dataNetwork:                 # clusters' services external IP come from this VIP network
        cidr: 40.40.0.0/16
        name: avi-prod-dp-network
    
    kubectl --context={mgmt kubeconfig} apply -f install-ako-for-dev-cluster.yaml
    kubectl --context={mgmt kubeconfig} apply -f install-ako-for-prod-cluster.yaml
    
  2. Create the workload clusters by using the cluster-config.yaml file. For more information, see Deploy Tanzu Kubernetes Clusters.

  3. In the AVI_LABELS field in the cluster configuration file, specify the values that you used in the clusterSelector field of the AKODeploymentConfig CR that you created. For example:

    • Create the workload clusters in the development environment with the following details:
    AVI_CONTROL_PLANE_HA_PROVIDER:  true
    AVI_LABELS: '{"dev-cluster": "true"}'
    
    • Create the workload clusters in the production environment with the following details:
    AVI_CONTROL_PLANE_HA_PROVIDER:  true
    AVI_LABELS: '{"prod-cluster": "true"}'
    
    tanzu cluster create dev-cluster -f dev-cluster-config.yaml
    tanzu cluster create prod-cluster -f prod-cluster-config.yaml
    

After the workload clusters are created, you will find that:

  • The endpoint of the dev-cluster is using the VIP from the avi-dev-cp-network/10.10.0.0/16 network.
  • The external IP addresses of the services in the dev-cluster are using VIP from the avi-dev-dp-network/20.20.0.0/16 network.
  • The endpoint of the prod-cluster is using the VIP from the avi-prod-cp-network/30.30.0.0/16 network.
  • The external IP addresses of the services in the prod-cluster are using the VIP from the avi-prod-dp-network/40.40.0.0/16 network.
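
To verify the separation, you can compare the control plane endpoint and the external IP addresses of the services in each cluster. This is a verification sketch; it assumes that the Cluster objects are in the default namespace of the management cluster and that the controlPlaneEndpoint field is populated:

    # Run in the management cluster context to see each cluster's endpoint VIP.
    kubectl get cluster dev-cluster -o jsonpath='{.spec.controlPlaneEndpoint.host}'
    kubectl get cluster prod-cluster -o jsonpath='{.spec.controlPlaneEndpoint.host}'

    # Run in each workload cluster's context to see the external IPs of its services.
    kubectl get svc -A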