This topic contains procedures for managing NSX Advanced Load Balancer (ALB) for Tanzu Kubernetes Grid on vSphere, to load-balance traffic to Tanzu Kubernetes (workload) clusters and to serve as a VIP endpoint for the management cluster. NSX Advanced Load Balancer was formerly known as Avi Vantage.

For the one-time procedure of installing NSX Advanced Load Balancer (ALB) after upgrading to Tanzu Kubernetes Grid v1.4, see Install NSX Advanced Load Balancer After Tanzu Kubernetes Grid Upgrade (vSphere).

To configure L7 ingress on your workload clusters, see Configuring L7 Ingress with NSX Advanced Load Balancer.

About Configuring NSX Advanced Load Balancer in Workload Clusters

With NSX ALB networking, each workload cluster has an Avi Kubernetes Operator (AKO) configuration that sets its virtual IP networks, service engine (SE) groups, and L7 ingress. The default AKO configuration for each workload cluster is set in the management cluster, by an AKODeploymentConfig object named install-ako-for-all.

To customize the AKO configuration for a workload cluster, follow the process outlined below. This is a high-level description; see the other sections in this topic for complete, step-by-step examples.

  1. In the management cluster, create or edit an AKODeploymentConfig object definition to include:

    • Customization settings under spec, for example:

      • VIP network settings under spec.dataNetwork
      • A Service Engine Group setting under spec.serviceEngineGroup
      • L7 ingress settings under spec.extraConfigs
    • A spec.clusterSelector.matchLabels block with label-value pairs that determine which clusters the custom AKODeploymentConfig applies to.

    Note: Do not edit the default AKODeploymentConfig object, install-ako-for-all.

  2. Apply the custom configuration to the workload cluster by running kubectl label cluster to give the cluster a label-value pair that matches one of the spec.clusterSelector.matchLabels label-value pairs in the custom AKODeploymentConfig object.
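
For example, you can list the AKODeploymentConfig objects that already exist in the management cluster, including the default install-ako-for-all, by using the adc short name that the procedures later in this topic also use:

kubectl get adc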

Example NSX Advanced Load Balancer Configuration File

You configure NSX Advanced Load Balancer for Tanzu Kubernetes Grid through an AKODeploymentConfig custom resource (CR).

Here is an example AKODeploymentConfig definition. You can customize all of the settings in this file.

apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
kind: AKODeploymentConfig
metadata:
  name: install-ako-for-non-l7-wc
spec:
  adminCredentialRef:
    name: avi-controller-credentials
    namespace: tkg-system-networking
  certificateAuthorityRef:
    name: avi-controller-ca
    namespace: tkg-system-networking
  cloudName: Default-Cloud
  clusterSelector:
    matchLabels:
      wc: default
  controller: 10.186.39.152
  dataNetwork:
    cidr: 10.186.32.0/20
    name: VM Network
  extraConfigs:
    cniPlugin: antrea
    disableStaticRouteSync: true
    image:
      pullPolicy: IfNotPresent
      repository: projects.registry.vmware.com/tkg/ako
      version: v1.3.2_vmware.1
    ingress:
      defaultIngressController: false
      disableIngressClass: true
  serviceEngineGroup: Default-Group
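
For example, assuming you save the definition above in a file named install-ako-for-non-l7-wc.yaml (a placeholder file name) and have a workload cluster named wc-1, the following sketch applies the configuration in the management cluster context and then labels the cluster so that it matches the wc: default pair under spec.clusterSelector.matchLabels:

# Create the custom AKODeploymentConfig object in the management cluster.
kubectl apply -f install-ako-for-non-l7-wc.yaml

# Label the workload cluster so that the custom configuration applies to it.
kubectl label cluster wc-1 wc="default"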

Configure L7 Ingress with NSX Advanced Load Balancer for Workload Clusters

Optionally, you can configure L7 ingress with NSX Advanced Load Balancer for your workload clusters. Before you configure L7 ingress, review the following:

Note: If Avi Kubernetes Operator (AKO) uses the ClusterIP service type, configure a dedicated SE group for each cluster. Each SE group can be used by only one cluster; otherwise, AKO crashes and throws errors.

To configure L7 ingress for workload clusters, edit or create a YAML file for your AKODeploymentConfig objects on the management cluster. If you have an existing management cluster, upgrade to Tanzu Kubernetes Grid 1.4 before editing or creating AKODeploymentConfig objects. L7 ingress is applied to the workload clusters that match spec.clusterSelector in the AKODeploymentConfig.

  1. Set the context of kubectl to your management cluster by running:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  2. Create a yaml file for your AKODeploymentConfig object following the template below for L7 ingress settings.

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          test-node-network-list: "l7-net"
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - cidrs:
                - 10.185.32.0/20
              networkName: VM Network
          serviceType: ClusterIP                                    # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    
  3. Apply the configuration in the yaml file to the management cluster:

    kubectl apply -f ./FILE-NAME.yaml
    

    Where FILE-NAME is the name you choose for your AKODeploymentConfig file.

  4. Assign a label to workload clusters that matches one of the clusterSelector.matchLabels definitions, to specify the custom L7 configuration:


    kubectl label cluster WORKLOAD-CLUSTER LABEL-NAME="LABEL-VALUE"
    

    Where:

    • WORKLOAD-CLUSTER is the name of your workload cluster.

    • LABEL-NAME is a label name that you set under matchLabels in the AKO configuration.

    • LABEL-VALUE is a label value that you set in the AKO configuration.

    For example, to set workload cluster wc-1 to use the AKO configuration defined above:

    kubectl label cluster wc-1 test-node-network-list="l7-net"
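
    To confirm that the L7 settings reached the cluster, switch kubectl to the workload cluster context and check the AKO deployment, for example by using the commands that appear later in this topic. This is a verification sketch; the exact fields in the ConfigMap output depend on your AKO version:

    # In the workload cluster context, confirm that the AKO pod is running.
    kubectl get pod -n avi-system

    # Inspect the AKO configuration and check the ingress-related settings,
    # such as the service type and the shard VS size.
    kubectl get cm avi-k8s-config -n avi-system -o yaml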
    

Create Multiple NSX Advanced Load Balancer Configurations for Different Workload Clusters

You can use AKODeploymentConfig objects to create multiple AKO configurations in your management cluster. When you add a label to a workload cluster that matches the spec.clusterSelector.matchLabels of an AKODeploymentConfig object, you deploy that AKO configuration to the workload cluster.

For more information about the AKO, see Install NSX Advanced Load Balancer.

  1. Set the context of kubectl to your management cluster:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  2. Create a yaml file for your AKODeploymentConfig object, and add the required AKO configurations, separated by ---. The template below shows the AKODeploymentConfig object specifications for SE groups, VIP Networks, and L7 ingress for workload clusters.

    #Example AKODeploymentConfig object for SE group
    
    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: ADC-OBJECT-NAME
    spec:
      clusterSelector:
        matchLabels:
          service_engine_group: "LABEL-VALUE-SEG"
      serviceEngineGroup: SEG-NAME
    
    ---
    
    #Example AKODeploymentConfig object for VIP Networks
    
    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: ADC-OBJECT-NAME
    spec:
      clusterSelector:
        matchLabels:
          vip_network: "LABEL-VALUE-VIP"
      dataNetwork:
        cidr: NETWORK-CIDR
        name: NETWORK-NAME
    
    ---
    
    #Example AKODeploymentConfig object for L7 Ingress
    
    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: ADC-OBJECT-NAME
    spec:
      clusterSelector:
        matchLabels:
          enable_l7: "LABEL-VALUE-L7"
      serviceEngineGroup: SEG-NAME
      extraConfigs:
        disableStaticRouteSync: false                               
        ingress:
          disableIngressClass: false                                
          nodeNetworkList:                                          
            - cidrs:
                - NETWORK-CIDR
              networkName: NETWORK-NAME
          serviceType: ClusterIP                                   
          shardVSSize: MEDIUM     
    

    Where:

    • ADC-OBJECT-NAME is a unique name you choose for the AKODeploymentConfig object.
    • SEG-NAME is the name of the SE group you want to assign to a workload cluster.
    • NETWORK-CIDR is the CIDR range of the network you want to assign to your workload clusters' load balancers.
    • NETWORK-NAME is the name of the network you want to assign to a workload cluster's load balancers.
    • LABEL-VALUE-* settings are label values that you set under matchLabels in the AKO configuration, for the label names service_engine_group, vip_network, or enable_l7.

      Note: All settings under extraConfigs are required in the AKODeploymentConfig that configures L7 ingress for workload clusters.

      The example below shows two SE group AKODeploymentConfig objects, ready to assign to different workload clusters by using the label values seg-1 and seg-2:

      apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
      kind: AKODeploymentConfig
      metadata:
        name: install-ako-use-seg-1
      spec:
        clusterSelector:
          matchLabels:
            service_engine_group: "seg-1"
        serviceEngineGroup: seg-1
      
      ---
      
      apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
      kind: AKODeploymentConfig
      metadata:
        name: install-ako-use-seg-2
      spec:
        clusterSelector:
          matchLabels:
            service_engine_group: "seg-2"
        serviceEngineGroup: seg-2
      
      
  3. Apply the configuration in the yaml file to the management cluster:

    kubectl apply -f ./FILE-NAME.yaml
    

    Where FILE-NAME is the name you choose for your AKODeploymentConfig file.

  4. Assign labels to workload clusters by running:


    kubectl label cluster WORKLOAD-CLUSTER LABEL-NAME="LABEL-VALUE"
    

    Where:

    • WORKLOAD-CLUSTER is the name of your workload cluster.

    • LABEL-NAME is a label name that you set under matchLabels in the AKO configuration, for example, service_engine_group, vip_network, or enable_l7.

    • LABEL-VALUE is a label value that you set in the AKO configuration.

    For example, the following commands deploy SE group seg-1 in workload clusters 1 and 2, and SE group seg-2 in workload clusters 3 and 4:

    kubectl label cluster wc-1 service_engine_group="seg-1"
    kubectl label cluster wc-2 service_engine_group="seg-1"
    kubectl label cluster wc-3 service_engine_group="seg-2"
    kubectl label cluster wc-4 service_engine_group="seg-2"
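
    To confirm which AKO configuration each cluster will receive, you can view the labels assigned to the workload clusters from the management cluster context, for example:

    # Show the labels on each workload cluster.
    kubectl get clusters --show-labels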
    

Update the Avi Certificate

Tanzu Kubernetes Grid authenticates to the Avi Controller by using certificates. When these certificates near expiration, update them by using the Tanzu CLI. You can update the certificates in an existing workload cluster, or in a management cluster for use by new workload clusters. Newly-created workload clusters obtain their Avi certificate from their management cluster.

Update the Avi Certificate in an Existing Workload Cluster

Updating the Avi certificate in an existing workload cluster is performed through the workload cluster context in the Tanzu CLI. Before performing this task, ensure that you have the workload cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the workload cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.

  1. In the Tanzu CLI, run the following command to switch the context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-secret secret in the avi-system namespace:

    kubectl edit secret avi-secret -n avi-system
    

    In the text editor that opens, update the certificateAuthorityData field with your new base64-encoded certificate data.

  3. Save the changes.
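
    Alternatively, instead of editing the secret interactively in steps 2 and 3, you can patch it in one command. The following is a sketch, assuming NEW-CA-BASE64 is your new base64-encoded certificate data and that the secret stores it under the certificateAuthorityData key described above:

    kubectl patch secret avi-secret -n avi-system --type merge -p '{"data":{"certificateAuthorityData":"NEW-CA-BASE64"}}'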

  4. Run the following command to list the Avi Kubernetes Operator (AKO) pods in your environment:

    kubectl get pod -n avi-system
    

    Record the AKO pod names in the output. The names are numbered starting from 0; for example, ako-0 indicates a single AKO pod in the environment.

  5. Run the following command to restart the AKO pods:

    kubectl delete pod ako-NUMBER -n avi-system
    

    Where NUMBER is the number in the AKO pod name that you recorded in the previous step, for example, 0 in ako-0.

Update the Avi Certificate in a Management Cluster

Workload clusters obtain their Avi certificates from their management cluster. This procedure updates the Avi certificate in a management cluster. The management cluster then includes the updated certificate in any new workload clusters that it creates.

Before performing this task, ensure that you have the management cluster context and the new base64 encoded Avi certificate details. For more information on obtaining the management cluster context, see Retrieve Tanzu Kubernetes Cluster kubeconfig.

  1. In the Tanzu CLI, run the following command to switch the context to the management cluster:

    kubectl config use-context MANAGEMENT-CLUSTER-CONTEXT
    
  2. Run the following command to update the avi-controller-ca secret in the tkg-system-networking namespace:

    kubectl edit secret avi-controller-ca -n tkg-system-networking
    

    In the text editor that opens, update the certificateAuthorityData field with your new base64-encoded certificate data.

  3. Save the changes.

  4. Run the following command to obtain the name of the AKO operator controller manager pod:

    kubectl get pod -n tkg-system-networking
    

    Note down the random string at the end of the ako-operator-controller-manager pod name in the output. You will need this string to restart the pod in the next step.

  5. Run the following command to restart the AKO pods:

    kubectl delete po ako-operator-controller-manager-RANDOM-STRING -n tkg-system-networking
    

    Where RANDOM-STRING is the string that you noted down in the previous step.
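
    If you prefer not to copy the random suffix manually, the following sketch finds the pod by its name prefix and deletes it in one pass:

    kubectl get pod -n tkg-system-networking -o name | grep ako-operator-controller-manager | xargs kubectl delete -n tkg-system-networking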

Add an SE Group for NSX Advanced Load Balancer

The NSX Advanced Load Balancer Essentials Tier has limited high-availability (HA) capabilities. To distribute the load balancer services to different service engine groups (SEGs), create additional SEGs on the Avi Controller, and create a new AKO configuration object (AKODeploymentConfig object) in a YAML file in the management cluster. Alternatively, you can update an existing AKODeploymentConfig object in the management cluster with the name of the new SEG.

  1. In the Avi Controller UI, go to Infrastructure > Service Engine Groups, and click CREATE to create the new SEG.

    Create Service Engine Group

  2. Create the SEG as follows:

    Create Service Engine Group - Basic

  3. In a terminal, do the following depending on whether you want to create a new AKODeploymentConfig object for the new SEG or update an existing AKODeploymentConfig object:

    • Create a new AKODeploymentConfig object:

      1. Run the following command to open the text editor.

        vi FILE_NAME
        

        Where FILE_NAME is the name of the AKODeploymentConfig YAML file that you want to create.

      2. Add the AKO configuration details in the file. The following is an example:

          apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
          kind: AKODeploymentConfig
          metadata:
            name: install-ako-for-all
          spec:
            adminCredentialRef:
              name: avi-controller-credentials
              namespace: tkg-system-networking
            certificateAuthorityRef:
              name: avi-controller-ca
              namespace: tkg-system-networking
            cloudName: Default-Cloud
            controller: 10.184.74.162
            dataNetwork:
              cidr: 10.184.64.0/20
              name: VM Network
            extraConfigs:
              cniPlugin: antrea
              disableStaticRouteSync: true
              image:
                pullPolicy: IfNotPresent
                repository: projects.registry.vmware.com/tkg/ako
                version: v1.4.3_vmware.1
              ingress:
                defaultIngressController: false
                disableIngressClass: true
            serviceEngineGroup: SEG-1
        
      3. Save the file, and exit the text editor.

      4. Run the following command to apply the new configuration:

        kubectl apply -f FILE_NAME
        

        Where FILE_NAME is the name of the YAML file that you created.

    • Update an existing AKODeploymentConfig object:

      1. Run the following command to open the AKODeploymentConfig object:

        kubectl edit adc ADC_NAME
        

        Where ADC_NAME is the name of the AKODeploymentConfig object in the YAML file.

      2. Update the SEG name in the text editor that pops up.

      3. Save the file, and exit the text editor.

  4. Run the following command to verify that the new configuration is present in the management cluster:

    kubectl get adc ADC_NAME -o yaml
    

    Where ADC_NAME is the name of the AKODeploymentConfig object in the YAML file.

    In the output, verify that the spec.serviceEngineGroup field displays the name of the new SE group.

  5. Switch the context to the workload cluster by using the kubectl utility.

  6. Run the following command to view the AKO deployment information:

    kubectl get cm avi-k8s-config -n avi-system -o yaml
    

    In the output, verify that the SE group has been updated.

  7. Run the following command to verify that AKO is running:

    kubectl get pod -n avi-system
    

Change a Cluster's Control Plane HA Provider to NSX Advanced Load Balancer

In Tanzu Kubernetes Grid, you can change the control plane high availability (HA) provider of a cluster from Kube-VIP to NSX Advanced Load Balancer. The control plane HA provider must be the same in a management cluster as in its workload clusters. If a management cluster has NSX Advanced Load Balancer as its control plane HA provider, new workload clusters that it creates automatically have the same HA provider.

Prerequisites

Ensure that:

Procedure

  1. Add the control plane virtual IP address to the Avi Static IP Pool:

    1. In the Avi Controller UI, go to Infrastructure > Networks.
    2. Select the network that the cluster uses, and click Edit.
    3. Add the control plane virtual IP address to Static IP Address Pool, and click Save.
  2. In the Tanzu CLI, set the context of kubectl to your management cluster:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Ensure that the cluster has the control plane endpoint annotation by running:

    kubectl annotate --overwrite cluster CLUSTER-NAME -n CLUSTER-NAMESPACE tkg.tanzu.vmware.com/cluster-controlplane-endpoint="VIP"
    

    Where:

    • CLUSTER-NAME is the name of the cluster.
    • CLUSTER-NAMESPACE is the namespace that you use for the cluster.
    • VIP is the virtual IP address of the control plane endpoint.
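
    For example, for a hypothetical cluster wc-1 in the default namespace with control plane VIP 10.186.39.100:

    kubectl annotate --overwrite cluster wc-1 -n default tkg.tanzu.vmware.com/cluster-controlplane-endpoint="10.186.39.100"
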
  4. (Management clusters only) In the AKO operator secret, set Avi as the control plane HA provider:

    1. Retrieve the configuration values string from the AKO operator secret, decode it, and dump it into a YAML file:

      kubectl get secret CLUSTER_NAME-ako-operator-addon -n tkg-system --template="{{index .data \"values.yaml\" | base64decode}}" > values.yaml
      
    2. Modify the configuration values file to set avi_control_plane_ha_provider to true:

      yq e -i '.akoOperator.config.avi_control_plane_ha_provider=true' values.yaml
      

      The following is an example of the modified configuration values file:

      #@data/values
      #@overlay/match-child-defaults missing_ok=True
      ---
      akoOperator:
        avi_enable: true
        namespace: tkg-system-networking
        cluster_name: ha-mc-1
        config:
          avi_disable_ingress_class: true
          avi_ingress_default_ingress_controller: false
          avi_ingress_shard_vs_size: ""
          avi_ingress_service_type: ""
          avi_ingress_node_network_list: '""'
          avi_admin_credential_name: avi-controller-credentials
          avi_ca_name: avi-controller-ca
          avi_controller: 10.161.107.63
          avi_username: admin
          avi_password: Admin!23
          avi_cloud_name: Default-Cloud
          avi_service_engine_group: Default-Group
          avi_data_network: VM Network
          avi_data_network_cidr: 10.161.96.0/19
          avi_ca_data_b64: LS0tLS1CRUd[...]BVEUtLS0tLQ==
          avi_labels: '""'
          avi_disable_static_route_sync: true
          avi_cni_plugin: antrea
          avi_management_cluster_vip_network_name: VM Network
          avi_management_cluster_vip_network_cidr: 10.161.96.0/19
          avi_control_plane_endpoint_port: 6443
          avi_control_plane_ha_provider: true
      
      
    3. Re-encode the configuration values file into a base64-encoded string:

      cat values.yaml | base64
      

      Record the base64-encoded string from the command output.

    4. Open the AKO operator secret specification in an editor:

      kubectl edit secret CLUSTER_NAME-ako-operator-addon -n tkg-system
      
    5. Replace the data.values.yaml field with the new base64-encoded configuration values string that you recorded. Save the file.
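
    Alternatively, as a sketch that replaces steps 3 through 5, you can re-encode the file and patch the secret in a single command. This assumes a GNU base64 that supports the -w flag and uses the same CLUSTER_NAME placeholder as the earlier command:

    kubectl patch secret CLUSTER_NAME-ako-operator-addon -n tkg-system --type merge -p "{\"data\":{\"values.yaml\":\"$(base64 -w 0 < values.yaml)\"}}"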

  5. Before proceeding, confirm both of the following:

    • A service of type LoadBalancer exists in the cluster's namespace, with a name of the form CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane.
    • In the Avi Controller UI, under Applications > Virtual Services, a virtual service named CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane is listed, and its Health score is not red or grey.
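
    For the first check, you can, for example, query the service from the management cluster context and confirm that its TYPE is LoadBalancer and that its external IP shows the control plane VIP:

    kubectl get service CLUSTER-NAMESPACE-CLUSTER-NAME-control-plane -n CLUSTER-NAMESPACE
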
  6. Delete the Kube-VIP pods on all of the cluster's control plane VMs:

    1. Establish an SSH connection to the control plane VM:

      ssh -i PRIVATE-KEY capv@IP-ADDRESS
      

      Where:

      • PRIVATE-KEY is the private key that is paired with the public key, which is configured in the clusterconfig.yaml file.
      • IP-ADDRESS is the IP address of the control plane VM. You can see this listed in vCenter or retrieve it by running kubectl get vspheremachine -o yaml.
    2. Remove the kube-vip.yaml file:

      rm /etc/kubernetes/manifests/kube-vip.yaml
      
    3. Terminate the SSH connection to the control plane VM:

      exit
      
    4. Wait for a few minutes, and run the following command to ensure that all the Kube-VIP pods are deleted from the system:

      kubectl get po -A | grep "kube-vip"
      

      Ensure that the output of this command does not include any Kube-VIP pods.

  7. Open the KCP (Kubernetes Control Plane) object specification in an editor:

    kubectl edit kcp CLUSTER-NAME-control-plane -n CLUSTER-NAMESPACE
    

    Where:

    • CLUSTER-NAME is the name of the cluster.
    • CLUSTER-NAMESPACE is the namespace that you use for the cluster.
  8. In the KCP object specification, delete the entire files block under spec.kubeadmConfigSpec, and save the file.

    The following is an example of the KCP object specification after it is updated:

    spec:
      infrastructureTemplate:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: VSphereMachineTemplate
        name: ha-mc-1-control-plane
        namespace: tkg-system
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            timeoutForControlPlane: 8m0s
          controllerManager:
            extraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
          dns:
            imageRepository: projects.registry.vmware.com/tkg
            imageTag: v1.8.0_vmware.5
            type: CoreDNS
          etcd:
            local:
              dataDir: /var/lib/etcd
              extraArgs:
                cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
              imageRepository: projects.registry.vmware.com/tkg
              imageTag: v3.4.13_vmware.15
          imageRepository: projects.registry.vmware.com/tkg
          networking: {}
          scheduler:
            extraArgs:
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        initConfiguration:
          localAPIEndpoint:
            advertiseAddress: ""
            bindPort: 0
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            name: '{{ ds.meta_data.hostname }}'
        joinConfiguration:
          discovery: {}
          nodeRegistration:
            criSocket: /var/run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
            name: '{{ ds.meta_data.hostname }}'
        preKubeadmCommands:
        - hostname "{{ ds.meta_data.hostname }}"
        - echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
        - echo "127.0.0.1   localhost" >>/etc/hosts
        - echo "127.0.0.1   {{ ds.meta_data.hostname }}" >>/etc/hosts
        - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
        useExperimentalRetryJoin: true
        users:
        - name: capv
          sshAuthorizedKeys:
          - ssh-rsa AAAAB3NzaC1yc2[...]kx21vUu58cj
          sudo: ALL=(ALL) NOPASSWD:ALL
      replicas: 1
      rolloutStrategy:
        rollingUpdate:
          maxSurge: 1
        type: RollingUpdate
      version: v1.21.2+vmware.1
    
    

    The system triggers a rolling update after the KCP object is edited. Wait until this update completes. A new control plane VM that uses NSX Advanced Load Balancer as the HA provider is created, and the corresponding Kubernetes objects are updated to use the new VM.
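
    To monitor the rolling update, you can, for example, watch the cluster's machines being replaced from the management cluster context:

    kubectl get machines -n CLUSTER-NAMESPACE -w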
