After you install NSX Advanced Load Balancer using the steps in Install NSX Advanced Load Balancer, you can configure L7 ingress for your Tanzu Kubernetes (workload) clusters using one of the following options.

  • L7 ingress in ClusterIP mode. This option enables NSX Advanced Load Balancer L7 ingress capabilities, including sending traffic directly from the service engines (SEs) to the pods, which avoids the multiple hops that other ingress solutions require when sending packets from the load balancer to the node where the pod runs. This option is fully supported by VMware. However, each workload cluster needs a dedicated SE group for Avi Kubernetes Operator (AKO) to work, which could increase the number of SEs you need in your environment.
  • L7 ingress in NodePortLocal mode. Like the option above, this option avoids the potential extra hop when sending traffic from the NSX Advanced Load Balancer SEs to the pods by targeting the nodes where the pods run, in this case by leveraging the integration between NSX Advanced Load Balancer and Antrea. With this option, the workload clusters can share SE groups. However, VMware support will not assist in configuring or troubleshooting this option.
  • L7 ingress in NodePort mode. NodePort mode is the default mode when AKO is installed on Tanzu Kubernetes Grid. This option allows your workload clusters to share SE groups and is fully supported by VMware. In this mode, traffic will leverage standard Kubernetes NodePort behavior, including its limitations, and will require services to be of type NodePort.
  • NSX ALB L4 ingress with Contour L7 ingress. This option lets workload clusters share SE groups, is supported by VMware, and requires minimal setup. However, you will not have all the NSX Advanced Load Balancer L7 ingress capabilities.
                                 NSX ALB L7      NSX ALB L7          NSX ALB L7     NSX ALB L4 with
                                 ClusterIP mode  NodePortLocal mode  NodePort mode  Contour L7
Minimal SE groups required       N               Y                   Y              Y
VMware supported                 Y               N                   Y              Y
NSX ALB L7 ingress capabilities  Y               Y                   Y              N

Prerequisites

The following prerequisites apply to all L7 ingress options except the one that uses Contour L7 ingress.

L7 Ingress in ClusterIP Mode

To configure L7 ingress for workload clusters using ClusterIP mode, create a YAML file for your AKODeploymentConfig objects on the management cluster. L7 ingress is applied to the workload clusters that match the spec.clusterSelector setting in the AKODeploymentConfig.

Note: Each SE group can only be used by one workload cluster, so you need a dedicated AKODeploymentConfig per cluster for AKO to work in this mode.

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Create an SE group in the AVI Controller by following the procedure in Create a Service Engine Group in the AVI Networks documentation.

  3. Set the context of kubectl to your management cluster.

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  4. Create an AKODeploymentConfig YAML file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          LABEL: LABEL-VALUE
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - cidrs:
                - 10.185.32.0/20
              networkName: VM Network
          serviceType: ClusterIP                                    # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    

    Where LABEL and LABEL-VALUE define the label and value needed to assign this configuration to a workload cluster in a later step. For example, ako-l7-clusterip-01: "true".

  5. Create the object.

    kubectl apply -f ./FILE-NAME.yaml
    

    Where FILE-NAME is the name you choose for your AKODeploymentConfig file.
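
    You can confirm that the object was created by listing the AKODeploymentConfig objects on the management cluster:

    kubectl get akodeploymentconfigs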

  6. Label one of your workload clusters to match the selector. Do not label more than one workload cluster.

    kubectl label cluster CLUSTER-NAME LABEL=LABEL-VALUE
    

    Where:

    • CLUSTER-NAME is the name of the workload cluster.
    • LABEL and LABEL-VALUE are the label and value you choose to match an AKODeploymentConfig to a workload cluster. For example, ako-l7-clusterip-01="true".
  7. Set the context of kubectl to the workload cluster.

    kubectl config use-context MY-WKLD-CLUSTER-admin@MY-WKLD-CLUSTER
    

    Where MY-WKLD-CLUSTER is the name of your workload cluster.

  8. Run the following command to verify that the service type changed from NodePort to ClusterIP.

    kubectl get cm avi-k8s-config -n avi-system -o=jsonpath='{.data.serviceType}'
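
    If the AKODeploymentConfig has been matched to the cluster, the command should print:

    ClusterIP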
    
  9. Delete the AKO pod so it redeploys and reads the new configuration file.

    kubectl delete pod ako-0 -n avi-system
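
    After the pod is deleted, a replacement ako-0 pod is created automatically and should return to the Running state. You can check its status with:

    kubectl get pods -n avi-system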
    
  10. In the Avi Controller UI, go to Applications > Virtual Services to see an L7 virtual service similar to the following:

    Avi Controller UI lists the status of L7 virtual services.
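
To verify the configuration end to end, you can create an Ingress in the workload cluster and confirm that a corresponding virtual service appears in the Avi Controller. The following manifest is a minimal sketch only; test-ingress, test.example.com, and test-service are placeholder names, and test-service must refer to an existing ClusterIP Service in your cluster. Depending on your Kubernetes and AKO versions, you might need the networking.k8s.io/v1beta1 Ingress API instead.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test-ingress               # placeholder name
    spec:
      rules:
        - host: test.example.com       # placeholder hostname
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: test-service # existing ClusterIP Service in your cluster
                    port:
                      number: 80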

L7 Ingress in NodePortLocal Mode

When you configure L7 ingress for workload clusters using NodePortLocal mode, workload clusters can share SE groups and the NSX Advanced Load Balancer SEs send traffic directly to the pods in your workload cluster.

Note: Although this option allows workload clusters to share SE groups and routes traffic efficiently from the NSX Advanced Load Balancer SEs to the worker nodes where the pods run, it is not currently supported by VMware.

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Set the context of kubectl to your management cluster.

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Pause kapp-controller reconciliation of the ako-operator app.

    kubectl patch app ako-operator -n tkg-system --type "json" -p '[{"op":"replace","path":"/spec/paused","value":true}]'
    
  4. Add NodePortLocal to the AKODeploymentConfig.

    kubectl patch crd akodeploymentconfigs.networking.tkg.tanzu.vmware.com --type "json" -p '[{"op":"add","path":"/spec/versions/0/schema/openAPIV3Schema/properties/spec/properties/extraConfigs/properties/ingress/properties/serviceType/enum/-","value":"NodePortLocal"}]'
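
    You can confirm that NodePortLocal now appears in the list of allowed values for serviceType:

    kubectl get crd akodeploymentconfigs.networking.tkg.tanzu.vmware.com -o jsonpath='{.spec.versions[0].schema.openAPIV3Schema.properties.spec.properties.extraConfigs.properties.ingress.properties.serviceType.enum}'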
    
  5. Create an AKODeploymentConfig YAML file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: ako-l7-npl
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          LABEL: LABEL-VALUE
      controller: 192.168.14.190
      dataNetwork:
        cidr: 192.168.15.0/24
        name: VIP-VLAN15-PG
      extraConfigs:
        cniPlugin: antrea
        disableStaticRouteSync: false                       # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                        # required
          nodeNetworkList:                                  # required
            - cidrs:
                - 192.168.14.0/24
              networkName: TKG-VLAN14-PG
          serviceType: NodePortLocal                        # required
          shardVSSize: MEDIUM                               # required
      serviceEngineGroup: Default-Group
    
    

    Where LABEL and LABEL-VALUE define the label and value needed to assign this configuration to workload clusters in a later step. For example, ako-l7-npl: "true".

  6. Create the object.

    kubectl apply -f ./FILE-NAME.yaml
    

    Where FILE-NAME is the name you choose for your AKODeploymentConfig file.

  7. Resume kapp-controller reconciliation of the ako-operator app.

    kubectl patch app ako-operator -n tkg-system --type "json" -p '[{"op":"replace","path":"/spec/paused","value":false}]'
    
  8. Create a workload cluster configuration file by following the procedure in Create a Tanzu Kubernetes Cluster Configuration File and include the following property:

    ANTREA_NODEPORTLOCAL: true
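
    Then deploy the workload cluster from that file with the Tanzu CLI. For example, where my-npl-cluster and my-npl-cluster-config.yaml are placeholder names for your cluster and configuration file:

    tanzu cluster create my-npl-cluster --file my-npl-cluster-config.yaml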
    
  9. Label the workload cluster to match the selector.

    kubectl label cluster CLUSTER-NAME LABEL=LABEL-VALUE
    

    Where:

    • CLUSTER-NAME is the name of the workload cluster.
    • LABEL and LABEL-VALUE are the label and value you choose to match an AKODeploymentConfig to a workload cluster. For example, ako-l7-npl="true".
  10. Set the context of kubectl to the workload cluster.

    kubectl config use-context MY-WKLD-CLUSTER-admin@MY-WKLD-CLUSTER
    

    Where MY-WKLD-CLUSTER is the name of your workload cluster.

  11. Run the following command to verify that the service type changed from NodePort to NodePortLocal.

    kubectl get cm avi-k8s-config -n avi-system -o=jsonpath='{.data.serviceType}'
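
    If the AKODeploymentConfig has been matched to the cluster, the command should print:

    NodePortLocal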
    
  12. Delete the AKO pod so it redeploys and reads the new configuration file.

    kubectl delete pod ako-0 -n avi-system
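
    With NodePortLocal enabled, Antrea allocates node ports for the pods that back your ingress services and records them in the nodeportlocal.antrea.io annotation on each pod. As a quick check, you can inspect one of those pods, where POD-NAME and NAMESPACE are placeholders for a backend pod and its namespace:

    kubectl get pod POD-NAME -n NAMESPACE -o jsonpath='{.metadata.annotations.nodeportlocal\.antrea\.io}'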
    

L7 Ingress in NodePort Mode

When you configure L7 ingress for workload clusters using NodePort mode, workload clusters can share SE groups.

Note: With this option, the services of your workloads must be set to NodePort instead of ClusterIP even when accompanied by an ingress object. This ensures that NodePorts are created on the worker nodes and traffic can flow through the SEs to the pods via the NodePorts.
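
For example, a backend Service for your ingress in this mode might look like the following sketch, where test-service, the app: test-app selector, and the port numbers are placeholders for your own workload:

    apiVersion: v1
    kind: Service
    metadata:
      name: test-service            # placeholder name
    spec:
      type: NodePort                # must be NodePort, not ClusterIP, in this mode
      selector:
        app: test-app               # placeholder pod label
      ports:
        - port: 80
          targetPort: 8080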

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Set the context of kubectl to your management cluster.

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Create an AKODeploymentConfig YAML file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          LABEL: LABEL-VALUE
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - cidrs:
                - 10.185.32.0/20
              networkName: VM Network
          serviceType: NodePort                                     # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    

    Where LABEL and LABEL-VALUE define the label and value needed to assign this configuration to workload clusters in a later step. For example, ako-l7-nodeport: "true".

  4. Create the object.

    kubectl apply -f ./FILE-NAME.yaml
    

    Where FILE-NAME is the name you choose for your AKODeploymentConfig file.

  5. Label your workload clusters to match the selector.

    kubectl label cluster CLUSTER-NAME LABEL=LABEL-VALUE
    

    Where:

    • CLUSTER-NAME is the name of the workload cluster.
    • LABEL and LABEL-VALUE are the label and value you choose to match an AKODeploymentConfig to a workload cluster. For example, ako-l7-nodeport="true".
  6. Delete the AKO pod so it redeploys and reads the new configuration file.

    kubectl delete pod ako-0 -n avi-system
    
  7. In the Avi Controller UI, go to Applications > Virtual Services to see an L7 virtual service similar to the following:

    Avi Controller UI lists the status of L7 virtual services.

NSX ALB L4 Ingress with Contour L7 Ingress

When you configure NSX Advanced Load Balancer L4 with Contour L7 ingress, workload clusters can share SE groups. However, you will not have the NSX Advanced Load Balancer L7 ingress capabilities.

To configure Contour L7 ingress, follow the procedure in Implementing Ingress Control with Contour. You do not need to assign a license to the Avi Controller or set up IPAM or DNS.

Delete an L7 Ingress Service

Delete an L7 Ingress service from the workload cluster if you do not want to use it anymore.

  1. Run the following kubectl command to switch the context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Delete the L7 Ingress service:

    kubectl delete service SERVICE-NAME -n NAMESPACE
    

    Where:

    • SERVICE-NAME is the name of the L7 Ingress service that you want to delete.
    • NAMESPACE is the namespace in which the L7 ingress service is running.

NOTE: Deleting a workload cluster's L7 ingress service does not delete the corresponding L7 ingress Virtual Service (VS); it remains visible in the Avi Controller UI until the workload cluster is deleted. In contrast, deleting an L4 load balancer service does delete the corresponding L4 ingress Virtual Service (VS).
