Configuring L7 Ingress with NSX Advanced Load Balancer

After you install NSX Advanced Load Balancer using the steps in Install NSX Advanced Load Balancer, you can configure L7 ingress for your workload clusters using one of the following options:

  • L7 ingress in ClusterIP mode
  • L7 ingress in NodePortLocal mode
  • L7 ingress in NodePort mode
  • NSX ALB L4 ingress with Contour L7 ingress

For more information on these L7 ingress modes, see NSX ALB as an L4+L7 Ingress Service Provider.

Prerequisites

The following prerequisites apply to all of the L7 ingress options except NSX ALB L4 ingress with Contour L7 ingress.

L7 Ingress in ClusterIP Mode

To configure L7 ingress for workload clusters using ClusterIP mode, create a specification file for your AKODeploymentConfig objects on the management cluster. L7 ingress is applied to the workload clusters that match the selector defined in spec.clusterSelector of the AKODeploymentConfig.

Note: Each SE group can only be used by one workload cluster, so you need a dedicated AKODeploymentConfig per cluster for AKO to work in this mode.

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Create an SE group in the AVI Controller by following the procedure in Create a Service Engine Group in the AVI Networks documentation.

  3. Set the context of kubectl to your management cluster.

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  4. Create an AKODeploymentConfig specification file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          LABEL: LABEL-VALUE
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - networkName: NODE-NETWORK
              cidrs:
                - NODE-CIDR-RANGE
          serviceType: ClusterIP                                    # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    

    Where:

    • LABEL and LABEL-VALUE define the label and value used to assign this configuration to a workload cluster in a later step. For example, ako-l7-clusterip-01: "true". See the example after this list.
    • NODE-NETWORK and NODE-CIDR-RANGE are required only if disableStaticRouteSync is set to false. In that case, specify the name of the port group (PG) network that your nodes belong to and the associated CIDR that the CNI allocates to each node, which the node in turn assigns to its pods. For more information, see Node Network List in the Avi Networks documentation.
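
    For example, assuming you use the label ako-l7-clusterip-01 from the example above, the clusterSelector block in the AKODeploymentConfig would look like the following sketch:

    clusterSelector:
      matchLabels:
        ako-l7-clusterip-01: "true"
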
  5. Create the object.

    kubectl apply -f FILENAME.yaml
    

    Where FILENAME is the name of your AKODeploymentConfig file.
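
    (Optional) To confirm that the object was created on the management cluster, list the AKODeploymentConfig objects; the name test-node-network-list corresponds to the sample above:

    kubectl get akodeploymentconfig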

  6. Label one of your workload clusters to match the selector. Do not label more than one workload cluster.

    kubectl label cluster CLUSTER-NAME LABEL=LABEL-VALUE
    

    Where:

    • CLUSTER-NAME is the name of the workload cluster.
    • LABEL and LABEL-VALUE are the label and value you chose to match the AKODeploymentConfig to a workload cluster. For example, ako-l7-clusterip-01="true". A filled-in example follows this list.
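
    For example, with a hypothetical workload cluster named my-workload-cluster and the label from the example above:

    kubectl label cluster my-workload-cluster ako-l7-clusterip-01="true"
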
  7. Set the context of kubectl to the workload cluster.

    kubectl config use-context MY-WKLD-CLUSTER-admin@MY-WKLD-CLUSTER
    

    Where MY-WKLD-CLUSTER is the name of your workload cluster.

  8. Run the following command to verify that the service type in the AKO configuration changed from NodePort to ClusterIP.

    kubectl get cm avi-k8s-config -n avi-system -o=jsonpath='{.data.serviceType}'
    
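    If AKO has picked up the new configuration, the command returns the service type now stored in the avi-k8s-config ConfigMap:

    ClusterIP
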
  9. (Optional) To make AKO redeploy sooner with its new configuration, delete the AKO pod:

    kubectl delete pod ako-0 -n avi-system
    
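    AKO recreates the pod automatically. You can watch it return to the Running state with a standard check such as:

    kubectl get pods -n avi-system -w
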
  10. Create the ingress service specification file for the workload cluster, based on the Sample Ingress Service Specification code below.

  11. Deploy the ingress service by running:

    kubectl apply -f FILENAME
    

    Where FILENAME is the name of your ingress service specification file.

  12. In the Avi Controller UI, go to Applications > Virtual Services to see an L7 virtual service similar to the following:

    Avi Controller UI lists the status of L7 virtual services.

L7 Ingress in NodePortLocal Mode

When you configure L7 ingress for workload clusters using NodePortLocal mode, workload clusters can share SE groups, and the NSX Advanced Load Balancer SEs send traffic directly to the pods in your workload cluster, which ensures efficient routing from the SEs to the worker nodes where the pods run.

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Set the context of kubectl to your management cluster by running:

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Create an AKODeploymentConfig specification file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: npl-enabled
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          npl-enabled: "true"
      controlPlaneNetwork:
        cidr: 10.191.240.0/20
        name: VM Network
      controller: 10.191.248.219
      dataNetwork:
        cidr: 10.191.240.0/20
        name: VM Network
      extraConfigs:
        cniPlugin: antrea               # required
        disableStaticRouteSync: false
        ingress:
          disableIngressClass: false
          nodeNetworkList:
            - cidrs:
              - 10.191.240.0/20
              networkName: VM Network
          serviceType: NodePortLocal    # required
          shardVSSize: MEDIUM
      serviceEngineGroup: Default-Group
    
  4. Create the object.

    kubectl apply -f FILENAME.yaml
    

    Where FILENAME is the name of your AKODeploymentConfig file.

  5. Create a cluster configuration file for the workload cluster that includes the following configurations (a minimal sketch of the relevant settings follows this list):

    • Set ANTREA_NODEPORTLOCAL to true.
    • If you are using NSX Advanced Load Balancer for the control plane endpoint, do not set VSPHERE_CONTROL_PLANE_ENDPOINT unless you have added the address manually to the Static IP pool. See VSPHERE_CONTROL_PLANE_ENDPOINT under vSphere in the Tanzu CLI Configuration File Variable Reference.
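
    The following is a minimal sketch of the relevant variables in the cluster configuration file. The cluster name is hypothetical, and your file also needs the standard variables for your vSphere environment:

    CLUSTER_NAME: npl-workload-cluster
    CNI: antrea
    ANTREA_NODEPORTLOCAL: true
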
  6. Create a workload cluster by running:

    tanzu cluster create -f FILENAME
    

    Where FILENAME is the name of the cluster configuration file for the workload cluster.
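
    When creation finishes, you can confirm that the new cluster is listed and running:

    tanzu cluster list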

  7. Label the workload cluster to match the AKODeploymentConfig specification file by running:

    kubectl label cluster CLUSTER-NAME npl-enabled="true"
    

    Where CLUSTER-NAME is the name of the workload cluster.

  8. Obtain the kubeconfig settings of the workload cluster by running:

    tanzu cluster kubeconfig get CLUSTER-NAME --admin
    

    Where CLUSTER-NAME is the name of your workload cluster.

  9. Switch the context to the workload cluster by running:

    kubectl config use-context CLUSTER-NAME-admin@CLUSTER-NAME
    

    Where CLUSTER-NAME is the name of the workload cluster.

  10. (Optional) To make AKO redeploy sooner with its new configuration, delete the AKO pod:

    kubectl delete pod ako-0 -n avi-system
    
  11. Create the ingress service specification file for the workload cluster, based on the Sample Ingress Service Specification code below.

  12. Deploy the ingress service by running:

    kubectl apply -f FILENAME
    

    Where FILENAME is the name of your ingress service specification file.

  13. Go to the Avi Controller UI and change the display to View VS Tree. Verify that you can view the ingress service that you deployed in the NodePortLocal mode.

    The Avi Controller VS tree shows the deployed ingress service, for example at 192.168.14.32:61000.

L7 Ingress in NodePort Mode

When you configure L7 ingress for workload clusters using NodePort mode, workload clusters can share SE groups.

Note: With this option, the Services backing your workloads must be of type NodePort instead of ClusterIP, even when they are referenced by an Ingress object. This ensures that NodePorts are created on the worker nodes and traffic can flow through the SEs to the pods via those NodePorts (see the example below).
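
For example, in this mode the coffee-svc Service from the Sample Ingress Service Specification below would be declared with type: NodePort rather than type: ClusterIP. A sketch of that one change:

apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: coffee
  type: NodePort   # NodePort instead of ClusterIP when using NodePort mode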

  1. Ensure you meet the prerequisites in Prerequisites above.

  2. Set the context of kubectl to your management cluster.

    kubectl config use-context MY-MGMT-CLUSTER-admin@MY-MGMT-CLUSTER
    

    Where MY-MGMT-CLUSTER is the name of your management cluster.

  3. Create the AKODeploymentConfig specification file for the new configuration. Set the parameters as shown in the following sample:

    apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1
    kind: AKODeploymentConfig
    metadata:
      name: test-node-network-list
    spec:
      adminCredentialRef:
        name: avi-controller-credentials
        namespace: tkg-system-networking
      certificateAuthorityRef:
        name: avi-controller-ca
        namespace: tkg-system-networking
      cloudName: Default-Cloud
      clusterSelector:
        matchLabels:
          LABEL: LABEL-VALUE
      controller: 10.185.43.245
      dataNetwork:
        cidr: 10.185.32.0/20
        name: VM Network
      extraConfigs:
        disableStaticRouteSync: false                               # required
        image:
          pullPolicy: IfNotPresent
          repository: projects.registry.vmware.com/tkg/ako
          version: v1.3.2_vmware.1
        ingress:
          disableIngressClass: false                                # required
          nodeNetworkList:                                          # required
            - cidrs:
                - 10.185.32.0/20
              networkName: VM Network
          serviceType: NodePort                                     # required
          shardVSSize: MEDIUM                                       # required
      serviceEngineGroup: Default-Group
    

    Where LABEL and LABEL-VALUE define the label and value needed to assign this configuration to workload clusters in a later step. For example, ako-l7-nodeport: "true".

  4. Create the object.

    kubectl apply -f FILENAME.yaml
    

    Where FILENAME is the name of your AKODeploymentConfig file.

  5. Label your workload clusters to match the selector.

    kubectl label cluster CLUSTER-NAME LABEL=LABEL-VALUE
    

    Where:

    • CLUSTER-NAME is the name of the workload cluster.
    • LABEL and LABEL-VALUE are the label and value you chose to match the AKODeploymentConfig to a workload cluster. For example, ako-l7-nodeport="true".
  6. (Optional) To make AKO redeploy sooner with its new configuration, delete the AKO pod:

    kubectl delete pod ako-0 -n avi-system
    
  7. Create the ingress service specification file for the workload cluster, based on the Sample Ingress Service Specification code below.

  8. Deploy the ingress service by running:

    kubectl apply -f FILENAME
    

    Where FILENAME is the name of your ingress service specification file.

  9. In the Avi Controller UI, go to Applications > Virtual Services to see an L7 virtual service similar to the following:

    Avi Controller UI lists the status of L7 virtual services.

NSX ALB L4 Ingress with Contour L7 Ingress

When you configure NSX Advanced Load Balancer L4 with Contour L7 ingress, workload clusters can share SE groups. However, you will not have the NSX Advanced Load Balancer L7 ingress capabilities.

To configure Contour L7 ingress, follow the procedure in Implement Ingress Control with Contour. You do not need to assign a license to the Avi Controller or set up IPAM or DNS.

Delete an L7 Ingress Service

If you no longer need an L7 ingress service, delete it from the workload cluster.

  1. Switch the kubectl context to the workload cluster:

    kubectl config use-context WORKLOAD-CLUSTER-CONTEXT
    
  2. Delete the L7 Ingress service:

    kubectl delete service SERVICE-NAME -n NAMESPACE
    

    Where:

    • SERVICE-NAME is the name of the L7 Ingress service that you want to delete.
    • NAMESPACE is the namespace in which the L7 ingress service is running. If you are not sure of the service name or namespace, see the example after this list.
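
    If you need to find the service name and its namespace first, you can list services across all namespaces:

    kubectl get services --all-namespaces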

Note: Deleting a workload cluster’s L7 ingress service does not delete the corresponding L7 ingress Virtual Service (VS); it remains visible in the Avi Controller UI until the workload cluster is deleted. In contrast, deleting an L4 load balancer service does delete the corresponding L4 ingress Virtual Service (VS).

Sample Ingress Service Specification

Use the following code for the specification file that you pass to kubectl apply to create the ingress service.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: avi-lb # required, make sure the ingress class is not None
  rules:
    - host: cafe.avilocal.lol
      http:
        paths:
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffee-svc
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: coffee
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: coffee
  replicas: 2
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: nginx
          image: harbor-repo.vmware.com/dockerhub-proxy-cache/library/nginx
          ports:
            - containerPort: 80
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
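
After you apply this specification and the Avi Controller assigns a virtual IP (VIP) to the virtual service, you can test the route. INGRESS-VIP is a placeholder for the address reported by kubectl get ingress cafe-ingress or shown in the Avi Controller UI:

curl -H "Host: cafe.avilocal.lol" http://INGRESS-VIP/coffee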