Configuring load balancing involves setting up a Kubernetes LoadBalancer service or Ingress resource and configuring the NCP replication controller.

You can create a layer 4 load balancer by configuring a Kubernetes service of type LoadBalancer. The service is allocated an IP address from the external IP block that you configure. The load balancer is exposed on this IP address and the service port. You can specify the name or ID of an IP pool using the loadBalancerIP spec in the LoadBalancer definition. The LoadBalancer service's IP will be allocated from this IP pool. If the loadBalancerIP spec is empty, the IP will be allocated from the external IP block that you configure.
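As a sketch, a LoadBalancer service that draws its IP from a named NSX-T IP pool might look like the following (the service name, selector, and pool name lb-ip-pool are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tea-svc-lb
spec:
  type: LoadBalancer
  # Name or ID of an NSX-T IP pool. If this field is empty, the IP is
  # allocated from the configured external IP block instead.
  loadBalancerIP: lb-ip-pool
  selector:
    app: tea
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```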

You can create a layer 7 load balancer by configuring a Kubernetes Ingress resource. The Ingress resource is allocated an IP address from the external IP block that you configure. The load balancer is exposed on this IP address. Note that a Kubernetes service of type LoadBalancer is not supported as a backend for the Ingress resource.

Note:

You cannot assign a specific IP address for the LoadBalancer service or the Ingress resource. Any address you specify when creating the LoadBalancer service or Ingress resource will be ignored.

To configure load balancing in NCP, make the following changes in the ncp_rc.yml file:

  1. Set use_native_loadbalancer = True.

  2. (Optional) Set pool_algorithm to 'ROUND_ROBIN' or 'LEAST_CONNECTION/IP_HASH'. The default is 'ROUND_ROBIN'.

  3. (Optional) Set service_size = 'SMALL', 'MEDIUM', or 'LARGE'. The default is 'SMALL'.

The LEAST_CONNECTION/IP_HASH algorithm means that traffic from the same source IP address will be sent to the same backend pod.
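For example, the three options above might appear as follows in the ncp.ini configuration embedded in ncp_rc.yml. The [nsx_v3] section name is an assumption based on common NCP layouts; verify it against your NCP version:

```ini
[nsx_v3]
use_native_loadbalancer = True
pool_algorithm = 'ROUND_ROBIN'
service_size = 'MEDIUM'
```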

The small NSX-T load balancer supports the following:

  • 10 NSX-T virtual servers.

  • 10 NSX-T pools.

  • 30 NSX-T pool members.

  • 8 ports for LoadBalancer services.

  • A total of 10 ports defined by the LoadBalancer services and Ingress resources.

  • A total of 30 endpoints referenced by the LoadBalancer services and Ingress resources.

The medium NSX-T load balancer supports the following:

  • 100 NSX-T virtual servers.

  • 100 NSX-T pools.

  • 300 NSX-T pool members.

  • 98 ports for LoadBalancer services.

  • A total of 100 ports defined by the LoadBalancer services and Ingress resources.

  • A total of 300 endpoints referenced by the LoadBalancer services and Ingress resources.

The large NSX-T load balancer supports the following:

  • 1000 NSX-T virtual servers.

  • 1000 NSX-T pools.

  • 3000 NSX-T pool members.

  • 998 ports for LoadBalancer services.

  • A total of 1000 ports defined by the LoadBalancer services and Ingress resources.

  • A total of 3000 endpoints referenced by the LoadBalancer services and Ingress resources.

After the load balancer is created, its size cannot be changed by updating the configuration file. It can, however, be changed through the NSX-T UI or API.

Ingress

  • NSX-T will create one layer 7 load balancer for Ingresses with TLS specification, and one layer 7 load balancer for Ingresses without TLS specification.

  • All Ingresses share a single IP address.

  • Ingresses without TLS specification will be hosted on HTTP Virtual Server (port 80).

  • Ingresses with TLS specification will be hosted on HTTPS Virtual Server (port 443). The load balancer will act as an SSL server and terminate the client SSL connection.

  • The order of creation of secrets and Ingress does not matter. If the secret object is present and there is an Ingress referencing it, the certificate will be imported in NSX-T. If the secret is deleted or the last Ingress referencing the secret is deleted, the certificate corresponding to the secret will be deleted.

  • Modification of Ingress by adding or removing the TLS section is supported. When the tls key is removed from the Ingress specification, the Ingress rules will be transferred from the HTTPS Virtual Server (port 443) to the HTTP Virtual Server (port 80). Similarly, when the tls key is added to Ingress specification, the Ingress rules are transferred from the HTTP Virtual Server (port 80) to the HTTPS Virtual Server (port 443).

  • If there are duplicate rules in Ingress definitions for a single cluster, only the first rule will be applied.

  • Only a single Ingress with a default backend is supported per cluster. Traffic not matching any Ingress rule will be forwarded to the default backend.

  • If there are multiple Ingresses with a default backend, only the first one will be configured. The others will be annotated with an error.

  • Wildcard URI matching is supported using the regular expression characters "." and "*". For example, the path "/coffee/.*" matches "/coffee/" followed by zero, one or more characters, such as "/coffee/", "/coffee/a", "/coffee/b", but not "/coffee", "/coffeecup" or "/coffeecup/a". Note that if the path contains "/*", for example "/tea/*", it will match "/tea" followed by zero, one or more characters, such as "/tea", "/tea/", "/teacup", "/teacup/", "/tea/a" or "/teacup/b". In this case, the regular expression special character "*" is acting as a wildcard character as well.

    An Ingress specification example:

    kind: Ingress
    metadata:
      name: cafe-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /tea/*        #Matches /tea, /tea/, /teacup, /teacup/, /tea/a, /teacup/b, etc.
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee/.*    #Matches /coffee/, /coffee/a but NOT /coffee, /coffeecup, etc.
            backend:
              serviceName: coffee-svc
              servicePort: 80

  • You can configure HTTP URL request rewrite by adding an annotation to the Ingress resource. For example,

    kind: Ingress
    metadata:
      name: cafe-ingress
      annotations:
        ncp/rewrite_target: "/"
    spec:
      rules:
      - host: cafe.example.com
        http:
          paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80

    The paths /tea and /coffee will be rewritten to / before the URL is sent to the backend service.
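For reference, an Ingress with a TLS section, as described above, might look like the following sketch. The secret name cafe-secret and the host name are illustrative; the referenced secret must exist for the certificate to be imported into NSX-T:

```yaml
kind: Ingress
metadata:
  name: cafe-ingress-tls
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret   # certificate is imported into NSX-T while referenced
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
```

Because this Ingress has a tls section, its rules are hosted on the HTTPS Virtual Server (port 443), and the load balancer terminates the client SSL connection.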

Layer 7 Load Balancer and Network Policy

When traffic is forwarded to the pods from the NSX load balancer virtual server, the source IP is the tier-1 router's uplink port's IP address. This address is on the private tier-1 transit network, and can cause CIDR-based network policies to disallow traffic that should be allowed. To avoid this issue, the network policy must be configured such that the tier-1 router's uplink port's IP address is part of the allowed CIDR block. This internal IP address will be visible to the user as part of the Ingress specification in the status.loadBalancer.ingress.ip field and as an annotation (ncp/internal_ip_for_policy) on the Ingress resource.

For example, if the external IP address of the virtual server is 4.4.0.5 and the IP address of the internal tier-1 router's uplink port is 100.64.224.11, the Ingress specification will be:

    kind: Ingress
    ...
    status:
      loadBalancer:
        ingress:
        - ip: 4.4.0.5
        - ip: 100.64.224.11

The annotation on the Ingress resource will be:

    ncp/internal_ip_for_policy: 100.64.224.11
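As a sketch, a network policy that admits the tier-1 uplink address from the example above would include it in an allowed CIDR block. The policy name and pod selector are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-lb-source
spec:
  podSelector:
    matchLabels:
      app: cafe
  ingress:
  - from:
    - ipBlock:
        # tier-1 uplink port IP, taken from the ncp/internal_ip_for_policy
        # annotation on the Ingress resource
        cidr: 100.64.224.11/32
```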

LoadBalancer Service

  • NSX-T will create a layer 4 load balancer for each service port.

  • Both TCP and UDP are supported.

  • Each service will have a unique IP address.
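For example, a LoadBalancer service with two ports, each of which is served by its own layer 4 virtual server (the service name, selector, and port names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cafe-lb
spec:
  type: LoadBalancer
  selector:
    app: cafe
  ports:
  - name: http        # one NSX-T layer 4 virtual server per service port
    port: 80
    protocol: TCP
  - name: metrics
    port: 9090
    protocol: TCP
```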