You can configure a Kubernetes service of type LoadBalancer to use a static IP address. Before implementing this feature, be aware of the minimum component requirements, an important security consideration, and the cluster hardening guidance described below.

Minimum Requirements

Static IP addressing for Kubernetes services of type LoadBalancer is supported on Tanzu Kubernetes clusters that meet the following requirements:
| Component | Minimum Requirement | More Information |
| --- | --- | --- |
| vCenter Server and ESXi | vSphere 7.0 Update 2 | See the Release Notes. |
| Supervisor Cluster | v1.19.1+vmware.2-vsc0.0.8-17610687 | See Update the Supervisor Cluster by Performing a vSphere Namespaces Update. |
| Load Balancer | NSX-T Data Center v3.1 or NSX Advanced Load Balancer 20.1.x | See the Release Notes. |
| Tanzu Kubernetes Release | One of the latest Tanzu Kubernetes releases. | See Verify Tanzu Kubernetes Cluster Compatibility for Update. |

Using a Static IP for a Service of Type LoadBalancer

Typically, when you define a Kubernetes service of type LoadBalancer, you get an ephemeral IP address assigned by the load balancer. See Tanzu Kubernetes Service Load Balancer Example.

Alternatively you can specify a static IP address for the load balancer. On creation of the service, the load balancer instance is provisioned with the static IP address you assigned.

The following example service demonstrates how to configure a supported load balancer with a static IP address. In the service specification you include the loadBalancerIP parameter and an IP address value, which is 10.11.12.49 in this example.
kind: Service
apiVersion: v1
metadata:
  name: load-balancer-service-with-static-ip
spec:
  selector:
    app: hello-world
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
  loadBalancerIP: 10.11.12.49
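
After applying this manifest, you can verify that the load balancer was provisioned with the requested address by inspecting the service. The following commands are a sketch of that workflow; the manifest filename is an assumption for illustration:

```shell
# Apply the service manifest (filename is hypothetical)
kubectl apply -f load-balancer-service-with-static-ip.yaml

# Once provisioning completes, the EXTERNAL-IP column should show the static address
kubectl get service load-balancer-service-with-static-ip
```

If the requested address cannot be allocated, the service's EXTERNAL-IP typically remains in a pending state; check the service events with `kubectl describe service` to diagnose the failure.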

For the NSX Advanced load balancer, you use an IP address from the IPAM pool configured for the load balancer when it was installed. When the service is created and the static IP address is assigned, the load balancer marks it as allocated and manages the lifecycle of the IP address the same way it does an ephemeral IP address. That is, if the service is removed, the IP address is unassigned and made available for reallocation.

For the NSX-T load balancer, you have two options. The default mechanism is the same as the NSX Advanced load balancer: use an IP address taken from the IP pool configured for the load balancer when it was installed. When the static IP address is assigned, the load balancer automatically marks it as allocated and manages its lifecycle.

The second NSX-T option is to manually pre-allocate the static IP address. In this case you use an IP address that is not part of the external IP pool assigned to the load balancer, but is instead taken from a floating IP pool. You then manually administer the allocation and lifecycle of the IP address using the NSX Manager.

Important Security Consideration and Hardening Requirement

There is a potential security issue to be aware of when using this feature. If a developer is able to patch the Service.status.loadBalancerIP value, the developer may be able to hijack the traffic in the cluster destined for the patched IP address. Specifically, if a Role or ClusterRole with the patch permission is bound to a service or user account on a cluster where this feature is implemented, that account owner is able to use its own credentials to issue kubectl commands and change the static IP address assigned to the load balancer.

To avoid the potential security implications of using static IP allocation for a load balancer service, you must harden each cluster where you are implementing this feature. To do this, the Role or ClusterRole you define for any developer must not allow the patch verb on the services/status resource in the core API group (apiGroups: ""). The following role snippet demonstrates what not to do when implementing this feature.

# DO NOT ALLOW PATCH
- apiGroups:
  - ""
  resources:
  - services/status
  verbs:
  - patch
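
By contrast, a hardened Role can grant developers routine permissions on services while deliberately omitting the services/status resource. The following snippet is a minimal sketch, not a complete Role; the role name, namespace, and verb list are illustrative assumptions you should adapt to your own policy:

```yaml
# Illustrative hardened Role: grants common service operations
# but deliberately omits services/status, so "patch" on it is denied.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-services   # hypothetical name
  namespace: default         # adjust to your namespace
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
  - create
  - delete
```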
To check if a developer has patch permissions, run the following command as that user:
kubectl --kubeconfig <KUBECONFIG> auth can-i patch service/status

If the command returns yes, the user has patch permissions. See Checking API Access in the Kubernetes documentation for more information.
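As a cluster administrator, you can also check another user's permissions without their kubeconfig by using kubectl impersonation. The account names below are assumptions for illustration:

```shell
# Impersonate a hypothetical developer user to check the permission
kubectl auth can-i patch services/status --as=developer@example.com

# Impersonate a service account (namespace and name are illustrative)
kubectl auth can-i patch services/status --as=system:serviceaccount:default:dev-sa
```

A response of no confirms that the account cannot change the IP address assigned to a load balancer service.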

To grant developer access to a cluster, see Grant Developer Access to Tanzu Kubernetes Clusters. For a sample Role template that you can customize, see Example Role for Pod Security Policy. For an example on how to restrict cluster access, see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-example.