The networking used for Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service combines the fabric that underlies the vSphere with Tanzu infrastructure with open-source software that provides networking for cluster pods, services, and ingress. VMware provides options for both layers of the networking stack.

Data Center Networking Options for Tanzu Kubernetes Clusters

VMware supports two data center networking options for the Supervisor Cluster where a Tanzu Kubernetes Grid Service is deployed:
  • vSphere Distributed Switch (vDS) networking
  • VMware NSX-T™ Data Center networking
The networking stack you choose depends on your licensing and requirements. For more information, see Prerequisites for Enabling Workload Management.

You can enable vSphere with Tanzu and deploy Tanzu Kubernetes clusters directly on native vSphere networking. With vDS networking, you supply your own load balancer. For more information, see System Requirements and Topologies for Setting Up a Supervisor Cluster with vSphere Networking.

Each Tanzu Kubernetes cluster is on a private network. kubectl requests come into the cluster load balancer, which forwards them to the API server or servers of the Tanzu Kubernetes cluster control plane.
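For example, the server entry in a kubeconfig file for a Tanzu Kubernetes cluster points kubectl at the load balancer virtual IP rather than at an individual control plane node. The following excerpt is a minimal sketch; the cluster name and address are placeholders.

  # Kubeconfig excerpt (illustrative; certificate and user entries omitted).
  # The server URL is the load balancer virtual IP that fronts the
  # Tanzu Kubernetes cluster control plane, not an individual node.
  apiVersion: v1
  kind: Config
  clusters:
  - cluster:
      server: https://192.0.2.10:6443   # example load balancer address
    name: tkc-example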

With NSX-T networking, when you create a Supervisor Namespace, an NSX-T segment is defined for each namespace. This segment is connected to the NSX-T Tier-1 gateway for the Supervisor Cluster network. For more information, see System Requirements and Topologies for Setting Up a Supervisor Cluster with NSX-T Data Center.

With NSX-T networking, when DevOps engineers provision the first Tanzu Kubernetes cluster in a Supervisor Namespace, a new Tier-1 gateway is created. For each Tanzu Kubernetes cluster provisioned in that namespace, a segment is created and connected to the Tier-1 gateway for that Supervisor Namespace.
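For reference, the following TanzuKubernetesCluster manifest is a minimal sketch of provisioning a cluster in a Supervisor Namespace; the namespace, virtual machine class, storage class, and distribution version are placeholders that depend on your environment.

  # Minimal TanzuKubernetesCluster specification (illustrative values).
  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: tkc-example
    namespace: demo-namespace        # the Supervisor Namespace
  spec:
    distribution:
      version: v1.18                 # placeholder Kubernetes release
    topology:
      controlPlane:
        count: 3
        class: best-effort-small     # placeholder virtual machine class
        storageClass: demo-storage-policy
      workers:
        count: 3
        class: best-effort-small
        storageClass: demo-storage-policy

Applying a manifest like this in the Supervisor Namespace is what triggers the segment and gateway creation described above when NSX-T networking is used.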

With NSX-T networking, when a Tanzu Kubernetes cluster is provisioned by the Tanzu Kubernetes Grid Service, an NSX-T load balancer is automatically deployed to an NSX Edge node. This load balancer provides load balancing services in the form of virtual servers. One virtual server provides layer-4 load balancing for the Kubernetes API and routes kubectl traffic to the control plane. In addition, for each Kubernetes service of type LoadBalancer created on the cluster, a virtual server is created that provides layer-4 load balancing for that service.
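As an illustration, a standard Kubernetes Service of type LoadBalancer such as the one below results in an additional layer-4 virtual server on the cluster load balancer; the name, labels, and ports are placeholders.

  # Example Service of type LoadBalancer (illustrative names and ports).
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-lb
  spec:
    type: LoadBalancer
    selector:
      app: nginx
    ports:
    - port: 80          # port exposed on the virtual server
      targetPort: 80    # port on the backing pods
      protocol: TCP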

The diagram illustrates NSX-T networking for Tanzu Kubernetes clusters.

Figure 1. NSX-T Networking for Tanzu Kubernetes Clusters

CNI Networking for Tanzu Kubernetes Clusters

Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service support the following Container Network Interface (CNI) options:
  • Antrea
  • Calico
Antrea is the default CNI for new Tanzu Kubernetes clusters. If you are using Antrea, you do not have to specify it as the CNI during cluster provisioning. To use Calico as the CNI, you have two options: specify Calico in the cluster specification when you provision the cluster (see the example after the following note), or change the default CNI for the Tanzu Kubernetes Grid Service so that new clusters use Calico.
Note: The use of Antrea as the default CNI requires a minimum version of the OVA file for Tanzu Kubernetes clusters. See Supported Update Path.
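The following fragment is a sketch of selecting Calico in the cluster specification; the distribution and topology fields (shown in the earlier example) are omitted for brevity, and the names are placeholders.

  # Selecting Calico as the CNI in a TanzuKubernetesCluster specification.
  apiVersion: run.tanzu.vmware.com/v1alpha1
  kind: TanzuKubernetesCluster
  metadata:
    name: tkc-calico
    namespace: demo-namespace
  spec:
    # distribution and topology fields omitted for brevity
    settings:
      network:
        cni:
          name: calico    # Antrea is used when no CNI is specified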

The table summarizes Tanzu Kubernetes cluster networking features and their implementation.

Table 1. Tanzu Kubernetes Cluster Networking Summary
Endpoint | Provider | Description
Kubernetes API | NSX-T load balancer, or supply your own for vDS networking | For NSX-T, one virtual server per cluster.
Node connectivity | NSX-T segment or vSphere Distributed Switch | Provides connectivity between cluster nodes through the NSX-T Tier-1 router or the vDS, depending on which networking stack is chosen.
Pod connectivity | Antrea or Calico | Container network interface for pods. Antrea uses Open vSwitch. Calico uses the Linux bridge with BGP.
Service type: ClusterIP | Antrea or Calico | Default Kubernetes service type, accessible only from within the cluster.
Service type: NodePort | Antrea or Calico | Allows external access through a port opened on each worker node by the Kubernetes network proxy.
Service type: LoadBalancer | NSX-T load balancer, or supply your own for vDS networking | For NSX-T, one virtual server per service type definition.
Cluster ingress | Third-party ingress controller | Routing for inbound pod traffic; you can use any third-party ingress controller.
Network policy | Antrea or Calico | Controls what traffic is allowed to and from selected pods and network endpoints. Antrea uses Open vSwitch. Calico uses Linux iptables. See the example after this table.
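The network policy row in the table corresponds to the standard Kubernetes NetworkPolicy API, which Antrea or Calico enforces on the cluster. The following sketch allows ingress to backend pods only from frontend pods; the labels and port are placeholders.

  # Example NetworkPolicy enforced by the cluster CNI (illustrative labels).
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend-to-backend
  spec:
    podSelector:
      matchLabels:
        app: backend          # pods the policy applies to
    policyTypes:
    - Ingress
    ingress:
    - from:
      - podSelector:
          matchLabels:
            app: frontend     # allowed source pods
      ports:
      - protocol: TCP
        port: 8080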