vSphere with Kubernetes requires specific networking configuration to enable connectivity for namespaces, vSphere Pods, Kubernetes services, and Tanzu Kubernetes clusters that are provisioned through the Tanzu Kubernetes Grid Service.
Supervisor Cluster Networking
VMware NSX-T™ Data Center provides network connectivity to the objects inside the Supervisor Cluster and external networks. Connectivity to the ESXi hosts comprising the cluster is handled by the standard vSphere networks.
- NSX Container Plug-in (NCP) provides integration between VMware NSX-T™ Data Center and Kubernetes. The main component of NCP runs in a container and communicates with NSX Manager and with the Kubernetes control plane. NCP monitors changes to containers and other resources and manages networking resources such as logical ports, segments, routers, and security groups for the containers by calling the NSX API.
- NSX Edge provides connectivity from external networks to Supervisor Cluster objects. The NSX Edge cluster has a load balancer that provides redundancy for the Kubernetes API servers residing on the control plane VMs and for any application that must be published and accessible from outside the Supervisor Cluster.
- A tier-0 gateway is associated with the NSX Edge cluster to provide routing to the external network. The uplink interface uses either dynamic routing over BGP or static routing.
- One tier-1 gateway is required per Supervisor Cluster. It carries southbound traffic from the cluster to the tier-0 gateway.
- Segments are associated with each namespace within the Supervisor Cluster to provide connectivity to vSphere Pods.
- The control plane VM segment provides connectivity between the control plane VMs and vSphere Pods.
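The NCP pattern described above, watching Kubernetes resources and translating changes into NSX API calls, can be sketched as a simple reconcile loop. The sketch below is purely conceptual: `FakeNsxApi`, `reconcile`, and the event shapes are hypothetical stand-ins, not the actual NCP implementation or the real NSX Manager REST API.

```python
# Conceptual sketch of the NCP reconcile pattern: observe Kubernetes
# namespace events and translate them into NSX networking changes.
# All names here are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class FakeNsxApi:
    """Stand-in for the NSX Manager API (hypothetical)."""
    segments: dict = field(default_factory=dict)

    def create_segment(self, name: str) -> str:
        seg_id = f"seg-{name}"
        self.segments[name] = seg_id
        return seg_id

    def delete_segment(self, name: str) -> None:
        self.segments.pop(name, None)


def reconcile(nsx: FakeNsxApi, event: dict) -> None:
    """Map a namespace event to the matching network change:
    each namespace gets its own dedicated segment."""
    name = event["namespace"]
    if event["type"] == "ADDED":
        nsx.create_segment(name)
    elif event["type"] == "DELETED":
        nsx.delete_segment(name)


nsx = FakeNsxApi()
for ev in [{"type": "ADDED", "namespace": "team-a"},
           {"type": "ADDED", "namespace": "team-b"},
           {"type": "DELETED", "namespace": "team-a"}]:
    reconcile(nsx, ev)

print(sorted(nsx.segments))  # only team-b's segment remains
```

In the real product, NCP performs this role by watching the Kubernetes control plane and calling the NSX API to manage logical ports, segments, routers, and security groups.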
To learn more about Supervisor Cluster networking, watch the video vSphere 7 with Kubernetes Network Service - Part 1 - The Supervisor Cluster.
The segments for each namespace reside on the vSphere Distributed Switch (VDS) functioning in Standard mode that is associated with the NSX Edge cluster. The segment provides an overlay network to the Supervisor Cluster.
Each vSphere Pod connects through an interface to the segment that is associated with the namespace where the pod resides.
The Spherelet processes on each ESXi host communicate with vCenter Server through an interface on the Management Network.
Networking Configuration Methods
- The simplest way to configure Supervisor Cluster networking is by using VMware Cloud Foundation SDDC Manager. For more information, see Working with Workload Management in the VMware Cloud Foundation SDDC Manager documentation.
- You can also configure the Supervisor Cluster networking manually by using an existing NSX-T Data Center deployment or by deploying a new instance of NSX-T Data Center. See Configure vSphere with Kubernetes to Use NSX-T Data Center for more information.
Tanzu Kubernetes Cluster Networking
As a vSphere administrator, when you create a Supervisor Namespace, an NSX-T segment is defined for that namespace. The segment is connected to the NSX-T tier-1 gateway of the Supervisor Cluster network.
When DevOps engineers provision the first Tanzu Kubernetes cluster in a Supervisor Namespace, a new tier-1 gateway is created. For each Tanzu Kubernetes cluster that is provisioned in that namespace, a segment is created for that cluster and it is connected to the tier-1 gateway in its Supervisor Namespace.
Kubernetes pods running inside Tanzu Kubernetes clusters connect to the cluster segment through the Container Network Interface (CNI). Tanzu Kubernetes clusters provisioned by the Tanzu Kubernetes Grid Service support the open-source Calico plugin for the CNI. The Tanzu Kubernetes Grid Service provides an extensible framework that can support additional CNIs in the future.
When a Tanzu Kubernetes cluster is provisioned by the Tanzu Kubernetes Grid Service, an NSX-T load balancer is automatically deployed to an NSX Edge node. This load balancer provides load balancing services in the form of virtual servers. A single virtual server provides layer-4 load balancing for the Kubernetes API and routes kubectl traffic to the control plane. In addition, for each Kubernetes service of type LoadBalancer that is created on the cluster, a virtual server is created to provide layer-4 load balancing for that service.
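The kind of Kubernetes service that triggers a new virtual server can be illustrated with a minimal manifest. The sketch below builds a Service of type LoadBalancer as a plain Python dictionary and prints it as JSON; the service name, labels, and ports are made up for illustration.

```python
import json

# Minimal Kubernetes Service manifest of type LoadBalancer.
# Applying a manifest like this to a Tanzu Kubernetes cluster is what
# leads to a layer-4 virtual server on the NSX-T load balancer.
# The name, selector, and ports are illustrative, not from the source.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-lb"},  # hypothetical name
    "spec": {
        "type": "LoadBalancer",  # one virtual server per such service
        "selector": {"app": "web"},
        "ports": [{"port": 80, "targetPort": 8080, "protocol": "TCP"}],
    },
}

print(json.dumps(service, indent=2))
```

Services of type ClusterIP or NodePort, by contrast, are handled by the CNI and the Kubernetes network proxy on the nodes rather than by the NSX-T load balancer.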
The following table summarizes Tanzu Kubernetes cluster networking features and their implementation.
|Feature|Implementation|Description|
|---|---|---|
|Kubernetes API|NSX-T load balancer|One virtual server per cluster.|
|Node connectivity|NSX-T segment|Provides connectivity between cluster node VMs through the NSX-T tier-1 router.|
|Pod connectivity|Calico|Container network interface for pods through the Linux bridge.|
|Service type: ClusterIP|Calico|Default Kubernetes service type that is only accessible from within the cluster.|
|Service type: NodePort|Calico|Allows external access through a port opened on each worker node by the Kubernetes network proxy.|
|Service type: LoadBalancer|NSX-T load balancer|One virtual server per service type definition.|
|Pod ingress|Third-party ingress controller|Routing for inbound pod traffic; you can use any third-party ingress controller.|
|Network policy|Calico|Controls what traffic is allowed to selected pods and network endpoints using Linux iptables.|
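The network policy row above corresponds to a standard Kubernetes NetworkPolicy object, which Calico enforces through Linux iptables. The sketch below builds a minimal policy as a Python dictionary; the labels, policy name, and port are hypothetical examples, not values from the source.

```python
import json

# Minimal Kubernetes NetworkPolicy: allow ingress to pods labeled
# app=db only from pods labeled app=web, on TCP 5432. On a Tanzu
# Kubernetes cluster, Calico enforces this via Linux iptables.
# All names, labels, and the port are illustrative.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "db-allow-web"},  # hypothetical name
    "spec": {
        "podSelector": {"matchLabels": {"app": "db"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "web"}}}],
            "ports": [{"protocol": "TCP", "port": 5432}],
        }],
    },
}

print(json.dumps(policy, indent=2))
```

Because this is a standard Kubernetes API object, the same manifest works on any cluster whose CNI supports network policy; Calico is what implements it here.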