A Supervisor Cluster can use either the vSphere networking stack or VMware NSX-T™ Data Center to provide connectivity to Kubernetes control plane VMs, services, and workloads. When a Supervisor Cluster is configured with the vSphere networking stack, all hosts in the cluster are connected to a vSphere Distributed Switch that provides connectivity to Kubernetes workloads and control plane VMs. A Supervisor Cluster that uses the vSphere networking stack requires a third-party load balancer to provide connectivity to DevOps users and external services. A Supervisor Cluster that is configured with VMware NSX-T™ Data Center uses the software-defined networks of the solution, as well as an NSX Edge load balancer, to provide connectivity to external services and DevOps users.
Supervisor Cluster Networking with NSX-T Data Center
VMware NSX-T™ Data Center provides network connectivity to the objects inside the Supervisor Cluster and external networks. Connectivity to the ESXi hosts comprising the cluster is handled by the standard vSphere networks.
| vSphere with Tanzu | NSX-T Data Center |
|---|---|
| Version 7.0 Update 1c | Versions 3.0, 3.0.1, 3.0.1.1, 3.0.2, 3.1, and 3.1.1 |
| Version 7.0 Update 1 | Versions 3.0, 3.0.1, 3.0.1.1, and 3.0.2 |
| Version 7.0 | Version 3.0 |
This section describes the networking topology when you install and configure vSphere with Tanzu Version 7.0 Update 1. For the network topology when you configure vSphere with Tanzu Version 7.0 Update 1c or upgrade from Version 7.0 Update 1 to Version 7.0 Update 1c, see Network Topology Upgrade.
- NSX Container Plug-in (NCP) provides integration between NSX-T Data Center and Kubernetes. The main component of NCP runs in a container and communicates with NSX Manager and with the Kubernetes control plane. NCP monitors changes to containers and other resources and manages networking resources such as logical ports, segments, routers, and security groups for the containers by calling the NSX API.
- NSX Edge provides connectivity from external networks to Supervisor Cluster objects. The NSX Edge cluster hosts a load balancer that provides a redundant, stable endpoint for the Kubernetes API servers residing on the control plane VMs, and for any application that must be published and accessible from outside the Supervisor Cluster.
- A tier-0 gateway is associated with the NSX Edge cluster to provide routing to the external network. The uplink interface uses either the dynamic routing protocol, BGP, or static routing.
- One or more tier-1 gateways are associated with each Supervisor Cluster. They connect the cluster segments and route traffic between the cluster and the tier-0 gateway.
- Segments are associated with each namespace within the Supervisor Cluster to provide connectivity to vSphere Pods.
- The control plane VM segment provides connectivity between the control plane VMs and vSphere Pods.
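To make the NCP and NSX Edge roles above concrete: when a DevOps user creates a standard Kubernetes Service of type LoadBalancer in a Supervisor Cluster namespace, NCP observes the new Service and programs a corresponding virtual server on the NSX Edge load balancer. A minimal sketch, with hypothetical names (`web-frontend`, `demo-ns`) and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend        # hypothetical application name
  namespace: demo-ns        # a Supervisor Cluster namespace created in vSphere
spec:
  type: LoadBalancer        # NCP realizes this Service on the NSX Edge load balancer
  selector:
    app: web-frontend       # matches the labels on the vSphere Pods to publish
  ports:
  - port: 80                # external port exposed by the load balancer
    targetPort: 8080        # container port inside the pods
```

Once NCP processes the Service, an external IP from the configured ingress range appears in the Service status, reachable through the tier-0 gateway.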
To learn more about Supervisor Cluster networking, watch the video vSphere 7 with Kubernetes Network Service - Part 1 - The Supervisor Cluster.
The segments for each namespace reside on the vSphere Distributed Switch (VDS) that is associated with the NSX Edge cluster and functions in Standard mode. The segments provide an overlay network to the Supervisor Cluster.
Each vSphere Pod connects through an interface to the segment that is associated with the namespace where the pod resides.
The Spherelet process on each ESXi host communicates with vCenter Server through an interface on the Management Network.
Networking Configuration Methods with NSX-T Data Center
- The simplest way to configure the Supervisor Cluster networking is by using VMware Cloud Foundation SDDC Manager. For more information, see Working with Workload Management in the VMware Cloud Foundation SDDC Manager documentation.
- You can also configure the Supervisor Cluster networking manually by using an existing NSX-T Data Center deployment or by deploying a new instance of NSX-T Data Center. See Install and Configure NSX-T Data Center for vSphere with Tanzu for more information.
Supervisor Cluster Networking with vSphere Distributed Switch
A Supervisor Cluster that is backed by a vSphere Distributed Switch uses distributed port groups as Workload Networks for namespaces.
Depending on the topology that you implement for the Supervisor Cluster, you can use one or more distributed port groups as Workload Networks. The network that provides connectivity to the Kubernetes control plane VMs is called the Primary Workload Network. You can assign this network to all the namespaces on the Supervisor Cluster, or you can use a different network for each namespace. The Tanzu Kubernetes clusters connect to the Workload Network that is assigned to the namespace where the clusters reside.
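As an illustration of the last point, a Tanzu Kubernetes cluster is declared inside a namespace on the Supervisor Cluster, and its node VMs attach to that namespace's Workload Network. A minimal manifest sketch for the vSphere 7.0-era `v1alpha1` API; the cluster name, namespace, and storage class are hypothetical placeholders for values from your environment:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkc-example         # hypothetical cluster name
  namespace: demo-ns        # nodes attach to this namespace's Workload Network
spec:
  topology:
    controlPlane:
      count: 1
      class: best-effort-small          # VM class for control plane nodes
      storageClass: example-storage     # placeholder storage policy name
    workers:
      count: 3
      class: best-effort-small          # VM class for worker nodes
      storageClass: example-storage
  distribution:
    version: v1.18                      # Tanzu Kubernetes release to deploy
```

Because the node networking is inherited from the namespace, no network settings need to appear in the manifest for this basic case.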
A Supervisor Cluster that is backed by a vSphere Distributed Switch uses a load balancer to provide connectivity to DevOps users and external services. You can use either the NSX Advanced Load Balancer or the HAProxy load balancer.
For more information on the possible topologies, see System Requirements and Topologies for Setting Up a Supervisor Cluster with vSphere Networking and HAProxy Load Balancing.