This topic lists the networking objects created for a TKG cluster when you are using Supervisor with NSX networking.

NSX Networking Objects for TKG Clusters

Each TKG cluster has the following network resources: a virtual network, virtual network interfaces, and a virtual machine service.

The system automatically provisions an NSX embedded load balancer when vSphere IaaS control plane is enabled and an instance of Supervisor is deployed. This load balancer is for the Supervisor control plane and provides access to the Kubernetes API server.

When you create a Kubernetes service of type LoadBalancer for a TKG cluster, an NSX embedded load balancer is provisioned for that service.
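
For illustration, the following is a minimal Service of type LoadBalancer that triggers this provisioning. The service name, selector, and ports are examples only, not system-created objects; apply the manifest with kubectl while your context points to the TKG cluster.

```
# Run against the TKG cluster (not the Supervisor). Names and ports are examples only.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
EOF

# The EXTERNAL-IP column is populated once NSX provisions the virtual server.
kubectl get service example-lb
```
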
Network Object | Network Resources | Description
--- | --- | ---
VirtualNetwork | Tier-1 Router and linked Segment | Node network for the cluster.
VirtualNetworkInterface | Logical Port on the Segment | Node network interface for cluster nodes.
VirtualMachineService | N/A | A VirtualMachineService is created and translated to a Kubernetes service.
Service | Load Balancer Server with a VirtualServer instance and an associated Server Pool (member pool) | A Kubernetes service of type LoadBalancer is created for access to the TKG cluster API server.
Endpoints | The endpoint members (TKG cluster control plane nodes) are included in the member pool. | An endpoint is created that includes all TKG cluster control plane nodes.
VirtualMachineService in Supervisor | N/A | A VirtualMachineService is created in the Supervisor and translated to a Kubernetes service in the Supervisor.
Load Balancer Service in Supervisor | VirtualServer in the TKG cluster load balancer and an associated member pool | A load balancer service is created in the Supervisor for access to this service of type LoadBalancer.
Endpoints in Supervisor | The endpoint members (TKG cluster worker nodes) are included in the member pool in NSX. | An endpoint is created that includes all TKG cluster worker nodes.
Load Balancer Service in TKG cluster | N/A | The status of the load balancer service that the user deployed in the TKG cluster is updated with the load balancer IP.
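
To verify the last row of the table, the load balancer IP written back to the Service status in the TKG cluster, you can read the status field directly. The service name example-lb is the illustrative name used in the manifest above.

```
# Run against the TKG cluster. Prints the assigned VIP, or nothing while provisioning is in progress.
kubectl get service example-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```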

Node Networking

For each TKG cluster, the system creates the following network objects and associated NSX resources.

Network Object | NSX Resources | Description | IPAM
--- | --- | --- | ---
VirtualNetwork | Tier-1 Gateway and linked Segment | Node network for the TKG cluster | SNAT IP is assigned.
VirtualNetworkInterface | Logical Port on the linked Segment | Node network interface for TKG cluster nodes | Each node is assigned an IP address.
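
Assuming you are logged in to the Supervisor and the vSphere Namespace that hosts the cluster is named tkg-ns (a placeholder), you can list these node-network objects with kubectl.

```
# Run against the Supervisor. "tkg-ns" is an example vSphere Namespace name.
kubectl get virtualnetworks -n tkg-ns
kubectl get virtualnetworkinterfaces -n tkg-ns
```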

Control Plane Load Balancer

Network Object | Network Resources | Description | IPAM
--- | --- | --- | ---
VirtualMachineService | N/A | A VirtualMachineService is created and translated to a Kubernetes service. | Includes the load balancer VIP.
Service | Load Balancer Server with a VirtualServer instance and an associated Server Pool (member pool) | A Kubernetes service of type LoadBalancer is created for access to the TKG cluster API server. | External IP is assigned.
Endpoints | The endpoint members (TKG cluster control plane nodes) are included in the member pool. | An endpoint is created that includes all TKG cluster control plane nodes. | N/A
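
To see the control plane load balancer from the Supervisor side, list the VirtualMachineService and the resulting Service of type LoadBalancer in the vSphere Namespace; the EXTERNAL-IP of that service is the VIP used to reach the cluster API server. The namespace name below is a placeholder.

```
# Run against the Supervisor. "tkg-ns" is an example vSphere Namespace name.
kubectl get virtualmachineservices -n tkg-ns
kubectl get services -n tkg-ns   # look for the type LoadBalancer entry and its EXTERNAL-IP
```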

NSX Load Balancers

For each TKG cluster created, the system creates a single instance of a small NSX load balancer. This load balancer contains the objects listed in the following table:
Number | Description
--- | ---
1 | Virtual Server (VS) to access the Kubernetes control plane API on port 8443.
1 | Server Pool containing the 3 Kubernetes control plane nodes.
1 | VS for the HTTP Ingress Controller.
1 | VS for the HTTPS Ingress Controller.
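
These objects can also be listed on the NSX side through the Policy API. The following is a sketch only; the NSX Manager hostname and credentials are placeholders.

```
# List load balancer virtual servers and server pools through the NSX Policy API.
# Hostname and user are placeholders; curl prompts for the password.
curl -k -u admin 'https://nsx.example.com/policy/api/v1/infra/lb-virtual-servers'
curl -k -u admin 'https://nsx.example.com/policy/api/v1/infra/lb-pools'
```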

NAT Rules

For each TKG cluster created, the system defines the following NSX NAT rules on the Tier-0 logical router:

Number | Description
--- | ---
1 | SNAT rule created for each Kubernetes namespace, using one IP address from the Floating IP Pool as the translated IP address.
1 | (NAT topology only) SNAT rule created for each Kubernetes cluster, using one IP address from the Floating IP Pool as the translated IP address. The Kubernetes cluster subnet is derived from the Nodes IP Block using a /24 netmask.
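
You can review these rules in NSX Manager under the Tier-0 gateway, or list them through the Policy API. A sketch, assuming the Tier-0 gateway ID is tier0-gw and the rules are in the USER NAT section; the hostname, gateway ID, and credentials are placeholders.

```
# List NAT rules configured on the Tier-0 gateway through the NSX Policy API.
# "tier0-gw", the hostname, and the user are placeholders.
curl -k -u admin 'https://nsx.example.com/policy/api/v1/infra/tier-0s/tier0-gw/nat/USER/nat-rules'
```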

DFW Rules

For each TKG cluster created, the system defines the following NSX distributed firewall rules:

Number | Description
--- | ---
1 | DFW rule for kube-dns, applied to the CoreDNS pod logical port.
1 | DFW rule for the Validator in the namespace, applied to the Validator pod logical port.
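
These distributed firewall rules can be reviewed in NSX Manager, or listed through the Policy API as a starting point. A sketch assuming the default domain; the hostname and credentials are placeholders.

```
# List distributed firewall security policies (and their rules) in the default domain.
curl -k -u admin 'https://nsx.example.com/policy/api/v1/infra/domains/default/security-policies'
```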