This topic describes the vSphere VMs and NSX objects created by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) when you create a Kubernetes cluster. When you delete a Kubernetes cluster, Tanzu Kubernetes Grid Integrated Edition removes these objects.
For information about creating a Kubernetes cluster using Tanzu Kubernetes Grid Integrated Edition, see Creating Clusters. For information about deleting a Kubernetes cluster using Tanzu Kubernetes Grid Integrated Edition, see Deleting Clusters.
When a new Kubernetes cluster is created, Tanzu Kubernetes Grid Integrated Edition creates the following virtual machines (VMs) in the designated vSphere cluster:
Object Number | Object Description |
---|---|
1 or 3 | Kubernetes control plane nodes. The number depends on the plan used to create the cluster. |
1 or more | Kubernetes worker nodes. The number depends on the plan used to create the cluster, or the number specified during cluster creation. |
Note: Production clusters require three control plane nodes and a minimum of three worker nodes. See Requirements for Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX for more information.
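To confirm which node VMs a cluster deployment created, you can query vCenter directly. The following is a minimal sketch using the pyVmomi library; the vCenter address, credentials, and cluster name are placeholder assumptions, and the generated VM names vary by deployment (BOSH typically names node VMs `vm-<UUID>`).

```python
# Minimal sketch: list the VMs in a vSphere cluster with pyVmomi.
# The host, credentials, and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="CHANGE-ME", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "tkgi-cluster")  # placeholder name
    for host in cluster.host:
        for vm in host.vm:
            # BOSH-deployed node VMs typically have generated names such as vm-<UUID>
            print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)
```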
When a new Kubernetes cluster is created, Tanzu Kubernetes Grid Integrated Edition creates the following NSX logical switches:
Object Number | Object Description |
---|---|
1 | Logical switch for Kubernetes control plane and worker nodes. |
1 | Logical switch for each Kubernetes namespace: `default`, `kube-public`, `kube-system`, `pks-infrastructure`.
1 | Logical switch for the NSX load balancer associated with the Kubernetes cluster. |
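To verify these logical switches, you can list them through the NSX Manager API. The sketch below uses Python with the `requests` library; the NSX Manager address, credentials, and cluster name are placeholder assumptions, and matching on the display name is only a heuristic because the naming and tagging conventions that NCP applies vary by version.

```python
# Minimal sketch: list NSX logical switches whose names mention a given cluster.
import requests

NSX_MGR = "https://nsx-mgr.example.com"   # placeholder NSX Manager address
AUTH = ("admin", "CHANGE-ME")             # placeholder credentials

resp = requests.get(f"{NSX_MGR}/api/v1/logical-switches", auth=AUTH, verify=False)
resp.raise_for_status()
for ls in resp.json()["results"]:
    # Loose heuristic: NCP-created switch names usually embed the cluster name.
    if "my-cluster" in ls["display_name"]:
        print(ls["id"], ls["display_name"])
```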
When a new Kubernetes cluster is created, Tanzu Kubernetes Grid Integrated Edition creates the following NSX Tier-1 logical routers:
Object Number | Object Description |
---|---|
1 | Tier-1 router for Kubernetes control plane and worker nodes. Name: `cluster-router`.
1 | Tier-1 router for each Kubernetes namespace: `default`, `kube-public`, `kube-system`, `pks-infrastructure`.
1 | Tier-1 router for the NSX load balancer associated with the Kubernetes cluster. |
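The Tier-1 routers can be listed the same way, filtering on router type. As before, the NSX Manager address and credentials in this sketch are placeholders:

```python
# Minimal sketch: list Tier-1 logical routers via the NSX Manager API.
import requests

NSX_MGR = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "CHANGE-ME")             # placeholder

resp = requests.get(f"{NSX_MGR}/api/v1/logical-routers",
                    params={"router_type": "TIER1"}, auth=AUTH, verify=False)
resp.raise_for_status()
for rtr in resp.json()["results"]:
    print(rtr["id"], rtr["display_name"])
```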
For each Kubernetes cluster created, Tanzu Kubernetes Grid Integrated Edition creates a single instance of a small NSX load balancer. This load balancer contains the objects listed in the following table:
Object Number | Object Description |
---|---|
1 | Virtual Server (VS) to access the Kubernetes control plane API on port 8443.
1 | Server Pool containing the Kubernetes control plane nodes (one or three, depending on the plan).
1 | VS for HTTP Ingress Controller. |
1 | VS for HTTPS Ingress Controller. |
The IP address allocated to each VS is derived from the Floating IP Pool that was created for use with Tanzu Kubernetes Grid Integrated Edition. The VS for the HTTP Ingress Controller and the VS for the HTTPS Ingress Controller use the same IP address.
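To confirm the virtual servers and the floating IP addresses allocated to them, you can query the NSX load balancer API. In this sketch the NSX Manager address and credentials are placeholders, and the port field is read defensively because it is exposed as `port` or `ports` depending on the NSX version:

```python
# Minimal sketch: list NSX load balancer virtual servers with their IPs and ports.
import requests

NSX_MGR = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "CHANGE-ME")             # placeholder

resp = requests.get(f"{NSX_MGR}/api/v1/loadbalancer/virtual-servers",
                    auth=AUTH, verify=False)
resp.raise_for_status()
for vs in resp.json()["results"]:
    # Depending on the NSX version, the port is exposed as "port" or "ports".
    ports = vs.get("ports") or vs.get("port")
    print(vs["display_name"], vs["ip_address"], ports)
```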
For each Kubernetes cluster created, Tanzu Kubernetes Grid Integrated Edition allocates the following NSX subnets from the IP blocks that you created when preparing to install Tanzu Kubernetes Grid Integrated Edition with NSX:
Object Number | Object Description |
---|---|
1 | A /24 subnet carved from the Nodes IP Block and allocated to the Kubernetes control plane and worker nodes.
1 | A /24 subnet carved from the Pods IP Block and allocated to each Kubernetes namespace: `default`, `kube-public`, `kube-system`, `pks-infrastructure`.
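One way to review these allocations is to list the subnets carved from each IP block. The sketch below assumes the NSX Manager API exposes IP block subnets at `/api/v1/pools/ip-subnets` filtered by `block_id`, which may vary by NSX version; the address and credentials are placeholders:

```python
# Minimal sketch: show the /24 subnets allocated from each NSX IP block.
import requests

NSX_MGR = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "CHANGE-ME")             # placeholder

blocks = requests.get(f"{NSX_MGR}/api/v1/pools/ip-blocks",
                      auth=AUTH, verify=False).json()["results"]
for block in blocks:
    subnets = requests.get(f"{NSX_MGR}/api/v1/pools/ip-subnets",
                           params={"block_id": block["id"]},
                           auth=AUTH, verify=False).json()["results"]
    for subnet in subnets:
        print(block["display_name"], subnet["cidr"])
```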
For each Kubernetes cluster created, Tanzu Kubernetes Grid Integrated Edition defines the following NSX NAT rules on the Tier-0 logical router:
Object Number | Object Description |
---|---|
1 | SNAT rule created for each Kubernetes namespace (`default`, `kube-public`, `kube-system`, `pks-infrastructure`), using one IP address from the Floating IP Pool as the translated IP address.
1 | (NAT topology only) SNAT rule created for each Kubernetes cluster, using one IP address from the Floating IP Pool as the translated IP address. The Kubernetes cluster subnet is derived from the Nodes IP Block using a /24 netmask.
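To review these SNAT rules, list the NAT rules configured on the Tier-0 logical router. The Tier-0 router ID below is a placeholder; you can find the real ID with `GET /api/v1/logical-routers?router_type=TIER0`:

```python
# Minimal sketch: list SNAT rules on the Tier-0 logical router.
import requests

NSX_MGR = "https://nsx-mgr.example.com"                # placeholder
AUTH = ("admin", "CHANGE-ME")                          # placeholder
T0_ROUTER_ID = "11111111-2222-3333-4444-555555555555"  # placeholder Tier-0 ID

resp = requests.get(f"{NSX_MGR}/api/v1/logical-routers/{T0_ROUTER_ID}/nat/rules",
                    auth=AUTH, verify=False)
resp.raise_for_status()
for rule in resp.json()["results"]:
    if rule["action"] == "SNAT":
        # match_source_network is the namespace or cluster subnet;
        # translated_network is the floating IP used as the translated address.
        print(rule.get("match_source_network"), "->", rule.get("translated_network"))
```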
For each Kubernetes cluster created, Tanzu Kubernetes Grid Integrated Edition defines the following NSX distributed firewall rules:
Object Number | Object Description |
---|---|
1 | DFW rule for `kube-dns`, applied to the CoreDNS pod logical port: Source=IP address of the Kubernetes worker node hosting the DNS pod; Destination=Any; Port=TCP/8080; Action=allow
1 | DFW rule for the Validator in the `pks-system` namespace, applied to the Validator pod logical port: Source=IP address of the Kubernetes worker node hosting the Validator pod; Destination=Any; Port=TCP/9000; Action=allow
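To inspect these rules, you can walk the distributed firewall sections through the NSX Manager API, as in this sketch (placeholder address and credentials as in the earlier examples):

```python
# Minimal sketch: list distributed firewall rules section by section.
import requests

NSX_MGR = "https://nsx-mgr.example.com"   # placeholder
AUTH = ("admin", "CHANGE-ME")             # placeholder

sections = requests.get(f"{NSX_MGR}/api/v1/firewall/sections",
                        auth=AUTH, verify=False).json()["results"]
for section in sections:
    rules = requests.get(
        f"{NSX_MGR}/api/v1/firewall/sections/{section['id']}/rules",
        auth=AUTH, verify=False).json()["results"]
    for rule in rules:
        print(section["display_name"], rule["display_name"], rule["action"])
```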