A vSphere Namespace is a network-scoped tenancy on Supervisor. vSphere Namespaces host TKG Service clusters and provide networking, role permissions, persistent storage, resource quotas, and content library and VM class integration.

vSphere Namespace Network

A vSphere Namespace network is a subnet carved from the Supervisor > Workload Network > Namespace Network. The Namespace subnet prefix defines the size of the subnet reserved for each vSphere Namespace. The default is /28, which provides 16 IP addresses per namespace.

The vSphere Namespace network provides connectivity for TKG clusters to Supervisor. By default, the vSphere Namespace uses the cluster-level network configuration and allocates IP addresses from its subnet. When you create a vSphere Namespace, a /28 overlay segment and a corresponding IP pool are instantiated to service pods in that vSphere Namespace.

When the first TKG cluster is provisioned in a vSphere Namespace, the TKG cluster will share the same subnet as its vSphere Namespace. For each subsequent TKG cluster that is provisioned in that vSphere Namespace, a new subnet is created for that cluster and connected to its vSphere Namespace gateway.

A shared load balancer instance in the vSphere Namespace is responsible for routing kubectl traffic to each TKG cluster control plane. In addition, for each Kubernetes service of type LoadBalancer created on the TKG cluster, a dedicated layer-4 load balancer instance is created for that service.
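
For example, a minimal service manifest of this kind would cause a layer-4 load balancer instance to be created for it. The service name and pod selector shown here are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb            # hypothetical service name
spec:
  type: LoadBalancer         # triggers creation of a layer-4 load balancer instance
  selector:
    app: my-app              # hypothetical pod label
  ports:
  - port: 80                 # port exposed by the load balancer
    targetPort: 8080         # container port traffic is forwarded to
    protocol: TCP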

TKG clusters within the same vSphere Namespace share a SNAT IP for north-south connectivity. East-west connectivity between namespaces is not subject to SNAT.

The vSphere Namespace network is typically non-routable. However, if you are using NSX networking, you can override the vSphere Namespace network with a routable subnet. See Override Workload Network Settings for a vSphere Namespace.

vSphere Namespace Resource Pools

In a single vSphere Zone Supervisor deployment, when you create a vSphere Namespace, a resource pool backing that namespace is created. The vSphere Namespace provides a logical unit of resources on Supervisor, including compute, storage, permissions, VM classes, and images. When you configure a CPU or memory limit on a vSphere Namespace, for example, the same resource limits are applied to the resource pool backing that namespace. In this fashion, vSphere Namespaces enable multi-tenancy in Supervisor.

The same multi-tenant experience applies to Supervisor deployed across three vSphere Zones. When a vSphere Namespace is created on a zoned Supervisor, the system creates a resource pool in each of the vSphere clusters supporting that Supervisor. This enables a TKG cluster provisioned in that vSphere Namespace to be deployed in any of the zones belonging to this Supervisor.

Using the vSphere Client, you can view the vSphere Namespace resource pool and objects by selecting either the Hosts and Clusters view or the VMs and Templates view. When you provision a TKG cluster, it is created in the target vSphere Namespace. In a zoned Supervisor deployment, the same resource pool exists in each vSphere cluster.

Figure: vSphere Namespace Objects

vSphere Namespace Storage for TKG Service Clusters

vSphere Cloud Native Storage (CNS) provides storage policies that support the provisioning of persistent volumes and their backing virtual disks for use with Kubernetes workloads.

The Container Storage Interface (CSI) is an industry standard that Kubernetes uses to provision persistent storage for containers. Supervisor runs a CNS-CSI driver that connects vSphere CNS storage to the Kubernetes environment through the vSphere Namespace. The vSphere CNS-CSI communicates directly with the CNS control plane for all storage provisioning requests that come from TKG clusters in the vSphere Namespace.

TKG clusters run a modified version of the vSphere CNS-CSI driver, the paravirtual CSI (pvCSI), that is responsible for all storage-related requests originating from the TKG cluster. The requests are delivered to the CNS-CSI on Supervisor, which then propagates them to CNS on the vCenter Server.
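
If you want to confirm which CSI driver is registered, you can query it with kubectl from the Supervisor or TKG cluster context; the driver name matches the provisioner shown in the storage class listing later in this topic:

# List the registered CSI driver; the vSphere driver appears as csi.vsphere.vmware.com
kubectl get csidriver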

The diagram shows the relationship between the vSphere Namespace, Supervisor, and TKG cluster storage mechanisms.

""

Persistent Storage Volumes for TKG Service Clusters

Persistent volumes are required for stateful applications in Kubernetes. For more information about persistent volumes, refer to the Kubernetes documentation.

In the vSphere environment, persistent volume objects are backed by virtual disks that reside on datastores. Datastores are represented by storage policies. When you assign a vSphere storage policy to a vSphere Namespace, the storage policy is made available as a Kubernetes storage class for each TKG cluster in that namespace.

TKG supports dynamic and static provisioning of persistent volumes. With dynamic provisioning, the persistent volume does not need to be pre-provisioned. You issue a persistent volume claim (PVC) that references a storage class available in the vSphere Namespace. TKG automatically provisions the corresponding persistent volume with a backing virtual disk. See Creating Persistent Storage Volumes Dynamically.
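
For example, the following is a minimal PVC manifest for dynamic provisioning, assuming a storage class named wcpglobal-storage-profile is available in the vSphere Namespace (as in the listing shown later in this topic); the claim name is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc                                  # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce                               # single-node read-write access
  storageClassName: wcpglobal-storage-profile   # storage class replicated from the vSphere storage policy
  resources:
    requests:
      storage: 5Gi                              # requested size of the backing virtual disk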

With static provisioning, you use an existing storage object and make it available to a cluster. You define the persistent volume by providing the details of the existing storage object, its supported configurations, and mount options. See Creating Persistent Storage Volumes Statically.
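
The following is a minimal sketch of a statically defined persistent volume and a claim bound to it, assuming you already have the volume ID of an existing storage object; the names and the volume handle are hypothetical placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv                        # hypothetical volume name
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # keep the backing disk when the PV is released
  csi:
    driver: csi.vsphere.vmware.com       # the vSphere CSI driver
    volumeHandle: "<existing-volume-id>" # ID of the existing storage object (placeholder)
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc                       # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ""                   # empty class so the claim binds to the pre-created PV
  volumeName: static-pv                  # bind explicitly to the PV defined above
  resources:
    requests:
      storage: 2Gi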

The diagram illustrates the dynamic persistent volume provisioning workflow. You create a PVC using kubectl on the TKG cluster. This action generates a matching PVC on Supervisor and triggers the CNS-CSI driver, which invokes the CNS create volume API.

""

Storage Class Editions for TKG Service Clusters

To configure a vSphere Namespace, you assign one or more vSphere storage policies. When the vSphere storage policy is applied, it is converted to a Kubernetes storage class and replicated on Supervisor. Likewise, the TKG controller replicates the storage class on each TKG cluster deployed in that vSphere Namespace.

On the TKG cluster side you will see two editions of the storage class: one with the user-defined name specified when the vSphere storage policy was created, and the other with -latebinding appended to that name.

The late binding edition of the storage class can be used by developers to bind to a persistent storage volume after the compute node is selected by the TKG pod scheduler. For more information on which storage class to use and when, see Using Storage Classes for Persistent Volumes.
kubectl get sc
NAME                                    PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
wcpglobal-storage-profile               csi.vsphere.vmware.com   Delete          Immediate              true                   2m43s
wcpglobal-storage-profile-latebinding   csi.vsphere.vmware.com   Delete          WaitForFirstConsumer   true                   2m43s
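
For instance, a claim that uses the late binding edition remains in Pending state until a pod that consumes it is scheduled. This sketch assumes the wcpglobal-storage-profile-latebinding class from the listing above; the claim name is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: latebinding-pvc                 # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: wcpglobal-storage-profile-latebinding  # WaitForFirstConsumer binding mode
  resources:
    requests:
      storage: 5Gi                      # volume is provisioned only after a consuming pod is scheduled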

vSphere Namespace Creation

There are several ways to create vSphere Namespaces.

Administrators can create vSphere Namespaces using the vSphere Client. See Create a vSphere Namespace for Hosting TKG Service Clusters.

vCenter Single Sign-On users who are granted the Owner role permission for a vSphere Namespace can create vSphere Namespaces in a self-service manner using kubectl. See Enable vSphere Namespace Creation Using Kubectl.
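
As a minimal sketch, once self-service is enabled and you are logged in to Supervisor, namespace creation is a single kubectl command; the server address, user name, and namespace name here are hypothetical placeholders:

# Log in to Supervisor with the kubectl vsphere plugin, then create the namespace
kubectl vsphere login --server=<supervisor-ip> --vsphere-username <user>
kubectl create namespace my-namespace   # hypothetical namespace name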

VMware exposes vCenter Server APIs for managing the life cycle of vSphere Namespaces, and provides software development kits (SDKs), including: