You deploy one or more TKG Service clusters in a vSphere Namespace. Configuration settings applied to the vSphere Namespace are inherited by each TKG Service cluster deployed there.

Configure Role Permissions for the vSphere Namespace

Role permissions are scoped to the vSphere Namespace. Three role permissions can be assigned to TKG cluster users and groups: Owner, Can Edit, and Can View. Each role is described below. For more information, see About Identity and Access Management for TKG Service Clusters.

If you are using vCenter Single Sign-On, all three roles are available. To assign SSO users and groups to a vSphere Namespace, see Configure vSphere Namespace Permissions for vCenter Single Sign-On Users and Groups.

If you are using an external OIDC provider, the Owner role permission is not available. To assign OIDC users and groups to a vSphere Namespace, see Configure vSphere Namespace Permissions for External Identity Provider Users and Groups.

Owner: The Owner role permission lets assigned users and groups administer vSphere Namespace objects using kubectl, and operate TKG clusters. See Enable vSphere Namespace Creation Using Kubectl.

Can Edit: The Can Edit role permission lets assigned users and groups view vSphere Namespace objects, and operate TKG clusters. A vCenter Single Sign-On user or group granted the Can Edit permission is bound to the Kubernetes cluster-admin role for each TKG cluster deployed in that vSphere Namespace.

Can View: The Can View role permission lets assigned users and groups view vSphere Namespace objects.
Note: There is no equivalent read-only role in Kubernetes to which the Can View permission can be bound. To grant cluster access to Kubernetes users, see Grant Developers vCenter SSO Access to TKG Service Clusters.
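
As an optional check of the Can Edit mapping described above, you can log in to a provisioned TKG cluster with the vSphere Plugin for kubectl and inspect the cluster role bindings. The following is a minimal sketch; the server address, user name, namespace, and cluster name are placeholders.

# Log in to a TKG cluster through the Supervisor (placeholder values).
kubectl vsphere login --server=192.0.2.10 \
  --vsphere-username devops@vsphere.local \
  --tanzu-kubernetes-cluster-namespace my-namespace \
  --tanzu-kubernetes-cluster-name my-tkg-cluster

# List cluster role bindings and look for the binding that grants
# cluster-admin to the Supervisor users or groups that hold Can Edit.
kubectl get clusterrolebindings | grep -i admin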

Configure Persistent Storage for the vSphere Namespace

You can assign one or more vSphere storage policies to a vSphere Namespace. An assigned storage policy controls datastore placement of persistent volumes in the vSphere storage environment.

Typically a vSphere administrator defines a vSphere storage policy. If you are using vSphere Zones, the storage policy must be configured with the Zonal topology. See Create a vSphere Storage Policy for TKG Service Clusters.

To assign a vSphere storage policy to a vSphere Namespace:
  1. Select Workload Management > Namespace and the target vSphere Namespace.
  2. From the Storage tile, select Add Storage.
  3. Select one or more storage policies from the available options.

For each vSphere storage policy you assign to a vSphere Namespace, the system creates two matching Kubernetes storage classes in that vSphere Namespace. These storage classes are replicated to each TKG cluster deployed in that vSphere Namespace. See Using Storage Classes for Persistent Volumes.
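
For example, after logging in to the Supervisor or to a TKG cluster with kubectl, you can list the generated storage classes and reference one in a persistent volume claim. This is a minimal sketch; the storage class name my-storage-policy is a placeholder derived from an assigned storage policy.

# List the storage classes available in the current context.
kubectl get storageclass

# Create a persistent volume claim that uses one of the replicated
# storage classes ("my-storage-policy" is a placeholder name).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-policy
  resources:
    requests:
      storage: 2Gi
EOF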

Set Capacity and Usage Limits for the vSphere Namespace

When you configure a vSphere Namespace, a resource pool for the vSphere Namespace is created on vCenter Server. By default, this resource pool has no capacity or usage quotas; resources are limited only by the underlying infrastructure.

In the Capacity and Usage tile for the vSphere Namespace, you can configure the following Limits.
CPU: The amount of CPU resources to reserve for the vSphere Namespace.
Memory: The amount of memory to reserve for the vSphere Namespace.
Storage: The total amount of storage space to reserve for the vSphere Namespace.
Storage Policy Limit: The amount of storage dedicated individually to each of the storage policies that are associated with the vSphere Namespace.

Typically, for TKG cluster deployments, you do not need to configure resource quota on the vSphere Namespace. If you do assign quota limits, it is important to understand their potential impact on TKG clusters deployed there.
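
If you do assign limits, a quick way to see whether they surface as Kubernetes ResourceQuota objects in the Supervisor namespace (as they commonly do for storage and storage policy limits, while CPU and memory limits are enforced on the vCenter Server resource pool) is to inspect the namespace with kubectl. The namespace name below is a placeholder.

# Switch to the Supervisor context for the vSphere Namespace.
kubectl config use-context my-namespace

# Show any resource quotas and their current usage in the namespace.
kubectl get resourcequota -n my-namespace
kubectl describe resourcequota -n my-namespace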

CPU and Memory Limits

CPU and memory limits configured on the vSphere Namespace have no bearing on a TKG cluster deployed there if the cluster nodes use the guaranteed VM class type. However, if the cluster nodes use the best effort VM class type, the CPU and memory limits can impact the TKG cluster.

Because the best effort VM class type allows resources to be overcommitted, you can run out of resources if you have set CPU and memory limits on the vSphere Namespace where you are provisioning TKG clusters. If contention occurs and the TKG cluster control plane is impacted, the cluster can stop running. For this reason, you should always use the guaranteed VM class type for production clusters. If you cannot use the guaranteed VM class type for all production nodes, at a minimum use it for the control plane nodes.
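
For illustration only, the following TanzuKubernetesCluster sketch (v1alpha3 API) uses a guaranteed VM class for the control plane and a best effort class for the worker node pool. The cluster name, namespace, storage class, and TKR version are placeholders; substitute values that exist in your environment.

cat <<EOF | kubectl apply -f -
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: example-cluster
  namespace: my-namespace          # the target vSphere Namespace
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: guaranteed-medium   # reserved CPU and memory for the control plane
      storageClass: my-storage-policy
      tkr:
        reference:
          name: v1.26.5---vmware.2-tkg.1   # placeholder TKR name
    nodePools:
    - name: workers
      replicas: 3
      vmClass: best-effort-medium  # worker resources can be overcommitted
      storageClass: my-storage-policy
EOF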

Storage and Storage Policy Limits

A Storage limit configured on the vSphere Namespace determines the overall amount of storage that is available to the vSphere Namespace for all TKG clusters deployed there.

A Storage Policy limit configured on the vSphere Namespace determines the amount of storage available for that storage class for each TKG cluster where the storage class is replicated.

Some workloads have minimum storage requirements. See #GUID-9CA5FE35-8DA5-4F76-BD7F-81059CCA602E, for example.

Associate the TKR Content Library with the TKG Service

To provision TKG clusters, you associate the TKG Service with a content library that hosts Tanzu Kubernetes release (TKR) images. To create such a content library, see Administering Kubernetes Releases for TKG Service Clusters.

To associate a TKR content library with a vSphere Namespace:
  1. Select Workload Management > Supervisors > Supervisor (select the Supervisor instance).
  2. Select Configure > Supervisor > General > Tanzu Kubernetes Grid Service.
  3. Select Content Library > Edit.
  4. Select a TKR content library.
  5. Navigate to the vSphere Namespace and select Manage Namespace.
  6. Verify that the selected Content Library appears in the Tanzu Kubernetes Grid Service configure pane.

It is important to understand that the TKR content library is not namespace-scoped. All vSphere Namespaces use the same TKR content library that is configured for the Tanzu Kubernetes Grid Service (TKGS). Editing the TKR content library for TKGS applies the change to every vSphere Namespace.

Note: The content library referenced in the VM Service tile is for use with standalone VMs, not the TKR content library. Do not add the TKR content library to this tile.
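
After the content library is associated, a rough way to confirm that the Supervisor sees the available Kubernetes releases is to list the TanzuKubernetesRelease objects with kubectl from the Supervisor context.

# List the Tanzu Kubernetes releases synchronized from the TKR content library.
kubectl get tanzukubernetesreleases

# On recent releases, the short resource name tkr is also available:
kubectl get tkr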

Associate VM Classes with the vSphere Namespace

vSphere IaaS control plane provides several default VM classes, and you can create your own.

To provision a TKG cluster, associate one or more VM classes with the target vSphere Namespace. Bound classes are available for use by TKG cluster nodes deployed in that vSphere Namespace.

To associate the default VM classes with a vSphere Namespace, log in to the vCenter Server using the vSphere Client and complete the following procedure.
  1. Select Workload Management > Namespace and the target vSphere Namespace.
  2. For the VM Service tile, select Add VM Class.
  3. Select each of the VM classes you want to add.
    1. To add the default VM classes, select the check box in the table header on page 1 of the list, navigate to page 2 and select the check box in the table header on that page. Verify that all classes are selected.
    2. To create a custom class, click Create New VM Class. Refer to the VM Services documentation for instructions.
  4. Click OK to complete the operation.
  5. Confirm that the classes are added. The VM Service tile shows Manage VM Classes.
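
To confirm the result from the command line, you can list the VM classes and their namespace bindings on the Supervisor, assuming the VirtualMachineClass and VirtualMachineClassBinding resources exposed in recent releases. This is a sketch; the namespace name is a placeholder.

# VM classes defined on the Supervisor (cluster-scoped).
kubectl get virtualmachineclasses

# Bindings showing which classes are associated with the vSphere Namespace.
kubectl get virtualmachineclassbindings -n my-namespace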

Verify vSphere Namespace Configuration

A configured vSphere Namespace includes tiles for Status, Permissions, Storage, Capacity and Usage, TKR Content Library, and VM Classes.
Figure 1. Configured vSphere Namespace
The Status tile includes a link to the vSphere IaaS control plane CLI tools. This DevOps page is served by the Supervisor control plane load balancer. Provide the link to TKG cluster users so they can download the Kubernetes CLI Tools for vSphere. See Install the Kubernetes CLI Tools for vSphere.
Figure 2. vSphere Namespace DevOps Page

To verify vSphere Namespace configuration using kubectl, see Verify vSphere Namespace Readiness for Hosting TKG Service Clusters.
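
As a quick command-line spot check to complement the referenced topic, you can log in to the Supervisor and confirm that the vSphere Namespace is available as a kubectl context. The server address, user name, and namespace name below are placeholders.

# Log in to the Supervisor endpoint (placeholder values).
kubectl vsphere login --server=192.0.2.10 \
  --vsphere-username devops@vsphere.local

# The configured vSphere Namespace should appear as a context.
kubectl config get-contexts
kubectl config use-context my-namespace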