DevOps engineers connect to the vSphere IaaS control plane to provision and manage the life cycle of TKG Service clusters. Developers connect to TKG Service clusters to deploy packages, workloads, and services. Administrators might need direct access to TKG Service cluster nodes to troubleshoot. The platform provides identity and access management tools and methods supporting each use case and role.
TKG Service Cluster Access Is Scoped to the vSphere Namespace
You provision TKG Service clusters in a vSphere Namespace. When you configure a vSphere Namespace, you set its DevOps permissions, including the identity source, users and groups, and roles. The platform propagates these permissions to each TKG Service cluster provisioned in that vSphere Namespace. The platform supports two authentication methods: vCenter Single Sign-On and an OIDC-compliant external identity provider.
Authentication Using vCenter Single Sign-On and Kubectl
By default, vCenter Single Sign-On is used to authenticate with the environment, including Supervisor and TKG Service clusters. vCenter Single Sign-On provides authentication for the vSphere infrastructure and can integrate with AD/LDAP systems. For more information, see vSphere Authentication with vCenter Single Sign-On.
To authenticate using vCenter Single Sign-On, you use the vSphere Plugin for kubectl. Once authenticated, you use kubectl to declaratively provision and manage the life cycle of TKG Service clusters and to interact with Supervisor.
The vSphere Plugin for kubectl depends on kubectl. When you authenticate with the kubectl vsphere login command, the plugin issues a POST request with basic authentication to the /wcp/login endpoint on Supervisor. vCenter Server issues a JSON Web Token (JWT) that the Supervisor trusts.
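As an illustration, a login session with the plugin might look like the following; the Supervisor address, user names, cluster name, and namespace are placeholders:

```shell
# Authenticate to Supervisor with vCenter Single Sign-On
# (hypothetical Supervisor address and SSO user)
kubectl vsphere login --server=192.0.2.10 \
    --vsphere-username administrator@vsphere.local

# Authenticate to a specific TKG Service cluster in its vSphere Namespace
kubectl vsphere login --server=192.0.2.10 \
    --vsphere-username devops@vsphere.local \
    --tanzu-kubernetes-cluster-namespace tkgs-cluster-namespace \
    --tanzu-kubernetes-cluster-name tkgs-cluster-1

# Switch to the cluster context and verify access
kubectl config use-context tkgs-cluster-1
kubectl get nodes
```

The plugin adds the resulting context and credentials to your kubeconfig, so subsequent kubectl commands run against the selected cluster.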
To connect using vCenter Single Sign-On, see Connecting to TKG Service Clusters Using vCenter SSO Authentication.
Authentication Using an External Identity Provider and the Tanzu CLI
You can configure Supervisor with an external identity provider that supports the OpenID Connect protocol. Once configured, Supervisor functions as an OAuth 2.0 client and uses the Pinniped authentication service to provide client connectivity using the Tanzu CLI. The Tanzu CLI supports provisioning and managing the life cycle of TKG Service clusters. Each Supervisor instance can support a single external identity provider.
Once the authentication plugin and OIDC issuer are configured for the pinniped-auth CLI, when you log in to Supervisor using tanzu login --endpoint, the system looks up several well-known ConfigMaps to build the Pinniped configuration file.
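Under these assumptions, logging in to a Supervisor configured with an external OIDC provider might look like the following sketch; the endpoint address and context name are placeholders:

```shell
# Log in to Supervisor; the Tanzu CLI invokes the pinniped-auth plugin,
# which redirects to the external OIDC provider for authentication
# (hypothetical endpoint and context name)
tanzu login --endpoint https://192.0.2.10 --name tkgs-supervisor

# List the clusters visible to the logged-in user
tanzu cluster list
```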
To connect using an external OIDC provider, see Connecting to TKG Clusters on Supervisor Using an External Identity Provider.
Authentication Using a Hybrid Approach: vCenter SSO with Tanzu CLI
If you are using vCenter Single Sign-On as the identity provider and you want to use the Tanzu CLI, you can take a hybrid approach and log in to Supervisor using both tools. This approach may be useful for installing standard packages. See Connect to Supervisor Using the Tanzu CLI and vCenter SSO Authentication.
Users and Groups for DevOps
The permissions you establish when you configure a vSphere Namespace are for DevOps users to manage the life cycle of TKG Service clusters. The DevOps user or group to whom you assign permissions must exist in the identity source. DevOps users authenticate using their identity provider credentials.
Role Permissions and Bindings
There are two types of role-based access control (RBAC) systems for TKGS clusters: vSphere Namespace permissions and Kubernetes RBAC authorization. As a vSphere administrator, you assign vSphere Namespace permissions to allow users to create and operate TKG Service clusters. Cluster operators use Kubernetes RBAC to grant cluster access and assign role permissions to developers. See Grant Developers vCenter SSO Access to TKG Service Clusters.
vSphere Namespaces support three roles: Can edit, Can view, and Owner. Role permissions are assigned at and scoped to the vSphere Namespace that hosts a TKG Service cluster. See Configuring vSphere Namespaces for Hosting TKG Service Clusters.
A user or group granted the Can edit permission on a vSphere Namespace is bound to the Kubernetes role cluster-admin. The cluster-admin role lets users provision and operate TKG Service clusters in the target vSphere Namespace. You can view this mapping by running kubectl get rolebinding against the target vSphere Namespace.
```shell
kubectl get rolebinding -n tkgs-cluster-namespace
NAME                                                           ROLE               AGE
wcp:tkg-cluster-namespace:group:vsphere.local:administrators   ClusterRole/edit   33d
wcp:tkg-cluster-namespace:user:vsphere.local:administrator     ClusterRole/edit   33d
```
A user or group granted the Can view role permission on a vSphere Namespace has read-only access to TKG Service cluster objects provisioned in that vSphere Namespace. Unlike Can edit, the Can view role does not create a Kubernetes RoleBinding on TKGS clusters in that vSphere Namespace, because Kubernetes has no equivalent read-only role to which the user or group could be bound. For users other than cluster-admin, you use Kubernetes RBAC to grant access. See Grant Developers vCenter SSO Access to TKG Service Clusters.
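For example, a cluster operator could grant developers edit access to a namespace on a TKG Service cluster with a RoleBinding along the following lines; the group name and namespace are placeholders, not values from your environment:

```yaml
# Hypothetical RoleBinding granting the SSO group "developers" edit
# access to the "default" namespace of a TKG Service cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-edit
  namespace: default
subjects:
- kind: Group
  name: sso:developers@vsphere.local   # vCenter SSO group (placeholder)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                           # built-in Kubernetes ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

Applying this manifest with kubectl on the cluster binds the built-in edit ClusterRole to the group, scoped to the named namespace.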
vSphere Permissions
The table shows the types of vSphere permissions required for various vSphere IaaS control plane personas. If needed, you can create a custom vSphere SSO group and role for Workload Management. See Create a Dedicated Group and Role for Platform Operators.
Persona | vSphere Role | vSphere SSO Group | vSphere Namespace |
---|---|---|---|
VI/Cloud Admin | Administrator | Administrators | SSO User and/or AD User |
DevOps/Platform Operator | Non-admin or custom role | ServiceProviderUsers | SSO User and/or AD User |
Developer | Read Only or None | None | SSO User and/or AD User |
System Administrator Connectivity
Administrators can connect to TKG Service clusters as the kubernetes-admin user. This method might be appropriate if vCenter Single Sign-On authentication is not available. For troubleshooting purposes, system administrators can connect to TKG Service cluster nodes as the vmware-system-user using SSH and a private key. See Connecting to TKG Service Clusters as a Kubernetes Administrator and System User.
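As a sketch of that workflow, the node SSH private key is stored as a secret in the cluster's vSphere Namespace on Supervisor; the cluster name, namespace, and node IP below are placeholders:

```shell
# From the Supervisor context, extract the cluster's SSH private key
# (the secret is named CLUSTER-NAME-ssh; all names here are placeholders)
kubectl get secret tkgs-cluster-1-ssh -n tkgs-cluster-namespace \
    -o jsonpath='{.data.ssh-privatekey}' | base64 -d > cluster-ssh-key
chmod 600 cluster-ssh-key

# SSH to a cluster node as the system user
ssh -i cluster-ssh-key vmware-system-user@10.0.0.5
```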