If you enabled identity management when you deployed a management cluster, you must perform additional post-deployment steps on the management cluster so that authenticated users can access it.

To configure identity management, perform the following steps after the management cluster has been deployed.

Prerequisites

  • You have deployed a management cluster with either OIDC or LDAPS identity management configured.
  • If you configured an OIDC server as the identity provider, you have followed the procedures in Prepare External Identity Management to add users in the OIDC server.
  • You have connected kubectl to the management cluster as described below.

Connect kubectl to the Management Cluster

To configure identity management, you must obtain and use the admin context of the management cluster.

  1. Get the admin context of the management cluster.

    The procedures in this topic use a management cluster named id-mgmt-test.

    tanzu management-cluster kubeconfig get id-mgmt-test --admin
    

    If your management cluster is named id-mgmt-test, you should see the confirmation Credentials of workload cluster 'id-mgmt-test' have been saved. You can now access the cluster by running 'kubectl config use-context id-mgmt-test-admin@id-mgmt-test'.

    The admin context of a cluster gives you full access to the cluster without requiring authentication with your IDP.

  2. Set kubectl to the admin context of the management cluster.

    kubectl config use-context id-mgmt-test-admin@id-mgmt-test
    

The next steps depend on whether you are using an OIDC or LDAP identity management service.

Check the Status of an OIDC Identity Management Service

Tanzu Kubernetes Grid uses Pinniped to integrate clusters with an OIDC identity service. Do the following to check the status of the OIDC service and note the address or port at which it is exposed.

Note: In Tanzu Kubernetes Grid v1.3.0, Pinniped used Dex as the endpoint for OIDC providers. This required a different procedure for checking OIDC service status than the one below.

  1. Get information about the services that are running in the management cluster.

    The identity management service runs in the pinniped-supervisor namespace:

    kubectl get all -n pinniped-supervisor
    

    Depending on your infrastructure provider, you see one of the following entries in the output:

    vSphere:

    NAME                          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    service/pinniped-supervisor   NodePort   100.70.70.12   <none>        5556:31234/TCP   84m
    

    Amazon EC2:

    NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP                              PORT(S)         AGE
    service/pinniped-supervisor   LoadBalancer   100.69.13.66   ab1[...]71.eu-west-1.elb.amazonaws.com   443:30865/TCP   56m
    

    Azure:

    NAME                          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)         AGE
    service/pinniped-supervisor   LoadBalancer   100.69.169.220   20.54.226.44     443:30451/TCP   84m
    
  2. Note the following information. A jsonpath sketch for retrieving these values with a single kubectl command follows this procedure.

    • vSphere: Note the node port on which the pinniped-supervisor service is exposed. In the example above, this port is 31234.
    • Amazon EC2 and Azure: Note the external address of the pinniped-supervisor service, as listed under EXTERNAL-IP.
  3. Check that all services in the management cluster are running.

    kubectl get pods -A
    

    It can take several minutes for the Pinniped service to be up and running. For example, on Amazon EC2 and Azure deployments, the service must wait for the LoadBalancer IP addresses to be ready. Wait until pinniped-post-deploy-job shows the status Completed before you proceed to the next steps.

    NAMESPACE             NAME                                   READY  STATUS      RESTARTS  AGE
    [...]
    pinniped-supervisor   pinniped-post-deploy-job-hq8fc         0/1    Completed   0         85m
    

NOTE: You are able to run kubectl get pods because you are using the admin context for the management cluster. Users who attempt to connect to the management cluster with the regular context will not be able to access its resources, because they are not yet authorized to do so.
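
If you want to retrieve these values with a single command instead of reading them from the kubectl get output, the following is a minimal sketch using kubectl's jsonpath output. The field paths are standard Kubernetes Service fields rather than anything specific to Tanzu Kubernetes Grid:

    # vSphere (NodePort service): print the node port on which pinniped-supervisor is exposed
    kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.spec.ports[0].nodePort}'

    # Amazon EC2 and Azure (LoadBalancer service): print the external hostname or IP address
    kubectl get service pinniped-supervisor -n pinniped-supervisor -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}'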

Check the Status of an LDAP Identity Management Service

Tanzu Kubernetes Grid uses Pinniped to integrate clusters with an LDAP identity service, along with Dex to expose the service endpoint. Do the following to check the status of the LDAP service and note the address or port at which it is exposed.

  1. Get information about the services that are running in the management cluster in the tanzu-system-auth namespace.

    kubectl get all -n tanzu-system-auth
    

    Depending on your infrastructure provider, you see one of the following entries in the output:

    vSphere:

    NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
    service/dexsvc   NodePort   100.70.70.12   <none>        5556:30167/TCP   84m
    

    Amazon EC2:

    NAME             TYPE           CLUSTER-IP       EXTERNAL-IP                              PORT(S)         AGE
    service/dexsvc   LoadBalancer   100.65.184.107   a6e[...]74.eu-west-1.elb.amazonaws.com   443:32547/TCP   84m
    

    Azure:

    NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
    service/dexsvc   LoadBalancer   100.69.169.220   20.54.226.44  443:30451/TCP   84m
    
  2. Check that all services in the management cluster are running.

    kubectl get pods -A
    

    It can take several minutes for the Pinniped service to be up and running. For example, on Amazon EC2 and Azure deployments, the service must wait for the LoadBalancer IP addresses to be ready. Wait until pinniped-post-deploy-job shows the status Completed before you proceed to the next steps. A kubectl wait alternative to polling is sketched after this section.

    NAMESPACE             NAME                                   READY  STATUS      RESTARTS  AGE
    [...]
    pinniped-supervisor   pinniped-post-deploy-job-hq8fc         0/1    Completed   0         85m
    

NOTE: You are able to run kubectl get pods because you are using the admin context for the management cluster. Users who attempt to connect to the management cluster with the regular context will not be able to access its resources, because they are not yet authorized to do so.
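
For both the OIDC and LDAP services, instead of repeatedly running kubectl get pods, you can block until the post-deploy job reports completion. A minimal sketch using kubectl wait; the 10-minute timeout is an arbitrary value, not a Tanzu Kubernetes Grid default:

    # Wait until the Pinniped post-deploy job reports the Complete condition
    kubectl wait job pinniped-post-deploy-job -n pinniped-supervisor --for=condition=complete --timeout=10m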

Provide the Callback URI to the OIDC Provider

If you configured an LDAP server as your identity provider, you do not need to configure a callback URI. For the next steps, go to Generate and Test a kubeconfig File for Cluster Access.

If you configured the management cluster to use OIDC authentication, you must provide the callback URI for that management cluster to your OIDC identity provider.

For example, if you are using OIDC and your IDP is Okta, perform the following steps:

  1. Log in to your Okta account.
  2. In the main menu, go to Applications.
  3. Select the application that you created for Tanzu Kubernetes Grid.
  4. In the General Settings panel, click Edit.
  5. Under Login, update Login redirect URIs to include the address at which the pinniped-supervisor service is exposed:

    • vSphere: Add the IP address that you set as the API endpoint and the pinniped-supervisor port number that you noted in the previous procedure.

      https://<API-ENDPOINT-IP>:31234/callback
      
    • Amazon EC2 and Azure: Add the external address of the pinniped-supervisor service that you noted in the previous procedure.

      https://<EXTERNAL-IP>/callback
      

      In all cases, you must specify https, not http.

  6. Click Save.
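
Optionally, before or after you update the redirect URIs, you can confirm that the Pinniped Supervisor endpoint is reachable from your network by requesting its OIDC discovery document. This is a rough connectivity check that assumes the issuer is served at the root of the EXTERNAL-IP address noted earlier; the -k option skips certificate verification and is appropriate only for such a quick test:

    curl -k https://<EXTERNAL-IP>/.well-known/openid-configuration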

Create a Role Binding on the Management Cluster

To give non-admin users access to the management cluster, you generate and distribute a kubeconfig file as described in Generate and Test a kubeconfig File for Cluster Access below.

To make this kubeconfig work, you must first set up role-based access control (RBAC) for clusters by creating a role binding on the management cluster. This role binding assigns role-based permissions to individual authenticated users or user groups. There are many roles with which you can associate users, but the most useful roles are the following:

  • cluster-admin: Can perform any operation on the cluster.
  • admin: Can read and modify most resources in a namespace, including creating roles and role bindings, but cannot modify resource quotas or the namespace itself.
  • edit: Can create, update, and delete most resources, such as deployments, services, and pods, but cannot view or modify roles or role bindings.
  • view: Read-only.

You can assign any of these roles to users. For more information about RBAC and cluster role bindings, see Using RBAC Authorization in the Kubernetes documentation.

  1. Make sure that you are using the admin context of the management cluster.

    kubectl config current-context
    

    If the context is not the management cluster admin context, set kubectl to use that context. For example:

    kubectl config use-context id-mgmt-test-admin@id-mgmt-test
    
  2. To see the full list of roles that are available on a cluster, run the following command:

    kubectl get clusterroles
    
  3. Create a cluster role binding to associate a given user with a role.

    The following command creates a cluster role binding named id-mgmt-test-rb that binds the role cluster-admin for this cluster to the user user@example.com. For OIDC, the username is usually the user's email address. For LDAPS, it is the LDAP username, not the email address. A declarative manifest equivalent to this command is sketched after this procedure.

    OIDC:

    kubectl create clusterrolebinding id-mgmt-test-rb --clusterrole cluster-admin --user user@example.com
    

    LDAP:

    kubectl create clusterrolebinding id-mgmt-test-rb --clusterrole cluster-admin --user <username>
    
  4. Attempt to connect to the management cluster again by using the kubeconfig file that you created in Generate and Test a kubeconfig File for Cluster Access.

    kubectl get pods -A --kubeconfig /tmp/id_mgmt_test_kubeconfig
    

    This time, because the user is bound to the cluster-admin role on this management cluster, the list of pods should be displayed.
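
If you prefer to manage access declaratively, the same binding can be expressed as a manifest and created with kubectl apply -f. The following sketch is equivalent to the imperative command in step 3 and uses the same example names:

    # clusterrolebinding.yaml: binds the cluster-admin role to the example user
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: id-mgmt-test-rb
    subjects:
    - kind: User
      name: user@example.com    # OIDC: usually the email address; LDAPS: the LDAP username
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io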

Add a Load Balancer for an Identity Provider on vSphere

On vSphere, you can use a load balancer with the OIDC or LDAP identity management services. Setting up a load balancer on the management cluster for identity management can simplify your DNS and firewall configuration requirements.

If you are using NSX Advanced Load Balancer (ALB) as your control plane endpoint provider and are using an identity provider, in other words, if your management cluster configuration sets AVI_CONTROL_PLANE_HA_PROVIDER: true and does not set IDENTITY_MANAGEMENT_TYPE: none, then you must configure the identity management services to use NSX ALB as follows.

This procedure modifies Pinniped by updating the app secret that contains the deployment configuration. This update ensures that any configuration changes made to Pinniped components are preserved during future upgrades of the management cluster.

Prerequisites

Before you begin this procedure, you must have the following:

  • An external load balancer service configured and available for use as a provider by the management cluster. For example, NSX ALB or MetalLB.
  • Pinniped (for OIDC), or Pinniped and Dex (for LDAP), successfully installed and configured on the management cluster.

Procedure

  1. Make sure that you are using the admin context of the management cluster.

    kubectl config current-context
    

    If the context is not the management cluster admin context, set kubectl to use that context. For example:

    kubectl config use-context id-mgmt-test-admin@id-mgmt-test
    
  2. Create an overlay for the Pinniped addon that changes the pinniped-supervisor service to be type=LoadBalancer:

    1. Create a file pinniped-supervisor-svc-overlay.yaml with the following content:

      #@ load("@ytt:overlay", "overlay")
      #@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}})
      ---
      #@overlay/replace
      spec:
        type: LoadBalancer
        selector:
          app: pinniped-supervisor
        ports:
          - name: https
            protocol: TCP
            port: 443
            targetPort: 8443
      
      #@ load("@ytt:overlay", "overlay")
      #@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "dexsvc", "namespace": "tanzu-system-auth"}}), missing_ok=True
      ---
      #@overlay/replace
      spec:
        type: LoadBalancer
        selector:
          app: dex
        ports:
          - name: dex
            protocol: TCP
            port: 443
            targetPort: https
      
    2. Convert the file into a base64-encoded string:

      • Linux:

        cat pinniped-supervisor-svc-overlay.yaml | base64 -w 0
        
      • MacOS:

        cat pinniped-supervisor-svc-overlay.yaml | base64
        
  3. Patch the Pinniped add-on secret, which contains the Pinniped configuration values, with the overlay values. The secret name is prefixed with the name of your management cluster; the example below assumes a secret named mgmt-pinniped-addon:

    kubectl patch secret mgmt-pinniped-addon -n tkg-system -p '{"data": {"overlays.yaml": "OVERLAY-BASE64"}}'
    

    Where OVERLAY-BASE64 is the base64-encoded output from the previous step. A combined one-line variant of this step and the previous one is sketched at the end of this procedure.

  4. After a few seconds, list the pinniped-supervisor (and dexsvc if using LDAP) services to confirm that they now have type LoadBalancer:

    $ kubectl get services -n pinniped-supervisor
    NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)         AGE
    pinniped-supervisor   LoadBalancer   100.65.5.190   192.168.15.12   443:30754/TCP   125m
    
  5. Delete pinniped-post-deploy-job to re-run it:

    kubectl delete jobs pinniped-post-deploy-job -n pinniped-supervisor
    

    Wait for the Pinniped post-deploy job to be re-created, run, and complete, which may take a few minutes. You can check the status with kubectl get job:

    $ kubectl get job pinniped-post-deploy-job -n pinniped-supervisor
    NAME                       COMPLETIONS   DURATION   AGE
    pinniped-post-deploy-job   1/1           6s         22h
    
  6. After the post-deploy job completes, get the non-admin kubeconfig, which you can distribute to non-admin users:

    tanzu management-cluster kubeconfig get
    

Using this kubeconfig, non-admin users can now set their kubectl context to the management cluster and run kubectl commands against it, subject to the role bindings that you configure.
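
If you script this procedure, the base64 encoding and the secret patch from steps 2 and 3 can be combined into a single command. A minimal sketch, assuming a Linux shell and the same file and secret names as above:

    # Encode the overlay and patch the Pinniped add-on secret in one step (Linux base64)
    kubectl patch secret mgmt-pinniped-addon -n tkg-system \
      -p "{\"data\": {\"overlays.yaml\": \"$(base64 -w 0 < pinniped-supervisor-svc-overlay.yaml)\"}}"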

Generate and Test a kubeconfig File for Cluster Access

To give users access to management or workload clusters, you generate a kubeconfig file for the management cluster and then share the file with those users. To generate the file, you run tanzu management-cluster kubeconfig get with the --export-file option, and an optional --admin option that works as follows:

  • With --admin, the command generates an administrator kubeconfig that contains embedded credentials. With this admin version of the kubeconfig, any users with whom you share it will have full access to the management cluster and IDP authentication is bypassed.

  • Without --admin, the command generates a standard, non-admin kubeconfig that prompts users to authenticate with an external identity provider. The identity provider then verifies the user's identity before the user can access the cluster's resources.

See Retrieve Management Cluster kubeconfig for more information about these two options.

This procedure allows you to test the login step of the authentication process if a browser is present on the machine on which you are running tanzu and kubectl commands. If the machine does not have a browser, see Authenticate Users on a Machine Without a Browser below.

  1. Export the regular kubeconfig for the management cluster to the local file /tmp/id_mgmt_test_kubeconfig.

    Note that the command does not include the --admin option, so the kubeconfig that is exported is the regular kubeconfig, not the admin version.

    tanzu management-cluster kubeconfig get --export-file /tmp/id_mgmt_test_kubeconfig
    

    You should see confirmation that You can now access the cluster by specifying '--kubeconfig /tmp/id_mgmt_test_kubeconfig' flag when using 'kubectl' command.

  2. Connect to the management cluster by using the newly-created kubeconfig file.

    kubectl get pods -A --kubeconfig /tmp/id_mgmt_test_kubeconfig
    

    The authentication process requires a browser to be present on the machine from which users connect to clusters, because running kubectl commands automatically opens the IDP login page so that users can log in to the cluster.

    Your browser should open and display the login page for your OIDC provider or an LDAPS login page.


    Enter the credentials of a user account that exists in your OIDC or LDAP server.

    After a successful login, the browser should display the following message:

    you have been logged in and may now close this tab
    
  3. Go back to the terminal in which you run tanzu and kubectl commands.

    If you already configured a role binding on the cluster for the authenticated user, the output of kubectl get pods -A appears, displaying the pod information.

    If you have not configured a role binding on the cluster, you see a message denying the user account access to the pods: Error from server (Forbidden): pods is forbidden: User "user@example.com" cannot list resource "pods" in API group "" at the cluster scope. This happens because the user has been successfully authenticated, but they are not yet authorized to access any resources on the cluster. To authorize the user to access the cluster resources, you must Create a Role Binding on the Management Cluster.
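
You can also check what the authenticated user is permitted to do, without waiting for a Forbidden error, by querying the authorization layer directly. A small sketch using kubectl auth can-i with the same kubeconfig; the command still triggers the login flow if you are not yet authenticated:

    # Prints "yes" or "no" depending on the role bindings that exist for the authenticated user
    kubectl auth can-i list pods --all-namespaces --kubeconfig /tmp/id_mgmt_test_kubeconfig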

Authenticate Users on a Machine Without a Browser

If the machine on which you are running tanzu and kubectl commands does not have a browser, you can skip the automatic opening of a browser during the authentication process.

  1. Set the TANZU_CLI_PINNIPED_AUTH_LOGIN_SKIP_BROWSER=true environment variable.

    This adds the --skip-browser option to the kubeconfig for the cluster.

    export TANZU_CLI_PINNIPED_AUTH_LOGIN_SKIP_BROWSER=true
    

    On Windows systems, use the SET command instead of export.

  2. Export the regular kubeconfig for the management cluster to the local file /tmp/id_mgmt_test_kubeconfig.

    Note that the command does not include the --admin option, so the kubeconfig that is exported is the regular kubeconfig, not the admin version.

    tanzu management-cluster kubeconfig get --export-file /tmp/id_mgmt_test_kubeconfig
    

    You should see confirmation that You can now access the cluster by specifying '--kubeconfig /tmp/id_mgmt_test_kubeconfig' flag when using 'kubectl' command.

  3. Connect to the management cluster by using the newly-created kubeconfig file.

    kubectl get pods -A --kubeconfig /tmp/id_mgmt_test_kubeconfig
    

    The login URL is displayed in the terminal. For example:

    Please log in: https://ab9d82be7cc2443ec938e35b69862c9c-10577430.eu-west-1.elb.amazonaws.com/oauth2/authorize?access_type=offline&client_id=pinniped-cli&code_challenge=vPtDqg2zUyLFcksb6PrmE8bI9qF8it22KQMy52hB6DE&code_challenge_method=S256&nonce=2a66031e3075c65ea0361b3ba30bf174&redirect_uri=http%3A%2F%2F127.0.0.1%3A57856%2Fcallback&response_type=code&scope=offline_access+openid+pinniped%3Arequest-audience&state=01064593f32051fee7eff9333389d503
    
  4. Copy the login URL and paste it into a browser on a machine that does have one.

  5. In the browser, log in to your identity provider.

    You will see a message that the identity provider could not send the authentication code because there is no localhost listener on your workstation.

  6. Copy the URL of the authenticated session from the URL field of the browser.

  7. On the machine that does not have a browser, use the URL that you copied in the preceding step to get the authentication code from the identity provider.

    curl -L '<copied_URL>'
    

    Wrap the URL in quotes to escape any special characters. For example, the command will resemble the following:

    curl -L 'http://127.0.0.1:37949/callback?code=FdBkopsZwYX7w5zMFnJqYoOlJ50agmMWHcGBWD-DTbM.8smzyMuyEBlPEU2ZxWcetqkStyVPjdjRgJNgF1-vODs&scope=openid+offline_access+pinniped%3Arequest-audience&state=a292c262a69e71e06781d5e405d42c03'
    

    After running curl -L '<copied_URL>', you should see the following message:

    you have been logged in and may now close this tab
    
  8. Connect to the management cluster again by using the same kubeconfig file as you used previously.

    kubectl get pods -A --kubeconfig /tmp/id_mgmt_test_kubeconfig
    

    If you already configured a role binding on the cluster for the authenticated user, the output shows the pod information.

    If you have not configured a role binding on the cluster, you will see a message denying the user account access to the pods: Error from server (Forbidden): pods is forbidden: User "user@example.com" cannot list resource "pods" in API group "" at the cluster scope. This happens because the user has been successfully authenticated, but they are not yet authorized to access any resources on the cluster. To authorize the user to access the cluster resources, you must configure Role-Based Access Control (RBAC) on the cluster by creating a cluster role binding.

What to Do Next

Share the generated kubeconfig file with other users, to allow them to access the management cluster. You can also start creating workload clusters, assign users to roles on those clusters, and share their kubeconfig files with those users.
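
For example, after you create a workload cluster, you can export a non-admin kubeconfig for it in the same way as for the management cluster. The cluster name my-cluster and the output path below are placeholders:

    tanzu cluster kubeconfig get my-cluster --export-file /tmp/my_cluster_kubeconfig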
