This topic explains how to configure custom TLS certificates for Pinniped and Dex in Tanzu Kubernetes Grid.

Configuring Custom TLS Certificates

By default, Tanzu Kubernetes Grid uses a self-signed Issuer to generate the TLS certificates that secure HTTPS traffic to Pinniped and Dex. You can optionally update the default configuration after deploying the management cluster, as follows:

  1. Set a custom ClusterIssuer resource or your own TLS secret. See Set a ClusterIssuer Resource or a TLS Secret below.
  2. Update your Pinniped configuration by redeploying the Pinniped add-on secret for the management cluster. See Update Your Pinniped Configuration below.

Note: The instructions in this topic are intended for Tanzu Kubernetes Grid v1.3.1+. In v1.3.1+, Tanzu Kubernetes Grid deploys Dex only for LDAP identity providers. If you configure an OIDC identity provider when you create the management cluster or configure it as a post-creation step, Dex is not deployed. In v1.3.0, Tanzu Kubernetes Grid deploys Dex for both LDAP and OIDC providers.

Set a ClusterIssuer Resource or a TLS Secret

If you want to use a custom ClusterIssuer resource to generate the TLS certificates:

  1. Verify that cert-manager is running in your management cluster. This component runs by default in all management clusters.
  2. Obtain the name of an existing ClusterIssuer resource in the management cluster. For more information, see Issuer Configuration in the cert-manager documentation.
  3. Specify the ClusterIssuer name in the custom_cluster_issuer field of the values.yaml section in the Pinniped add-on secret for the management cluster and then apply your changes. For instructions, see Update Your Pinniped Configuration below. After you complete the steps in this section, both the Pinniped and Dex certificate chains are signed by your ClusterIssuer.
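For reference, a minimal CA-based ClusterIssuer might look like the following. The issuer and secret names are illustrative, and the apiVersion shown is for cert-manager v1.0 and later; older cert-manager releases serve cert-manager.io/v1alpha2 instead. See the cert-manager documentation for other issuer types.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-cluster-issuer-name
spec:
  ca:
    # CA key pair stored as a Secret in the cert-manager namespace
    secretName: my-ca-key-pair
```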

If you want to specify your own TLS secret directly:

  1. Retrieve the IP address or DNS hostname of the Pinniped service, pinniped-supervisor, and, if you are using an LDAP identity provider, of the Dex service, dexsvc:

    • The pinniped-supervisor service:

      • If the type of the service is set to LoadBalancer (vSphere with a load balancer, Amazon EC2, or Azure), retrieve the external address of the service by running:

        kubectl get service pinniped-supervisor -n pinniped-supervisor
        
      • If the type of the service is set to NodePort (vSphere without a load balancer), the IP address of the service is the same as the vSphere control plane endpoint. To retrieve the IP address, you can run the following command:

        kubectl get configmap cluster-info -n kube-public -o yaml
        
    • (LDAP only) The dexsvc service:

      • If the type of the service is set to LoadBalancer (vSphere with a load balancer such as NSX Advanced Load Balancer, Amazon EC2, or Azure), retrieve the external address of the service by running:

        kubectl get service dexsvc -n tanzu-system-auth
        
      • If the type of the service is set to NodePort (vSphere without a load balancer), the IP address of the service is the same as the vSphere control plane endpoint. To retrieve the IP address, you can run the following command:

        kubectl get configmap cluster-info -n kube-public -o yaml
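On a NodePort setup, the control plane endpoint can be pulled out of the kubeconfig that is embedded in the cluster-info ConfigMap. The following is a minimal sketch that uses a sample kubeconfig snippet in place of the real command output; the IP address is illustrative:

```shell
# Sample stand-in for the kubeconfig stored in the cluster-info ConfigMap;
# in a real cluster this comes from:
#   kubectl get configmap cluster-info -n kube-public -o yaml
kubeconfig='
clusters:
- cluster:
    server: https://10.168.217.220:6443
  name: ""
'

# Extract the host portion of the server URL.
endpoint=$(printf '%s\n' "$kubeconfig" \
  | sed -n 's|.*server: https://\([^:]*\):.*|\1|p')
echo "$endpoint"
```

Against a live cluster, assuming the standard kubeadm layout of that ConfigMap, the same extraction can start from kubectl get configmap cluster-info -n kube-public -o jsonpath='{.data.kubeconfig}'.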
        
  2. If you are using an OIDC identity provider, create a kubernetes.io/tls secret in the pinniped-supervisor namespace. If you are using an LDAP identity provider, create two kubernetes.io/tls secrets with the same name: one for the Pinniped service in the pinniped-supervisor namespace and one for the Dex service in the tanzu-system-auth namespace. To create a TLS secret, run:

    kubectl create secret generic SECRET-NAME -n SECRET-NAMESPACE --type kubernetes.io/tls --from-file tls.crt=FILE-NAME-1.crt --from-file tls.key=FILE-NAME-2.pem --from-file ca.crt=FILE-NAME-3.pem
    

    Replace the placeholder text as follows:

    • SECRET-NAME is the name you choose for the secret. For example, my-secret.
    • SECRET-NAMESPACE is the namespace in which to create the secret. This must be pinniped-supervisor for Pinniped and tanzu-system-auth for Dex.
    • FILE-NAME-* are the names of the files that contain your TLS certificate, private key, and CA bundle. The TLS certificate that you specify in the TLS secret for Pinniped must include the IP address or DNS hostname of the pinniped-supervisor service that you retrieved above. Similarly, the TLS certificate for Dex must include the IP address or DNS hostname of the dexsvc service. The ca.crt file contains the CA bundle that is used to verify the TLS certificate.

    For example, the resulting secret for Pinniped looks similar to the following:

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-secret
      namespace: pinniped-supervisor
    type: kubernetes.io/tls
    data:
      tls.crt: MIIC2DCCAcCgAwIBAgIBATANBgkqh...
      tls.key: MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ...
      ca.crt: MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ...
    

    If the secret has been generated correctly, base64-decoding the tls.crt value and inspecting the result with openssl x509 -in tls.crt -text shows the IP address or DNS hostname in the Subject Alternative Name field.
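The following is a self-contained sketch of that Subject Alternative Name check. For illustration it first generates a throwaway certificate whose SAN carries an example service IP; against a real secret you would instead decode the tls.crt value stored in the secret. The -addext flag requires OpenSSL 1.1.1 or later.

```shell
# Generate a throwaway certificate with the service IP in the SAN
# (the IP and CN are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=pinniped-supervisor" \
  -addext "subjectAltName=IP:10.168.217.220"

# The SAN field should list the service IP address or DNS hostname.
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```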

  3. Specify the secret name in the custom_tls_secret field of the values.yaml section in the Pinniped add-on secret for the management cluster and then apply your changes. For instructions, see Update Your Pinniped Configuration below.

Update Your Pinniped Configuration

To apply your changes, update the Pinniped configuration by following the steps below:

  1. Save the Pinniped add-on secret for the management cluster to a file:

    kubectl get secret CLUSTER-NAME-pinniped-addon -n tkg-system -o yaml > FILE-NAME.yaml
    

    Replace the placeholder text as follows:

    • CLUSTER-NAME is the name of your management cluster.
    • FILE-NAME is the file name that you want to use for the secret. For example, pinniped-addon-secret.yaml.
  2. Decode the Base64-encoded string in the values.yaml section of the secret using the decoding tool of your choice. For example, on macOS, after copying the string to the clipboard:

    pbpaste | base64 -d > pinniped-configuration.yaml
    
  3. In the decoded text, do one of the following:

    • If you prepared a ClusterIssuer resource above, specify the name of the resource in the custom_cluster_issuer field. For example:

      #@data/values
      #@overlay/match-child-defaults missing_ok=True
      ---
      infrastructure_provider: vsphere
      tkg_cluster_role: management
      custom_cluster_issuer: "my-cluster-issuer-name"
      pinniped:
        cert_duration: 2160h
        cert_renew_before: 360h
        supervisor_svc_endpoint: https://10.168.217.220:31234
        supervisor_ca_bundle_data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F……
      ...
      
    • If you prepared your own TLS secret above, specify the name of the secret in the custom_tls_secret field. For example:

      #@data/values
      #@overlay/match-child-defaults missing_ok=True
      ---
      infrastructure_provider: vsphere
      tkg_cluster_role: management
      custom_tls_secret: "my-tls-secret-name"
      pinniped:
        cert_duration: 2160h
        cert_renew_before: 360h
        supervisor_svc_endpoint: https://10.168.217.220:31234
        supervisor_ca_bundle_data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F……
      ...
      

    If you configure the custom_tls_secret field, the custom_cluster_issuer field is ignored.

  4. Encode the values.yaml section again using the Base64 scheme, making sure that the output is a single line. For example, with GNU base64:

    base64 -w 0 pinniped-configuration.yaml > encoded.txt
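The decode, edit, and re-encode round trip can be sketched end to end as follows. A short stand-in payload replaces the real values.yaml data from your add-on secret, and the secret name is the example used earlier in this topic:

```shell
# Stand-in for the Base64 string copied from the add-on secret's values.yaml.
printf 'infrastructure_provider: vsphere\ntkg_cluster_role: management\n' \
  | base64 > encoded.txt

# Decode, add the custom_tls_secret field (example name), then re-encode.
base64 -d < encoded.txt > pinniped-configuration.yaml
printf 'custom_tls_secret: "my-tls-secret-name"\n' >> pinniped-configuration.yaml

# -w 0 (GNU base64) keeps the output on one line, which is required when
# pasting the string back into the secret's values.yaml field.
base64 -w 0 < pinniped-configuration.yaml > encoded.txt
```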
    
  5. Copy the encoded string and paste it in as the value of values.yaml in the Pinniped add-on secret file that you retrieved from the cluster in step 1. For example, pinniped-addon-secret.yaml.

  6. Redeploy the Pinniped add-on secret by running the kubectl apply -f command. For example:

    kubectl apply -f pinniped-addon-secret.yaml
    
  7. Confirm that the changes have been applied successfully:

    1. Get the status of the pinniped app:

      kubectl get app pinniped -n tkg-system
      

      If the returned status is Reconcile failed, run the following command to get details on the failure:

      kubectl get app pinniped -n tkg-system -o yaml
      
    2. Generate a kubeconfig file for the management cluster by running the tanzu management-cluster kubeconfig get command, and then run kubectl get pods using that kubeconfig. Additionally, if your management cluster manages any workload clusters, run tanzu cluster kubeconfig get and then kubectl get pods against each of the existing clusters.
