The default kubeconfig file in a VMware vSphere with Tanzu Kubernetes Guest Cluster contains a token that, by default, expires after ten hours and results in a warning message. To avoid the warning, work with your Kubernetes infrastructure administrator to generate a valid TKG Cluster on Supervisor configuration file with a non-expiring token that you can use during the NSX Application Platform deployment.

When the token in the default kubeconfig file expires, you see the following warning message in the NSX Manager UI for the NSX Application Platform.

Unable to connect, system has encountered a connectivity issue due to the expiry of Kubernetes Configuration. Update the Kubernetes Configuration to resolve.

The warning does not affect the functionality of the NSX Application Platform or any of the NSX security features that are currently activated. However, if you do not replace the default token, then beginning ten hours after an NSX Application Platform deployment on the Tanzu Kubernetes Guest Cluster, you must generate a valid (unexpired) token every time you perform the following operations:
  • Deploy the NSX Application Platform
  • Upgrade the NSX Application Platform
  • Delete the NSX Application Platform
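
If you want to confirm whether the default token has already expired, one approach is to decode the expiry claim of the token in the current kubeconfig file. The following commands are a minimal sketch; they assume that the token issued by kubectl vsphere login is a JWT, that it is stored in the first user entry of the kubeconfig, and that the jq utility is installed on the Linux client. This check is a suggestion, not part of the required procedure.

  TOKEN=$(kubectl config view --raw -o jsonpath='{.users[0].user.token}')

  # Decode the base64url-encoded JWT payload and print the "exp" (expiry) claim as a date.
  PAYLOAD=$(echo "$TOKEN" | cut -d '.' -f2)
  while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
  date -d "@$(echo "$PAYLOAD" | tr '_-' '/+' | base64 -d | jq .exp)"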

To generate a TKG Cluster on Supervisor configuration file with a non-expiring token that you can use during the NSX Application Platform deployment, work with your Kubernetes infrastructure administrator using the following procedure.

Note: The following steps require a Linux-based client.

Procedure

  1. Log in to the vSphere with Tanzu Kubernetes Guest Cluster using the following command.
    kubectl vsphere login --server <supervisor-cluster_ip> -u <user> --tanzu-kubernetes-cluster-name <tkg-cluster-name> --tanzu-kubernetes-cluster-namespace <namespace>
    The parameters are as follows:
    • <supervisor-cluster_ip> is the Control Plane Node Address, which you can find in the vSphere Client by selecting Workload Management > Supervisor Cluster.
    • <user> is the account that has administrator access to the TKG Cluster on Supervisor.
    • <tkg-cluster-name> is the name of the TKG Cluster on Supervisor.
    • <namespace> is the vSphere namespace where this cluster resides.
    For example,
    kubectl vsphere login --server 192.111.33.22 -u administrator@vsphere.local --tanzu-kubernetes-cluster-name napp-tkg-cluster --tanzu-kubernetes-cluster-namespace napp
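    (Optional) To confirm that the login succeeded and that a context was created for the TKG cluster, you can list the contexts in your kubeconfig. This check is a suggestion, not part of the required procedure.
    kubectl config get-contexts

    kubectl config current-context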
  2. Run each of the following commands separately to generate an administrator service account and create a cluster role binding.
    kubectl create serviceaccount napp-admin -n kube-system
    
    kubectl create clusterrolebinding napp-admin --serviceaccount=kube-system:napp-admin --clusterrole=cluster-admin
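    (Optional) To verify that the cluster role binding grants cluster-wide administrator access to the new service account, you can run an impersonation check such as the following. This check is a suggested sanity test, not part of the required procedure; it returns yes when the binding is in effect.
    kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:napp-admin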
  3. (Required for Kubernetes version 1.24 and later) Manually create the authentication token for the administrator service account by using the following substeps.
    1. Create a YAML file that defines a service account token secret. Use the following content for an example YAML file named napp-admin.yaml.
      apiVersion: v1
      kind: Secret
      type: kubernetes.io/service-account-token
      metadata:
         name: napp-admin
         namespace: kube-system
         annotations:
            kubernetes.io/service-account.name: "napp-admin"
    2. Use the following command to create the authentication token for the service account.
      kubectl apply -f <filename-created-above>.yaml
      Using the example YAML file in the previous step, the command to use is as follows.
      kubectl apply -f napp-admin.yaml
      The authentication token or secret is generated.
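      (Optional) The token controller can take a moment to populate the secret. Before you continue, you can confirm that the token data is present by inspecting the secret; this check is a suggestion, not part of the required procedure. The Data section of the output should list token and ca.crt entries.
      kubectl describe secret napp-admin -n kube-system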
  4. To obtain the authentication token for the administrator service account and the cluster certificate authority, run the following commands separately.
    For supported Kubernetes versions 1.24 and later, use the following commands.
    SECRET=$(kubectl get secrets napp-admin -n kube-system -ojsonpath='{.metadata.name}')
    
    TOKEN=$(kubectl get secret $SECRET -n kube-system -ojsonpath='{.data.token}' | base64 -d)
    
    kubectl get secrets $SECRET -n kube-system -o jsonpath='{.data.ca\.crt}' | base64 -d > ./ca.crt
    For supported Kubernetes versions prior to version 1.24, use the following commands.
    SECRET=$(kubectl get serviceaccount napp-admin -n kube-system -ojsonpath='{.secrets[].name}')
    
    TOKEN=$(kubectl get secret $SECRET -n kube-system -ojsonpath='{.data.token}' | base64 -d)
    
    kubectl get secrets $SECRET -n kube-system -o jsonpath='{.data.ca\.crt}' | base64 -d > ./ca.crt
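    (Optional) Before you continue, you can confirm that the shell variables and the certificate file were populated. The following checks are a minimal sketch, not part of the required procedure.
    echo "secret: $SECRET"

    [ -n "$TOKEN" ] && echo "token captured" || echo "token is empty; re-run the previous commands"

    ls -l ./ca.crt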
  5. Get the TKG Cluster on Supervisor URL. Run the following commands separately at the command prompt.
    CONTEXT=$(kubectl config view -o jsonpath='{.current-context}')
    
    CLUSTER=$(kubectl config view -o jsonpath='{.contexts[?(@.name == "'"$CONTEXT"'")].context.cluster}')
    
    URL=$(kubectl config view -o jsonpath='{.clusters[?(@.name == "'"$CLUSTER"'")].cluster.server}')
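    (Optional) At this point, you can verify that the extracted token, the cluster certificate authority, and the cluster URL work together by calling the cluster directly; the command-line flags override the settings in your current kubeconfig. The following check is a suggestion, not part of the required procedure.
    kubectl --server="$URL" --certificate-authority=./ca.crt --token="$TOKEN" get namespaces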
  6. Generate a configuration file with a non-expiring token for the TKG Cluster on Supervisor.
    TO_BE_CREATED_KUBECONFIG_FILE="<file-name>"

    The parameter <file-name> is the name of the kubeconfig file that you are creating.

    kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-cluster $CLUSTER --server=$URL --certificate-authority=./ca.crt --embed-certs=true
    
    kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-credentials napp-admin --token=$TOKEN 
    
    kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE set-context $CONTEXT --cluster=$CLUSTER --user=napp-admin
    
    kubectl config --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE use-context $CONTEXT
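    (Optional) To confirm that the newly generated kubeconfig file authenticates with the non-expiring token, you can run a command against the cluster that uses only that file. The following check is a suggestion, not part of the required procedure.
    kubectl --kubeconfig=$TO_BE_CREATED_KUBECONFIG_FILE get pods -n kube-system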
  7. (Optional) Delete the ca.crt file, which is a temporary file created during the generation of the new kubeconfig file.
  8. Use the newly generated kubeconfig file during the NSX Application Platform deployment.