If you’re having trouble registering a Tanzu Kubernetes Grid management cluster with Tanzu Mission Control, there are several steps you can take to identify the root cause and fix the problem.

Problem

You might suspect you’re having trouble registering your Tanzu Kubernetes Grid management cluster if:

  • When you register your management cluster, the verification step fails.

  • When you register a management cluster through the CLI, the operation times out.

  • When you register a management cluster through the API, the status of the cluster remains pending for longer than 7 minutes.

Cause

There are a few common reasons people have trouble registering their management clusters with Tanzu Mission Control. Follow the solution below to identify which of these causes applies and to correct it.

  • You might need to rerun the registration step if the agent installation failed. Common reasons for failure:

    • You might have tried to attach the management cluster instead of registering it.

    • You might have tried to register a Tanzu Kubernetes Grid management cluster using the vSphere with Tanzu Supervisor cluster procedure and seen a message like “no such resource found.”

  • You might have a resource scheduling issue or permission issue preventing the agent from installing successfully.

  • You might have a problem with connectivity or proxy configuration.

Solution

  1. Check the format of the registration URL to verify you’re using the procedure to register a Tanzu Kubernetes Grid management cluster. You should see source=registration and type=tkgm. For example:
    https://myorgname.tmc.cloud.vmware.com/installer?id=1234&source=registration&type=tkgm

    If you see attach or tkgs in the URL, you need to restart the registration process. Make sure you follow the procedure described in Register a Management Cluster with Tanzu Mission Control.
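A quick way to apply this check is to inspect the URL's query parameters before applying it. The following is a minimal sketch; the check_tkgm_url helper name is illustrative and not part of any official tooling.

```shell
# Illustrative helper: classify a Tanzu Mission Control installer URL by its
# query parameters (source=registration and type=tkgm indicate the correct
# Tanzu Kubernetes Grid management cluster registration procedure).
check_tkgm_url() {
  case "$1" in
    *source=attach*)
      echo "attach URL detected: restart and use the registration procedure"
      return 1 ;;
    *type=tkgs*)
      echo "tkgs URL detected: this is the vSphere with Tanzu procedure"
      return 1 ;;
    *source=registration*type=tkgm*)
      echo "ok: Tanzu Kubernetes Grid registration URL" ;;
    *)
      echo "unrecognized URL: expected source=registration and type=tkgm"
      return 1 ;;
  esac
}

check_tkgm_url 'https://myorgname.tmc.cloud.vmware.com/installer?id=1234&source=registration&type=tkgm'
```

The helper only matches on the query string, so it works on any copy of the URL without contacting the service.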

  2. Check if the problem is your access rights or network connection to the management cluster by running a simple kubectl command like get namespaces.
    kubectl get namespaces

    If you cannot access the cluster, contact your Kubernetes administrator.
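The connectivity and permission checks in this step can be combined into a small pre-flight sketch, assuming kubectl already points at the management cluster. The check_cluster_access name and the specific permission checked are illustrative, not an official list of required rights.

```shell
# Illustrative pre-flight check: confirm the cluster is reachable and that
# the current user has at least namespace-creation rights (the agent
# installer creates the vmware-system-tmc namespace).
check_cluster_access() {
  if ! kubectl get namespaces >/dev/null 2>&1; then
    echo "cannot reach the cluster: check kubeconfig, credentials, and network"
    return 1
  fi
  if ! kubectl auth can-i create namespace >/dev/null 2>&1; then
    echo "connected, but missing permission to create namespaces"
    return 1
  fi
  echo "cluster reachable and namespace creation permitted"
}
# Usage: check_cluster_access
```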

  3. Check if the problem is the agent installation failure.
    1. Verify that the cluster agent installed successfully by checking if the vmware-system-tmc namespace exists.
      kubectl get namespace vmware-system-tmc
      NAME                STATUS   AGE
      vmware-system-tmc   Active   1m
    2. If the namespace doesn’t exist, try rerunning the command to register your management cluster. Remember to run this command on your management cluster, using the URL from the Tanzu Mission Control registration UI. For example:
      kubectl apply -f 'https://myorgname.tmc.cloud.vmware.com/installer?id=8...7&source=registration&type=tkgm' --kubeconfig="C:\path-to\my-management-clusters\kubeconfig"
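The rerun-and-verify sequence in this step can be sketched as a single helper that applies the manifest and then polls for the agent namespace. The rerun_registration name and the 60-second polling window are illustrative assumptions.

```shell
# Illustrative helper: re-apply the installer manifest, then poll briefly
# for the vmware-system-tmc namespace that a successful install creates.
rerun_registration() {
  kubectl apply -f "$1" || return 1
  for _ in 1 2 3 4 5 6; do
    if kubectl get namespace vmware-system-tmc >/dev/null 2>&1; then
      echo "vmware-system-tmc namespace created"
      return 0
    fi
    sleep 10
  done
  echo "vmware-system-tmc still missing after 60s; inspect the agent pods"
  return 1
}
# Usage (URL comes from the Tanzu Mission Control registration UI):
#   rerun_registration 'https://myorgname.tmc.cloud.vmware.com/installer?id=1234&source=registration&type=tkgm'
```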
  4. Investigate the agent pods to locate the source of the problem.
    1. Check the status of the agent pods.
      kubectl get pods -n vmware-system-tmc -l "tmc-extension-name in (extension-manager,extension-updater,agent-updater)"
      NAME                                   READY   STATUS   RESTARTS   AGE
      agent-updater-cc9c67b4d-sp6q9          0/1     CLBO     0          3m
      agentupdater-workload-27478576-8rxvk   0/1     CLBO     0          60s
      extension-manager-85f9cf5497-rk8ms     0/1     CLBO     0          3m
      extension-updater-6c4c95c5bf-9wl6p     0/1     CLBO     0          3m
    2. If the pods are in a CLBO (crash loop backoff) state, retry the registration procedure and verify that you’re using the Tanzu Kubernetes Grid registration procedure instead of the vSphere with Tanzu registration procedure.
    3. If any of the pods are in a failed state, fetch the logs for the failed pod to learn more about the problem. For example:
      kubectl logs -l "tmc-extension-name=extension-updater" -n vmware-system-tmc

      For help solving pod failures, see Common Log Errors and Solutions for Pod Failure in the Examples section of this page.

    4. If the pods have not been scheduled (they appear as “pending” or not at all), inspect the deployments of extension-manager, extension-updater and agent-updater to find any deployments that are not in a ready state.
      kubectl get deployments -n vmware-system-tmc -l "tmc-extension-name in (extension-manager,extension-updater,agent-updater)"
      NAME                READY   UP-TO-DATE   AVAILABLE   AGE
      agent-updater       1/1     1            1           3m
      extension-manager   1/1     1            1           3m
      extension-updater   1/1     1            1           3m
    5. If any of the deployments show as 0/1, describe the deployment to understand how to correct the issue. For example:
      kubectl describe deployment extension-updater -n vmware-system-tmc

      These Kubernetes errors are usually resource related. For example, you might see OutOfDisk in the output.

      Ask your Kubernetes administrator to solve these types of errors by expanding resources or relaxing policies.
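Once you have fetched a pod's logs, you can grep them for the known error signatures listed in Table 1 below. This is a minimal triage sketch: the triage_agent_log name is illustrative, and the patterns simply mirror that table.

```shell
# Illustrative triage: grep a saved agent log for known error signatures.
# Save a log first, for example:
#   kubectl logs -l "tmc-extension-name=extension-updater" -n vmware-system-tmc > extension-updater.log
triage_agent_log() {
  if grep -q "Failed to discover cluster metadata" "$1"; then
    echo "check that the cluster is CNCF conformant"
  elif grep -q "Failed to set up manager" "$1"; then
    echo "check internet connectivity and proxy configuration"
  elif grep -q "Pre-flight checks failed" "$1"; then
    echo "check that the Kubernetes version is supported by Tanzu Mission Control"
  else
    echo "no known signature found; inspect the log manually"
  fi
}
# Usage: triage_agent_log extension-updater.log
```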

  5. If the deployments are all showing 1/1 READY, look for Events messages in the replicasets to understand how to correct the issue. 
    1. Look for replicasets that are not in the ready status. For example:
      kubectl get replicaset -n vmware-system-tmc -l "tmc-extension-name in (extension-manager,extension-updater,agent-updater)"
    2. Inspect any replicasets that are not in ready status and check the events messages in the replicasets to understand how to correct the issue. For example:
      kubectl describe replicaset -n vmware-system-tmc extension-updater-6c4c95c5bf

    The Events messages should contain enough information for you to solve the issue.
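As an alternative to describing each replicaset individually, you can list recent Warning events for the whole agent namespace in one command. The recent_tmc_warnings wrapper name is illustrative; the underlying kubectl flags are standard.

```shell
# Illustrative wrapper: show Warning events in the agent namespace, most
# recent last, instead of describing each replicaset one by one.
recent_tmc_warnings() {
  kubectl get events -n vmware-system-tmc \
    --field-selector type=Warning \
    --sort-by=.lastTimestamp
}
# Usage: recent_tmc_warnings
```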

Example

Table 1. Common Log Errors and Solutions for Pod Failure

  • Application: Extension-Updater
    Log: Failed to discover cluster metadata
    Solution: Make sure your cluster is CNCF conformant. You can check the CNCF site for a list of Platform Certified Kubernetes - Distribution Providers.

  • Application: Extension-Updater
    Log: Failed to set up manager:get Tanzu Mission Control connection
    Solution:
      1. Make sure the cluster has internet connectivity.
      2. If you’re behind a firewall, set up or validate the configuration of your proxy. See Connecting through a Proxy for more information.

  • Application: Extension-Updater
    Log: Pre-flight checks failed, or something like tmc.extension-flag.register.tkg.version.block.aws: 1.3.0,1.3.1,1.4.0
    Solution: Make sure your cluster is using a version of Kubernetes that is supported by Tanzu Mission Control.

What to do next

If you can’t solve the issue on your own, you can create a log bundle to share with support. Run the following commands against your management cluster:

  1. Get the logs of the extension-updater:
     kubectl logs -n vmware-system-tmc -l "tmc-extension-name=extension-updater"

  2. Get the logs of the extension-manager:
     kubectl logs -n vmware-system-tmc -l "tmc-extension-name=extension-manager"

  3. Save the logs and create a bundle to share with support.
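The three steps above can be sketched as a small helper that writes both logs to a directory and archives it, assuming kubectl points at the management cluster. The collect_tmc_logs name and the bundle directory name are illustrative.

```shell
# Illustrative bundle helper: save both agent logs and archive them for
# sharing with support.
collect_tmc_logs() {
  dir="${1:-tmc-log-bundle}"
  mkdir -p "$dir"
  kubectl logs -n vmware-system-tmc -l "tmc-extension-name=extension-updater" > "$dir/extension-updater.log"
  kubectl logs -n vmware-system-tmc -l "tmc-extension-name=extension-manager" > "$dir/extension-manager.log"
  tar -czf "$dir.tar.gz" "$dir"
  echo "bundle written to $dir.tar.gz"
}
# Usage: collect_tmc_logs ./my-bundle
```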

See How to Get Support if you need more information about contacting the support team.