The Security Intelligence activation failed.

Problem

The Security Intelligence activation failed to complete successfully. You might have seen one of the following error messages.
  • Cluster status needs to be STABLE before feature deployment.

    This error message might appear after you clicked Activate.

  • The feature activation took too long. Either the Kubernetes pods failed to come up or the registration with NSX Manager failed. Please contact your Infrastructure Administrator for assistance.

Cause

The Security Intelligence activation failure might be due to one of the following reasons.
  • The Kubernetes pods used by the NSX Application Platform are in a degraded or unstable state. Because Security Intelligence is hosted on the platform, the activation cannot proceed while the platform is unstable.
  • The Kubernetes pods failed to come up or an attempt to register Security Intelligence with the NSX Manager failed.

Solution

To try to resolve the issue, perform the suggested solution that corresponds to the cause listed in the previous section.
  • If you received the Cluster status needs to be STABLE before feature deployment error message, resolve the issue that caused the Kubernetes cluster on which the NSX Application Platform is deployed to be in an unstable state. For information, see the "Troubleshooting Issues with the NSX Application Platform" section of the Deploying and Managing the VMware NSX Application Platform document, which is delivered with the VMware NSX documentation set for versions 3.2 and later.
  • If you received the The feature activation took too long error message, use the following information to narrow down the root cause of the failure.
    1. Examine the logs for the cluster-api pod.
      1. Log in to the NSX Manager appliance with the root account.
      2. Run the following command at the system prompt.
        napp-k logs cluster-api-xxxx -c cluster-api
        Search for the cluster-api pod name using the napp-k get pods | grep cluster-api command. An autogenerated suffix is appended to the cluster-api pod name, denoted as -xxxx in the preceding command.

      The Helm repository must be reachable from within the cluster-api pod. If there is a connectivity issue between the cluster-api pod and Helm repository, the cluster-api pod might not be able to fetch the Helm chart and render it to create Kubernetes resources for Security Intelligence. Connectivity depends on the network policies and other firewall rules that your Kubernetes infrastructure administrator put in place. Work with your infrastructure administrator to investigate this issue further and resolve it.
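The pod-name lookup in step 1 can be sketched as follows. The pod name cluster-api-6f8b9 and the sample listing are hypothetical; on a real NSX Manager appliance you would pipe the output of napp-k get pods instead of the sample data.

```shell
#!/bin/sh
# Sketch: extract the full cluster-api pod name (with its autogenerated
# suffix) from a pod listing, then use it with the logs command.
# The sample listing below is hypothetical; on the appliance, use the
# real output of: napp-k get pods

sample_listing='NAME                READY   STATUS    RESTARTS   AGE
cluster-api-6f8b9   1/1     Running   0          2d
metrics-db-0        1/1     Running   0          2d'

# Match the cluster-api line and keep only the first column (the pod name).
pod_name=$(printf '%s\n' "$sample_listing" | grep '^cluster-api' | awk '{print $1}')
echo "$pod_name"

# With the real pod name in hand, the logs command from step 1 becomes:
#   napp-k logs "$pod_name" -c cluster-api
```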

    2. Verify that all the desired pods can start up. Pod startup depends on the Docker registry being reachable. If the Docker registry is unreachable, or the download fails for authentication or authorization reasons, the Kubernetes worker node cannot download the Docker container image required to run the workloads. Check the connectivity as described in step 1. Docker registries that require authentication are not currently supported.
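Image-pull failures typically surface as ImagePullBackOff or ErrImagePull in the pod STATUS column. The following sketch filters a pod listing for those states; the pod names and sample listing are hypothetical, and in practice you would pipe the output of napp-k get pods.

```shell
#!/bin/sh
# Sketch: spot pods stuck pulling their container image. Pull failures
# typically appear as ImagePullBackOff or ErrImagePull in the STATUS column.
# The sample listing is hypothetical; in practice pipe: napp-k get pods

sample_listing='NAME                   READY   STATUS             RESTARTS   AGE
feature-worker-abc12   0/1     ImagePullBackOff   0          5m
cluster-api-6f8b9      1/1     Running            0          2d'

# Print the name of every pod in an image-pull failure state.
stuck=$(printf '%s\n' "$sample_listing" | awk '/ImagePullBackOff|ErrImagePull/ {print $1}')
echo "$stuck"

# For each stuck pod, the Events section of "napp-k describe pod <pod-name>"
# shows the registry being contacted and the exact pull error.
```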
    3. Check that all pods reach a Running state and all the jobs have completed successfully.
      napp-k get pods | awk '!/Running|Completed/'

      When the command is successful, it does not produce any output. After the Docker container image is downloaded, the pods must be able to start up and run.

    4. For pods that are not in a Running state, check the events using the following describe command.
      napp-k describe pod <pod-name>
      You can also check a pod's state from its logs using the following command.
      napp-k logs <pod-name>
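The checks in steps 3 and 4 can be combined into a small triage sketch: filter out healthy pods, then inspect whatever remains. The sample listing and pod names are hypothetical; on the appliance you would pipe the real output of napp-k get pods.

```shell
#!/bin/sh
# Sketch: list every pod that is neither Running nor Completed, i.e. the
# pods worth inspecting with "napp-k describe pod" and "napp-k logs".
# The sample listing is hypothetical; in practice pipe: napp-k get pods

sample_listing='NAME                 READY   STATUS      RESTARTS   AGE
cluster-api-6f8b9    1/1     Running     0          2d
feature-init-job-1   0/1     Error       3          10m
metrics-db-0         1/1     Running     0          2d'

# Same filter as step 3; NR>1 additionally drops the header row.
problem_pods=$(printf '%s\n' "$sample_listing" | awk 'NR>1 && !/Running|Completed/ {print $1}')
echo "$problem_pods"

# On the appliance you would then inspect each remaining pod, for example:
#   for p in $problem_pods; do napp-k describe pod "$p"; done
```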