This section describes how to upgrade NCP from 2.5.x or 3.0.x to 3.1 in a Kubernetes environment.

  1. Download the installation files. See Download Installation Files.
  2. Run the following commands to view the ConfigMaps and the ncp.ini settings in your current environment:
    kubectl describe configmap nsx-ncp-config -n nsx-system
    kubectl describe configmap nsx-node-agent-config -n nsx-system
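    The describe output includes the ncp.ini rendered for each component. Note the values you must carry over into the new YAML file, such as the cluster name and the NSX Manager addresses. For example, the nsx-ncp-config output contains lines like the following (the values shown here are illustrative only; yours will differ):
    [coe]
    cluster = k8s-cluster-1
    [nsx_v3]
    nsx_api_managers = 192.168.1.10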
  3. Edit the NCP YAML file based on your current environment. For reference, see Edit the NCP YAML File.
    • You must define ovs_uplink_port under the [nsx_node_agent] section.
    • Replace all instances of image: nsx-ncp with the new NCP image name.
    • If you use Kubernetes Secrets to store certificates for NCP, see the section "Update the NCP Deployment Specs" in Edit the NCP YAML File about mounting the Secret volumes.
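    For example, after the edits the [nsx_node_agent] section might contain a line like the following (eth1 is a placeholder; use the uplink interface for your environment):
    [nsx_node_agent]
    ovs_uplink_port = eth1
    and each container spec would reference the new image (the registry and tag below are placeholders):
    image: registry.example.com/nsx-ncp-ubuntu:3.1.0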
  4. Run the following command to check the syntax of the NCP YAML file:
    kubectl apply -f ncp-<platform_name>.yml --server-dry-run
    The response lists the resources that would be created or updated (shown as "created" or "configured" in the output) as well as those that are unchanged. For example,
    customresourcedefinition.apiextensions.k8s.io/nsxerrors.nsx.vmware.com created (server dry run)
    customresourcedefinition.apiextensions.k8s.io/nsxlocks.nsx.vmware.com created (server dry run)
    namespace/nsx-system unchanged (server dry run)
    serviceaccount/ncp-svc-account unchanged (server dry run)
    clusterrole.rbac.authorization.k8s.io/ncp-cluster-role configured (server dry run)
    clusterrole.rbac.authorization.k8s.io/ncp-patch-role configured (server dry run)
    clusterrolebinding.rbac.authorization.k8s.io/ncp-cluster-role-binding unchanged (server dry run)
    clusterrolebinding.rbac.authorization.k8s.io/ncp-patch-role-binding unchanged (server dry run)
    serviceaccount/nsx-node-agent-svc-account unchanged (server dry run)
    clusterrole.rbac.authorization.k8s.io/nsx-node-agent-cluster-role configured (server dry run)
    clusterrolebinding.rbac.authorization.k8s.io/nsx-node-agent-cluster-role-binding unchanged (server dry run)
    configmap/nsx-ncp-config configured (server dry run)
    deployment.extensions/nsx-ncp configured (server dry run)
    configmap/nsx-node-agent-config configured (server dry run)
    daemonset.extensions/nsx-ncp-bootstrap configured (server dry run)
    daemonset.extensions/nsx-node-agent configured (server dry run)
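    Note: Newer kubectl releases have deprecated and removed the --server-dry-run flag. If your kubectl rejects it, the equivalent invocation is:
    kubectl apply -f ncp-<platform_name>.yml --dry-run=server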
  5. Run the following command to delete the old NCP Deployment:
    kubectl delete deployment nsx-ncp -n nsx-system
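    To confirm that the old Deployment has been removed, you can run the following command; it should report that the resource is not found:
    kubectl get deployment nsx-ncp -n nsx-system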
  6. Run the following command to check if all the old NCP Pods are terminated:
    kubectl get pods -l component=nsx-ncp -n nsx-system

    The command should return no Pods once all the old NCP Pods have terminated. Wait for all of them to terminate before proceeding.
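    Optionally, instead of polling, you can block until the old Pods are gone, for example (the timeout value here is arbitrary):
    kubectl wait --for=delete pod -l component=nsx-ncp -n nsx-system --timeout=300s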

  7. Clear the old election lock. From the NSX Manager web UI, go to the Search page and do an advanced search for resources with the following tags:
    Scope: ncp\/ha         Tag: true
    Scope: ncp\/cluster    Tag: <name of the cluster in ncp.ini>

    You should see one or more SpoofGuard resources. Clear all the tags on these resources.
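    The cluster name used in the second tag is the cluster option in the [coe] section of ncp.ini. If you need to look it up, a command like the following works (assuming the standard nsx-ncp-config ConfigMap):
    kubectl describe configmap nsx-ncp-config -n nsx-system | grep -E '^\s*cluster\s*='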

  8. Run the following command to start the upgrade:
    kubectl apply -f ncp-<platform_name>.yml
  9. Run the following command to check the upgrade status:
    kubectl get pods -o wide -n nsx-system

    The output should show new Pods being created and old Pods being terminated. After a successful upgrade, it should show the status of all the Pods as Running.
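    You can also verify the rollout of each component and spot-check the NCP logs once the Pods are Running, for example:
    kubectl rollout status deployment/nsx-ncp -n nsx-system
    kubectl rollout status daemonset/nsx-node-agent -n nsx-system
    kubectl logs deployment/nsx-ncp -n nsx-system --tail=50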