NSX Container Plug-in (NCP) is delivered as a Docker image. NCP should run on a node that is dedicated to infrastructure services. Running NCP on the master node is not recommended.

Procedure

  1. Download the NCP Docker image.
    The filename is nsx-ncp-xxxxxxx.tar, where xxxxxxx is the build number.
  2. Download the NCP ReplicationController yaml template.
    The filename is ncp-rc.yml. You can edit this file or use it as an example for your own template file.
  3. Load the NCP Docker image to your image registry.
        docker load -i <tar file>
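    The docker load command imports the image into the local image cache only. If your Kubernetes nodes pull images from a private registry, tag and push the loaded image. The image and registry names below are placeholders; substitute the image name reported by docker load and your own registry. For example:
        docker tag nsx-ncp:latest <your-registry>/nsx-ncp:latest
        docker push <your-registry>/nsx-ncp:latest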
  4. (Optional) Download the yaml template for the custom resource definition of the NSXError object.
    The filename is nsx-error-crd.yaml.
  5. (Optional) Create the custom resource definition.
        kubectl create -f nsx-error-crd.yaml
  6. Edit ncp-rc.yml.
    Change the image name to the one that was loaded.
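
    For example, the containers section of ncp-rc.yml might look like the following after the change (the container name and image reference shown here are illustrative):

        containers:
        - name: nsx-ncp
          image: <your-registry>/nsx-ncp:latest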

    Specify the nsx_api_managers parameter. You can specify the IP address of a single NSX Manager, or the IP addresses (comma separated) of the three NSX Managers in an NSX Manager cluster, or the virtual IP address of an NSX Manager cluster. For example:

        nsx_api_managers = 192.168.1.180
    or
        nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183

    (Optional) Specify the parameter ca_file in the [nsx_v3] section. The value should be a CA bundle file to use in verifying the NSX Manager server certificate. If not set, the system root CAs will be used. If you specify one IP address for nsx_api_managers, specify one CA file. If you specify three IP addresses for nsx_api_managers, you can specify one or three CA files. If you specify one CA file, it is used for all three managers. If you specify three CA files, each is used for the corresponding IP address in nsx_api_managers. For example:

        ca_file = ca_file_for_all_mgrs
    or
        ca_file = ca_file_for_mgr1,ca_file_for_mgr2,ca_file_for_mgr3

    Specify the parameters nsx_api_cert_file and nsx_api_private_key_file for authentication with NSX-T Data Center.

    nsx_api_cert_file is the full path to a client certificate file in PEM format. The contents of this file should look like the following:

        -----BEGIN CERTIFICATE-----
        <certificate_data_base64_encoded>
        -----END CERTIFICATE-----

    nsx_api_private_key_file is the full path to a client private key file in PEM format. The contents of this file should look like the following:

        -----BEGIN PRIVATE KEY-----
        <private_key_data_base64_encoded>
        -----END PRIVATE KEY-----
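
    If you do not already have a client certificate, you can generate a self-signed certificate and private key with openssl. This is a minimal sketch; the file names and subject shown are placeholders, your organization's PKI process may differ, and NSX Manager must be configured to trust the certificate for client authentication:

        openssl req -newkey rsa:2048 -nodes -x509 -days 365 \
            -keyout nsx_api_private_key.pem -out nsx_api_cert.pem \
            -subj "/CN=ncp"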

    Specify the parameter ingress_mode = nat if the Ingress controller is configured to run in NAT mode.

    By default, subnet prefix 24 is used for all subnets allocated from the IP blocks for the pod logical switches. To use a different subnet size, update the subnet_prefix option in the [nsx_v3] section.
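
    For example, the following [nsx_v3] fragment enables NAT mode for Ingress and allocates /26 pod subnets (the values shown are illustrative):

        [nsx_v3]
        ingress_mode = nat
        subnet_prefix = 26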

    HA (high availability) is enabled by default. You can disable HA with the following specification:

        [ha]
        enable = False

    (Optional) Enable error reporting using NSXError in ncp.ini. This setting is disabled by default.

        [nsx_v3]
        enable_nsx_err_crd = True

    Note: In the yaml file, you must specify that the ConfigMap generated for ncp.ini be mounted as a ReadOnly volume. The downloaded yaml file already has this specification, which should not be changed.
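
    For example, the volume mount in the yaml file looks similar to the following (the volume name and mount path are illustrative):

        volumeMounts:
        - name: config-volume
          mountPath: /etc/nsx-ujo
          readOnly: true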
  7. Create the NCP ReplicationController.
        kubectl create -f ncp-rc.yml
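
    Verify that the NCP pod is running. The nsx-system namespace in this example assumes the namespace used by the downloaded template; adjust it if yours differs:

        kubectl get pods -n nsx-system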

Results

Note: NCP opens persistent HTTP connections to the Kubernetes API server to watch for life cycle events of Kubernetes resources. If an API server failure or a network failure causes NCP's TCP connections to become stale, you must restart NCP so that it can re-establish connections to the API server. Otherwise, NCP will miss the new events.
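For example, to restart NCP you can delete the NCP pod; the ReplicationController re-creates it automatically (the nsx-system namespace assumes the namespace used by the downloaded template):
    kubectl delete pods <NCP pod name> -n nsx-system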
After a rolling update of the NCP ReplicationController, you might see two NCP pods running in the following situations:
  • You reboot the container host during the rolling update.
  • The rolling update initially fails because the new image does not exist on a Kubernetes node. You download the image, rerun the rolling update, and it succeeds.
If you see two NCP pods running, do one of the following:
  • Delete one of the NCP pods. It does not matter which one. For example:
    kubectl delete pods <NCP pod name> -n nsx-system
  • Delete the NCP ReplicationController and re-create it, as in step 7. For example:
    kubectl delete -f ncp-rc.yml -n nsx-system