NSX-T Container Plug-in (NCP) is delivered as a Docker image. NCP should run on a node that is dedicated to infrastructure services. Running NCP on the master node is not recommended.

Procedure

  1. Download the NCP Docker image.

    The filename is nsx-ncp-xxxxxxx.tar, where xxxxxxx is the build number.

  2. Download the NCP ReplicationController yaml template.

    The filename is ncp-rc.yml. You can edit this file or use it as an example for your own template file.

  3. Load the NCP Docker image into your local image registry.
        docker load -i <tar file>
  4. Edit ncp-rc.yml.

    Change the image name to the one that was loaded.

    Specify the nsx_api_managers parameter. This release supports a single Kubernetes node cluster and a single NSX Manager instance. For example:

        nsx_api_managers = 192.168.1.180

    (Optional) Specify the parameter ca_file in the [nsx_v3] section. The value should be a CA bundle file to use in verifying the NSX Manager server certificate. If not set, the system root CAs will be used.

    Specify the parameters nsx_api_cert_file and nsx_api_private_key_file for authentication with NSX-T.

    nsx_api_cert_file is the full path to a client certificate file in PEM format. The contents of this file should look like the following:

        -----BEGIN CERTIFICATE-----
        <certificate_data_base64_encoded>
        -----END CERTIFICATE-----

    nsx_api_private_key_file is the full path to a client private key file in PEM format. The contents of this file should look like the following:

        -----BEGIN PRIVATE KEY-----
        <private_key_data_base64_encoded>
        -----END PRIVATE KEY-----
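As a sketch of the expected PEM layout, a throwaway certificate and key pair can be generated with openssl. The filenames and subject below are hypothetical; a real client certificate must be registered with NSX Manager before NCP can use it for authentication.

```shell
# Generate a self-signed client certificate and private key in PEM format
# (hypothetical filenames; for illustration of the file layout only)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/nsx-client.key -out /tmp/nsx-client.crt \
    -days 365 -subj "/CN=ncp-client"

# The resulting files carry the BEGIN/END markers shown above
head -1 /tmp/nsx-client.crt   # -----BEGIN CERTIFICATE-----
head -1 /tmp/nsx-client.key   # -----BEGIN PRIVATE KEY-----
```

Point nsx_api_cert_file and nsx_api_private_key_file at the full paths of the registered certificate and key files.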

    Specify the parameter ingress_mode = nat if the Ingress controller is configured to run in NAT mode.

    By default, subnet prefix 24 is used for all subnets allocated from the IP blocks for the pod logical switches. To use a different subnet size, update the subnet_prefix option in the [nsx_v3] section.

    Note:

    In the yaml file, you must specify that the ConfigMap generated for ncp.ini be mounted as a ReadOnly volume. The downloaded yaml file already has this specification, which should not be changed.
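Taken together, the [nsx_v3] settings described in this step might look like the following sketch. All values and file paths are placeholders for your environment; the optional parameters are shown commented out.

```ini
[nsx_v3]
# Single NSX Manager instance (this release supports only one)
nsx_api_managers = 192.168.1.180

# Certificate-based authentication with NSX-T (hypothetical paths)
nsx_api_cert_file = /etc/nsx-ujo/nsx-client.crt
nsx_api_private_key_file = /etc/nsx-ujo/nsx-client.key

# Optional: CA bundle for verifying the NSX Manager server certificate;
# if unset, the system root CAs are used
# ca_file = /etc/nsx-ujo/ca-bundle.crt

# Optional: set if the Ingress controller runs in NAT mode
# ingress_mode = nat

# Optional: subnet size for subnets allocated from the IP blocks
# for the pod logical switches (default is 24)
# subnet_prefix = 26
```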

  5. Create the NCP ReplicationController.
        kubectl create -f ncp-rc.yml

Results

Note:

NCP opens persistent HTTP connections to the Kubernetes API server to watch for life cycle events of Kubernetes resources. If an API server failure or a network failure causes NCP's TCP connections to become stale, you must restart NCP so that it can re-establish connections to the API server. Otherwise, NCP will miss the new events.

During a rolling update of the NCP ReplicationController, do not reboot the container host. If the host is rebooted for any reason, you might see two NCP pods running after the reboot. In that case, you should delete one of the NCP pods. It does not matter which one.