Before installing OpenShift 4, you must update some NCP configuration files.

Starting with NCP 3.1.1, the YAML files are included in the NCP download file from download.vmware.com. Alternatively, you can go to https://github.com/vmware/nsx-container-plugin-operator/releases, find the corresponding operator release (for example, v3.1.1), and download openshift4.tar.gz.

For NCP 3.1.0, check out v0.2.0 from https://github.com/vmware/nsx-container-plugin-operator/releases.
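
For example, fetching the manifests might look like this (a sketch; the release-asset URL follows the standard GitHub pattern and is an assumption):

# NCP 3.1.1 and later: download and extract the OpenShift 4 manifests
curl -LO https://github.com/vmware/nsx-container-plugin-operator/releases/download/v3.1.1/openshift4.tar.gz
tar -xzf openshift4.tar.gz

# NCP 3.1.0: clone the operator repository and check out the v0.2.0 tag
git clone https://github.com/vmware/nsx-container-plugin-operator.git
cd nsx-container-plugin-operator
git checkout v0.2.0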

The following files are in the nsx-container-plugin-operator/deploy folder:
  • configmap.yaml – Update this file with the NSX-T information.
  • operator.yaml – Specify the NCP image location in this file.
  • namespace.yaml – The namespace specification for the operator. Do not edit this file.
  • role_binding.yaml – The role binding specification for the operator. Do not edit this file.
  • role.yaml – The role specification for the operator. Do not edit this file.
  • service_account.yaml – The service account specification for the operator. Do not edit this file.
  • lb-secret.yaml – Secret for the default NSX-T load balancer certificate.
  • nsx-secret.yaml – Secret for certificate-based authentication to NSX-T. It is used instead of nsx_api_user and nsx_api_password in configmap.yaml.
  • operator.nsx.vmware.com_ncpinstalls_crd.yaml – Operator-owned Custom Resource Definition.
  • operator.nsx.vmware.com_v1_ncpinstall_cr.yaml – Operator-owned Custom Resource.
The following configmap.yaml example shows a basic configuration. See configmap.yaml in the deploy folder for more options. You must specify values for the following parameters according to your environment:
  • cluster
  • nsx_api_managers
  • nsx_api_user
  • nsx_api_password
  • external_ip_pools
  • tier0_gateway
  • overlay_tz
  • edge_cluster
  • apiserver_host_ip
  • apiserver_host_port
apiVersion: v1
kind: ConfigMap
metadata: 
  name: nsx-ncp-operator-config 
  namespace: nsx-system-operator 
data: 
  ncp.ini: | 
    [vc] 

    [coe] 

    # Container orchestrator adaptor to plug in. 
    adaptor = openshift4 

    # Specify cluster name. 
    cluster = ocp 

    [DEFAULT] 

    [nsx_v3] 
    policy_nsxapi = True 
    # Path to NSX client certificate file. If specified, the nsx_api_user and 
    # nsx_api_password options will be ignored. Must be specified along with 
    # nsx_api_private_key_file option 
    #nsx_api_cert_file = <None> 

    # Path to NSX client private key file. If specified, the nsx_api_user and 
    # nsx_api_password options will be ignored. Must be specified along with 
    # nsx_api_cert_file option 
    #nsx_api_private_key_file = <None> 

    nsx_api_managers = 10.114.209.10,10.114.209.11,10.114.209.12 

    nsx_api_user = admin 
    nsx_api_password = VMware1! 

    # Do not use in production 
    insecure = True 

    # Choices: ALL DENY <None> 
    log_firewall_traffic = DENY 

    external_ip_pools = 10.114.17.0/25 
    #top_tier_router = <None> 
    tier0_gateway = t0a 
    single_tier_topology = True 
    overlay_tz = 3efa070d-3870-4eb1-91b9-a44416637922 
    edge_cluster = 3088dc2b-d097-406e-b9de-7a161e8d0e47 

    [ha] 

    [k8s] 
    # Kubernetes API server IP address. 
    apiserver_host_ip = api-int.ocp.yasen.local 

    # Kubernetes API server port. 
    apiserver_host_port = 6443 

    client_token_file = /var/run/secrets/kubernetes.io/serviceaccount/token 

    # Choices: <None> allow_cluster allow_namespace 
    baseline_policy_type = allow_cluster 
    enable_multus = False 
    process_oc_network = False 
 
    [nsx_kube_proxy] 

    [nsx_node_agent] 

    ovs_bridge = br-int 

    # The OVS uplink OpenFlow port 
    ovs_uplink_port = ens192 

    [operator] 

    # The default certificate for HTTPS load balancing. 
    # Must be specified along with lb_priv_key option. 
    # Operator will create lb-secret for NCP based on these two options. 
    #lb_default_cert = <None> 

    # The private key for default certificate for HTTPS load balancing. 
    # Must be specified along with lb_default_cert option. 
    #lb_priv_key = <None>
In operator.yaml, you must specify the location of the NCP image in the env section.
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: nsx-ncp-operator 
  namespace: nsx-system-operator 
spec: 
  replicas: 1 
  selector: 
    matchLabels: 
      name: nsx-ncp-operator 
  template: 
    metadata: 
      labels: 
        name: nsx-ncp-operator 
    spec: 
      hostNetwork: true 
      serviceAccountName: nsx-ncp-operator 
      tolerations: 
      - effect: NoSchedule 
        key: node-role.kubernetes.io/master 
      - effect: NoSchedule 
        key: node.kubernetes.io/not-ready 
      containers: 
        - name: nsx-ncp-operator 
          # Replace this with the built image name 
          image: vmware/nsx-container-plugin-operator:latest 
          command: ["/bin/bash", "-c", "nsx-ncp-operator --zap-time-encoding=iso8601"] 
          imagePullPolicy: Always 
          env: 
            - name: POD_NAME 
              valueFrom: 
                fieldRef: 
                  fieldPath: metadata.name 
            - name: OPERATOR_NAME 
              value: "nsx-ncp-operator" 
            - name: NCP_IMAGE 
              value: "{NCP Image}"

For the operator image, specify the NCP version to be installed. For example, for NCP 3.1.1, the operator image is vmware/nsx-container-plugin-operator:v3.1.1.

Note that pulling directly from Docker Hub is not recommended in a production environment because of its rate-limiting policy. Once pulled from Docker Hub, the image can be pushed to a local registry, possibly the same one where the NCP images are available.

Alternatively, you can use the operator image file included in the NCP download file from download.vmware.com and import it into a local registry. This image is the same as the one published on VMware's Docker Hub.
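
For example, mirroring the operator image to a local registry might look like this (a sketch; registry.example.com is a placeholder for your own registry):

docker pull vmware/nsx-container-plugin-operator:v3.1.1
docker tag vmware/nsx-container-plugin-operator:v3.1.1 registry.example.com/vmware/nsx-container-plugin-operator:v3.1.1
docker push registry.example.com/vmware/nsx-container-plugin-operator:v3.1.1

After pushing, set the image field in operator.yaml to the local registry path.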

To set the MTU value for CNI, modify the mtu parameter in the [nsx_node_agent] section of the Operator ConfigMap. The operator will trigger a recreation of the nsx-ncp-bootstrap pods, ensuring that the CNI config files are properly updated on all the nodes. You must also update the node MTU accordingly. A mismatch between the node and pod MTU can cause problems for node-pod communication, affecting, for example, TCP liveness and readiness probes.
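
For example, the relevant portion of the ConfigMap might look like this (the value 8900 is only an illustration; choose a value consistent with your overlay network and node MTU):

    [nsx_node_agent]
    mtu = 8900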

Note: Even with HA enabled in the Operator ConfigMap, only a single NCP pod is created because the ncpReplicas parameter is set to 1 by default. To have 3 NCP pods created, change it to 3. After the cluster is installed, you can change the number of NCP replicas with the command oc edit ncpinstalls ncp-install -n nsx-system.
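
For example, scaling NCP to 3 replicas after installation might look like this (a sketch; the field layout inside the ncp-install custom resource is an assumption):

oc edit ncpinstalls ncp-install -n nsx-system

    spec:
      ncpReplicas: 3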

Configuring certificate-based authentication to NSX-T using principal identity

In a production environment, it is recommended that you do not expose administrator credentials in configmap.yaml with the nsx_api_user and nsx_api_password parameters. The following steps describe how to create a principal identity and allow NCP to use a certificate for authentication.

  1. Generate a certificate and key (a sample openssl command is shown after these steps).
  2. In NSX Manager, navigate to System > Users and Roles and click Add > Principal Identity with Role. Add a principal identity and paste the certificate generated in step 1.
  3. Add the base64-encoded crt and key values in nsx-secret.yaml.
  4. Set the location of the certificate and key files in configmap.yaml under the [nsx_v3] section:
    nsx_api_cert_file = /etc/nsx-ujo/nsx-cert/tls.crt
    nsx_api_private_key_file = /etc/nsx-ujo/nsx-cert/tls.key
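
The following is a minimal sketch for steps 1 and 3 (the subject and file names are examples only):

# Generate a private key and self-signed certificate for the principal identity
openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
  -keyout ncp-pi.key -out ncp-pi.crt -subj "/CN=nsx-ncp/O=vmware"

# Base64-encode the certificate and key for nsx-secret.yaml (step 3)
base64 -w 0 ncp-pi.crt
base64 -w 0 ncp-pi.key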

Note: Changing the authentication method on a cluster that is already bootstrapped is not supported.

(Optional) Configuring the default NSX-T load balancer certificate

An NSX-T load balancer can implement OpenShift HTTPS Route objects and offload the OCP HAProxy. To do that, a default certificate is required. Perform the following steps to configure the default certificate:

  1. Add the base64-encoded crt and key values in lb-secret.yaml (a sample openssl command is shown after these steps).
  2. Set the location for the certificate and the key in configmap.yaml under the [nsx_v3] section:
    lb_default_cert_path = /etc/nsx-ujo/lb-cert/tls.crt
    lb_priv_key_path = /etc/nsx-ujo/lb-cert/tls.key
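
For example, generating a self-signed default certificate and encoding it for lb-secret.yaml might look like this (a sketch; the wildcard subject is an example for OpenShift Route hostnames):

openssl req -newkey rsa:2048 -x509 -nodes -days 365 \
  -keyout lb-default.key -out lb-default.crt -subj "/CN=*.apps.ocp.yasen.local"

# Base64-encode the certificate and key for lb-secret.yaml (step 1)
base64 -w 0 lb-default.crt
base64 -w 0 lb-default.key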

(Optional) Configuring certificate-based authentication to NSX Managers

If you set insecure = False in the ConfigMap, you must specify the certificate thumbprints of all three managers in the NSX Manager cluster. The following procedure is an example of how to do this.

Copy the certificates of all three NSX Managers to files and compute their thumbprints:

ssh -l admin 10.114.209.10 -f 'get certificate api' > nsx1.crt
ssh -l admin 10.114.209.11 -f 'get certificate api' > nsx2.crt
ssh -l admin 10.114.209.12 -f 'get certificate api' > nsx3.crt

NSX1=`openssl x509 -in nsx1.crt -fingerprint -noout|awk -F"=" '{print $2}'`
NSX2=`openssl x509 -in nsx2.crt -fingerprint -noout|awk -F"=" '{print $2}'`
NSX3=`openssl x509 -in nsx3.crt -fingerprint -noout|awk -F"=" '{print $2}'`
THUMB="$NSX1,$NSX2,$NSX3"
echo $THUMB

Edit the ConfigMap and add the thumbprints in the [nsx_v3] section:

oc edit cm nsx-ncp-operator-config -n nsx-system-operator

    nsx_api_managers = 10.114.209.10,10.114.209.11,10.114.209.12
    nsx_api_user = admin
    nsx_api_password = VMware1!
    insecure = False
    thumbprint = E0:A8:D6:06:88:B9:65:7D:FB:F8:14:CF:D5:E5:23:98:C9:43:10:71,A7:B0:26:B5:B2:F6:72:2B:39:86:19:84:E6:DD:AB:43:16:0E:CE:BD,52:9B:99:90:88:4C:9F:9B:83:5E:F7:AF:FC:60:06:50:BE:9E:32:08