The NCP YAML file contains information to configure, install and run all the NCP components.

The NCP YAML file contains the following information:
  • RBAC definitions.
  • Various CRDs (CustomResourceDefinitions).
  • ConfigMap containing ncp.ini dedicated to NCP. Some recommended configuration options are already set.
  • NCP Deployment.
  • ConfigMap containing ncp.ini dedicated to nsx-node-agent. Some recommended configuration options are already set.
  • nsx-node-agent DaemonSet, including nsx-node-agent, nsx-kube-proxy, and nsx-ovs.
  • nsx-ncp-bootstrap DaemonSet.

The NSX CNI plugin and the OpenvSwitch kernel modules are installed automatically by the nsx-ncp-bootstrap initContainers. The OpenvSwitch userspace daemons run in the nsx-ovs container on each node.

Update the NCP Deployment Specs

Locate the ConfigMap with the name nsx-ncp-config. For the complete list of the ConfigMap options, see GUID-2122160A-7B39-4F73-9ED3-6E1C2346E265.html#GUID-2122160A-7B39-4F73-9ED3-6E1C2346E265. Some options are already configured to recommended values. You can customize all the options for your environment. For example,
  • Log level and log directory.
  • Kubernetes API server IP, certificate path, and client token path. By default, the API server ClusterIP from the environment variable is used, and the certificate and token are automatically mounted from the ServiceAccount. Usually no change is required.
  • Kubernetes cluster name.
  • NSX Manager IP and credentials.
  • NSX resource options such as overlay_tz, top_tier_router, container_ip_blocks, external_ip_blocks, and so on.

By default, the Kubernetes Service VIP/port and the ServiceAccount token and ca_file are used for Kubernetes API access. No change is required here, but you must fill in some NSX API options in ncp.ini.

  • Specify the nsx_api_managers option. It can be a comma-separated list of NSX Manager IP addresses or URL specifications that are compliant with RFC3986. For example,
    nsx_api_managers = 192.168.1.181, 192.168.1.182, 192.168.1.183
  • Specify the nsx_api_user and nsx_api_password options with the user name and password, respectively, if you configure NCP to connect to NSX using basic authentication. This authentication method is not recommended because it is less secure. These options are ignored if NCP is configured to authenticate using client certificates. These options do not appear in the NCP YAML file. You must add them manually.
  • Specify the nsx_api_cert_file and nsx_api_private_key_file options for authentication with NSX. The nsx_api_cert_file option is the full path to a client certificate file in PEM format. The contents of this file should look like the following:
    -----BEGIN CERTIFICATE-----
    <certificate_data_base64_encoded>
    -----END CERTIFICATE-----
    The nsx_api_private_key_file option is the full path to a client private key file in PEM format. The contents of this file should look like the following:
    -----BEGIN PRIVATE KEY-----
    <private_key_data_base64_encoded>
    -----END PRIVATE KEY-----

    By using client certificate authentication, NCP can use its principal identity to create NSX objects. This means that only an identity with the same identity name can modify or delete the objects. It prevents NSX objects created by NCP from being modified or deleted by mistake. Note that an administrator can modify or delete any object. If the object was created with a principal identity, a warning will indicate that.

  • (Optional) Specify the ca_file option. The value should be a CA bundle file to use in verifying the NSX Manager server certificate. If not set, the system root CAs will be used. If you specify one IP address for nsx_api_managers, specify one CA file. If you specify three IP addresses for nsx_api_managers, you can specify one or three CA files. If you specify one CA file, it will be used for all three managers. If you specify three CA files, each will be used for the corresponding IP address in nsx_api_managers. For example,
       nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183
       ca_file = ca_file_for_all_mgrs
    or
       nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183
       ca_file = ca_file_for_mgr1,ca_file_for_mgr2,ca_file_for_mgr3
  • (Optional) Specify the insecure option. If set to True, the NSX Manager server certificate is not verified. The default is False. A consolidated example of these options follows this list.
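
Putting these options together, the following is a minimal ncp.ini sketch that assumes certificate-based authentication, one CA file for all three managers, and that these options belong to the [nsx_v3] section, consistent with the other NSX options shown in this topic. The file paths are placeholders.

[nsx_v3]
nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183
# Placeholder paths; point these at your PEM-format client certificate and key.
nsx_api_cert_file = /path/to/nsx-client-cert.pem
nsx_api_private_key_file = /path/to/nsx-client-key.pem
# One CA bundle used to verify all three managers.
ca_file = /path/to/nsx-ca-bundle.pem
insecure = False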
If you want to use a Kubernetes Secret to store the NSX client certificate and the load balancer default certificate, you must first create the Secrets with a kubectl command and then update the Deployment spec, as sketched after the following list:
  • Add Secret volumes to the NCP pod spec, or uncomment the example Secret volumes.
  • Add volume mounts to the NCP container spec, or uncomment the example volume mounts.
  • Update ncp.ini in the ConfigMap to set the certificate file path pointing to the file in the mounted volume.
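
A hedged sketch of this workflow follows. The Secret name, key names, and mount path are illustrative and must match what your YAML file expects; the namespace is assumed to be nsx-system.

# Create the Secret from the PEM files (names are hypothetical).
kubectl -n nsx-system create secret generic nsx-client-cert \
    --from-file=tls.crt=nsx-client-cert.pem \
    --from-file=tls.key=nsx-client-key.pem

In the NCP Deployment spec, the corresponding volume and volume mount would resemble:

      volumes:
      - name: nsx-client-cert
        secret:
          secretName: nsx-client-cert
      ...
        volumeMounts:
        - name: nsx-client-cert
          mountPath: /etc/nsx-client-cert
          readOnly: true

In ncp.ini, the certificate options would then point at the mounted files, for example, nsx_api_cert_file = /etc/nsx-client-cert/tls.crt and nsx_api_private_key_file = /etc/nsx-client-cert/tls.key.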

If you do not have a shared tier-1 topology, you must set the edge_cluster option to the edge cluster ID so that NCP creates a tier-1 gateway or router for the LoadBalancer service. You can find the edge cluster ID by navigating to System > Fabric > Nodes, selecting the Edge Clusters tab, and clicking the edge cluster name.
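
For example, assuming the edge_cluster option belongs to the [nsx_v3] section of ncp.ini, consistent with the other NSX options shown in this topic:

[nsx_v3]
# Replace the placeholder with the edge cluster ID copied from the NSX UI.
edge_cluster = <edge-cluster-id>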

HA (high availability) is enabled by default. In a production environment, it is recommended that you do not disable HA.

Note: By default, kube-scheduler does not schedule pods on the master node. In the NCP YAML file, a toleration is added to allow the NCP pod to run on the master node.

The lb_segment_subnet parameter in ncp.ini is used for service ClusterIP self-access. The default value is 169.254.131.0/24. NCP uses the second-to-last IP address in this subnet as the SNAT IP. For example, if lb_segment_subnet is set to 169.254.100.0/24, NCP uses 169.254.100.254 as the SNAT IP. On a Windows worker node, you must set lb_segment_subnet to a value other than 169.254.131.0/24. You cannot change lb_segment_subnet after you create the cluster.
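
For example, on a Windows worker node, assuming lb_segment_subnet belongs to the [nsx_v3] section of ncp.ini, a hedged sketch looks like the following. The subnet value repeats the example above.

[nsx_v3]
# Must not be 169.254.131.0/24 on Windows worker nodes.
lb_segment_subnet = 169.254.100.0/24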

Configure SNAT

By default, NCP configures SNAT for every project. SNAT will not be configured for namespaces with the following annotation:
ncp/no_snat: True
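For example, a minimal namespace manifest carrying this annotation (the namespace name is hypothetical):
apiVersion: v1
kind: Namespace
metadata:
  name: no-snat-ns          # hypothetical namespace name
  annotations:
    ncp/no_snat: "True"     # NCP does not configure SNAT for this namespace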
If you do not want SNAT for any namespace in the cluster, configure the following option in ncp.ini:
[coe]
enable_snat = False

Note: Updating an existing namespace SNAT annotation is not supported. If you perform such an action, the topology for the namespace will be in an inconsistent state because a stale SNAT rule might remain. To recover from such an inconsistent state, you must recreate the namespace.

(Policy mode only) If SNAT is configured for a cluster, BGP on the tier-0 router is enabled, and Connected Interfaces & Segments is selected under Advertised tier-1 subnets when you configure route redistribution for the tier-0 router, you can use the following option to control route redistribution:
[nsx_v3]
configure_t0_redistribution = True 

If configure_t0_redistribution is set to True, NCP will add a deny route map entry in the redistribution rule to stop the tier-0 router from advertising the cluster's internal subnets to BGP neighbors. This is mainly used for vSphere with Kubernetes clusters. If you do not create a route map for the redistribution rule, NCP will create a route map using its principal identity and apply it in the rule. If you want to modify this route map, you must replace it with a new route map, copy the entries from the NCP-created route map, and add new entries. You must manage any potential conflicts between the new entries and NCP-created entries. If you simply unset the NCP-created route map without creating a new route map for the redistribution rule, NCP will apply the previously created route map to the redistribution rule again when NCP restarts.

Configure Firewall Matching for NAT Rules

Starting with NCP 3.2.1, you can use the natfirewallmatch option to specify how the NSX gateway firewall behaves with NAT rules created for a Kubernetes namespace. This option applies to newly created Kubernetes namespaces only and not to existing namespaces. This option works in policy mode only.
[nsx_v3]
# This parameter indicates how the firewall is applied to a traffic packet.
# The firewall can be bypassed, or be applied to the external or internal
# address of the NAT rule.
# Choices: MATCH_EXTERNAL_ADDRESS MATCH_INTERNAL_ADDRESS BYPASS
#natfirewallmatch = MATCH_INTERNAL_ADDRESS

Update the nsx-node-agent DaemonSet Specs

Locate the ConfigMap with the name nsx-node-agent-config. For the complete list of the ConfigMap options, see GUID-0239D3D6-B1A7-42A7-ABAD-200B040B28DE.html#GUID-0239D3D6-B1A7-42A7-ABAD-200B040B28DE. Some options are already configured to recommended values. You can customize all the options for your environment. For example,
  • Log level and log directory.
  • Kubernetes API server IP, certificate path, and client token path. By default, the API server ClusterIP from the environment variable is used, and the certificate and token are automatically mounted from the ServiceAccount. Usually no change is required.
  • OpenvSwitch uplink port. For example: ovs_uplink_port=eth1
  • The MTU value for CNI.

To set the MTU value for CNI, modify the mtu parameter in the nsx-node-agent ConfigMap and restart the nsx-ncp-bootstrap pods. This will update the pod MTU on every node. You must also update the node MTU accordingly. A mismatch between the node and pod MTU can cause problems for node-pod communication, affecting, for example, TCP liveness and readiness probes.
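
One possible way to apply the change, assuming the default nsx-system namespace and a DaemonSet update strategy that supports kubectl rollout restart:

# Adjust the mtu parameter in the ConfigMap, then restart the bootstrap pods.
kubectl -n nsx-system edit configmap nsx-node-agent-config
kubectl -n nsx-system rollout restart daemonset nsx-ncp-bootstrap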

The nsx-ncp-bootstrap DaemonSet installs the CNI plugin and the OVS kernel modules on the node. It then shuts down the OVS daemons on the node so that the nsx-ovs container can later run the OVS daemons inside a Docker container. When CNI is not installed, all the Kubernetes nodes are in the "Not Ready" state. There is a toleration on the bootstrap DaemonSet to allow it to run on "Not Ready" nodes. After the CNI plugin is installed, the nodes should become "Ready".

If you are not using the NSX OVS kernel module, you must update the volume parameter host-original-ovs-db with the path where the OpenvSwitch database is configured to reside, as determined when the OVS kernel module was compiled. For example, if you specify --sysconfdir=/var/lib, set host-original-ovs-db to /var/lib/openvswitch. Make sure you use the path of the actual OVS database and not a symbolic link pointing to it.
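
For example, a sketch of the volume definition for the --sysconfdir=/var/lib case described above:

- name: host-original-ovs-db
  hostPath:
    # Use the real database directory, not a symbolic link pointing to it.
    path: /var/lib/openvswitch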

If you are using the NSX OVS kernel module, you must set use_nsx_ovs_kernel_module = True and uncomment the lines about volumes to be mounted:

  # Uncomment these mounts if installing NSX-OVS kernel module
#   # mount host lib modules to install OVS kernel module if needed
#   - name: host-modules
#     mountPath: /lib/modules
#   # mount openvswitch database
#   - name: host-config-openvswitch
#     mountPath: /etc/openvswitch
#   - name: dir-tmp-usr-ovs-kmod-backup
#   # we move the usr kmod files to this dir temporarily before
#   # installing new OVS kmod and/or backing up existing OVS kmod backup
#     mountPath: /tmp/nsx_usr_ovs_kmod_backup

#   # mount linux headers for compiling OVS kmod
#   - name: host-usr-src
#     mountPath: /usr/src

...

# Uncomment these volumes if installing NSX-OVS kernel module
# - name: host-modules
#   hostPath:
#     path: /lib/modules
# - name: host-config-openvswitch
#   hostPath:
#     path: /etc/openvswitch
# - name: dir-tmp-usr-ovs-kmod-backup
#   hostPath:
#     path: /tmp/nsx_usr_ovs_kmod_backup

# - name: host-usr-src
#   hostPath:
#     path: /usr/src

If you plan to specify hostPort for a Pod, set enable_hostport_snat to True in the [k8s] section in the nsx-node-agent-config ConfigMap. In the same ConfigMap, use_ncp_portmap must be set to False (the default value) if you install a CNI plugin. If you do not install a CNI plugin and want to use portmap from the NCP image, set use_ncp_portmap to True.
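
For example, the [k8s] section of the nsx-node-agent-config ConfigMap would look like the following when a separate CNI plugin is installed (values taken from the description above):

[k8s]
enable_hostport_snat = True
# Keep the default (False) when a CNI plugin is installed; set to True only if
# you want to use the portmap plugin from the NCP image instead.
use_ncp_portmap = False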

SNAT uses hostIP as the source IP for hostPort traffic. If there is a network policy for a Pod and you want to access a Pod's hostPort, you must add the worker node IP addresses in the network policy's allow rule. For example, if you have two worker nodes (172.10.0.2 and 172.10.0.3), the Ingress rule must look like:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: test
    - podSelector:
        matchLabels:
          app: tea
    - ipBlock:
        cidr: 172.10.0.3/32
    - ipBlock:
        cidr: 172.10.0.2/32
    ...
The NSX node agent is a DaemonSet where each pod runs 3 containers:
  • nsx-node-agent manages container network interfaces. It interacts with the CNI plugin and the Kubernetes API server.
  • nsx-kube-proxy implements Kubernetes service abstraction by translating cluster IPs into pod IPs. It implements the same functionality as the upstream kube-proxy, but is not mutually exclusive with it.
  • nsx-ovs runs the OpenvSwitch userspace daemons. It also creates the OVS bridge automatically and moves the IP address and routes from node-if to br-int. You must add ovs_uplink_port=ethX in ncp.ini so that it can use ethX as the OVS bridge uplink.

If the worker nodes run Ubuntu, ncp-ubuntu.yaml assumes that the AppArmor kernel module is enabled; otherwise, kubelet refuses to run the nsx-node-agent DaemonSet because it is configured with an AppArmor option. On Ubuntu and SUSE, the module is enabled by default. To check whether the module is enabled, check the /sys/module/apparmor/parameters/enabled file.
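
For example, on a worker node the module is enabled if the file contains Y:

cat /sys/module/apparmor/parameters/enabled
Y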

If AppArmor is disabled intentionally, the following changes need to be applied to the YAML file:
  • Remove the AppArmor option:
    annotations:
        # The following line needs to be removed
        container.apparmor.security.beta.kubernetes.io/nsx-node-agent: localhost/node-agent-apparmor
  • Enable the privileged mode for both the nsx-node-agent and nsx-kube-proxy containers:
    securityContext:
        # The following line needs to be appended
        privileged: true

Note: If kubelet is run inside a container that uses the hyperkube image, kubelet always reports AppArmor as disabled regardless of the actual state. In this case, you must make the same changes as above to the YAML file.

Update the Namespace Name

In the YAML file, all the namespaced objects, such as ServiceAccount, ConfigMap, and Deployment, are created under the nsx-system namespace. If you use a different namespace, replace all instances of nsx-system.
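
One possible way to do the replacement before applying the file, using a hypothetical namespace name my-nsx and the Ubuntu YAML file as an example:

sed -i 's/nsx-system/my-nsx/g' ncp-ubuntu.yaml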

Enabling Backup and Restore

NCP supports the backup and restore feature in NSX. The supported resources are Namespace, Pod, and Service.

Note: NCP must be configured in policy mode.

To enable this feature, set enable_restore to True in ncp.ini and restart NCP.
[k8s]
enable_restore = True
You can check the status of a restore with an NSX CLI command. For example,
nsxcli
> get ncp-restore status
NCP restore status is INITIAL

The status can be INITIAL, RUNNING, or SUCCESS. INITIAL means the backup/restore feature is ready, but no restore is running. RUNNING means the restore process is running in NCP. SUCCESS means a restore completed successfully. If an error occurs during a restore, NCP will restart automatically and retry. If the status is RUNNING for a long time, check the NCP log for error messages.