The NCP YAML file contains the information needed to configure, install, and run all the NCP components:
- RBAC definitions.
- Various CRDs (CustomResourceDefinitions).
- ConfigMap containing ncp.ini dedicated to NCP. Some recommended configuration options are already set.
- NCP Deployment.
- ConfigMap containing ncp.ini dedicated to nsx-node-agent. Some recommended configuration options are already set.
- nsx-node-agent DaemonSet, including nsx-node-agent, nsx-kube-proxy, and nsx-ovs.
- nsx-ncp-bootstrap DaemonSet.
The NSX CNI and OpenvSwitch kernel modules are installed automatically by the nsx-ncp-bootstrap initContainers. The OpenvSwitch userspace daemons run in the nsx-ovs container on each node.
Update the NCP Deployment Specs
- Log level and log directory.
- Kubernetes API server IP, certificate path, and client token path. By default, the API server ClusterIP from the environment variable is used, and the certificate and token are automatically mounted from the ServiceAccount. Usually no change is required.
- Kubernetes cluster name.
- NSX Manager IP and credentials.
- NSX resource options such as overlay_tz, top_tier_router, container_ip_blocks, external_ip_blocks, and so on.
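As a rough illustration only, the cluster name and NSX resource options from this list might look like the following in the NCP ConfigMap's ncp.ini. This sketch assumes the cluster name lives in the [coe] section and the NSX resource options in the [nsx_v3] section; the angle-bracket values are placeholders for names or UUIDs from your environment:
[coe]
# Name used to identify this Kubernetes cluster in NSX-T
cluster = k8s-cluster-1

[nsx_v3]
# NSX resource options (names or UUIDs from your NSX-T setup)
overlay_tz = <overlay-transport-zone>
top_tier_router = <tier-0-or-tier-1-router>
container_ip_blocks = <ip-block-for-pod-subnets>
external_ip_blocks = <ip-block-for-snat-ips>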
By default, the Kubernetes Service VIP/port and the ServiceAccount token and ca_file are used for Kubernetes API access. No change is required here, but you need to fill in some NSX API options in ncp.ini.
- Specify the nsx_api_managers option. It can be a comma-separated list of NSX Manager IP addresses or URL specifications that are compliant with RFC3986. For example,
nsx_api_managers = 192.168.1.181, 192.168.1.182, 192.168.1.183
- Specify the nsx_api_user and nsx_api_password options with the user name and password, respectively, if you configure NCP to connect to NSX-T using basic authentication. This authentication method is not recommended because it is less secure. These options are ignored if NCP is configured to authenticate using client certificates. These options do not appear in the NCP YAML file. You must add them manually.
- Specify the nsx_api_cert_file and nsx_api_private_key_file options for authentication with NSX-T. The nsx_api_cert_file option is the full path to a client certificate file in PEM format. The contents of this file should look like the following:
-----BEGIN CERTIFICATE-----
<certificate_data_base64_encoded>
-----END CERTIFICATE-----
The nsx_api_private_key_file option is the full path to a client private key file in PEM format. The contents of this file should look like the following:
-----BEGIN PRIVATE KEY-----
<private_key_data_base64_encoded>
-----END PRIVATE KEY-----
By using client certificate authentication, NCP can use its principal identity to create NSX-T objects. This means that only an identity with the same identity name can modify or delete the objects. It prevents NSX-T objects created by NCP from being modified or deleted by mistake. Note that an administrator can modify or delete any object. If the object was created with a principal identity, a warning will indicate that.
- (Optional) Specify the ca_file option. The value should be a CA bundle file used to verify the NSX Manager server certificate. If it is not set, the system root CAs are used. If you specify one IP address for nsx_api_managers, specify one CA file. If you specify three IP addresses for nsx_api_managers, you can specify one or three CA files. If you specify one CA file, it is used for all three managers. If you specify three CA files, each is used for the corresponding IP address in nsx_api_managers. For example,
nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183
ca_file = ca_file_for_all_mgrs

or

nsx_api_managers = 192.168.1.181,192.168.1.182,192.168.1.183
ca_file = ca_file_for_mgr1,ca_file_for_mgr2,ca_file_for_mgr3
- (Optional) Specify the insecure option. If set to True, the NSX Manager server certificate is not verified. The default is False.
- Add Secret volumes to the NCP pod spec, or uncomment the example Secret volumes.
- Add volume mounts to the NCP container spec, or uncomment the example volume mounts.
- Update ncp.ini in the ConfigMap so that the certificate file path points to the file in the mounted volume.
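As an illustration only, the pieces might be wired together as follows; the Secret name nsx-secret, the mount path /etc/nsx-ujo/nsx-cert, and the key names tls.crt and tls.key are assumed values, not names mandated by the NCP YAML file:
# Volume in the NCP pod spec, backed by an assumed Secret named nsx-secret
volumes:
- name: nsx-cert
  secret:
    secretName: nsx-secret
# Volume mount in the NCP container spec
volumeMounts:
- name: nsx-cert
  mountPath: /etc/nsx-ujo/nsx-cert
  readOnly: true
The ncp.ini options in the ConfigMap would then point at the mounted files, for example nsx_api_cert_file = /etc/nsx-ujo/nsx-cert/tls.crt and nsx_api_private_key_file = /etc/nsx-ujo/nsx-cert/tls.key.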
If you do not have a shared tier-1 topology, you must set the edge_cluster option to the edge cluster ID so that NCP will create a tier-1 gateway or router for the LoadBalancer service. You can find the edge cluster ID by navigating to , selecting the Edge Clusters tab, and clicking the edge cluster name.
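For example, assuming the edge_cluster option sits in the [nsx_v3] section of the NCP ConfigMap's ncp.ini and using a placeholder UUID:
[nsx_v3]
# UUID of the edge cluster NCP uses when creating the tier-1 gateway or router
edge_cluster = <edge-cluster-uuid>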
HA (high availability) is enabled by default. In a production environment, it is recommended that you do not disable HA.
Note: kube-scheduler by default will not schedule pods on the master node. In the NCP YAML file, a toleration is added to allow the NCP Pod to run on the master node.
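For reference, such a toleration typically looks like the following sketch; the exact key may differ in your YAML file (for example, newer clusters taint the control plane with node-role.kubernetes.io/control-plane instead):
tolerations:
# Allow the NCP Pod to be scheduled on the tainted master node
- key: node-role.kubernetes.io/master
  effect: NoSchedule
  operator: Exists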
Configure SNAT
To disable SNAT for a specific namespace, annotate the namespace with:
ncp/no_snat: True
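For instance, a minimal Namespace manifest carrying this annotation might look like the following (test-ns is a placeholder name; the value is quoted so that it is treated as a string):
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
  annotations:
    # Disable SNAT for pods in this namespace
    ncp/no_snat: "True"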
To disable SNAT for the entire cluster, set the following option in the NCP ConfigMap's ncp.ini:
[coe]
enable_snat = False
Note: Updating an existing namespace SNAT annotation is not supported. If you perform such an action, the topology for the namespace will be in an inconsistent state because a stale SNAT rule might remain. To recover from such an inconsistent state, you must recreate the namespace.
You can also control route redistribution with the following ncp.ini option:
[nsx_v3]
configure_t0_redistribution = True
If configure_t0_redistribution is set to True, NCP will add a deny route map entry in the redistribution rule to stop the tier-0 router from advertising the cluster's internal subnets to BGP neighbors. This is mainly used for vSphere with Kubernetes clusters. If you do not create a route map for the redistribution rule, NCP will create a route map using its principal identity and apply it in the rule. If you want to modify this route map, you must replace it with a new route map, copy the entries from the NCP-created route map, and add new entries. You must manage any potential conflicts between the new entries and NCP-created entries. If you simply unset the NCP-created route map without creating a new route map for the redistribution rule, NCP will apply the previously created route map to the redistribution rule again when NCP restarts.
Update the nsx-node-agent DaemonSet Specs
- Log level and log directory.
- Kubernetes API server IP, certificate path, and client token path. By default, the API server ClusterIP from the environment variable is used, and the certificate and token are automatically mounted from the ServiceAccount. Usually no change is required.
- OpenvSwitch uplink port. For example: ovs_uplink_port=eth1
- The MTU value for CNI.
To set the MTU value for CNI, modify the mtu parameter in the nsx-node-agent ConfigMap and restart the nsx-ncp-bootstrap pods. This will update the pod MTU on every node. You must also update the node MTU accordingly. A mismatch between the node and pod MTU can cause problems for node-pod communication, affecting, for example, TCP liveness and readiness probes.
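For example, assuming the mtu parameter lives in the [nsx_node_agent] section of the nsx-node-agent ConfigMap (the value below is only a placeholder):
[nsx_node_agent]
# MTU assigned to pod interfaces; update the node MTU accordingly (see above)
mtu = 1500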
The nsx-ncp-bootstrap DaemonSet installs the CNI and OVS kernel modules on the node. It then shuts down the OVS daemons on the node, so that the nsx-ovs container can later run the OVS daemons inside a Docker container. When CNI is not installed, all the Kubernetes nodes are in the "Not Ready" state. There is a toleration on the bootstrap DaemonSet that allows it to run on "Not Ready" nodes. After the CNI plugin is installed, the nodes should become "Ready".
If you are not using the NSX OVS kernel module, you must update the volume parameter host-original-ovs-db with the path where the OpenvSwitch database was configured to reside when the OVS kernel module was compiled. For example, if you specify --sysconfdir=/var/lib, set host-original-ovs-db to /var/lib/openvswitch. Make sure you use the path of the actual OVS database, not a symbolic link pointing to it.
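Continuing the --sysconfdir=/var/lib example, the volume definition would then look roughly like this:
# Volume in the nsx-ncp-bootstrap DaemonSet spec
- name: host-original-ovs-db
  hostPath:
    # Path of the actual OVS database directory, not a symbolic link to it
    path: /var/lib/openvswitch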
If you are using the NSX OVS kernel module, you must set use_nsx_ovs_kernel_module = True and uncomment the lines about volumes to be mounted:
# Uncomment these mounts if installing NSX-OVS kernel module
#   # mount host lib modules to install OVS kernel module if needed
# - name: host-modules
#   mountPath: /lib/modules
#   # mount openvswitch database
# - name: host-config-openvswitch
#   mountPath: /etc/openvswitch
# - name: dir-tmp-usr-ovs-kmod-backup
#   # we move the usr kmod files to this dir temporarily before
#   # installing new OVS kmod and/or backing up existing OVS kmod backup
#   mountPath: /tmp/nsx_usr_ovs_kmod_backup
#   # mount linux headers for compiling OVS kmod
# - name: host-usr-src
#   mountPath: /usr/src
...
# Uncomment these volumes if installing NSX-OVS kernel module
# - name: host-modules
#   hostPath:
#     path: /lib/modules
# - name: host-config-openvswitch
#   hostPath:
#     path: /etc/openvswitch
# - name: dir-tmp-usr-ovs-kmod-backup
#   hostPath:
#     path: /tmp/nsx_usr_ovs_kmod_backup
# - name: host-usr-src
#   hostPath:
#     path: /usr/src
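The matching ConfigMap change is a single option; this sketch assumes it belongs in the [nsx_node_agent] section of the nsx-node-agent ConfigMap:
[nsx_node_agent]
# Use the NSX-provided OVS kernel module
use_nsx_ovs_kernel_module = True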
Starting with NCP 3.1.1, hostPort is supported. If you plan to specify hostPort for a Pod, set enable_hostport_snat to True in the [k8s] section in the nsx-node-agent-config ConfigMap. In the same ConfigMap, use_ncp_portmap must be set to False (the default value) if you install a CNI plugin. If you do not install a CNI plugin and want to use portmap from the NCP image, set use_ncp_portmap to True.
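For example, to use hostPort together with a separately installed CNI plugin, the [k8s] section of the nsx-node-agent-config ConfigMap would contain something like:
[k8s]
# Perform SNAT for hostPort traffic
enable_hostport_snat = True
# Keep the default because a CNI plugin with its own portmap is installed
use_ncp_portmap = False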
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        project: test
  - podSelector:
      matchLabels:
        app: tea
  - ipBlock:
      cidr: 172.10.0.3/32
  - ipBlock:
      cidr: 172.10.0.2/32
...
- nsx-node-agent manages container network interfaces. It interacts with the CNI plugin and the Kubernetes API server.
- nsx-kube-proxy implements Kubernetes service abstraction by translating cluster IPs into pod IPs. It implements the same functionality as the upstream kube-proxy, but is not mutually exclusive with it.
- nsx-ovs runs the OpenvSwitch userspace daemons. It also creates the OVS bridge automatically and moves the IP address and routes from node-if to br-int. You must add ovs_uplink_port=ethX in ncp.ini so that it can use ethX as the OVS bridge uplink.
If the worker nodes run Ubuntu, ncp-ubuntu.yaml assumes that the AppArmor kernel module is enabled; otherwise, Kubelet will refuse to run the nsx-node-agent DaemonSet because it is configured with an AppArmor option. For Ubuntu and SUSE, the module is enabled by default. To check whether the module is enabled, check the /sys/module/apparmor/parameters/enabled file. If AppArmor is not enabled, make the following changes in the YAML file:
- Remove the AppArmor option:
annotations:
  # The following line needs to be removed
  container.apparmor.security.beta.kubernetes.io/nsx-node-agent: localhost/node-agent-apparmor
- Enable the privileged mode for both the nsx-node-agent and nsx-kube-proxy containers:
securityContext:
  # The following line needs to be appended
  privileged: true
Note: If kubelet runs inside a container that uses the hyperkube image, it always reports AppArmor as disabled, regardless of the actual state. The same changes as above must be made and applied to the YAML file.
Update the Namespace Name
In the YAML file, all the namespaced objects, such as ServiceAccount, ConfigMap, and Deployment, are created under the nsx-system namespace. If you use a different namespace, replace all instances of nsx-system.
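For example, a hypothetical metadata stanza after the replacement (my-namespace stands in for your namespace name):
metadata:
  name: nsx-node-agent-config
  # Was namespace: nsx-system in the shipped YAML file
  namespace: my-namespace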