You can configure networking resources for NCP in one of two modes: Manager mode or Policy mode. This section describes configuring resources in Policy mode.

In the NCP configuration file ncp.ini, you must specify NSX-T resources by their resource IDs. Usually a resource's name and ID are the same. To confirm a resource's ID, on the NSX Manager web UI, click the 3-dot icon that displays options for the resource and select Copy path to clipboard. Paste the path into an application such as Notepad. The last part of the path is the resource ID.
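
For example, the copied path of a tier-1 gateway named T1GW1 might look like the following (the path is illustrative and varies by resource type). The last segment, T1GW1, is the resource ID:
/infra/tier-1s/T1GW1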

Gateways and Segment

  1. Create a segment for the Kubernetes nodes, for example, Segment1.
  2. Create a tier-0 gateway, for example, T0GW1. If you do not have a shared tier-1 topology, set the top_tier_router option in the [nsx_v3] section of ncp.ini to this gateway's ID. See Shared Tier-1 Topology below for information on configuring a shared tier-1 topology. Set the HA mode to active-standby if you plan to configure NAT rules on this gateway; otherwise, set it to active-active. Enable route redistribution, and configure this gateway for access to the external network.
  3. Create a tier-1 gateway, for example, T1GW1. Connect this gateway to the tier-0 gateway.
  4. Configure router advertisement for T1GW1. At the very least, NSX-connected and NAT routes should be enabled.
  5. Connect T1GW1 to Segment1. Make sure that the gateway port's IP address does not conflict with the IP addresses of the Kubernetes nodes.
  6. For each node VM, make sure that the vNIC for container traffic is attached to the logical switch that is automatically created for the segment. You can find it on the Networking tab; it has the same name as the segment (that is, Segment1).
NCP must know the VIF ID of the vNIC. You can see the ports that are automatically created for Segment1 by navigating to Networking > Segments. These ports are not editable except for their tag property. Each of these ports must have two tags, one specifying the name of the node and one specifying the name of the cluster, with the scopes shown in the following table.
Tag             Scope
Node name       ncp/node_name
Cluster name    ncp/cluster
These tags are automatically propagated to the corresponding logical switch ports. If the node name changes, you must update the tag. To retrieve the node name, you can run the following command:
kubectl get nodes
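
If you prefer to apply the tags through the API instead of the web UI, the following is a minimal sketch using the Policy API segment-port endpoint. The manager address, credentials, port ID, and tag values are placeholders that you must substitute:
# Sketch: tag a node VM's port on Segment1 with the node and cluster names.
curl -k -u '<user>:<password>' -X PATCH \
"https://<nsx-manager>/policy/api/v1/infra/segments/Segment1/ports/<port-id>" \
-H "Content-Type: application/json" \
-d '{"tags": [{"scope": "ncp/node_name", "tag": "<node-name>"}, {"scope": "ncp/cluster", "tag": "<cluster-name>"}]}'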

If you want to extend the Kubernetes cluster while NCP is running, for example, by adding more nodes, you must add the tags to the corresponding switch ports before running "kubeadm join". If you forget to add the tags first, the new nodes will not have connectivity. In this case, add the tags and restart NCP to resolve the issue.

To identify the switch port for a node VM, you can make the following API call (shown here as a complete curl command; replace <nsx-manager>, <user>, and <password> with your own values):
curl -k -u '<user>:<password>' \
"https://<nsx-manager>/api/v1/fabric/virtual-machines"
In the response, look for the node VM and retrieve the value of the external_id attribute. Alternatively, you can make the following API call:
curl -k -u '<user>:<password>' -G \
"https://<nsx-manager>/api/v1/search" \
--data-urlencode "query=(resource_type:VirtualMachine AND display_name:<node_vm_name>)"
After you have the external ID, you can use it to retrieve the VIFs for the VM with the following API call. Note that VIFs are not populated until the VM is started.
curl -k -u '<user>:<password>' -G \
"https://<nsx-manager>/api/v1/search" \
--data-urlencode "query=(resource_type:VirtualNetworkInterface AND external_id:<node_vm_ext_id> AND _exists_:lport_attachment_id)"

The lport_attachment_id attribute is the VIF ID for the node VM. You can then find the logical port for this VIF and add the required tags.
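
One way to locate that logical port is another search query against the port's attachment ID. This is a sketch, not a documented query: it assumes the search index exposes the attachment ID as attachment.id, so verify the field name against your NSX-T version.
curl -k -u '<user>:<password>' -G \
"https://<nsx-manager>/api/v1/search" \
--data-urlencode "query=(resource_type:LogicalPort AND attachment.id:<vif_id>)"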

IP Blocks for Kubernetes Pods

Navigate to Networking > IP Address Management > IP Address Pools > IP Address Blocks to create one or more IP blocks. Specify each IP block in CIDR format. Set the container_ip_blocks option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP blocks. If you want NCP to automatically create IP blocks, set container_ip_blocks to a comma-separated list of addresses in CIDR format. Note that you cannot set container_ip_blocks to a mix of UUIDs and CIDR addresses.

By default, projects share IP blocks specified in container_ip_blocks. You can create IP blocks specifically for no-SNAT namespaces (for Kubernetes) or clusters (for PCF) by setting the no_snat_ip_blocks option in the [nsx_v3] section of ncp.ini.

If you create no-SNAT IP blocks while NCP is running, you must restart NCP. Otherwise, NCP will keep using the shared IP blocks until they are exhausted.

When you create an IP block, its prefix length must not be larger than the value of the subnet_prefix option in NCP's configuration file ncp.ini (the default is 24). Otherwise NCP cannot carve subnets of the configured size out of the block.
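
As an illustration, the relevant [nsx_v3] excerpt might look like the following sketch. All values are hypothetical:
[nsx_v3]
# Shared IP blocks for pod subnets: either UUIDs of pre-created blocks
# or CIDRs for NCP to create automatically (never a mix of both).
container_ip_blocks = 172.24.0.0/14
# Optional: dedicated blocks for no-SNAT namespaces.
no_snat_ip_blocks = 172.28.0.0/14
# Prefix length of the subnets carved out of the blocks. A block's
# prefix must not be larger than this value.
subnet_prefix = 24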

External IP Pools

An external IP pool is used for allocating IP addresses that translate pod IPs through SNAT rules, and that expose Ingress controllers and LoadBalancer-type services through SNAT/DNAT rules, much like OpenStack floating IPs. These IP addresses are also referred to as external IPs.

Navigate to Networking > IP Address Management > IP Address Pools to create an IP pool. Set the external_ip_pools option in the [nsx_v3] section of ncp.ini to the UUIDs of the IP pools. If you want NCP to automatically create IP pools, set external_ip_pools to a comma-separated list of addresses in CIDR format or IP ranges. Note that you cannot set external_ip_pools to a mix of UUIDs and CIDR addresses.

Multiple Kubernetes clusters use the same external IP pool. Each NCP instance uses a subset of this pool for the Kubernetes cluster that it manages. By default, this subset uses the same prefix length as the pod subnets (subnet_prefix). To use a different subnet size, update the external_subnet_prefix option in the [nsx_v3] section of ncp.ini.
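
As an illustration, a sketch of the relevant [nsx_v3] options with hypothetical values:
[nsx_v3]
# UUIDs of pre-created pools, or CIDRs/IP ranges for NCP to create
# pools automatically (never a mix of both).
external_ip_pools = 10.114.16.0/24
# Prefix length of each cluster's subset of the pool. By default the
# value of subnet_prefix is used.
external_subnet_prefix = 26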

You can change to a different IP pool by changing the configuration file and restarting NCP.

Shared Tier-1 Topology

To enable a shared tier-1 topology, perform the following configurations (a sample ncp.ini excerpt follows the list):
  • Set the top_tier_router option to the ID of a tier-1 gateway. Connect the tier-1 gateway to a tier-0 gateway for external connections.
  • If SNAT for Pod traffic is enabled, modify the uplink of the segment for Kubernetes nodes to the same tier-0 or tier-1 gateway that is set in top_tier_router.
  • Set the single_tier_topology option to True. The default value is False.
  • If you want NCP to automatically configure the top tier router as a tier-1 gateway, unset the top_tier_router option and set the tier0_gateway option. NCP will create a tier-1 gateway and uplink it to the tier-0 gateway specified in the tier0_gateway option.
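
Putting these options together, a minimal [nsx_v3] sketch with hypothetical gateway IDs:
[nsx_v3]
single_tier_topology = True
# Either point NCP at an existing tier-1 gateway...
top_tier_router = T1GW1
# ...or leave top_tier_router unset and specify the tier-0 gateway so
# that NCP creates the tier-1 gateway and uplinks it automatically:
# tier0_gateway = T0GW1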