This section describes how to configure NSX-T networking for Kubernetes master and minion nodes.

Each node must have at least two network interfaces. The first is a management interface, which might or might not be on the NSX-T fabric. The other interfaces provide networking for the pods; they are on the NSX-T fabric and are connected to a logical switch, which is referred to as the node logical switch. The management and pod IP addresses must be routable for Kubernetes health checks to work. For communication between the management interface and the pods, NCP automatically creates a distributed firewall (DFW) rule to allow health-check and other management traffic. You can see the details of this rule in the NSX Manager GUI. Do not change or delete this rule.

For each node VM, ensure that the vNIC that is designated for container networking is attached to the node logical switch.

The VIF ID of the vNIC used for container traffic in each node must be known to NSX-T Container Plug-in (NCP). The corresponding logical switch port must be tagged in the following way:

    {'ncp/node_name':  '<node_name>'}
    {'ncp/cluster': '<cluster_name>'}

You can identify the logical switch port for a node VM by navigating to Inventory > Virtual Machines from the NSX Manager GUI.
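The tagging above can also be scripted against the NSX Manager REST API, where a tag is represented as a scope/tag pair on the logical switch port object. The following is a minimal sketch, not a definitive implementation: the manager hostname, port ID, and credentials are placeholders, and the `PUT /api/v1/logical-ports/<port-id>` call is assumed to be performed with your own authentication handling.

```python
# Sketch: apply the NCP tags to a node's logical switch port.
# Assumptions (not from this document): NSX Manager API endpoint
# PUT /api/v1/logical-ports/<port-id>, and the placeholder values below.
import json
import urllib.request


def ncp_tags(cluster_name: str, node_name: str) -> list:
    """Build the scope/tag pairs that NCP expects on the port."""
    return [
        {"scope": "ncp/node_name", "tag": node_name},
        {"scope": "ncp/cluster", "tag": cluster_name},
    ]


def tag_payload(port: dict, cluster_name: str, node_name: str) -> dict:
    """Merge the NCP tags into an existing logical-port object,
    replacing any stale ncp/node_name or ncp/cluster tags."""
    tagged = dict(port)
    kept = [t for t in port.get("tags", [])
            if t.get("scope") not in ("ncp/node_name", "ncp/cluster")]
    tagged["tags"] = kept + ncp_tags(cluster_name, node_name)
    return tagged


if __name__ == "__main__":
    # Placeholder values; substitute your NSX Manager, port ID, and auth.
    manager = "https://nsx-mgr.example.com"
    port_id = "00000000-0000-0000-0000-000000000000"
    port = {"id": port_id, "tags": []}  # normally fetched with GET first
    body = json.dumps(tag_payload(port, "k8s-cluster1", "node1")).encode()
    req = urllib.request.Request(
        f"{manager}/api/v1/logical-ports/{port_id}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # commented out: requires a live NSX Manager
```

In practice you would first GET the port, merge the tags as shown, and PUT the full object back, because the API replaces the tag list rather than appending to it.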

If the Kubernetes node name changes, you must update the tag ncp/node_name and restart NCP. You can use the following command to get the node names:

    kubectl get nodes

If you add a node to a cluster while NCP is running, you must add the tags to the logical switch port before you run the kubeadm join command. Otherwise, the new node will not have network connectivity. If the tags are incorrect or missing, you can take the following steps to resolve the issue:

  • Apply the correct tags to the logical switch port.

  • Restart NCP.
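Before restarting NCP, it can help to confirm which tags are actually missing or wrong. The sketch below assumes the port object has already been fetched from the NSX Manager API (the scope/tag representation used here is an assumption about the API format, and the function name is illustrative):

```python
# Sketch: report which NCP tags on a logical switch port are absent
# or carry the wrong value, so you know what to fix before restarting NCP.
def missing_ncp_tags(port: dict, cluster_name: str, node_name: str) -> list:
    """Return the tag scopes whose expected value is absent or wrong."""
    expected = {"ncp/node_name": node_name, "ncp/cluster": cluster_name}
    actual = {t.get("scope"): t.get("tag") for t in port.get("tags", [])}
    return [scope for scope, value in expected.items()
            if actual.get(scope) != value]
```

If the function returns an empty list, the port is tagged correctly and only the NCP restart remains; otherwise, correct the listed tags first.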