This topic describes how to customize networking for Tanzu Kubernetes (workload) clusters, including using a container network interface (CNI) other than the default Antrea, and supporting publicly routable, no-NAT IP addresses for workload clusters on vSphere with NSX-T networking.
When you use the Tanzu CLI to deploy a Tanzu Kubernetes cluster, the Antrea container network interface (CNI) is automatically enabled in the cluster. Tanzu Kubernetes releases (TKr) include a version of Antrea and a version of Kubernetes that are compatible. For information on how Antrea is versioned in TKr, see Tanzu Kubernetes releases and Antrea Versions.
Alternatively, you can enable a Calico CNI or your own CNI provider.
Existing Tanzu Kubernetes clusters that you deployed with a Tanzu Kubernetes Grid version earlier than 1.2.x and then upgraded to v1.3 continue to use Calico as the CNI provider. You cannot change the CNI provider for these clusters.
You can change the default CNI for a Tanzu Kubernetes cluster by specifying the CNI variable in the configuration file. The CNI variable supports the following options:
antrea: Enables Antrea.
calico: Enables Calico. See Enable Calico below.
none: Allows you to enable a custom CNI provider. See Enable a Custom CNI Provider below.
If you do not specify the CNI variable, Antrea is enabled by default. For example, the following configuration file settings enable Antrea and set its optional features:

CNI: antrea

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------
ANTREA_NO_SNAT: false
ANTREA_TRAFFIC_ENCAP_MODE: "encap"
ANTREA_PROXY: false
ANTREA_POLICY: true
ANTREA_TRACEFLOW: false
To enable Calico in a Tanzu Kubernetes cluster, specify the following in the configuration file:
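CNI: calico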
To enable a custom CNI provider in a Tanzu Kubernetes cluster, follow the steps below:
Set CNI: none in the configuration file when you create the cluster. For example:
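CNI: none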
The cluster creation process will not succeed until you apply a CNI to the cluster. You can monitor the cluster creation process in the Cluster API logs on the management cluster. For instructions on how to access the Cluster API logs, see Monitor Workload Cluster Deployments in Cluster API Logs.
After the cluster has been initialized, apply your CNI provider to the cluster:
Get the admin credentials of the cluster. For example:
tanzu cluster kubeconfig get my-cluster --admin
Set the context of kubectl to the cluster. For example:
kubectl config use-context my-cluster-admin@my-cluster
Apply the CNI provider to the cluster:
kubectl apply -f PATH-TO-YOUR-CNI-CONFIGURATION/example.yaml
Monitor the status of the cluster by using the tanzu cluster list command. When the cluster creation completes, the cluster status changes from creating to running. For more information about how to examine your cluster, see Connect to and Examine Tanzu Kubernetes Clusters.
To enable multiple CNI providers on a workload cluster, such as macvlan, ipvlan, SR-IOV, or DPDK, install the Multus package onto a cluster that is already running the Antrea or Calico CNI, and create additional NetworkAttachmentDefinition resources for the other CNIs. You can then create new pods in the cluster that use different network interfaces for different address ranges.
For directions, see Implementing Multiple Pod Network Interfaces with Multus.
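As an illustration, a minimal NetworkAttachmentDefinition for a macvlan interface might look like the following sketch. The name macvlan-conf, the host interface eth0, and the subnet are assumptions for this example, not values required by Tanzu Kubernetes Grid:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf  # hypothetical name, referenced by pod annotations
spec:
  # standard CNI configuration in JSON; eth0 and the subnet are assumed values
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24"
      }
    }

A pod can then attach this secondary interface by including the annotation k8s.v1.cni.cncf.io/networks: macvlan-conf in its metadata.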
On vSphere with NSX-T networking and the Antrea container network interface (CNI), you can configure a Kubernetes workload cluster with routable IP addresses for its worker pods, bypassing network address translation (NAT) for external requests from and to the pods.
Routable IP addresses on pods let you:
Trace outgoing requests to common shared services, because their source IP address is the routable pod IP address, not a NAT address.
Support authenticated incoming requests from the external internet directly to pods, bypassing NAT.
The following sections explain how to deploy Tanzu Kubernetes Grid workload clusters with routable-IP pods. The range of routable IP addresses is set with the cluster's CLUSTER_CIDR configuration variable.
Browse to your NSX-T server and open the Networking tab.
Under Connectivity > Tier-1 Gateways, click Add Tier-1 Gateway and configure a new Tier-1 gateway dedicated to routable-IP pods:
Click Save to save the gateway.
Under Connectivity > Segments, click Add Segment and configure a new NSX-T segment, a logical switch, for the workload cluster nodes containing the routable pods:
Subnets: an IP address range for the segment, for example 188.8.131.52/24. This range should not overlap with DHCP profile Server IP Address values.
Click Save to save the segment.
To deploy a workload cluster that has no-NAT, publicly routable IP addresses for its worker pods:
Create a workload cluster configuration file as described in Create a Tanzu Kubernetes Cluster Configuration File and as follows:
Set CLUSTER_CIDR in the workload cluster configuration file, or prefix the tanzu cluster create command with a CLUSTER_CIDR= setting, as shown in the following step.
Set NSXT_MANAGER_HOST to your NSX-T manager IP address.
Set NSXT_ROUTER_PATH to the inventory path of the newly added Tier-1 gateway for routable IPs. Obtain this from NSX-T manager > Connectivity > Tier-1 Gateways by clicking the menu icon to the left of the gateway name and clicking Copy Path to Clipboard. The path starts with /infra/tier-1s/.
Set the other NSXT_ string variables for accessing NSX-T by following the NSX-T Pod Routing table in the Tanzu CLI Configuration File Variable Reference. Pods can authenticate with NSX-T in one of four ways, with the least secure listed last. For certificate authentication, for example, set NSXT_CLIENT_CERT_DATA and NSXT_CLIENT_CERT_KEY_DATA, and for a CA-issued certificate, NSXT_ROOT_CA_DATA_B64.
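Assembled in a cluster configuration file, these settings might look like the following sketch. It assumes username/password authentication and uses the NSXT_POD_ROUTING_ENABLED variable from the NSX-T Pod Routing table to turn the feature on; all values shown are placeholders:

#! placeholder values for illustration only
NSXT_POD_ROUTING_ENABLED: "true"
CLUSTER_CIDR: 100.96.0.0/11
NSXT_MANAGER_HOST: 192.168.100.100
NSXT_ROUTER_PATH: /infra/tier-1s/my-routable-pods-t1
NSXT_USERNAME: admin
NSXT_PASSWORD: "my-password"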
Run tanzu cluster create as described in Deploy Tanzu Kubernetes Clusters. For example:
$ CLUSTER_CIDR=100.96.0.0/11 tanzu cluster create my-routable-work-cluster -f my-routable-work-cluster-config.yaml
Validating configuration...
Creating workload cluster 'my-routable-work-cluster'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
To test routable IP addresses for your workload pods:
Deploy a webserver to the routable workload cluster, as shown in the sketch after these steps.
Run kubectl get pods -o wide to retrieve EXTERNAL-IP values for your routable pods, and verify that the IP addresses listed are identical and are within the routable CLUSTER_CIDR range.
Run kubectl get nodes -o wide to retrieve EXTERNAL-IP values for the workload cluster nodes, which contain the routable-IP pods.
Log in to a different workload cluster's control plane node:
Run kubectl config use-context CLUSTER-CONTEXT to change context to the different cluster.
Run kubectl get nodes to retrieve the IP address of the current cluster's control plane node.
Run ssh capv@CONTROLPLANE-IP using the IP address you just retrieved.
Run ping and send curl requests to the routable IP address where you deployed the webserver, and confirm its responses.
The ping output should list the webserver's routable pod IP as the source address.
From a browser, log in to NSX-T and navigate to the Tier-1 gateway that you created for routable-IP pods.
Click Static Routes and confirm that routes were created within the routable CLUSTER_CIDR range.
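As a minimal sketch of the first two steps above, you might deploy a hypothetical nginx webserver and then inspect its pods:

kubectl create deployment webserver --image=nginx
kubectl get pods -o wide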
After you delete a workload cluster that contains routable-IP pods, you may need to free the routable IP addresses by deleting them from the T1 router:
In NSX-T manager > Connectivity > Tier-1 Gateways, select your routable-IP gateway.
Under Static Routes, click the number of routes to open the list.
Search for routes that include the deleted cluster name, and delete each one from the menu icon to the left of the route name.
To delete a route through the NSX-T API instead, click Copy Path to Clipboard from the menu icon to the left of the route name, and then run:

curl -i -k -u 'NSXT_USERNAME:NSXT_PASSWORD' -H 'Content-Type: application/json' -H 'X-Allow-Overwrite: true' -X DELETE https://NSXT_MANAGER_HOST/policy/api/v1/STATIC-ROUTE-PATH

where:
NSXT_MANAGER_HOST, NSXT_USERNAME, and NSXT_PASSWORD are your NSX-T manager IP address and credentials.
STATIC-ROUTE-PATH is the path that you just copied to the clipboard. The path starts with /infra/tier-1s/ and includes /static-routes/.
You can configure cluster-specific IP address blocks for management or workload cluster nodes. How you do this depends on the cloud infrastructure that the cluster runs on:
On vSphere, the cluster configuration file's VSPHERE_NETWORK sets the VM network that Tanzu Kubernetes Grid uses for cluster nodes and other Kubernetes objects. IP addresses are allocated to nodes by a DHCP server that runs in this VM network, deployed separately from Tanzu Kubernetes Grid.
If you are using NSX-T networking, you can configure DHCP bindings for your cluster nodes by following Configure DHCP Static Bindings on a Segment in the VMware NSX-T Data Center documentation.
To configure cluster-specific IP address blocks on Amazon EC2, set the following variables in the cluster configuration file as described in the Amazon EC2 table in the Tanzu CLI Configuration File Variable Reference.
Use AWS_PUBLIC_NODE_CIDR to set an IP address range for public nodes.
Use AWS_PRIVATE_NODE_CIDR to set an IP address range for private nodes.
Create a new VPC and address range with AWS_VPC_CIDR, or assign nodes to an existing VPC and address range with AWS_VPC_ID.
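For example, a sketch of a configuration that creates a new VPC with public and private node ranges; the CIDR values are illustrative placeholders:

AWS_VPC_CIDR: 10.0.0.0/16
AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24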
To configure cluster-specific IP address blocks on Azure, set the following variables in the cluster configuration file as described in the Microsoft Azure table in the Tanzu CLI Configuration File Variable Reference.
Set AZURE_NODE_SUBNET_CIDR to create a new VNET with a CIDR block for worker node IP addresses.
Set AZURE_CONTROL_PLANE_SUBNET_CIDR to create a new VNET with a CIDR block for control plane node IP addresses.
Set AZURE_NODE_SUBNET_NAME to assign worker node IP addresses from the range of an existing VNET.
Set AZURE_CONTROL_PLANE_SUBNET_NAME to assign control plane node IP addresses from the range of an existing VNET.
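For example, a sketch that creates a new VNET with separate control plane and worker node blocks; the CIDR values are illustrative placeholders:

AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24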