This topic describes how to customize networking for workload clusters, including using a container network interface (CNI) other than the default Antrea, and supporting publicly routable, no-NAT IP addresses for workload clusters on vSphere with VMware NSX networking.
As a safety precaution in environments where multiple control plane nodes might power off or drop offline for extended periods, adjust the DHCP reservations for the IP addresses of a newly created cluster’s control plane nodes and control plane endpoint, so that the addresses remain static and the leases never expire. The vSphere control plane endpoint is the address that you assign to Kube-Vip or, optionally, to NSX Advanced Load Balancer when you deploy the cluster.
For instructions on how to configure DHCP reservations, see your DHCP server documentation.
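As an illustration only, on a dnsmasq-based DHCP server a reservation with a non-expiring lease might look like the following; the MAC and IP addresses are hypothetical placeholders, and the syntax for your DHCP server may differ:
dhcp-host=00:50:56:aa:bb:cc,192.168.104.2,infinite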
If you are using NSX Advanced Load Balancer for your control plane endpoint and set vSphere control plane endpoint to an FQDN rather than a numeric IP address, reserve the addresses as follows:
Retrieve the control plane IP address that NSX ALB assigned to the cluster:
kubectl get cluster CLUSTER-NAME -o=jsonpath='{.spec.controlPlaneEndpoint} {"\n"}'
Record the IP address listed as "host" in the output, for example 192.168.104.107.
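The output might look similar to the following; the exact values depend on your cluster, and the formatting can vary by kubectl version:
{"host":"192.168.104.107","port":6443}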
Create a DNS A record that associates the FQDN with the IP address that you recorded.
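For example, in a BIND-style zone file the record might look like the following, where tkg-cluster.example.com is a hypothetical FQDN; substitute your own FQDN and the IP address that you recorded:
tkg-cluster.example.com.    IN    A    192.168.104.107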
To test the FQDN, create a new kubeconfig that uses the FQDN instead of the IP address from NSX ALB:
Generate the kubeconfig:
tanzu cluster kubeconfig get CLUSTER-NAME --admin --export-file ./KUBECONFIG-TEST
Edit the kubeconfig file KUBECONFIG-TEST to replace the IP address with the FQDN. For example, replace this:
server: https://192.168.104.107:443
with this:
server: https://CONTROLPLANE-FQDN:443
Retrieve the cluster’s pods using the modified kubeconfig:
kubectl get pods -A --kubeconfig=./KUBECONFIG-TEST
If the output lists the pods, DNS works for the FQDN.
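For example, output similar to the following indicates that the FQDN resolves and the connection works; the pod names, counts, and ages shown here are illustrative and will differ in your cluster:
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-4x7gz         2/2     Running   0          3d
kube-system   coredns-5d4566cc4c-xzt9p   1/1     Running   0          3d
kube-system   kube-proxy-9jlmc           1/1     Running   0          3d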
When you use the Tanzu CLI to deploy a workload cluster, an Antrea container network interface (CNI) is automatically enabled in the cluster. Each Tanzu Kubernetes release (TKr) includes compatible versions of Kubernetes and Antrea. For information on how Antrea is versioned in TKrs, see Tanzu Kubernetes releases and Component Versions. Alternatively, you can enable a Calico CNI or your own CNI provider.
Because auto-managed packages are managed by Tanzu Kubernetes Grid, you typically do not need to update their configurations. However, if you want to create a workload cluster that uses Calico as the CNI, you can do so by following the steps below.
To install calico instead of antrea on a class-based cluster, follow the steps below:
Create a YAML file that contains the following Kubernetes objects:
apiVersion: cni.tanzu.vmware.com/v1alpha1
kind: CalicoConfig
metadata:
  name: CLUSTER-NAME
  namespace: CLUSTER-NAMESPACE
spec:
  calico:
    config:
      vethMTU: 0
---
apiVersion: run.tanzu.vmware.com/v1alpha3
kind: ClusterBootstrap
metadata:
  annotations:
    tkg.tanzu.vmware.com/add-missing-fields-from-tkr: TKR-VERSION
  name: CLUSTER-NAME
  namespace: CLUSTER-NAMESPACE
spec:
  additionalPackages: # Customize additional packages
  - refName: metrics-server*
  - refName: secretgen-controller*
  - refName: pinniped*
  cni:
    refName: calico*
    valuesFrom:
      providerRef:
        apiGroup: cni.tanzu.vmware.com
        kind: CalicoConfig
        name: CLUSTER-NAME
Where:
CLUSTER-NAME is the name of the workload cluster that you intend to create.
CLUSTER-NAMESPACE is the namespace of the workload cluster.
TKR-VERSION is the version of the Tanzu Kubernetes release (TKr) that you intend to use for the workload cluster. For example, v1.23.5+vmware.1-tkg.1.
Apply the file by running the kubectl apply -f command against the Supervisor cluster.
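For example, assuming you saved the objects above to a hypothetical file named calico-bootstrap.yaml and your kubectl context is set to the Supervisor cluster:
kubectl apply -f calico-bootstrap.yaml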
Create a YAML file that contains the following configuration:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: CLUSTER-NAME
  namespace: CLUSTER-NAMESPACE
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["SERVICES-CIDR"]
    pods:
      cidrBlocks: ["PODS-CIDR"]
    serviceDomain: "SERVICE-DOMAIN"
  topology:
    class: tanzukubernetescluster
    version: TKR-VERSION
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool
        name: NODE-POOL-NAME
        replicas: 1
    variables:
    - name: vmClass
      value: VM-CLASS
    # Default storageClass for control plane and node pool
    - name: storageClass
      value: STORAGE-CLASS-NAME
Where:
CLUSTER-NAME is the name of the workload cluster that you intend to create.
CLUSTER-NAMESPACE is the namespace of the workload cluster.
SERVICES-CIDR is the CIDR block for services. For example, 198.51.100.0/12.
PODS-CIDR is the CIDR block for pods. For example, 192.0.2.0/16.
SERVICE-DOMAIN is the service domain name. For example, cluster.local.
TKR-VERSION is the version of the TKr that you intend to use for the workload cluster. For example, v1.23.5+vmware.1-tkg.1.
NODE-POOL-NAME is the name of the node pool for machineDeployments.
VM-CLASS is the name of the VM class that you want to use for your cluster. For example, best-effort-small.
STORAGE-CLASS-NAME is the name of the storage class that you want to use for your cluster. For example, wcpglobal-storage-profile.
For example:
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-workload-cluster
  namespace: my-workload-cluster-namespace
spec:
  clusterNetwork:
    services:
      cidrBlocks: ["198.51.100.0/12"]
    pods:
      cidrBlocks: ["192.0.2.0/16"]
    serviceDomain: "cluster.local"
  topology:
    class: tanzukubernetescluster
    version: v1.23.5+vmware.1-tkg.1
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: node-pool
        name: my-node-pool
        replicas: 1
    variables:
    - name: vmClass
      value: best-effort-small
    # Default storageClass for control plane and node pool
    - name: storageClass
      value: wcpglobal-storage-profile
Create the workload cluster by passing the YAML file that you created in the step above to the -f option of the tanzu cluster create command.
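For example, assuming you saved the Cluster object above to a hypothetical file named my-workload-cluster.yaml:
tanzu cluster create -f my-workload-cluster.yaml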
To install calico instead of antrea on a workload cluster of type TanzuKubernetesCluster, set the CNI configuration variable in the cluster configuration file that you plan to use to create your workload cluster, and then pass the file to the -f option of the tanzu cluster create command. For example, CNI: calico.
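For example, a minimal sketch that assumes a hypothetical cluster configuration file named my-tkc-config.yaml containing the rest of your cluster settings. In the configuration file, set:
CNI: calico
Then create the cluster by passing the file to -f:
tanzu cluster create -f my-tkc-config.yaml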
To enable multiple CNI providers on a workload cluster, such as macvlan, ipvlan, SR-IOV, or DPDK, install the Multus package onto a cluster that is already running the Antrea or Calico CNI, and create additional NetworkAttachmentDefinition resources for the additional CNIs. You can then create new pods in the cluster that use different network interfaces for different address ranges.
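For example, the following is a minimal sketch of a NetworkAttachmentDefinition for a macvlan interface. The interface name eth0, the subnet, and the resource name macvlan-conf are hypothetical placeholders; the settings that you need depend on your node network:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.0.2.0/24"
      }
    }
A pod can then request this secondary interface by adding the annotation k8s.v1.cni.cncf.io/networks: macvlan-conf to its metadata.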
For directions, see Deploy Multus on Workload Clusters.
On vSphere with NSX networking and the Antrea container network interface (CNI), you can configure a TKC-based workload cluster with routable IP addresses for its worker pods, bypassing network address translation (NAT) for external requests from and to the pods.
For more information on the benefits of routable pod IP addresses and how to deploy pods with routable, no-NAT IP addresses, see TKC with Routable Pods Network.