This topic describes how to customize node networking for workload clusters, including customizing node IP addresses and configuring DHCP reservations, Node IPAM, and IPv6 on vSphere.
For a new cluster on vSphere, you need to create DHCP reservations for its nodes and may also need to create a DNS record for its control plane endpoint:
DHCP reservations for cluster nodes:
As a safety precaution in environments where multiple control plane nodes might power off or drop offline for extended periods, adjust the DHCP reservations for the IP addresses of a newly-created cluster’s control plane and worker nodes, so that the addresses remain static and the leases never expire.
For instructions on how to configure DHCP reservations, see your DHCP server documentation.
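For example, on an ISC DHCP server you could pin a node's address to its network adapter with a host declaration along the lines of the sketch below; the host name, MAC address, and IP address are placeholders, and the exact mechanism depends on your DHCP server:
host tkg-workload-md-0-abcde {
  hardware ethernet 00:50:56:aa:bb:cc;   # MAC address of the node VM's network adapter (placeholder)
  fixed-address 192.168.104.21;          # static address to reserve for this node (placeholder)
}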
DNS record for control plane endpoint:
If you are using NSX Advanced Load Balancer, not Kube-Vip, for your control plane endpoint and set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN rather than a numeric IP address, reserve the addresses as follows:
Retrieve the control plane IP address that NSX ALB assigned to the cluster:
kubectl get cluster CLUSTER-NAME -o=jsonpath='{.spec.controlPlaneEndpoint} {"\n"}'
Record the IP address listed as "host" in the output, for example 192.168.104.107.
Create a DNS A record that associates the FQDN with the IP address you recorded.
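For example, if you manage the zone with BIND, a minimal sketch of the record might look like the following; the record name and zone are placeholders for your environment:
; zone file for example.com (hypothetical zone)
tkg-cluster-cp    IN    A    192.168.104.107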
To test the FQDN, create a new kubeconfig that uses the FQDN instead of the IP address from NSX ALB:
Generate the kubeconfig:
tanzu cluster kubeconfig get CLUSTER-NAME --admin --export-file ./KUBECONFIG-TEST
Edit the kubeconfig file KUBECONFIG-TEST to replace the IP address with the FQDN. For example, replace this:
server: https://192.168.104.107:443
with this:
server: https://CONTROLPLANE-FQDN:443
Retrieve the cluster’s pods using the modified kubeconfig:
kubectl get pods -A --kubeconfig=./KUBECONFIG-TEST
If the output lists the pods, DNS works for the FQDN.
You can configure cluster-specific IP address blocks for nodes in standalone management clusters and the workload clusters they deploy. How you do this depends on the cloud infrastructure that the cluster runs on:
On vSphere, the cluster configuration file's VSPHERE_NETWORK sets the VM network that Tanzu Kubernetes Grid uses for cluster nodes and other Kubernetes objects. IP addresses are allocated to nodes by a DHCP server that runs in this VM network, deployed separately from Tanzu Kubernetes Grid.
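For example, in the cluster configuration file you might point the variable at a port group in your vSphere inventory; the network name below is an assumption about your environment:
VSPHERE_NETWORK: "VM Network"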
If you are using NSX networking, you can configure DHCP bindings for your cluster nodes by following Configure DHCP Static Bindings on a Segment in the VMware NSX-T Data Center documentation.
Note: For v4.0+, VMware NSX-T Data Center is renamed to "VMware NSX."
To configure cluster-specific IP address blocks on Amazon Web Services (AWS), set the following variables in the cluster configuration file as described in the AWS table in the Configuration File Variable Reference:
AWS_PUBLIC_NODE_CIDR to set an IP address range for public nodes. Make additional ranges available by setting AWS_PRIVATE_NODE_CIDR_1 or AWS_PRIVATE_NODE_CIDR_2.
AWS_PRIVATE_NODE_CIDR to set an IP address range for private nodes. Make additional ranges available by setting AWS_PRIVATE_NODE_CIDR_1 and AWS_PRIVATE_NODE_CIDR_2.
All node CIDR ranges must lie within the cluster's VPC range, which defaults to 10.0.0.0/16.
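For example, a cluster configuration file might carve node ranges out of the VPC range like the following sketch; the CIDR values are illustrative and must fit within your VPC:
AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24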
To configure cluster-specific IP address blocks on Azure, set the following variables in the cluster configuration file as described in the Microsoft Azure table in the Configuration File Variable Reference:
AZURE_NODE_SUBNET_CIDR to create a new VNet with a CIDR block for worker node IP addresses.
AZURE_CONTROL_PLANE_SUBNET_CIDR to create a new VNet with a CIDR block for control plane node IP addresses.
AZURE_NODE_SUBNET_NAME to assign worker node IP addresses from the range of an existing VNet.
AZURE_CONTROL_PLANE_SUBNET_NAME to assign control plane node IP addresses from the range of an existing VNet.
With Node IPAM, an in-cluster IPAM provider allocates and manages IP addresses for cluster nodes during cluster creation and scaling, eliminating any need to configure external DHCP.
You can configure Node IPAM for standalone management clusters on vSphere and the class-based workload clusters that they manage. The procedure below configures Node IPAM for class-based workload clusters; to configure Node IPAM for a management cluster, see Configure Node IPAM in Management Cluster Configuration for vSphere.
Note: This procedure does not apply to TKG with a vSphere with Tanzu Supervisor or with a standalone management cluster on AWS or Azure.
When configuring Node IPAM for a new or existing workload cluster, the user specifies an internal IP pool that the IPAM provider allocates static IP addresses from, and a gateway address.
When allocating addresses to cluster nodes, Node IPAM always picks the lowest available address in the pool.
Prerequisites: kubectl and the Tanzu CLI installed locally.
Node IPAM has the following limitations in TKG v2.3:
A workload cluster’s Node IPAM pool can be defined by two different object types, depending on how its addresses are shared with other clusters:
InClusterIPPool configures IP pools that are only available to workload clusters in the same management cluster namespace, such as default.
GlobalInClusterIPPool configures IP pools with addresses that can be allocated to workload clusters across multiple namespaces.
To configure a new or existing cluster with Node IPAM:
Create an IP pool object definition file my-ip-pool.yaml that sets a range of IP addresses from a subnet that TKG can use to allocate static IP addresses for your workload cluster. Define the object as either an InClusterIPPool or a GlobalInClusterIPPool based on how you want to scope the IP pool, for example:
InClusterIPPool: to create an IP pool inclusterippool for workload clusters in the namespace default that contains the range 10.10.10.2-10.10.10.100 plus 10.10.10.102 and 10.10.10.104:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool
  namespace: default
spec:
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  - 10.10.10.102
  - 10.10.10.104
  prefix: 24
Note: Previous TKG versions used the v1alpha1 version of the InClusterIPPool object, which only supported a contiguous IP pool range specified by start and end, as described in the TKG v2.1 documentation. Upgrading clusters to v2.3 converts their IP pools to the new structure.
GlobalInClusterIPPool: to create an IP pool inclusterippool to share across namespaces that contains the same addresses as the InClusterIPPool above:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: inclusterippool
spec:
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  - 10.10.10.102
  - 10.10.10.104
  prefix: 24
Create the IP pool object:
kubectl apply -f my-ip-pool.yaml
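To confirm that the pool exists in the expected scope, you can list the pool objects. This is a quick optional check rather than a documented step; the full resource names are used to avoid ambiguity, and GlobalInClusterIPPool objects are cluster-scoped:
kubectl -n default get inclusterippools.ipam.cluster.x-k8s.io
kubectl get globalinclusterippools.ipam.cluster.x-k8s.io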
Configure the workload cluster to use the IP pool within either a flat cluster configuration file or a Kubernetes-style object spec, as described in Configuration Files.
For example:
Flat configuration file (to create new clusters):
# The name of the InClusterIPPool object specified above
NODE_IPAM_IP_POOL_NAME: inclusterippool
CONTROL_PLANE_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
WORKER_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
Object spec (to create new clusters or modify existing ones):
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
topology:
variables:
- name: network
value:
addressesFromPools:
- apiGroup: ipam.cluster.x-k8s.io
kind: InClusterIPPool
name: inclusterippool
- name: controlplane
value:
network:
nameservers: [10.10.10.10,10.10.10.11]
- name: worker
value:
network:
nameservers: [10.10.10.10,10.10.10.11]
Now you can deploy your workload cluster.
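For example, assuming you saved the flat configuration above as my-cluster-config.yaml (a hypothetical file name) and want a cluster named my-cluster, you could run:
tanzu cluster create my-cluster --file my-cluster-config.yaml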
To see whether cluster nodes have IP addresses assigned, run kubectl get to list the IaaS-specific machine objects, vspherevms, and check their IPAddressClaimed status. True means the node's address claim is successful; if the status is False, the command output reports a Reason why the condition is not ready:
kubectl -n CLUSTER-NAMESPACE get vspherevms
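As an optional convenience, you can also block until a specific machine's claim succeeds by waiting on that condition; VSPHEREVM-NAME is a placeholder for one of the names returned by the command above:
kubectl -n CLUSTER-NAMESPACE wait vspherevm/VSPHEREVM-NAME --for=condition=IPAddressClaimed --timeout=120s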
To see the IP address claims, list ipaddressclaims. For each machine, the addressesFromPools entry causes one IPAddressClaim to be created:
kubectl -n CLUSTER-NAMESPACE get ipaddressclaims
To see the IP addresses, list ipaddress. The in-cluster IPAM Provider should detect each IPAddressClaim and create a corresponding IPAddress object:
kubectl -n CLUSTER-NAMESPACE get ipaddress
When all claims for a given VM have been matched with IP addresses, CAPV writes the assigned IP addresses into the VM’s cloud-init metadata and creates the VM. To see the IP address reconciliation steps, see the CAPV and Cluster API IPAM Provider (CAIP) logs:
kubectl logs -n capv-system deployment/capv-controller-manager
kubectl logs -n caip-in-cluster-system deployment/caip-in-cluster-controller-manager
You can run management and workload clusters in an IPv6-only single-stack networking environment on vSphere with Kube-Vip, using Ubuntu-based nodes.
Notes: You cannot create IPv6 clusters with a vSphere with Tanzu Supervisor Cluster. You cannot register IPv6 clusters with Tanzu Mission Control. NSX Advanced Load Balancer services and dual-stack IPv4/IPv6 networking are not currently supported.
Prerequisites:
Do the following on your bootstrap machine to deploy a management cluster into an IPv6 networking environment:
Configure Linux to accept router advertisements to ensure the default IPv6 route is not removed from the routing table when the Docker service starts. For more information, see Docker CE deletes IPv6 Default route.
sudo sysctl net.ipv6.conf.eth0.accept_ra=2
Create a masquerade rule for the bootstrap cluster so that it can send outgoing traffic:
sudo ip6tables -t nat -A POSTROUTING -s fc00:f853:ccd:e793::/64 ! -o docker0 -j MASQUERADE
For more information about masquerade rules, see MASQUERADE.
Set the following variables in the configuration file for the management cluster:
TKG_IP_FAMILY to ipv6.
VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
CLUSTER_CIDR and SERVICE_CIDR, which default to fd00:100:64::/48 and fd00:100:96::/108 respectively.
Deploy the management cluster by running tanzu mc create, as described in Deploy Management Clusters from a Configuration File.
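For example, the IPv6 portion of the management cluster configuration file might look like the following sketch; the control plane endpoint address is a placeholder for a static IPv6 address in your network, and the CIDRs shown are the defaults listed above:
TKG_IP_FAMILY: ipv6
VSPHERE_CONTROL_PLANE_ENDPOINT: "fd00:100:64:1::10"
CLUSTER_CIDR: fd00:100:64::/48
SERVICE_CIDR: fd00:100:96::/108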
If you have deployed an IPv6 management cluster, deploy an IPv6 workload cluster as follows:
Set the following variables in the configuration file for the workload cluster:
TKG_IP_FAMILY to ipv6.
VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
CLUSTER_CIDR and SERVICE_CIDR, which default to fd00:100:64::/48 and fd00:100:96::/108 respectively.
Deploy the workload cluster as described in Create Workload Clusters.
Note: This feature is in the unsupported Technical Preview state; see TKG Feature States.
The dual-stack feature lets you deploy clusters with IPv4 and IPv6 IP families. However, the primary IP family is IPv4. Before experimenting with this feature, configure your vCenter Server to support both IPv4 and IPv6 connectivity.
The following are the limitations of the dual-stack feature in this release:
The dual-stack feature supports only vSphere as the infrastructure as a service (IaaS) product.
You cannot configure dual-stack on clusters with Photon OS nodes. Only clusters configured with an OS_NAME of ubuntu are supported.
You cannot configure dual-stack networking for vSphere with Tanzu Supervisor Clusters or the workload clusters that they create.
You cannot deploy a dual-stack management cluster with the installer interface.
You cannot use dual-stack or IPv6 services with the load balancer services provided by NSX Advanced Load Balancer (ALB). You can use kube-vip as the control plane endpoint provider for a dual-stack cluster. Using NSX ALB as the control plane endpoint provider for a dual-stack cluster has not been validated.
Only the core add-on components, such as Antrea, Calico, CSI, CPI, and Pinniped, have been validated for dual-stack support in this release.
To configure dual-stack on the clusters:
Set the dual-stack feature flag:
a. To enable the feature on the management cluster, run the following command:
tanzu config set features.management-cluster.dual-stack-ipv4-primary true
b. To enable the feature on the workload cluster, run the following command:
tanzu config set features.cluster.dual-stack-ipv4-primary true
Deploy Management Clusters or Create Workload Clusters, as required.
In the cluster configuration file, set TKG_IP_FAMILY: ipv4,ipv6.
Note: There are two CIDRs for each variable. The IP families of these CIDRs follow the order of the configured TKG_IP_FAMILY. The largest CIDR range that is permitted for the IPv4 addresses is /12, and the largest IPv6 SERVICE_CIDR range is /108. If you do not set the CIDRs, the default values are used.
Set the following configuration file parameter, if you are using Antrea as the CNI for your cluster:
ANTREA_ENDPOINTSLICES: true
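Taken together, the dual-stack settings in a cluster configuration file might look like the following sketch; the CIDR values are illustrative rather than defaults, they are listed IPv4-first to match the TKG_IP_FAMILY order, and the comma-separated format is an assumption based on the note above:
TKG_IP_FAMILY: ipv4,ipv6
CLUSTER_CIDR: 100.96.0.0/12,fd00:100:96::/48
SERVICE_CIDR: 100.64.0.0/13,fd00:100:64::/108
ANTREA_ENDPOINTSLICES: true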
Services that have an ipFamilyPolicy of PreferDualStack or RequireDualStack specified in their specs can now be accessed through IPv4 or IPv6.
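For example, a minimal Service manifest requesting dual-stack assignment might look like the following sketch; the demo selector and port numbers are placeholders:
apiVersion: v1
kind: Service
metadata:
  name: demo-dual-stack
spec:
  ipFamilyPolicy: PreferDualStack   # ask for both IPv4 and IPv6 cluster IPs when available
  selector:
    app: demo                       # placeholder label for your workload's pods
  ports:
  - port: 80
    targetPort: 8080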
Note: The end-to-end tests for the dual-stack feature in upstream Kubernetes can fail because a cluster node advertises only its primary IP address (in this case, the IPv4 address) as its IP address.