This topic describes how to customize node networking for workload clusters, including customizing node IP addresses and configuring DHCP reservations, Node IPAM, and IPv6 on vSphere.
For a new cluster on vSphere, you need to create DHCP reservations for its nodes and may also need to create a DNS record for its control plane endpoint:
DHCP reservations for cluster nodes:
As a safety precaution in environments where multiple control plane nodes might power off or drop offline for extended periods, adjust the DHCP reservations for the IP addresses of a newly-created cluster’s control plane and worker nodes, so that the addresses remain static and the leases never expire.
For instructions on how to configure DHCP reservations, see your DHCP server documentation.
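For illustration only, if your environment happens to use dnsmasq as its DHCP service (an assumption; the MAC and IP values below are hypothetical), a per-node reservation with a lease that never expires could look like this:
# dnsmasq.conf entry: pin this node's MAC address to a fixed IP with an infinite lease
dhcp-host=00:50:56:aa:bb:01,192.168.100.21,infinite
Consult your own DHCP server documentation for the equivalent reservation syntax.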
DNS record for control plane endpoint:
If you are using NSX Advanced Load Balancer, not Kube-Vip, for your control plane endpoint and set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN rather than a numeric IP address, reserve the addresses as follows:
Retrieve the control plane IP address that NSX ALB assigned to the cluster:
kubectl get cluster CLUSTER-NAME -o=jsonpath='{.spec.controlPlaneEndpoint} {"\n"}'
Record the IP address listed as "host" in the output, for example 192.168.104.107.
Create a DNS A record that associates your FQDN with the IP address you recorded.
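Before testing with a kubeconfig, you can optionally confirm that the record resolves to the address you recorded, using standard DNS lookup tools; CONTROLPLANE-FQDN is the same placeholder used later in this procedure:
nslookup CONTROLPLANE-FQDN
# or, if dig is available:
dig +short CONTROLPLANE-FQDN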
To test the FQDN, create a new kubeconfig that uses the FQDN instead of the IP address from NSX ALB:
Generate the kubeconfig:
tanzu cluster kubeconfig get CLUSTER-NAME --admin --export-file ./KUBECONFIG-TEST
Edit the kubeconfig file KUBECONFIG-TEST to replace the IP address with the FQDN. For example, replace this:
server: https://192.168.104.107:443
with this:
server: https://CONTROLPLANE-FQDN:443
Retrieve the cluster’s pods using the modified kubeconfig:
kubectl get pods -A --kubeconfig=./KUBECONFIG-TEST
If the output lists the pods, DNS works for the FQDN.
You can configure cluster-specific IP address blocks for nodes in standalone management clusters and the workload clusters they deploy.
On vSphere, the cluster configuration file's VSPHERE_NETWORK sets the VM network that Tanzu Kubernetes Grid uses for cluster nodes and other Kubernetes objects. IP addresses are allocated to nodes by a DHCP server that runs in this VM network, deployed separately from Tanzu Kubernetes Grid.
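For example, a flat cluster configuration file entry for this setting might look like the following; the network name is hypothetical and must match a network in your vSphere inventory:
# vSphere VM network for cluster nodes; a DHCP server must be reachable in this network
VSPHERE_NETWORK: "VM Network"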
If you are using NSX networking, you can configure DHCP bindings for your cluster nodes by following Configure DHCP Static Bindings on a Segment in the VMware NSX-T Data Center documentation.
Note: For v4.0 and later, VMware NSX-T Data Center is renamed to “VMware NSX.”
With Node IPAM, an in-cluster IPAM provider allocates and manages IP addresses for cluster nodes during cluster creation and scaling, eliminating any need to configure external DHCP.
You can configure Node IPAM for standalone management clusters on vSphere and the class-based workload clusters that they manage. The procedure below configures Node IPAM for class-based workload clusters; to configure Node IPAM for a management cluster, see Configure Node IPAM in Management Cluster Configuration for vSphere.
When configuring Node IPAM for a new or existing workload cluster, you specify an internal IP pool from which the IPAM provider allocates static IP addresses, and a gateway address.
When allocating addresses to cluster nodes, Node IPAM always picks the lowest available address in the pool.
To configure Node IPAM, you need kubectl and the Tanzu CLI installed locally.
Node IPAM has the following limitations in TKG v2.5:
A workload cluster’s Node IPAM pool can be defined by two different object types, depending on how its addresses are shared with other clusters:
InClusterIPPool configures IP pools that are only available to workload clusters in the same management cluster namespace, such as default.
GlobalInClusterIPPool configures IP pools with addresses that can be allocated to workload clusters across multiple namespaces.
To configure a new or existing cluster with Node IPAM:
Create an IP pool object definition file my-ip-pool.yaml that sets a range of IP addresses from a subnet that TKG can use to allocate static IP addresses for your workload cluster. Define the object as either an InClusterIPPool or a GlobalInClusterIPPool based on how you want to scope the IP pool, for example:
An InClusterIPPool named inclusterippool for workload clusters in the namespace default that contains the range 10.10.10.2-10.10.10.100 plus 10.10.10.102 and 10.10.10.104:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool
  namespace: default
spec:
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  - 10.10.10.102
  - 10.10.10.104
  prefix: 24
Note: Previous TKG versions used the v1alpha1 version of the InClusterIPPool object, which only supported a contiguous IP pool range specified by start and end. Upgrading clusters to v2.5.x converts their IP pools to the new structure.
A GlobalInClusterIPPool named inclusterippool to share across namespaces, containing the same addresses as the InClusterIPPool above:
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: inclusterippool
spec:
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  - 10.10.10.102
  - 10.10.10.104
  prefix: 24
Create the IP pool object:
kubectl apply -f my-ip-pool.yaml
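To confirm that the pool object was created in the expected namespace, you can list the resource by its fully qualified name; this is a quick check that assumes the pool was created in default:
kubectl -n default get inclusterippools.ipam.cluster.x-k8s.io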
Configure the workload cluster to use the IP pool within either a flat cluster configuration file or a Kubernetes-style object spec, as described in Configuration Files.
For example, in a flat cluster configuration file:
# The name of the InClusterIPPool object specified above
NODE_IPAM_IP_POOL_NAME: inclusterippool
CONTROL_PLANE_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
WORKER_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
Dual-Stack: To configure a workload cluster with dual-stack Node IPAM networking, include NODE_IPAM_SECONDARY_* settings as described in Configure Node IPAM with Dual-Stack Networking.
Or, in a Kubernetes-style object spec:
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
spec:
  topology:
    variables:
    - name: network
      value:
        addressesFromPools:
        - apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: inclusterippool
    - name: controlplane
      value:
        network:
          nameservers: [10.10.10.10,10.10.10.11]
    - name: worker
      value:
        network:
          nameservers: [10.10.10.10,10.10.10.11]
Now you can deploy your workload cluster.
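For example, using the Tanzu CLI with the configuration file you prepared above; the file name is hypothetical:
tanzu cluster create --file my-workload-cluster-config.yaml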
To see whether cluster nodes have IP addresses assigned, run kubectl get to list the IaaS-specific machine objects, vspherevms, and check their IPAddressClaimed status. True means the node’s address claim is successful, and if the status is False, the command output reports a Reason why the condition is not ready:
kubectl -n CLUSTER-NAMESPACE get vspherevms
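To inspect the full condition details for a single VM, one option is to dump its object; VM-NAME below is a placeholder taken from the previous command's output:
# look under status.conditions for the IPAddressClaimed condition
kubectl -n CLUSTER-NAMESPACE get vspherevm VM-NAME -o yaml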
To see the IP address claims, list ipaddressclaims. For each machine, the addressesFromPools entry causes one IPAddressClaim to be created:
kubectl -n CLUSTER-NAMESPACE get ipaddressclaims
To see the IP addresses, list ipaddress. The in-cluster IPAM provider should detect each IPAddressClaim and create a corresponding IPAddress object:
kubectl -n CLUSTER-NAMESPACE get ipaddress
When all claims for a given VM have been matched with IP addresses, CAPV writes the assigned IP addresses into the VM’s cloud-init metadata and creates the VM. To see the IP address reconciliation steps, see the CAPV and Cluster API IPAM Provider (CAIP) logs:
kubectl logs -n capv-system deployment/capv-controller-manager
kubectl logs -n caip-in-cluster-system deployment/caip-in-cluster-controller-manager
If you used a previous release of Tanzu Kubernetes Grid to create a class-based cluster that implements DHCP, you can update that cluster so that it uses Node IPAM instead.
For information about how to migrate an existing management cluster from DHCP to Node IPAM, see Migrate an Existing Management Cluster from DHCP to Node IPAM in Deploying and Managing Tanzu Kubernetes Grid 2.5 Standalone Management Clusters on vSphere.
To be able to migrate a cluster from DHCP to Node IPAM, the cluster must be a class-based (ClusterClass) cluster.
To migrate a cluster from DHCP to Node IPAM, perform the following steps.
Create an InClusterIPPool or GlobalInClusterIPPool object. See the previous section for descriptions of InClusterIPPool and GlobalInClusterIPPool objects.
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: inclusterippool
  # The namespace should match where the
  # cluster is deployed.
  namespace: default
spec:
  # Replace the IPs below with ones that work for your
  # environment; the following IPs only serve as
  # an example.
  gateway: 10.10.10.1
  # addresses is an array where each entry can
  # either be a single IP address, a range
  # (e.g. 10.0.0.2-10.0.0.100), or a CIDR (e.g.
  # 10.0.0.32/27).
  addresses:
  - 10.10.10.2-10.10.10.100
  - 10.10.10.102
  - 10.10.10.104
  prefix: 24
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: GlobalInClusterIPPool
metadata:
  name: inclusterippool
spec:
  # Replace the IPs below with ones that work for your
  # environment; the following IPs only serve as
  # an example.
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  prefix: 24
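As in the previous section, apply the pool definition to the management cluster before modifying the cluster; the file name below is hypothetical:
kubectl apply -f my-ip-pool.yaml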
Modify the configuration of the cluster that you are migrating to add references to the pool and nameservers. You must add a pool reference to addressesFromPools under the network variable. You must also specify nameservers under the controlplane and worker variables.
spec:
  topology:
    variables:
    - name: network
      value:
        addressesFromPools:
        - apiGroup: ipam.cluster.x-k8s.io
          kind: GlobalInClusterIPPool
          name: inclusterippool
    - name: controlplane
      value:
        network:
          nameservers: [10.10.10.10]
    - name: worker
      value:
        network:
          nameservers: [10.10.10.10]
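One way to make this change, assuming your kubectl context points to the management cluster and the names below are placeholders, is to edit the Cluster object directly:
kubectl -n CLUSTER-NAMESPACE edit cluster CLUSTER-NAME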
It is possible to migrate only the control plane nodes, or only a subset of worker nodes, by using variable overrides in the cluster configuration YAML file. This allows you to perform the migration in smaller maintenance windows. In the following example, the control plane nodes are migrated to use IPAM while the worker nodes continue to use DHCP.
In the worker overrides, you must leave addressesFromPools empty to prevent the worker nodes from inheriting the pool that the control plane picks up from the cluster-level variables.
controlPlane:
  metadata:
    annotations:
      run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=photon
  replicas: 1
variables:
- name: network
  value:
    addressesFromPools:
    - apiGroup: ipam.cluster.x-k8s.io
      kind: InClusterIPPool
      name: mgmt-cluster-nimbus
    ipv6Primary: false
- name: controlplane
  value:
    network:
      nameservers: [10.10.10.10]
[...]
workers:
  machineDeployments:
  - class: tkg-worker
    metadata:
      annotations:
        run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=photon
    name: md-0
    replicas: 1
    strategy:
      type: RollingUpdate
    variables:
      overrides:
      - name: network
        value:
          addressesFromPools: []
          ipv6Primary: false
You can also do the reverse, configuring the worker overrides to use IPAM while the control plane continues to use DHCP.
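A minimal sketch of the reverse arrangement, under the same assumptions as the example above (pool, class, and MachineDeployment names are illustrative): the cluster-level network variable leaves addressesFromPools empty so the control plane keeps using DHCP, and the worker override references the pool.
variables:
# Cluster-level network variable: no pool reference, so the control plane stays on DHCP
- name: network
  value:
    addressesFromPools: []
    ipv6Primary: false
workers:
  machineDeployments:
  - class: tkg-worker
    name: md-0
    replicas: 1
    variables:
      overrides:
      # Worker-level override: worker nodes allocate addresses from the pool
      - name: network
        value:
          addressesFromPools:
          - apiGroup: ipam.cluster.x-k8s.io
            kind: InClusterIPPool
            name: inclusterippool
          ipv6Primary: false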
If the pool is configured with addresses that are not routable on that network, nodes are assigned IP addresses but never become healthy. Create a new pool with routable addresses and update the cluster to reference the new pool.
If the configured pool for a cluster does not exist, the VM creation will run indefinitely. The CAPV logs will report that the pool cannot be found. Recreate the pool to allow reconciliation to continue.
If nameservers are not configured, the new VMs will come up but images will often fail to pull. Nameserver information typically comes from DHCP, so it must be configured explicitly when using IPAM. Adding nameservers to the cluster allows the VMs to redeploy successfully.
You can run management and workload clusters in an IPv6-only single-stack networking environment on vSphere with Kube-Vip, using Ubuntu-based nodes. You can also deploy clusters with both IPv4 and IPv6 IP families.
IPv6: For information about how to create IPv6 management clusters and workload clusters, see IPv6 Networking in Deploying and Managing TKG 2.5 Standalone Management Clusters on vSphere.
Dual-Stack: For information about how to create IPv4/IPv6 dual-stack management clusters and workload clusters, see IPv4/IPv6 Dual-Stack Networking in Deploying and Managing TKG 2.5 Standalone Management Clusters on vSphere.