If you used a previous release of Tanzu Kubernetes Grid to create a management cluster that uses DHCP, you can update that management cluster so that it uses Node IPAM instead.
Note: This procedure describes how to update an existing management cluster to use Node IPAM. For information about how to create a new management cluster that uses Node IPAM directly, see Configure Node IPAM.
To be able to migrate a management cluster from DHCP to Node IPAM, you must upgrade the management cluster to TKG v2.5.x before you perform the migration.
To migrate a cluster from DHCP to Node IPAM, perform the following steps.
Create an InClusterIPPool object.
InClusterIPPool configures IP pools that are only available to clusters in the management cluster namespace, such as tkg-system. The following rules apply when creating the InClusterIPPool for a management cluster:
- The namespace of the InClusterIPPool must be tkg-system. This is the same namespace as the management cluster's Cluster resource.
- The InClusterIPPool should have the same name as the management cluster. Only use this IP pool for the management cluster, not for workload clusters.
- Add the following label and annotation to the InClusterIPPool to ensure proper interaction with Cluster API on resource migration, for example when resources are migrated from the management cluster to a bootstrap cluster or the reverse:
  - Label: "clusterctl.cluster.x-k8s.io/move-hierarchy": ""
  - Annotation: "ipam.cluster.x-k8s.io/skip-validate-delete-webhook": ""
For example:
---
apiVersion: ipam.cluster.x-k8s.io/v1alpha2
kind: InClusterIPPool
metadata:
  name: my-management-cluster
  namespace: tkg-system
  labels:
    "clusterctl.cluster.x-k8s.io/move-hierarchy": ""
  annotations:
    "ipam.cluster.x-k8s.io/skip-validate-delete-webhook": ""
spec:
  gateway: 10.10.10.1
  addresses:
  - 10.10.10.2-10.10.10.100
  prefix: 24
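Before applying the pool, you can sanity-check that the gateway and the address range fall inside the declared prefix, and that the pool does not include the gateway address itself. A minimal sketch using Python's standard ipaddress module, assuming a gateway of 10.10.10.1 and the example's address range (this is an illustrative check, not part of TKG):

```python
import ipaddress

# Values mirroring the example InClusterIPPool spec
gateway = ipaddress.ip_address("10.10.10.1")
prefix = 24
start, end = (ipaddress.ip_address(a)
              for a in "10.10.10.2-10.10.10.100".split("-"))

# Derive the subnet the gateway and prefix describe
subnet = ipaddress.ip_network(f"{gateway}/{prefix}", strict=False)

# The gateway and every pool address must fall inside the declared subnet
assert gateway in subnet
assert start in subnet and end in subnet
# The pool range must not contain the gateway itself
assert not (start <= gateway <= end)

print(f"pool ok: {int(end) - int(start) + 1} addresses in {subnet}")
```

Running this against the example values confirms the range 10.10.10.2-10.10.10.100 sits inside 10.10.10.0/24 and excludes the gateway.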
Modify the configuration of the cluster that you want to migrate to add references to the pool and nameservers.
You must add a pool reference to addressesFromPools under the network variable, and specify nameservers under the controlplane and worker variables.
spec:
  topology:
    variables:
    - name: network
      value:
        addressesFromPools:
        - apiGroup: ipam.cluster.x-k8s.io
          kind: InClusterIPPool
          name: inclusterippool
    - name: controlplane
      value:
        network:
          nameservers: [10.10.10.10]
    - name: worker
      value:
        network:
          nameservers: [10.10.10.10]
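Rather than editing the Cluster object by hand, the same change can be expressed programmatically, for example to build a payload for kubectl patch. A hypothetical sketch that injects the pool reference and nameserver variables into an existing topology variables list (the set_variable helper is illustrative, not a TKG or Cluster API function):

```python
import json

def set_variable(variables, name, value):
    """Replace or append a named entry in a Cluster topology variables list."""
    for var in variables:
        if var["name"] == name:
            var["value"] = value
            return variables
    variables.append({"name": name, "value": value})
    return variables

# Existing topology variables, as read from the Cluster resource
variables = [{"name": "network", "value": {}}]

set_variable(variables, "network", {
    "addressesFromPools": [{
        "apiGroup": "ipam.cluster.x-k8s.io",
        "kind": "InClusterIPPool",
        "name": "inclusterippool",
    }],
})
set_variable(variables, "controlplane",
             {"network": {"nameservers": ["10.10.10.10"]}})
set_variable(variables, "worker",
             {"network": {"nameservers": ["10.10.10.10"]}})

# The resulting document matches the YAML fragment above and could be
# supplied to `kubectl patch` with a merge-type patch
print(json.dumps({"spec": {"topology": {"variables": variables}}}, indent=2))
```

The printed JSON carries the same structure as the YAML example above; only the serialization format differs.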
It is possible to migrate only the control plane nodes, or only some set of worker nodes, by using variable overrides in the cluster configuration YAML file. This allows you to perform the migration in smaller maintenance windows. In the following example, the control plane nodes are migrated to use IPAM while the worker nodes continue to use DHCP.
In the worker overrides, you must leave addressesFromPools empty to prevent the worker nodes from using the variables from the controlPlane section.
controlPlane:
  metadata:
    annotations:
      run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=photon
  replicas: 1
variables:
- name: network
  value:
    addressesFromPools:
    - apiGroup: ipam.cluster.x-k8s.io
      kind: InClusterIPPool
      name: mgmt-cluster-nimbus
    ipv6Primary: false
- name: controlplane
  value:
    network:
      nameservers: [10.10.10.10]
[...]
workers:
  machineDeployments:
  - class: tkg-worker
    metadata:
      annotations:
        run.tanzu.vmware.com/resolve-os-image: image-type=ova,os-name=photon
    name: md-0
    replicas: 1
    strategy:
      type: RollingUpdate
    variables:
      overrides:
      - name: network
        value:
          addressesFromPools: []
          ipv6Primary: false
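The override mechanics above can be sketched as follows: a machine deployment's effective variable value is its own override when one exists, and the top-level cluster variable otherwise. A minimal illustration of that precedence using plain dicts (a simplification, not the actual Cluster API resolution logic):

```python
def effective_value(name, cluster_variables, md_overrides):
    """Return the variable value a machine deployment actually sees:
    an override wins; otherwise the top-level cluster variable applies."""
    for var in md_overrides:
        if var["name"] == name:
            return var["value"]
    for var in cluster_variables:
        if var["name"] == name:
            return var["value"]
    return None

# Top-level network variable: pool reference, as in the example above
pool_ref = {"addressesFromPools": [{"apiGroup": "ipam.cluster.x-k8s.io",
                                    "kind": "InClusterIPPool",
                                    "name": "mgmt-cluster-nimbus"}],
            "ipv6Primary": False}
cluster_vars = [{"name": "network", "value": pool_ref}]

# md-0's override empties addressesFromPools, so its nodes stay on DHCP
md0_overrides = [{"name": "network",
                  "value": {"addressesFromPools": [], "ipv6Primary": False}}]

# Control plane has no override, so it inherits the pool and uses IPAM
assert effective_value("network", cluster_vars, []) == pool_ref
# md-0's empty override shadows the top-level pool reference
assert effective_value("network", cluster_vars,
                       md0_overrides)["addressesFromPools"] == []
```

This is why the worker override must set addressesFromPools to an empty list rather than omitting it: an absent override would let the top-level pool reference apply to the workers as well.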
You can also do the reverse, configuring the worker overrides to use IPAM while the control plane nodes continue to use DHCP.