Node Networking

This topic describes how to customize node networking for workload clusters, including customizing node IP addresses and configuring DHCP reservations, Node IPAM, and IPv6 on vSphere.

Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only)

For each new cluster on vSphere, you need to create DHCP reservations for its nodes, and you may also need to create a DNS record for its control plane endpoint:

  • DHCP reservations for cluster nodes:

    As a safety precaution in environments where multiple control plane nodes might power off or drop offline for extended periods, adjust the DHCP reservations for the IP addresses of a newly-created cluster’s control plane and worker nodes, so that the addresses remain static and the leases never expire.

    For instructions on how to configure DHCP reservations, see your DHCP server documentation.
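
    For example, with an ISC DHCP server (an assumption; your environment may use a different DHCP server), a reservation that pins a node's IP address to its MAC address might look like the following, with placeholder values:

      # dhcpd.conf: reservation for one control plane node; repeat for each node
      host tkg-control-plane-0 {
        hardware ethernet 00:50:56:aa:bb:cc;   # MAC address of the node's VM NIC
        fixed-address 192.168.104.20;          # address that must remain static
      }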

  • DNS record for control plane endpoint:

    If you are using NSX Advanced Load Balancer rather than Kube-Vip for your control plane endpoint, and you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN rather than a numeric IP address, create the DNS record as follows:

    1. Retrieve the control plane IP address that NSX ALB assigned to the cluster:

      kubectl get cluster CLUSTER-NAME -o=jsonpath='{.spec.controlPlaneEndpoint} {"\n"}'
      
    2. Record the IP address listed as "host" in the output, for example 192.168.104.107.

    3. Create a DNS A record that associates your FQDN with the IP address you recorded.
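
      For example, in BIND zone-file syntax (an assumption; use whichever DNS server your environment provides), the record might look like the following, with a placeholder FQDN:

        ; maps the control plane FQDN to the address assigned by NSX ALB
        tkg-cluster-cp.example.com.    IN    A    192.168.104.107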

    4. To test the FQDN, create a new kubeconfig that uses the FQDN instead of the IP address from NSX ALB:

      1. Generate the kubeconfig:

        tanzu cluster kubeconfig get CLUSTER-NAME --admin --export-file ./KUBECONFIG-TEST
        
      2. Edit the kubeconfig file KUBECONFIG-TEST to replace the IP address with the FQDN. For example, replace this:

        server: https://192.168.104.107:443
        

        with this:

        server: https://CONTROLPLANE-FQDN:443
        
      3. Retrieve the cluster’s pods using the modified kubeconfig:

        kubectl get pods -A --kubeconfig=./KUBECONFIG-TEST
        

        If the output lists the pods, DNS works for the FQDN.

Customize Cluster Node IP Addresses (Standalone MC)

You can configure cluster-specific IP address blocks for nodes in standalone management clusters and the workload clusters they deploy. How you do this depends on the cloud infrastructure that the cluster runs on:

vSphere

On vSphere, the cluster configuration file’s VSPHERE_NETWORK sets the VM network that Tanzu Kubernetes Grid uses for cluster nodes and other Kubernetes objects. IP addresses are allocated to nodes by a DHCP server that runs in this VM network, deployed separately from Tanzu Kubernetes Grid.

If you are using NSX networking, you can configure DHCP bindings for your cluster nodes by following Configure DHCP Static Bindings on a Segment in the VMware NSX-T Data Center documentation.

Note

For v4.0+, VMware NSX-T Data Center is renamed to “VMware NSX.”

AWS

To configure cluster-specific IP address blocks on Amazon Web Services (AWS), set the following variables in the cluster configuration file as described in the AWS table in the Configuration File Variable Reference.

  • Set AWS_PUBLIC_NODE_CIDR to set an IP address range for public nodes.
    • Make additional ranges available by setting AWS_PUBLIC_NODE_CIDR_1 or AWS_PUBLIC_NODE_CIDR_2.
  • Set AWS_PRIVATE_NODE_CIDR to set an IP address range for private nodes.
    • Make additional ranges available by setting AWS_PRIVATE_NODE_CIDR_1 or AWS_PRIVATE_NODE_CIDR_2.
  • All node CIDR ranges must lie within the cluster’s VPC range, which defaults to 10.0.0.0/16.
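
For example, a cluster configuration file might carve both node ranges out of the default VPC range as follows (a sketch with illustrative values; AWS_VPC_CIDR sets the VPC range when Tanzu Kubernetes Grid creates a new VPC):

  AWS_VPC_CIDR: 10.0.0.0/16
  AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
  AWS_PRIVATE_NODE_CIDR: 10.0.2.0/24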

Microsoft Azure

To configure cluster-specific IP address blocks on Azure, set the following variables in the cluster configuration file as described in the Microsoft Azure table in the Configuration File Variable Reference.

  • Set AZURE_NODE_SUBNET_CIDR to create a new VNet with a CIDR block for worker node IP addresses.
  • Set AZURE_CONTROL_PLANE_SUBNET_CIDR to create a new VNet with a CIDR block for control plane node IP addresses.
  • Set AZURE_NODE_SUBNET_NAME to assign worker node IP addresses from the range of an existing VNet.
  • Set AZURE_CONTROL_PLANE_SUBNET_NAME to assign control plane node IP addresses from the range of an existing VNet.
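
For example, to have Tanzu Kubernetes Grid create a new VNet with CIDR blocks for the control plane and worker nodes, a cluster configuration file might include the following (a sketch with illustrative values; both blocks must fit within the VNet's address range):

  AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
  AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24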

Node IPAM

With Node IPAM, an in-cluster IPAM provider manages IP addresses for workload cluster nodes during cluster creation and scaling, eliminating any need to configure external DHCP.

Note

This procedure does not apply to TKG with a vSphere with Tanzu Supervisor or with a standalone management cluster on AWS or Azure.

When configuring Node IPAM for a new or existing workload cluster, the user specifies an internal IP pool that the IPAM provider allocates static IP addresses from, and a gateway address.

The IP pool is configured as a subnet, with optional start and end addresses to restrict the address range within the subnet CIDR.

This diagram shows how Node IPAM enables CAPV to configure static node addresses from its IP pool. Components (solid outline) and resource definitions (dashed outline) that are specific to Node IPAM are shown in green.

[Figure: Node IPAM boxes and lines]

Prerequisites

  • A TKG standalone management cluster
  • Nameservers for the workload cluster’s control plane and worker nodes
    • Required because cluster nodes no longer receive nameservers via DHCP, and they need them to resolve names in vCenter
  • kubectl and the Tanzu CLI installed locally

Limitations

Node IPAM has the following limitations in TKG v2.2:

  • Only for class-based workload clusters deployed by a management cluster on vSphere.
  • No Windows node support.
  • Only for IPv4 environments, not IPv6 or dual-stack.
  • Only allocates node addresses, not the cluster control plane endpoint.
  • If you configure Node IPAM for an existing cluster, TKG does not check whether its IP pool conflicts with a DHCP pool already used by the cluster.
  • By design, IP pools are specific to namespaces and cannot be shared across multiple namespaces.
    • Clusters in multiple namespaces require an IP pool in each namespace, all with non-overlapping address ranges.
    • For additional restrictions on IP pools, see Pool Rules below.

Configure Node IPAM for a Workload Cluster

To configure a new or existing cluster with Node IPAM:

  1. Create an InClusterIPPool object definition that sets a range of IP addresses from a subnet that TKG can use to allocate static IP addresses for your workload cluster. For example, to create an object inclusterippool that reserves 10.10.10.200 to 10.10.10.250 for clusters in the default namespace:

    ---
    apiVersion: ipam.cluster.x-k8s.io/v1alpha1
    kind: InClusterIPPool
    metadata:
      name: inclusterippool
      # the namespace where the workload cluster is deployed
      namespace: default
    spec:
      # replace the IPs below with what works for your environment
      subnet: 10.10.10.0/24
      gateway: 10.10.10.1
      # start and end are optional fields that restrict the allocatable address range
      # within the subnet
      start: 10.10.10.200
      end: 10.10.10.250
    
  2. Create the InClusterIPPool object:

    kubectl apply -f my-ip-pool.yaml
    
  3. Enable the custom nameservers feature in the Tanzu CLI configuration:

    tanzu config set features.cluster.custom-nameservers true
    
  4. Configure the workload cluster to use the IP pool within either a flat cluster configuration file or a Kubernetes-style object spec, as described in Configuration Files.

    For example:

    • Flat configuration file (to create new clusters):

      # The name of the InClusterIPPool object specified above
      NODE_IPAM_IP_POOL_NAME: inclusterippool
      CONTROL_PLANE_NODE_NAMESERVERS: 10.10.10.10
      WORKER_NODE_NAMESERVERS: 10.10.10.10
      
    • Object spec (to create new clusters or modify existing ones):

      ---
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      spec:
        topology:
          variables:
          - name: network
            value:
              addressesFromPools:
              - apiGroup: ipam.cluster.x-k8s.io
                kind: InClusterIPPool
                name: inclusterippool
          - name: controlplane
            value:
              network:
                nameservers: [10.10.10.10]
          - name: worker
            value:
              network:
                nameservers: [10.10.10.10]
      

Now you can deploy your workload cluster.

Pool Rules

  • IP pool ranges must not overlap
    • Overlapping pools can cause failures, and TKG does not validate pool ranges or detect overlaps.
  • Do not update an InClusterIPPool object that is in use with a new address range
    • TKG will not automatically update the IP addresses of the nodes that use the pool, and you will need to re-create a node to release its old IP address
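
Before defining a new pool, one way to check for overlaps is to list the existing InClusterIPPool objects in all namespaces and compare their subnet, start, and end fields manually (a minimal check; TKG does not validate this for you):

  kubectl get inclusterippools -A -o yaml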

Troubleshooting Node IPAM

  • To see whether cluster nodes have IP addresses assigned, run kubectl get to list the IaaS-specific machine objects, vspherevms, and check their IPAddressClaimed status. True means the node’s address claim is successful, and if the status is False, the command output reports a Reason why the condition is not ready:

    kubectl -n CLUSTER-NAMESPACE get vspherevms
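
    For example, to print only the IPAddressClaimed condition for each machine, you can use a standard kubectl JSONPath query (a sketch; CLUSTER-NAMESPACE is a placeholder):

      kubectl -n CLUSTER-NAMESPACE get vspherevms \
        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="IPAddressClaimed")].status}{"\n"}{end}'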
    
  • To see the IP address claims, list ipaddressclaims. For each machine, the addressesFromPools entry causes one IPAddressClaim to be created:

    kubectl -n CLUSTER-NAMESPACE get ipaddressclaims
    
  • To see the IP addresses, list ipaddress. The in-cluster IPAM Provider should detect each IPAddressClaim and create a corresponding IPAddress object:

    kubectl -n CLUSTER-NAMESPACE get ipaddress
    
  • When all claims for a given VM have been matched with IP addresses, CAPV writes the assigned IP addresses into the VM’s cloud-init metadata and creates the VM. To see the IP address reconciliation steps, see the CAPV and Cluster API IPAM Provider (CAIP) logs:

    kubectl logs -n capv-system deployment/capv-controller-manager
    kubectl logs -n caip-in-cluster-system deployment/caip-in-cluster-controller-manager
    

IPv6 Networking (vSphere)

You can run management and workload clusters in an IPv6-only single-stack networking environment on vSphere with Kube-Vip, using Ubuntu-based nodes.

Note

You cannot create IPv6 clusters with a vSphere with Tanzu Supervisor Cluster or with a standalone management cluster on vSphere 8. You cannot register IPv6 clusters with Tanzu Mission Control. NSX Advanced Load Balancer services and dual-stack IPv4/IPv6 networking are not currently supported.

Prerequisites:

Deploy an IPv6 Management Cluster

Do the following on your bootstrap machine to deploy a management cluster into an IPv6 networking environment:

  1. Configure Linux to accept router advertisements to ensure the default IPv6 route is not removed from the routing table when the Docker service starts. For more information, see Docker CE deletes IPv6 Default route.

    sudo sysctl net.ipv6.conf.eth0.accept_ra=2

  2. Create a masquerade rule so that the bootstrap cluster can send outgoing traffic:

    sudo ip6tables -t nat -A POSTROUTING -s fc00:f853:ccd:e793::/64 ! -o docker0 -j MASQUERADE

    For more information about masquerade rules, see MASQUERADE.

  3. Set the following variables in the configuration file for the management cluster, as shown in the sample snippet after these steps.

    • Set TKG_IP_FAMILY to ipv6.
    • Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
    • (Optional) Set CLUSTER_CIDR and SERVICE_CIDR. They default to fd00:100:64::/48 and fd00:100:96::/108, respectively.
  4. Deploy the management cluster by running tanzu mc create, as described in Deploy Management Clusters from a Configuration File.

    • For IPv6 support, you must deploy the management cluster from a configuration file, not the installer interface.
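
For reference, the variables from step 3 might look like the following in a management cluster configuration file (a sketch with illustrative addresses; VSPHERE_CONTROL_PLANE_ENDPOINT must be a static IPv6 address reachable on your node network):

  TKG_IP_FAMILY: ipv6
  VSPHERE_CONTROL_PLANE_ENDPOINT: fd00:10:0:1::10
  # Optional; shown here at their documented defaults
  CLUSTER_CIDR: fd00:100:64::/48
  SERVICE_CIDR: fd00:100:96::/108

The same variables apply when you configure an IPv6 workload cluster, as described in the next section.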

Deploy an IPv6 Workload Cluster

If you have deployed an IPv6 management cluster, deploy an IPv6 workload cluster as follows:

  1. Set the following variables in the configuration file for the workload cluster.

    • Set TKG_IP_FAMILY to ipv6.
    • Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
    • (Optional) Set CLUSTER_CIDR and SERVICE_CIDR. They default to fd00:100:64::/48 and fd00:100:96::/108, respectively.
  2. Deploy the workload cluster as described in Create Workload Clusters.

Dual-Stack Clusters (Technical Preview)

Note

This feature is in the unsupported Technical Preview state; see TKG Feature States.

The dual-stack feature lets you deploy clusters with IPv4 and IPv6 IP families. However, the primary IP family is IPv4. Before experimenting with this feature, configure your vCenter Server to support both IPv4 and IPv6 connectivity.

The following are the limitations of the dual-stack feature in this release:

  • The dual-stack feature supports vSphere as the only infrastructure as a service (IaaS) product.

  • You cannot configure dual-stack on clusters with Photon OS nodes. Only clusters configured with an OS_NAME of ubuntu are supported.

  • You cannot configure dual-stack networking for vSphere with Tanzu Supervisor Clusters or the workload clusters that they create.

  • You cannot deploy a dual-stack management cluster with the installer interface.

  • You cannot use dual-stack or IPv6 services with the load balancer services provided by NSX Advanced Load Balancer (ALB). You can use Kube-Vip as the control plane endpoint provider for a dual-stack cluster. Using NSX ALB as the control plane endpoint provider for a dual-stack cluster has not been validated.

  • Only the core add-on components, such as Antrea, Calico, CSI, CPI, and Pinniped, have been validated for dual-stack support in this release.

To configure dual-stack on the clusters:

  1. Set the dual-stack feature flag:

    a. To enable the feature on the management cluster, run the following command:

    tanzu config set features.management-cluster.dual-stack-ipv4-primary true
    

    b. To enable the feature on the workload cluster, run the following command:

    tanzu config set features.cluster.dual-stack-ipv4-primary true
    
  2. Deploy Management Clusters or Create Workload Clusters, as required.

    In the cluster configuration file:

    • Set the IP family configuration variable TKG_IP_FAMILY: ipv4,ipv6.
    • Optionally, set the service CIDRs and cluster CIDRs.
    Note

    There are two CIDRs for each variable. The IP families of these CIDRs follow the order of the configured TKG_IP_FAMILY. The largest CIDR range that is permitted for the IPv4 addresses is /12, and the largest IPv6 SERVICE_CIDR range is /108. If you do not set the CIDRs, the default values are used.

    • Set the following configuration file parameter, if you are using Antrea as the CNI for your cluster:

      ANTREA_ENDPOINTSLICES: true
      

    Services that specify an ipFamilyPolicy of PreferDualStack or RequireDualStack in their specs can now be accessed through IPv4 or IPv6.
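
    For example, a Service definition that requests both address families might look like the following (a minimal sketch; the name, selector, and port are placeholders):

      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: my-dual-stack-service
      spec:
        ipFamilyPolicy: PreferDualStack
        ipFamilies: [IPv4, IPv6]   # IPv4 first, matching the primary IP family
        selector:
          app: my-app
        ports:
        - port: 80
          protocol: TCP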

Note

The end-to-end tests for the dual-stack feature in upstream Kubernetes can fail as a cluster node advertises only its primary IP address (in this case, the IPv4 address) as its IP address.
