Node Networking

This topic describes how to customize node networking for workload clusters, including customizing node IP addresses and configuring DHCP reservations, Node IPAM, and IPv6 on vSphere.

Configure Node DHCP Reservations and Endpoint DNS Record (vSphere Only)

For each new cluster that you deploy to vSphere, you need to create DHCP reservations for its nodes, and you may also need to create a DNS record for its control plane endpoint:

  • DHCP reservations for cluster nodes:

    As a safety precaution in environments where multiple control plane nodes might power off or drop offline for extended periods, adjust the DHCP reservations for the IP addresses of a newly-created cluster’s control plane and worker nodes, so that the addresses remain static and the leases never expire.

    For instructions on how to configure DHCP reservations, see your DHCP server documentation.

  • DNS record for control plane endpoint:

    If you are using NSX Advanced Load Balancer, not Kube-Vip, for your control plane endpoint and you set VSPHERE_CONTROL_PLANE_ENDPOINT to an FQDN rather than a numeric IP address, reserve the addresses as follows:

    1. Retrieve the control plane IP address that NSX ALB assigned to the cluster:

      kubectl get cluster CLUSTER-NAME -o=jsonpath='{.spec.controlPlaneEndpoint} {"\n"}'
      
    2. Record the IP address listed as "host" in the output, for example 192.168.104.107.

    3. Create a DNS A record that associates your FQDN with the IP address you recorded.

    4. To test the FQDN, create a new kubeconfig that uses the FQDN instead of the IP address from NSX ALB:

      1. Generate the kubeconfig:

        tanzu cluster kubeconfig get CLUSTER-NAME --admin --export-file ./KUBECONFIG-TEST
        
      2. Edit the kubeconfig file KUBECONFIG-TEST to replace the IP address with the FQDN. For example, replace this:

        server: https://192.168.104.107:443
        

        with this:

        server: https://CONTROLPLANE-FQDN:443
        
      3. Retrieve the cluster’s pods using the modified kubeconfig:

        kubectl get pods -A --kubeconfig=./KUBECONFIG-TEST
        

        If the output lists the pods, DNS works for the FQDN.
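
    As an optional extra check, you can also confirm that the A record resolves directly from your bootstrap machine before or after the kubeconfig test, for example with nslookup. The FQDN below is a placeholder for your control plane FQDN:

      nslookup CONTROLPLANE-FQDN

    The answer should return the IP address you recorded from NSX ALB.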

Customize Cluster Node IP Addresses (Standalone MC)

You can configure cluster-specific IP address blocks for nodes in standalone management clusters and the workload clusters they deploy. How you do this depends on the cloud infrastructure that the cluster runs on:

vSphere

On vSphere, the cluster configuration file’s VSPHERE_NETWORK sets the VM network that Tanzu Kubernetes Grid uses for cluster nodes and other Kubernetes objects. IP addresses are allocated to nodes by a DHCP server that runs in this VM network, deployed separately from Tanzu Kubernetes Grid.
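
For example, a minimal setting in the flat cluster configuration file might look like the following; "VM Network" is the default vSphere port group name and is shown here only as an illustration of the value format:

  # vSphere VM network (port group) that cluster nodes attach to
  VSPHERE_NETWORK: "VM Network"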

If you are using NSX networking, you can configure DHCP bindings for your cluster nodes by following Configure DHCP Static Bindings on a Segment in the VMware NSX-T Data Center documentation.

Note

In v4.0 and later, VMware NSX-T Data Center is renamed to “VMware NSX.”

AWS

To configure cluster-specific IP address blocks on Amazon Web Services (AWS), set the following variables in the cluster configuration file as described in the AWS table in the Configuration File Variable Reference.

  • Set AWS_PUBLIC_NODE_CIDR to set an IP address range for public nodes.
    • Make additional ranges available by setting AWS_PUBLIC_NODE_CIDR_1 or AWS_PUBLIC_NODE_CIDR_2
  • Set AWS_PRIVATE_NODE_CIDR to set an IP address range for private nodes.
    • Make additional ranges available by setting AWS_PRIVATE_NODE_CIDR_1 and AWS_PRIVATE_NODE_CIDR_2
  • All node CIDR ranges must lie within the cluster’s VPC range, which defaults to 10.0.0.0/16.
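
For example, the following configuration file lines keep the node ranges and optional additional private ranges inside the default 10.0.0.0/16 VPC range; the specific CIDR values are illustrative:

  AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
  AWS_PRIVATE_NODE_CIDR: 10.0.2.0/24
  # Optional additional private node ranges
  AWS_PRIVATE_NODE_CIDR_1: 10.0.3.0/24
  AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24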

Microsoft Azure

To configure cluster-specific IP address blocks on Azure, set the following variables in the cluster configuration file as described in the Microsoft Azure table in the Configuration File Variable Reference.

  • Set AZURE_NODE_SUBNET_CIDR to create a new VNet with a CIDR block for worker node IP addresses.
  • Set AZURE_CONTROL_PLANE_SUBNET_CIDR to create a new VNet with a CIDR block for control plane node IP addresses.
  • Set AZURE_NODE_SUBNET_NAME to assign worker node IP addresses from the range of an existing VNet.
  • Set AZURE_CONTROL_PLANE_SUBNET_NAME to assign control plane node IP addresses from the range of an existing VNet.
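
For example, to have TKG create a new VNet, you might set CIDR blocks like the following, or instead name subnets in an existing VNet; the CIDR values and subnet names are illustrative placeholders:

  AZURE_CONTROL_PLANE_SUBNET_CIDR: 10.0.0.0/24
  AZURE_NODE_SUBNET_CIDR: 10.0.1.0/24
  # Or, to use an existing VNet instead:
  # AZURE_CONTROL_PLANE_SUBNET_NAME: MY-CP-SUBNET
  # AZURE_NODE_SUBNET_NAME: MY-WORKER-SUBNET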

Node IPAM (vSphere)

With Node IPAM, an in-cluster IPAM provider allocates and manages IP addresses for cluster nodes during cluster creation and scaling, eliminating any need to configure external DHCP.

You can configure Node IPAM for standalone management clusters on vSphere and the class-based workload clusters that they manage. The procedure below configures Node IPAM for class-based workload clusters; to configure Node IPAM for a management cluster, see Configure Node IPAM in Management Cluster Configuration for vSphere.

Note

This procedure does not apply to TKG with a vSphere with Tanzu Supervisor or with a standalone management cluster on AWS or Azure.

When configuring Node IPAM for a new or existing workload cluster, you specify an internal IP pool that the IPAM provider allocates static IP addresses from, and a gateway address.

When allocating addresses to cluster nodes, Node IPAM always picks the lowest available address in the pool.

Prerequisites

  • A TKG standalone management cluster
  • Nameservers for the workload cluster’s control plane and worker nodes
    • Required because cluster nodes no longer receive nameservers from DHCP and still need them to resolve names such as the vCenter address.
  • kubectl and the Tanzu CLI installed locally

Limitations

Node IPAM has the following limitations in TKG v2.3:

  • Only for new, class-based workload clusters deployed by a management cluster on vSphere.
    • You cannot convert existing DHCP-based clusters to Node IPAM.
  • No Windows node support.
  • Only for single-stack IPv4 or IPv6 environments, not dual-stack.
  • Only allocates node addresses, not the cluster control plane endpoint.
  • Node IPAM does not check whether its IP pool conflicts with DHCP pools already in use by other clusters.

Configure Node IPAM for a Workload Cluster

A workload cluster’s Node IPAM pool can be defined by two different object types, depending on how its addresses are shared with other clusters:

  • InClusterIPPool configures IP pools that are only available to workload clusters in the same management cluster namespace, such as default.
    • This was the only type available in TKG v2.1 and v2.2.
  • GlobalInClusterIPPool configures IP pools with addresses that can be allocated to workload clusters across multiple namespaces.

To configure a new or existing cluster with Node IPAM:

  1. Create an IP pool object definition file my-ip-pool.yaml that sets a range of IP addresses from a subnet that TKG can use to allocate static IP addresses for your workload cluster. Define the object as either an InClusterIPPool or a GlobalInClusterIPPool based on how you want to scope the IP pool, for example:

    • InClusterIPPool: to create an IP pool inclusterippool for workload clusters in the namespace default that contains the range 10.10.10.2-10.10.10.100 plus 10.10.10.102 and 10.10.10.104:

      apiVersion: ipam.cluster.x-k8s.io/v1alpha2
      kind: InClusterIPPool
      metadata:
        name: inclusterippool
        namespace: default
      spec:
        gateway: 10.10.10.1
        addresses:
        - 10.10.10.2-10.10.10.100
        - 10.10.10.102
        - 10.10.10.104
        prefix: 24
      
      Note

      Previous TKG versions used the v1alpha1 version of the InClusterIPPool object, which only supported a contiguous IP pool range specified by start and end, as described in the TKG v2.1 documentation. Upgrading clusters to v2.3 converts their IP pools to the new structure.

    • GlobalInClusterIPPool: to create an IP pool inclusterippool to share across namespaces that contains the same addresses as the InClusterIPPool above:

      apiVersion: ipam.cluster.x-k8s.io/v1alpha2
      kind: GlobalInClusterIPPool
      metadata:
        name: inclusterippool
      spec:
        gateway: 10.10.10.1
        addresses:
        - 10.10.10.2-10.10.10.100
        - 10.10.10.102
        - 10.10.10.104
        prefix: 24
      
  2. Create the IP pool object:

    kubectl apply -f my-ip-pool.yaml
    
  3. Configure the workload cluster to use the IP pool within either a flat cluster configuration file or a Kubernetes-style object spec, as described in Configuration Files.

    For example:

    • Flat configuration file (to create new clusters):

      # The name of the InClusterIPPool object specified above
      NODE_IPAM_IP_POOL_NAME: inclusterippool
      CONTROL_PLANE_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
      WORKER_NODE_NAMESERVERS: 10.10.10.10,10.10.10.11
      
    • Object spec (to create new clusters or modify existing ones):

      ---
      apiVersion: cluster.x-k8s.io/v1beta1
      kind: Cluster
      spec:
        topology:
          variables:
          - name: network
            value:
              addressesFromPools:
              - apiGroup: ipam.cluster.x-k8s.io
                kind: InClusterIPPool
                name: inclusterippool
          - name: controlplane
            value:
              network:
                nameservers: [10.10.10.10,10.10.10.11]
          - name: worker
            value:
              network:
                nameservers: [10.10.10.10,10.10.10.11]
      

Now you can deploy your workload cluster.
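
For example, if you saved the flat configuration file above as my-cluster-config.yaml, you might create the cluster as follows; the cluster name and file name are placeholders:

  tanzu cluster create my-workload-cluster --file my-cluster-config.yaml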

Pool Rules

  • IP pool ranges must not overlap
    • Overlapping pools can cause failures, and TKG does not validate pool ranges or detect overlaps.
  • Do not remove a currently allocated IP address from an IP pool.

Troubleshooting Node IPAM

  • To see whether cluster nodes have IP addresses assigned, run kubectl get to list the IaaS-specific machine objects, vspherevms, and check their IPAddressClaimed status. True means the node’s address claim is successful, and if the status is False, the command output reports a Reason why the condition is not ready:

    kubectl -n CLUSTER-NAMESPACE get vspherevms
    
  • To see the IP address claims, list ipaddressclaims. For each machine, the addressesFromPools entry causes one IPAddressClaim to be created:

    kubectl -n CLUSTER-NAMESPACE get ipaddressclaims
    
  • To see the IP addresses, list ipaddress. The in-cluster IPAM Provider should detect each IPAddressClaim and create a corresponding IPAddress object:

    kubectl -n CLUSTER-NAMESPACE get ipaddress
    
  • When all claims for a given VM have been matched with IP addresses, CAPV writes the assigned IP addresses into the VM’s cloud-init metadata and creates the VM. To see the IP address reconciliation steps, see the CAPV and Cluster API IPAM Provider (CAIP) logs:

    kubectl logs -n capv-system deployment/capv-controller-manager
    kubectl logs -n caip-in-cluster-system deployment/caip-in-cluster-controller-manager
    

IPv6 Networking (vSphere)

You can run management and workload clusters in an IPv6-only single-stack networking environment on vSphere with Kube-Vip, using Ubuntu-based nodes.

Note

You cannot create IPv6 clusters with a vSphere with Tanzu Supervisor Cluster. You cannot register IPv6 clusters with Tanzu Mission Control. NSX Advanced Load Balancer services and dual-stack IPv4/IPv6 networking are not currently supported.

Prerequisites:

Deploy an IPv6 Management Cluster

Do the following on your bootstrap machine to deploy a management cluster into an IPv6 networking environment:

  1. Configure Linux to accept router advertisements, to ensure that the default IPv6 route is not removed from the routing table when the Docker service starts. For more information, see Docker CE deletes IPv6 Default route.

    sudo sysctl net.ipv6.conf.eth0.accept_ra=2

  2. Create a masquerade rule so that the bootstrap cluster can send outgoing traffic:

    sudo ip6tables -t nat -A POSTROUTING -s fc00:f853:ccd:e793::/64 ! -o docker0 -j MASQUERADE

    For more information about masquerade rules, see MASQUERADE.

  3. Set the following variables in the configuration file for the management cluster; an example snippet follows these steps.

    • Set TKG_IP_FAMILY to ipv6.
    • Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
    • (Optional) Set CLUSTER_CIDR and SERVICE_CIDR. They default to fd00:100:64::/48 and fd00:100:96::/108, respectively.
  4. Deploy the management cluster by running tanzu mc create, as described in Deploy Management Clusters from a Configuration File.

    • For IPv6 support, you must deploy the management cluster from a configuration file, not the installer interface.
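
For reference, the relevant lines of an IPv6 management cluster configuration file might look like the following; the control plane endpoint is a placeholder for a static IPv6 address in your node network, and the CIDRs shown are the defaults noted in step 3:

  TKG_IP_FAMILY: ipv6
  VSPHERE_CONTROL_PLANE_ENDPOINT: STATIC-IPV6-ADDRESS
  CLUSTER_CIDR: fd00:100:64::/48
  SERVICE_CIDR: fd00:100:96::/108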

Deploy an IPv6 Workload Cluster

If you have deployed an IPv6 management cluster, deploy an IPv6 workload cluster as follows:

  1. Set the following variables in the configuration file for the workload cluster.

    • Set TKG_IP_FAMILY to ipv6.
    • Set VSPHERE_CONTROL_PLANE_ENDPOINT to a static IPv6 address.
    • (Optional) Set CLUSTER_CIDR and SERVICE_CIDR. They default to fd00:100:64::/48 and fd00:100:96::/108, respectively.
  2. Deploy the workload cluster as described in Create Workload Clusters.

Dual-Stack Clusters (Technical Preview)

Note

This feature is in the unsupported Technical Preview state; see TKG Feature States.

The dual-stack feature lets you deploy clusters with IPv4 and IPv6 IP families. However, the primary IP family is IPv4. Before experimenting with this feature, configure your vCenter Server to support both IPv4 and IPv6 connectivity.

The following are the limitations of the dual-stack feature in this release:

  • The dual-stack feature supports vSphere as the only infrastructure as a service (IaaS) product.

  • You cannot configure dual-stack on clusters with Photon OS nodes. Only clusters configured with an OS_NAME of ubuntu are supported.

  • You cannot configure dual-stack networking for vSphere with Tanzu Supervisor Clusters or the workload clusters that they create.

  • You cannot deploy a dual-stack management cluster with the installer interface.

  • You cannot use dual-stack or IPv6 services with the load balancer services provided by NSX Advanced Load Balancer (ALB). You can use Kube-Vip as the control plane endpoint provider for a dual-stack cluster. Using NSX ALB as the control plane endpoint provider for a dual-stack cluster has not been validated.

  • Only the core add-on components, such as Antrea, Calico, CSI, CPI, and Pinniped, have been validated for dual-stack support in this release.

To configure dual-stack on the clusters:

  1. Set the dual-stack feature flag:

    a. To enable the feature on the management cluster, run the following command:

    tanzu config set features.management-cluster.dual-stack-ipv4-primary true
    

    b. To enable the feature on the workload cluster, run the following command:

    tanzu config set features.cluster.dual-stack-ipv4-primary true
    
  2. Deploy Management Clusters or Create Workload Clusters, as required.

    In the cluster configuration file:

    • Set the IP family configuration variable TKG_IP_FAMILY: ipv4,ipv6.
    • Optionally, set the service CIDRs and cluster CIDRs.
    Note

    There are two CIDRs for each variable. The IP families of these CIDRs follow the order of the configured TKG_IP_FAMILY. The largest CIDR range that is permitted for the IPv4 addresses is /12, and the largest IPv6 SERVICE_CIDR range is /108. If you do not set the CIDRs, the default values are used.

    • If you are using Antrea as the CNI for your cluster, also set the following configuration file parameter:

      ANTREA_ENDPOINTSLICES: true
      

    Services that specify an ipFamilyPolicy of PreferDualStack or RequireDualStack in their specs can now be accessed through IPv4 or IPv6.
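
    For example, a minimal Service manifest that requests dual-stack addresses might look like the following; the Service name, selector, and ports are placeholders:

      apiVersion: v1
      kind: Service
      metadata:
        name: my-dual-stack-service
      spec:
        ipFamilyPolicy: PreferDualStack
        selector:
          app: my-app
        ports:
        - port: 80
          targetPort: 8080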

Note

The end-to-end tests for the dual-stack feature in upstream Kubernetes can fail as a cluster node advertises only its primary IP address (in this case, the IPv4 address) as its IP address.
