This topic describes how VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) cluster managers can manage and use network profiles to customize NSX configuration parameters for Kubernetes clusters provisioned by TKGI on vSphere with NSX integration.

Prerequisites

Network profiles are supported only on TKGI on vSphere with NSX.

To work with TKGI network profiles, you must be either a cluster manager or a cluster administrator:

  • To create or delete a network profile, you must be a cluster administrator: pks.clusters.admin.

  • To use a network profile, you must be a cluster manager: pks.clusters.manage or a cluster administrator: pks.clusters.admin.

Overview

You can use network profiles to customize your TKGI Kubernetes clusters on vSphere with NSX. For information on when to use network profiles, see Network Profile Use Cases below.

TKGI cluster managers can apply network profiles to clusters:

  • To list the available network profiles, see List Network Profiles below.

  • To apply a network profile to a cluster, see Create a Cluster with a Network Profile and Assign a Network Profile to an Existing Cluster below.

TKGI cluster administrators can create and manage network profiles. To create or manage network profiles, see Creating and Managing Network Profiles.
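
For orientation, a TKGI cluster administrator typically defines a network profile in a JSON file and registers it with the TKGI CLI. The following is a minimal sketch only: the file name, profile name, description, and lb_size value are example values, and the parameters available depend on your TKGI version.

    # Illustrative example only; substitute your own file name, profile name, and parameters.
    $ cat lb-profile-medium.json
    {
      "name": "lb-profile-medium",
      "description": "Network profile for medium size NSX load balancer",
      "parameters": {
        "lb_size": "medium"
      }
    }
    $ tkgi create-network-profile lb-profile-medium.json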

List Network Profiles

To list available network profiles:

  1. Run the following command:

    tkgi network-profiles
    

    For example:

    $ tkgi network-profiles  
    
    Name                Description  
    lb-profile-medium   Network profile for medium size NSX load balancer  
    small-routable-pod  Network profile with small load balancer and two routable pod networks  
    

Create a Cluster with a Network Profile

You can assign a network profile to a TKGI-provisioned Kubernetes cluster at the time of cluster creation.

To create a Kubernetes cluster with a network profile:

  1. If you do not have a network profile with the desired configuration, have a TKGI cluster administrator define and create a new network profile. For more information, see Create a Network Profile in Creating and Managing Network Profiles.
  2. Choose a network profile for the cluster. See List Network Profiles.
  3. To create the cluster, run the following command:

    tkgi create-cluster CLUSTER-NAME --external-hostname HOSTNAME --plan PLAN-NAME --network-profile NETWORK-PROFILE-NAME
    

    Where:

    • CLUSTER-NAME is a unique name for your cluster.

      Note: Use only lowercase characters when naming your cluster if you manage your clusters with Tanzu Mission Control (TMC). Clusters with names that include an uppercase character cannot be attached to TMC.

    • HOSTNAME is the external hostname used to access the Kubernetes API.
    • PLAN-NAME is the name of the TKGI plan you want to use for your cluster.
    • NETWORK-PROFILE-NAME is the name of the network profile you want to use for your cluster.
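
    For example, assuming a cluster named my-cluster, an external hostname of my-cluster.example.com, a plan named small, and the lb-profile-medium profile listed above (all example values):

    $ tkgi create-cluster my-cluster --external-hostname my-cluster.example.com --plan small --network-profile lb-profile-medium   # example values only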

Assign a Network Profile to an Existing Cluster

TKGI supports assigning a network profile to an existing cluster.

WARNING: Update the network profile only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.

To assign a network profile to a cluster that does not have a network profile already applied:

  1. If you do not have a network profile with the desired configuration, have a TKGI cluster administrator define and create a new network profile. For more information, see Create a Network Profile in Creating and Managing Network Profiles.
  2. Choose a network profile for the cluster. See List Network Profiles.
  3. If you are updating a cluster that uses a public cloud CSI driver, see Limitations on Using a Public Cloud CSI Driver in Release Notes for additional requirements.
  4. To apply the network profile to the cluster, run the following command:

    tkgi update-cluster CLUSTER-NAME --network-profile NEW-NETWORK-PROFILE-NAME
    

    Where:

    • CLUSTER-NAME is the name of the existing Kubernetes cluster.
    • NEW-NETWORK-PROFILE-NAME is the name of the new network profile you want to apply to the cluster.
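
    For example, assuming an existing cluster named my-cluster and the small-routable-pod profile listed above (both example values):

    $ tkgi update-cluster my-cluster --network-profile small-routable-pod   # example values only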

Note: When you use tkgi update-cluster to update an existing cluster, the network profile that you attach must contain only updatable settings.

Update an Existing Network Profile

The use cases for updating an existing network profile are limited to adding Pod IP Blocks to your existing cluster or changing their order. For more information, see Customizing Pod Networks.
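
As a sketch of what such an update looks like, the Pod IP Blocks in a network profile are listed by NSX IP block UUID, and adding an entry or reordering the list is the kind of change this supports. The example below is illustrative only: the UUIDs are placeholders, and the parameter name (pod_ip_block_ids) may vary by TKGI version.

    $ cat small-routable-pod.json   # illustrative contents; the UUIDs are placeholders
    {
      "name": "small-routable-pod",
      "description": "Network profile with small load balancer and two routable pod networks",
      "parameters": {
        "pod_ip_block_ids": [
          "EXISTING-POD-IP-BLOCK-UUID",
          "ADDITIONAL-POD-IP-BLOCK-UUID"
        ]
      }
    }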

Only TKGI cluster administrators can modify an existing network profile. For information on updating an existing network profile, see Update an Existing Network Profile in Creating and Managing Network Profiles.

Network Profile Use Cases

Network profiles let you customize configuration parameters for Kubernetes clusters provisioned by TKGI on vSphere with NSX.

You can apply a network profile to a Kubernetes cluster for the following scenarios:

  • Size a Load Balancer: Customize the size of the NSX load balancer service that is created when a Kubernetes cluster is provisioned.
  • Customizing Pod Networks: Customize Kubernetes Pod networks, including pod IP addresses, subnet size, and routability.
  • Customize Node Networks: Customize Kubernetes Node networks, including IP addresses, subnet size, and routability.
  • Customize Floating IP Pools: Specify a custom floating IP pool.
  • Configure Bootstrap NSGroups: Specify an NSX Namespace Group (NSGroup) to which the Kubernetes control plane nodes are added during cluster creation.
  • Configure Edge Router Selection: Specify the NSX Tier-0 router to which Kubernetes node and Pod networks are connected.
  • Specify Nodes DNS Servers: Specify one or more DNS servers for Kubernetes clusters.
  • Configure DNS for Pre-Provisioned IPs: Configure DNS lookup of the Kubernetes API load balancer or ingress controller.
  • Configure the TCP Layer 4 Load Balancer: Configure Layer 4 TCP load balancer settings; use a third-party load balancer.
  • Configure the HTTP/S Layer 7 Ingress Controller: Configure Layer 7 HTTP/S ingress controller settings; use a third-party ingress controller.
  • Define DFW Section Markers: Configure top or bottom section markers for explicit DFW rule placement.
  • Configure NCP Logging: Configure NCP logging.
  • Dedicated Tier-1 Topology: Use dedicated Tier-1 routers, rather than a shared router, for each cluster’s Kubernetes nodes, Namespaces, and NSX load balancer.
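
A single network profile can combine more than one of these customizations. The sketch below is illustrative only: the profile name and values are placeholders, and the parameter names shown (lb_size, nodes_dns, pod_subnet_prefix) are examples of documented settings that may vary by TKGI version.

    $ cat custom-profile.json   # illustrative example; see the network profile reference for your TKGI version
    {
      "name": "custom-profile",
      "description": "Medium load balancer, custom node DNS, smaller pod subnets",
      "parameters": {
        "lb_size": "medium",
        "nodes_dns": ["10.0.0.100"],
        "pod_subnet_prefix": 26
      }
    }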