This topic describes how VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) administrators can create and delete network profiles for TKGI-provisioned Kubernetes clusters on vSphere with NSX integration.

This topic also describes the use cases for when a TKGI administrator must use a network profile.



Prerequisite

TKGI supports network profiles only on vSphere with NSX.

To work with TKGI network profiles, you must be either a cluster manager or a cluster administrator:

  • To create or delete a network profile, you must be a cluster administrator: pks.clusters.admin.

  • To use a network profile, you must be a cluster manager: pks.clusters.manage or a cluster administrator: pks.clusters.admin.

Note: If a cluster manager, pks.clusters.manage, attempts to create or delete a network profile, the following error occurs: “You do not have enough privileges to perform this action. Please contact the TKGI administrator.”



Overview

You can use network profiles to customize your TKGI Kubernetes clusters on vSphere with NSX. For information on when to use network profiles, see Network Profile Use Cases below.

TKGI cluster administrators can create and delete network profiles, as described in Create a Network Profile and Delete a Network Profile below.

TKGI cluster administrators can also use network profiles in all the ways that a cluster manager can, such as creating new clusters with a network profile and assigning a network profile to existing clusters.

For information on managing network profiles, see Using and Managing Network Profiles.



Create a Network Profile

The following is the basic structure of a network profile JSON configuration:

{
    "name": "PROFILE-NAME",
    "description": "PROFILE-DESCRIP",
    "parameters": {
        TOP-LEVEL-PARAMETERS,
        "cni_configurations": {
            "type": "nsxt",
            "parameters": {
                CNI-CONFIGURATIONS-PARAMETERS
            }
        }
    }
}

Where:

  • PROFILE-NAME is the internal name of the network profile.
  • PROFILE-DESCRIP is an internal description of the network profile.
  • TOP-LEVEL-PARAMETERS are one or more comma-delimited top-level parameters in a Network Profile. For more information, see Top-Level Parameters below.
  • CNI-CONFIGURATIONS-PARAMETERS are one or more comma-delimited cni_configurations parameters in a Network Profile. For more information, see cni_configurations Parameters below.

To create a network profile in TKGI:

  1. Create a network profile configuration JSON file with the following content:

    {
        "name": "PROFILE-NAME",
        "description": "PROFILE-DESCRIP",
        "parameters": {
    
            "cni_configurations": {
                "type": "nsxt",
                "parameters": {
    
                    "extensions":{
                        "ncp":{
                            "nsx_v3":{
                            },
                            "coe":{
                            },
                            "ha":{
                            },
                            "k8s":{
                            }
                        },
                        "nsx-node-agent":{
                        }
                    }
                }
            }
        }
    }
    

    Where:

    • PROFILE-NAME is the internal name for your network profile.
    • PROFILE-DESCRIP is an internal description for your network profile.
  2. Edit the file to specify your network parameters. For information about the available network parameters, see Network Profile Parameters below.

  3. Review your network profile configuration carefully. If you are modifying an existing cluster, ensure that you are modifying only parameters that support modification. For information on which network profile parameters are updateable in this version of TKGI, see the network profile parameters tagged Updatable in the Network Profile Parameters tables below. You cannot modify any other network profile parameters on an existing cluster.

  4. To create a network profile from your network profile configuration, run the following TKGI CLI command:

    tkgi create-network-profile PATH-TO-YOUR-NETWORK-PROFILE-CONFIGURATION
    

    Where PATH-TO-YOUR-NETWORK-PROFILE-CONFIGURATION is the path to your network profile configuration file.

    For example:

    $ tkgi create-network-profile np-routable-pods.json
    
    Network profile example-network-profile successfully created
    
  5. Store a copy of your network profile configuration in case you need to modify the network profile in the future.

Cluster managers can create new clusters with your network profile and assign your network profile to existing clusters. For information on managing network profiles, see Using and Managing Network Profiles.
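
For example, a cluster manager might assign your network profile when creating a new cluster or when updating an existing cluster. The following commands are a minimal sketch: the cluster name, external hostname, and plan are placeholders, and example-network-profile is the profile name shown in the output above.

$ tkgi create-cluster demo-cluster --external-hostname demo.example.com --plan small --network-profile example-network-profile
$ tkgi update-cluster demo-cluster --network-profile example-network-profile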


Network Profile Example

The following is an example of a complete network profile JSON configuration:

{
    "name": "example-network-profile",
    "description": "Example Network Profile with All Available Parameters -- FOR ILLUSTRATION PURPOSES ONLY",
    "parameters": {
        "lb_size": "large",
        "pod_ip_block_ids": [
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee56" ],
        "pod_subnet_prefix": 27,
        "pod_routable": true,
        "fip_pool_ids": [
            "e50e8f6e-1a7a-45dc-ad49-3a607baa7fa0",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55" ],
        "t0_router_id": "5a7a82b2-37e2-4d73-9cb1-97a8329e1a90",
        "master_vms_nsgroup_id": "9b8d535a-d3b6-4735-9fd0-56305c4a5293",
        "node_ip_block_ids": [
            "2250dc43-63c8-4bb8-b8cf-c6e12ccfb7de", "3d577e5c-dcaf-4921-9458-d12b0e1318e6" ],
        "node_routable": true,
        "node_subnet_prefix": 20,
        "nodes_dns": [
            "8.8.8.8", "192.168.115.1", "192.168.116.1" ],      
        "dns_lookup_mode": "API_INGRESS",
        "ingress_prefix": "ingress",
        "single_tier_topology": true,
        "infrastructure_networks": [
            "30.0.0.0/24",
            "192.168.111.0/24",
            "192.168.115.1" ],
        "failover_mode": "PREEMPTIVE",        
        "cni_configurations": {
            "type": "nsxt",
            "parameters": {
                "nsx_lb": false, 
                "x_forwarded_for": "insert",
                "ingress_ip": "192.168.160.212",
                "log_settings": {
                    "log_level": "DEBUG",
                    "log_firewall_traffic": "ALL" },
                "ingress_persistence_settings": {
                    "persistence_type": "cookie",
                    "persistence_timeout": 1 },
                "max_l4_lb_service": 10,
                "l4_persistence_type": "source_ip",
                "l4_lb_algorithm": "weighted_round_robin",
                "top_firewall_section_marker":"section-id",
                "bottom_firewall_section_marker":"section-id",
                "lb_http_request_header_size":60,
                "lb_http_response_header_size":45,
                "lb_http_response_timeout":30,
                "connect_retry_timeout":30,
                "enable_hostport": true, 
                "enable_nodelocaldns": true,
                "client_ssl_profile": "example_ssl_profile_ID",
                "lb_connection_multiplexing_enabled": true,
                "lb_connection_multiplexing_number": 80,
                "extensions":{
                    "ncp":{
                        "nsx_v3":{
                            "retries":"10"
                        },
                        "coe":{
                            "profiling":"False"
                        },
                        "ha":{
                            "heartbeat_period":6
                        },
                        "k8s":{
                            "lb_ip_allocation":"relaxed"
                        }
                    },
                    "nsx-node-agent":{
                        "connect_retry_timeout":60
                    }

                }  
            }
        }
    }
}

Note: This example network profile is for illustration purposes only. It is not intended to be used as a template for a network profile configuration.



Update an Existing Network Profile

To update an existing cluster’s network profile:

  1. Confirm the Network Profile Property Supports Updates
  2. Create a Modified Network Profile Configuration
  3. Create a Modified Network Profile
  4. Update the Cluster With a Modified Network Profile


Confirm the Network Profile Property Supports Updates

After you create a cluster, you can modify only specific network profile parameters. Ensure that you modify only parameters that support modification.

For information on which network profile parameters are updateable in this version of TKGI, see the network profile parameters tagged Updatable in the Network Profile Parameters tables below. You cannot modify any other network profile parameters on an existing cluster.

For more information, see Update-Cluster Network Profile Validation Rules below.


Create a Modified Network Profile Configuration

To create a modified network profile configuration file:

  1. Make a copy of your original network profile configuration file.

    If it is not possible to obtain the original network profile, create a new network profile with the original values in all of the fields.
  2. Change the name field to a unique name.
  3. If you are updating the pod_ip_block_ids field, reorder the IP Block IDs or add additional IP Block IDs.

    For example, the following network profile has two pod_ip_block_ids: the first is the original IP block used when the cluster was created, and the second is the new IP block to use for pods.

    {
        "description": "Example network profile for adding pod IP addresses to an existing cluster",
        "name": "pod-ips-add",
        "parameters": {
          "pod_ip_block_ids": [
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee55",
            "ebe78a74-a5d5-4dde-ba76-9cf4067eee56"
          ]
        }
    }
    

    Note: Update only network profile properties that support being updated.

    For more information on configuring a network profile, see Network Profile Parameters above.

  4. Review and save the network profile configuration file.

  5. Store a copy of your network profile configuration in case you need to modify the network profile in the future.


Create a Modified Network Profile

To create a network profile from a configuration file:

  1. Run the following TKGI CLI command:

    tkgi create-network-profile PATH-TO-YOUR-NETWORK-PROFILE-CONFIGURATION
    

    Where PATH-TO-YOUR-NETWORK-PROFILE-CONFIGURATION is the path to your network profile configuration file.


Update the Cluster with a Modified Network Profile

To update a cluster with a modified network profile:

  1. If you are updating a cluster that uses a public cloud CSI driver, see Limitations on Using a Public Cloud CSI Driver in Release Notes for additional requirements.
  2. To apply the network profile created above to your cluster, run the following command:

    tkgi update-cluster CLUSTER-NAME --network-profile NETWORK-PROFILE-NAME
    

    Where:

    • CLUSTER-NAME is the unique name of your cluster.
    • NETWORK-PROFILE-NAME is the name of the network profile you want to use for your cluster.
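
    For example, to apply the pod-ips-add network profile created above (the cluster name tkgi-cluster-01 is a placeholder):

    $ tkgi update-cluster tkgi-cluster-01 --network-profile pod-ips-add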

TKGI validates the network profile before updating the cluster with the new network profile. For more information, see Update-Cluster Network Profile Validation Rules below.

WARNING: Update the network profile only on a TKGI cluster that has been upgraded to the current TKGI version. For more information, see Tasks Supported Following a TKGI Control Plane Upgrade in About Tanzu Kubernetes Grid Integrated Edition Upgrades.


Update-Cluster Network Profile Validation Rules

TKGI uses strict validation rules before applying a network profile to a cluster with an existing network profile:

  • If a field in the original network profile is empty, the system ignores the empty field even if the field is included in the new network profile.
  • If a field in the new network profile is empty, the system ignores the field even if the field is not empty in the original network profile.
  • If the pod_ip_block_ids field in the new network profile contains the same entries as the existing network profile, the entry passes validation.
  • If a field in the new network profile conflicts with the field in the existing network profile, the system reports the conflict and fails the validation.



Add a New Network Profile to a Cluster that Does Not Have a Network Profile

If an existing cluster does not use a network profile, the pod IPs in the pod IP block assigned in the TKGI tile might eventually be exhausted. In this case, you can create a new network profile and add it to the existing cluster.

Important: To add pod IPs to a cluster that is already using a network profile, see Add Pod IPs.

However, when you run tkgi update-cluster to add a new network profile that contains only the new pod IP block, the following validation error occurs:

unknown-error","log_level":2,"data":{"error":"Error processing update parameters: field pod_ip_block_ids has conflict"}}

To remediate this error, include the pod IP block assigned in the TKGI tile in the pod_ip_block_ids array along with the new pod IP block. The first pod IP block listed in the pod_ip_block_ids array is used first. When that IP block is exhausted, the next one in the array is used.
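
The following is a sketch of this remediation. The profile name and both UUIDs are illustrative placeholders: the first entry stands in for the pod IP block assigned in the TKGI tile, and the second for the new pod IP block.

{
    "name": "np-pod-ips-from-tile",
    "description": "Adds a new pod IP block alongside the pod IP block assigned in the TKGI tile",
    "parameters": {
        "pod_ip_block_ids": [
            "aaaaaaaa-bbbb-cccc-dddd-eeeeeeee0001",
            "aaaaaaaa-bbbb-cccc-dddd-eeeeeeee0002"
        ]
    }
}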



Delete a Network Profile

TKGI administrators can delete a network profile that is not in use.

To delete a network profile:

  1. Run the following TKGI CLI command:

    tkgi delete-network-profile NETWORK-PROFILE-NAME
    

    Where NETWORK-PROFILE-NAME is the name of the network profile you want to delete.

Note: You cannot delete a network profile that is in use.



Network Profile Parameters

The Top-Level Parameters and cni_configurations Parameters sections below describe the parameters you can add to a Network Profile.


Top-Level Parameters

TKGI supports the following top-level network profile parameters:

Parameter Type Description
name String User-defined name of the network profile.
description String User-defined description for the network profile.
parameters Map Map containing one or more name-value pairs.
cni_configurations Map
Updatable
Map containing type and parameters key-value pairs for configuring NCP (see table below).
dns_lookup_mode String DNS lookup mode.
Values: “API”, “API_INGRESS”.
For Kubernetes API LB: “API”.
For Ingress controller: “API_INGRESS”.
failover_mode String If the preferred node fails and recovers, enable the node to preempt a peer as the active node.
Values: “PREEMPTIVE”, “NON_PREEMPTIVE”.
Default: “PREEMPTIVE”.
fip_pool_ids String
Updatable
Array of floating IP pool UUIDs defined in NSX.
infrastructure_networks String Array of IP addresses and subnets for Node Networks for use with a Shared Tier-1 topology in a Multi-Tier-0 environment.
ingress_prefix String Ingress controller hostname prefix for DNS lookup. If DNS mode is set to API_INGRESS, TKGI creates the cluster with ingress_prefix.hostname as the Kubernetes control plane FQDN. TKGI confirms that the ingress subdomain can be resolved as a subdomain prefix on the host before creating new clusters.
lb_size String Size of the NSX load balancer service.
Values: “small”, “medium”, “large”.
Default: “small”.
master_vms_nsgroup_id String Namespace Group UUID as defined in NSX.
nodes_dns String
Updatable
Array (up to 3) of DNS server IP addresses for lookup of Kubernetes nodes and pods.
pod_ip_block_ids String
Updatable
Array of Pod IP Block UUIDs.
pod_routable Boolean Make the Pods subnet routable.
Values: true, false.
Default: false.
pod_subnet_prefix Integer Size of the Pods IP Block subnet.
single_tier_topology Boolean Use a single Tier-1 Router per cluster (shared).
Values: true, false.
Default: true.
t0_router_id String Tenant Tier-0 Router UUID defined in NSX.

Note: On an existing cluster, you can modify only the network profile parameters that are labeled Updatable.


cni_configurations Parameters

TKGI supports the following cni_configurations parameters:




Parameter Type Description
type Constant
String
Values: “nsxt”.
parameters Map Map containing one or more key-value pairs for NCP settings.
bottom_firewall_section_marker String
Updatable
UUID of the bottom section-id for the distributed firewall (DFW) section as defined in NSX.
See also: top_firewall_section_marker below and Define DFW Section Markers.
client_ssl_profile String
Updatable
The NSX client-side ssl profile to use, exposed by NCP as client_ssl_profile.
Default: The default NCP client SSL profile. For more information, see client_ssl_profile below.
connect_retry_timeout Integer
Updatable
Configure HTTP LoadBalancer connection retry timeout.
Example Value: 30.
See also: lb_http_response_timeout and persistence_timeout.
enable_hostport Boolean
Updatable
Enable NCP support for Kubernetes Host Port.
Values: true, false.
Default: false.
enable_nodelocaldns Boolean
Updatable
Enable NCP support for Kubernetes NodeLocal DNSCache.
Values: true, false.
Default: false.
extensions String
Updatable
Additional NCP and NSX Node Agent settings not included as explicit cni_configurations parameters.
For more information see extensions below.
ingress_ip String IP address to use for the ingress controller load balancer.
ingress_persistence_settings String
Updatable
Map containing one or more key-value pairs for customizing Layer 7 persistence.
See also: persistence_timeout and persistence_type
l4_lb_algorithm String
Updatable
Layer 4 load balancer behavior.
Values: “round_robin”, “least_connection”, “ip_hash”, “weighted_round_robin”.
Default: “round_robin”.
See also: l4_persistence_type and max_l4_lb_service.
l4_persistence_type String
Updatable
Connection stickiness based on source_ip.
Values: “source_ip”.
See also: l4_lb_algorithm and max_l4_lb_service.
lb_connection_multiplexing_enabled Boolean
Updatable
Enable NSX load balancer TCP multiplexing.
Values: true, false.
Default: false.
lb_connection_multiplexing_number Integer
Updatable
The maximum number of NSX load balancer TCP multiplexing connections.
Default: 6.
lb_http_request_header_size Integer
Updatable
Configure HTTP LoadBalancer request header size.
Example Value: 60.
lb_http_response_header_size Integer
Updatable
Configure HTTP LoadBalancer response header size.
Example Value: 45.
lb_http_response_timeout Integer
Updatable
Configure HTTP LoadBalancer response timeout.
Example Value: 30.
See also: connect_retry_timeout and persistence_timeout.
log_dropped_traffic Boolean
Updatable
Deprecated.
Use log_firewall_traffic instead.
Log dropped firewall traffic.
Values: true, false.
Default: false.
A log_settings parameter. See also: log_firewall_traffic, log_level, log_settings.
log_firewall_traffic String
Updatable
Log firewall traffic.
Values: ALL, ALLOW, DENY.
Default: DENY.
A log_settings parameter. See also: log_level, log_settings.
log_level String
Updatable
Values: “INFO”, “WARNING”, “DEBUG”, “ERROR”, “CRITICAL”.
A log_settings parameter. See also: log_firewall_traffic, log_settings.
log_settings Map
Updatable
Parameters for configuring NCP logging.
See also: log_dropped_traffic, log_firewall_traffic, log_level.
max_l4_lb_service Integer
Updatable
Limit the maximum number of layer 4 virtual servers per cluster.
Minimum Value:1.
See also: l4_lb_algorithm and l4_persistence_type.
nsx_ingress_controller Boolean Deprecated.
Use NSX layer 7 virtual server as the ingress controller for the Kubernetes cluster.
Values: true, false.
Default: true.
nsx_lb Boolean
Updatable
Use NSX layer 4 virtual server for each Kubernetes service of type LoadBalancer.
Values: true, false.
Default: true.
persistence_timeout Integer
Updatable
An ingress_persistence_settings parameter. Persistence timeout interval in seconds.
See also: connect_retry_timeout and lb_http_response_timeout.
persistence_type String
Updatable
An ingress_persistence_settings parameter. Specify the ingress persistence type.
Values: “none”, “cookie”, “source_ip”.
Default: “none”.
top_firewall_section_marker String
Updatable
UUID of the top section-id for the distributed firewall (DFW) section as defined in NSX.
See also: bottom_firewall_section_marker above and Define DFW Section Markers.
x_forwarded_for String
Updatable
Sets the original client source IP in the request header. Enabling the network profile x_forwarded_for parameter automatically enables the x_forwarded_port and x_forward_protocol parameters.
Values: “none”, “insert”, “replace”.
Default: “none”.

Note: On an existing cluster, you can modify only the network profile parameters that are labeled Updatable.


cni_configurations Extensions Parameters

Configure the less commonly configured NCP and NSX Node Agent settings in the network profile CNI configuration extensions field.

Use the network profiles extensions field to configure an NCP ConfigMap or NSX Node Agent ConfigMap property that is applicable to TKGI but is not explicitly supported as a cni_configurations parameter.

NCP and NSX Node Agent settings supported as explicit Network Profiles parameters cannot be configured through extensions.

For more information about any of these parameters, see The nsx-ncp-config ConfigMap and The nsx-node-agent-config ConfigMap in the VMware NSX Container Plugin documentation.



Parameter Type Description
ncp.k8s.enable_namespace_subnets Boolean
Updatable
Policy Only
Allow user to set ncp/subnets annotation on namespace to specify the subnets for no-snat namespace.
Values: TRUE, FALSE.
Default: FALSE.
ncp.k8s.http_ingress_port Integer
Updatable
Port for HTTP ingress.
Default: 80.
ncp.k8s.https_ingress_port Integer
Updatable
Port for HTTPS ingress.
Default: 443.
ncp.k8s.label_filtering_regex_list String List
Updatable
List of regex expressions defining the labels that must not be converted to NSX tags. See label_filtering Settings below for guidance.
Default: "".
ncp.k8s.lb_ip_allocation String
Updatable
Allow a virtual IP that is not in the range of external_ip_pools_lb specified in Kubernetes service spec.loadBalancerIP.
Values: relaxed, strict.
Default: relaxed.
ncp.k8s.statefulset_ip_range Boolean
Updatable
Policy Only
Locate Pod IP of StatefulSet in the StatefulSet annotation IP range.
Values: TRUE, FALSE.
Default: FALSE.
ncp.nsx_v3.cluster_unavailable_retry Boolean
Updatable
Skip fatal errors and retry the request instead when no endpoint in the NSX management cluster is available to serve a request.
Default: FALSE.
ncp.nsx_v3.concurrent_connections Integer
Updatable
Maximum concurrent connections to all NSX managers.
Default: 10.
ncp.nsx_v3.conn_idle_timeout Integer
Updatable
Time in seconds to wait before ensuring connectivity to the NSX manager.
Default: 10.
ncp.nsx_v3.http_retries Integer
Updatable
Maximum number of times to retry an HTTP connection.
Default: 3.
ncp.nsx_v3.http_timeout Integer
Updatable
The time in seconds before aborting a HTTP connection to a NSX Manager.
Default: 10.
ncp.nsx_v3.l4_lb_auto_scaling Boolean
Updatable
L4 load balancer auto scaling mode.
Values: TRUE, FALSE.
Default: TRUE.
Updating from TRUE to FALSE is not recommended.
Must be set to FALSE before activating transparent mode load balancing via annotations in NCP. Transparent mode load balancing also requires a single-tier NSX Policy API topology. For additional transparent mode requirements, see Limitations in VMware NSX Container Plugin 4.1.0 Release Notes in the NCP documentation.
Requires NCP v4.0.0 or later.
ncp.nsx_v3.members_per_medium_lbs Integer Policy Only
Limit for medium load balancer.
Default: 2000.
Requires NCP v4.0.1 or later.
ncp.nsx_v3.members_per_small_lbs Integer Policy Only
Limit for small load balancer.
Default: 2000.
Requires NCP v4.0.1 or later.
ncp.nsx_v3.natfirewallmatch String List Policy Only
How firewall is applied to a traffic packet coming through NAT.
Values: BYPASS, MATCH_EXTERNAL_ADDRESS, MATCH_INTERNAL_ADDRESS.
Default: MATCH_INTERNAL_ADDRESS.
Requires NCP v4.0.1 or later.
ncp.nsx_v3.ncp_enforced_pool_member_limit String List Policy Only
Strategy for limiting pool members when scale validation is relaxed.
Values: ACTIVATE, CRD_LB_ONLY, DEACTIVATE.
Default: DEACTIVATE.
Requires NCP v4.0.1 or later.
ncp.nsx_v3.relax_scale_validation String
Updatable
Policy Only
Suspend validation of the number of load balancer pools and virtual servers.
Values: TRUE, FALSE.
Default: FALSE.
Requires NCP v4.0.1 or later.
ncp.nsx_v3.retries Integer
Updatable
Maximum number of times to retry API requests upon stale revision errors.
Default: 10.
ncp.nsx_v3.snat_rule_logging String
Updatable
Enable logging for snat rule.
Values: none, basic, extended.
Default: none.
nsx_node_agent.connect_retry_timeout Integer
Updatable
The time in seconds for nsx_node_agent to recover the Hyperbus connection.
Default: 220.

Note: On an existing cluster, you can modify only the network profile parameters that are labeled Updatable.

label_filtering Settings

In your network profile, under parameters.cni_configurations.parameters.extensions, set ncp.k8s.label_filtering_regex_list to a list of regular expressions that define the labels that must not be converted to NSX tags.

Disable tag generation: To completely disable generating NSX tags from labels, set label_filtering_regex_list to .*:

  "ncp": {
    "k8s": {
      "label_filtering_regex_list": ".*"
    }
  }

Filter out TAP labels: If you are using Tanzu Application Platform (TAP), TKGI and TAP together might generate more NSX tags from Kubernetes labels than NSX allows. To address this known issue, set label_filtering_regex_list to filter out labels generated by TAP:

  "ncp": {
    "k8s": {
      "label_filtering_regex_list": "^app.kubernetes.io.*, ^app.tanzu.vmware.com.*, ^carto.run.*, ^image.kpack.io.*, ^kapp.k14s.io.*, ^networking.internal.knative.dev.*, ^networking.knative.dev.*, .*scanning.apps.tanzu.vmware.com.*, ^services.conventions.carto.run.*, ^serving.knative.dev.*, ^statefulset.kubernetes.io.*, ^target.*, ^tanzu.app.live.view.*, ^tekton.dev.*"
    }
  }

Parameter Descriptions

The following describes commonly used network profile parameters:

client_ssl_profile

The primary use case for updating the network profile client_ssl_profile field is to configure which NSX client-side ssl profile is to be used by NCP.

You can change the client_ssl_profile parameter to a valid NSX client-side ssl profile ID.

Configure client_ssl_profile in the cni_configurations parameters section of your network profile.

If client_ssl_profile is blank or incorrect, the default NCP client SSL profile value is used instead:

  • In Management API mode the default is nsx-default-client-ssl-profile.
  • In Policy API mode the default is default-balanced-client-ssl-profile.

Both of these default NCP client SSL profiles use balanced-level pre-defined ciphers.

enable_hostport

The primary use case for updating the network profile enable_hostport field is to activate or deactivate NCP support for Kubernetes Host Port.

You can change the enable_hostport parameter to either true or false. Configure enable_hostport in the cni_configurations parameters section of your network profile.

The Kubernetes Host Port feature exposes an application on a single port so that it is accessible from outside your cluster. For more information on Host Port, see Pod Security Policies in the Kubernetes documentation.

enable_nodelocaldns

The primary use case for updating the network profile enable_nodelocaldns field is to activate or deactivate NCP support for Kubernetes NodeLocal DNSCache.

You can change the enable_nodelocaldns parameter to either true or false. Configure enable_nodelocaldns in the cni_configurations parameters section of your network profile.

NodeLocal DNSCache improves cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. For more information on NodeLocal DNSCache, see Using NodeLocal DNSCache in Kubernetes clusters in the Kubernetes documentation.
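
The following is a minimal sketch of where the enable_hostport and enable_nodelocaldns parameters described above, together with client_ssl_profile, sit in the cni_configurations parameters section. The profile name and the SSL profile ID are illustrative placeholders.

{
    "name": "np-cni-settings",
    "description": "Illustrative cni_configurations settings -- placeholders only",
    "parameters": {
        "cni_configurations": {
            "type": "nsxt",
            "parameters": {
                "client_ssl_profile": "YOUR-NSX-CLIENT-SSL-PROFILE-ID",
                "enable_hostport": true,
                "enable_nodelocaldns": true
            }
        }
    }
}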

extensions

The primary use case for configuring the network profile CNI configuration extensions field is to configure the less commonly configured NCP and NSX Node Agent settings.

Use the network profiles extensions field to configure an NCP ConfigMap or NSX Node Agent ConfigMap property that is applicable to TKGI but is not explicitly supported as a cni_configurations parameter.

NCP and NSX Node Agent settings supported as explicit Network Profiles parameters cannot be configured through extensions.

For more information, see cni_configurations Extensions Parameters above.

fip_pool_ids

The primary use case for updating the network profile fip_pool_ids field is to add additional NSX floating IP pool UUIDs, used for cluster and load balancer IP addresses, to a cluster.

To add NSX floating IP pool UUIDs to a cluster:

  • If creating a new cluster:

    • To use only the default floating IP Pool, leave the fip_pool_ids field empty or add the UUID for the default floating IP Pool to the fip_pool_ids field.
    • To use both the default and your defined floating IP Pools, add the UUIDs for both to the network profile fip_pool_ids field.
    • To replace the default floating IP Pool with your defined floating IP Pool, add only the UUIDs for your defined floating IP Pools to the network profile fip_pool_ids field.
  • If adding a fip_pool_ids parameter array to the network profile of an existing cluster:

    • You must include the UUID of the default floating IP pool in your fip_pool_ids parameter array.
  • If modifying an existing fip_pool_ids parameter array on the network profile of an existing cluster:

    • You can add more floating IP Pool UUIDs to the array.
    • You can reorder the floating IP Pool UUIDs in the array.
    • You cannot remove any of the floating IP Pool UUIDs from an existing fip_pool_ids parameter array. For example, do not create a copy of a network profile, remove fip_pool_ids array values, and assign the new profile to the cluster that has the original profile assigned.
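
For example, the following sketch adds a defined floating IP pool to the network profile of an existing cluster while keeping the default pool. The profile name and both UUIDs are illustrative placeholders: the first entry stands in for the default floating IP pool, and the second for the additional pool.

{
    "name": "np-custom-fip",
    "description": "Adds a floating IP pool while keeping the default pool -- placeholders only",
    "parameters": {
        "fip_pool_ids": [
            "11111111-2222-3333-4444-555555550001",
            "11111111-2222-3333-4444-555555550002"
        ]
    }
}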

Note: TKGI allocates IP Addresses from the start of the floating IP pool range. To avoid conflicts with internal TKGI functions, always use IP addresses from the end of the floating IP pool. For more information, see Failed to Allocate FIP from Pool in General Troubleshooting.

For information on modifying a network profile fip_pool_ids field, see Customize Floating IP Pools. For more information on the fip_pool_ids field, see Network Profile Parameters above.

lb_connection_multiplexing_enabled

The primary use case for updating the network profile lb_connection_multiplexing_enabled field is to enable NSX load balancer TCP multiplexing.

You can change the lb_connection_multiplexing_enabled parameter to either true or false.

Configure lb_connection_multiplexing_enabled in the cni_configurations parameters section of your network profile.

lb_connection_multiplexing_number

The primary use case for updating the network profile lb_connection_multiplexing_number field is to configure the maximum number of NSX load balancer TCP multiplexing connections.

You can change the lb_connection_multiplexing_number parameter to an integer value.

Configure lb_connection_multiplexing_number in the cni_configurations parameters section of your network profile.
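
The following is a minimal sketch showing both multiplexing settings in the cni_configurations parameters section. The profile name is an illustrative placeholder; the connection count of 80 matches the value used in the example network profile above.

{
    "name": "np-lb-multiplexing",
    "description": "Enables NSX load balancer TCP multiplexing -- illustrative values only",
    "parameters": {
        "cni_configurations": {
            "type": "nsxt",
            "parameters": {
                "lb_connection_multiplexing_enabled": true,
                "lb_connection_multiplexing_number": 80
            }
        }
    }
}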

nodes_dns

The primary use case for updating the network profile nodes_dns field is to update the DNS server configuration for a cluster.

You can configure the network profile nodes_dns field to add, modify or remove IP addresses from a cluster DNS server configuration. For more information on the network profile nodes_dns field and an example of a nodes_dns configuration, see Specify Nodes DNS Servers.
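
For example, the following sketch sets nodes_dns as a top-level parameter, using the DNS server IP addresses from the example network profile above. The profile name is an illustrative placeholder.

{
    "name": "np-nodes-dns",
    "description": "Specifies DNS servers for Kubernetes node and pod lookup",
    "parameters": {
        "nodes_dns": [
            "8.8.8.8",
            "192.168.115.1",
            "192.168.116.1"
        ]
    }
}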

Note: If you modify a DNS server configuration, do not exceed the maximum of three DNS server IP addresses.

pod_ip_block_ids

The primary use case for updating the network profile pod_ip_block_ids field is to add additional IP addresses for pods when a cluster is at or near the point of exhausting all available pod IP addresses.

You can change the pod_ip_block_ids parameter array as follows:

  • Add more IP Block IDs to the array.
  • Reorder the IP Block IDs in the array.

You cannot remove any of the IP Block IDs from an existing pod_ip_block_ids parameter array. Do not create a copy of a network profile, remove pod_ip_block_ids array values, and assign the new profile to a cluster that has the original profile assigned.

For more information on modifying a network profile pod_ip_block_ids field, see Add Pod IPs in Customizing Pod Networks. For more information on the pod_ip_block_ids field, see Network Profile Parameters above.


Network Profile Use Cases

Network profiles let you customize configuration parameters for Kubernetes clusters provisioned by TKGI on vSphere with NSX.

You can apply a network profile to a Kubernetes cluster for the following scenarios:

Topic Description
Size a Load Balancer Customize the size of the NSX load balancer service that is created when a Kubernetes cluster is provisioned.
Customizing Pod Networks Customize Kubernetes Pod Networks, including adding pod IP addresses, subnet size, and routability.
Customize Node Networks Customize Kubernetes Node Networks, including the IP addresses, subnet size, and routability.
Customize Floating IP Pools Specify a custom floating IP pool.
Configure Bootstrap NSGroups Specify an NSX Namespace Group to which the Kubernetes control plane nodes are added during cluster creation.
Configure Edge Router Selection Specify the NSX Tier-0 router to which Kubernetes node and Pod networks are connected.
Specify Nodes DNS Servers Specify one or more DNS servers for Kubernetes clusters.
Configure DNS for Pre-Provisioned IPs Configure DNS lookup of the Kubernetes API load balancer or ingress controller.
Configure the TCP Layer 4 Load Balancer Configure layer 4 TCP load balancer settings; use a third-party load balancer.
Configure the HTTP/S Layer 7 Ingress Controller Configure layer 7 HTTP/S ingress controller settings; use a third-party ingress controller.
Define DFW Section Markers Configure top or bottom section markers for explicit DFW rule placement.
Configure NCP Logging Configure NCP logging.
Dedicated Tier-1 Topology Use dedicated Tier-1 routers, rather than a shared router, for each cluster’s Kube node, Namespace, and NSX load balancer.