When you deploy Supervisor, you configure the default settings for the workload network. If you have enabled Supervisor with NSX networking, you can override the default workload network settings when you create a vSphere Namespace. The overridden workload network settings apply only to the network segment of that vSphere Namespace.

Override Workload Network Settings (NSX Only)

When you create a vSphere Namespace, a network segment is created. By default, this network segment is derived from the Workload Network configured in Supervisor. See vSphere Namespace Network for more information.

If Supervisor is configured with NSX networking, during the creation of a vSphere Namespace you can select Override cluster network settings for that vSphere Namespace. Selecting this option lets you customize the vSphere Namespace network by adding CIDRs in the Ingress, Egress, and Namespace Network fields. The CIDRs you add override the existing CIDRs for this vSphere Namespace instance.

If you have configured NSX version 4.1.1 or later, and have installed, configured, and registered NSX Advanced Load Balancer version 22.1.4 or later with an Enterprise license on NSX, the NSX Advanced Load Balancer is used as the load balancer with NSX. If you have configured a version of NSX earlier than 4.1.1, the NSX load balancer is used. For more information, see Supervisor Networking in vSphere IaaS Control Plane Concepts and Planning.
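As a rough illustration of this decision rule (not an actual API; the platform determines the load balancer automatically), the following Python sketch encodes the version and license conditions described above.

def load_balancer_in_use(nsx_version, alb_registered, alb_version, alb_enterprise):
    # Illustrative only: encodes the conditions described in the paragraph above.
    if (nsx_version >= (4, 1, 1) and alb_registered
            and alb_version >= (22, 1, 4) and alb_enterprise):
        return "NSX Advanced Load Balancer"
    return "NSX load balancer"

print(load_balancer_in_use((4, 1, 1), True, (22, 1, 4), True))     # NSX Advanced Load Balancer
print(load_balancer_in_use((4, 0, 1), False, (0, 0, 0), False))    # NSX load balancer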

The typical use case for overriding Supervisor network settings is to provision a TKG cluster with routable pod networking. Refer to the configuration settings in the table for details on how to do this and links to examples.

Table 1. vSphere Namespace Network Planning Considerations
Consideration Description
NSX Required

To override Supervisor network settings for a particular vSphere Namespace, Supervisor must be configured with NSX networking.

NSX Installation

To override Supervisor network settings for a particular vSphere Namespace, the installation of NSX must include an Edge Cluster dedicated to Tier-0 Gateways (routers) and another Edge Cluster dedicated to Tier-1 Gateways. Refer to the NSX installation guide Installing and Configuring vSphere with Tanzu.

IPAM Required

If you override Supervisor network settings for a particular vSphere Namespace, the new vSphere Namespace network must specify Ingress, Egress, and Namespace Network subnets that do not overlap with those of Supervisor or of any other vSphere Namespace network. You must manage IP address allocation accordingly; for an example of checking that the CIDRs are unique, see the sketch after this table.

Supervisor Routing

Supervisor must be able to route directly to the TKG cluster nodes and ingress subnets. When selecting a Tier-0 Gateway for the vSphere Namespace, you have two options for configuring the required routing:
  • Use a Virtual Routing and Forwarding (VRF) Gateway to inherit the configuration from the Supervisor Tier-0 Gateway
  • Use the Border Gateway Protocol (BGP) to configure routes between the Supervisor Tier-0 Gateway and the dedicated Tier-0 Gateway

Refer to the NSX Tier-0 Gateways documentation for details on these options.
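The following is a minimal sketch, using only Python's standard ipaddress module and hypothetical CIDR values, of the kind of overlap check you might run when planning the override CIDRs for a new vSphere Namespace.

import ipaddress

# CIDRs already in use by Supervisor and existing vSphere Namespaces
# (hypothetical example values; substitute your own allocations).
existing = {
    "supervisor-namespace-network": "10.244.0.0/20",
    "supervisor-ingress": "192.168.100.0/24",
    "supervisor-egress": "192.168.101.0/24",
}

# Proposed override CIDRs for the new vSphere Namespace (also hypothetical).
proposed = {
    "namespace-network": "10.245.0.0/20",
    "ingress": "192.168.102.0/24",
    "egress": "192.168.103.0/24",
}

def find_conflicts(existing, proposed):
    # Return (proposed_name, existing_name) pairs whose subnets overlap.
    conflicts = []
    for new_name, new_cidr in proposed.items():
        new_net = ipaddress.ip_network(new_cidr)
        for old_name, old_cidr in existing.items():
            if new_net.overlaps(ipaddress.ip_network(old_cidr)):
                conflicts.append((new_name, old_name))
    return conflicts

print(find_conflicts(existing, proposed) or "No overlaps: CIDRs are unique.")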

The following table describes the configuration fields for overriding Supervisor network settings.
Table 2. vSphere Namespace Configuration Options for Overriding Workload Network Settings
Component Configuration
Tier-0 Gateway

The NSX Tier-0 Gateway connects Supervisor to the physical network. The selected Tier-0 Gateway will be associated with the Tier-1 Gateway that is created for the vSphere Namespace.

Selecting a new Tier-0 Gateway overrides the Tier-0 Gateway configured when Supervisor was enabled. In this case you must configure new CIDR ranges. If you select a VRF Gateway that is linked to the Tier-0 Gateway, the network and subnets are automatically configured.

Once you select a Tier-0 Gateway and complete its configuration, you cannot change the Tier-0 Gateway.

Load Balancer Size

Select the size of the load balancer instance on the Tier-1 gateway for the vSphere Namespace.

Set the size of the load balancer to small (default), medium, or large. Only a set number of load balancer instances can be defined per Edge Node. Refer to Config Max for details.

Note:

If you are using the NSX Advanced Load Balancer, this setting is supported beginning with vSphere 8 U2 and avi-22.1.5.

NAT Mode

NAT mode is selected by default. This means that the Namespace Network subnet is expected to be non-routable, and you must configure the Namespace Network, Ingress, and Egress CIDRs.

Deselecting NAT mode tells the system you are going to provide a routable CIDR range for the Namespace Network. If you deselect NAT mode, the TKG cluster node IP addresses are directly accessible from outside the Tier-0 Gateway, and you do not have to configure the Egress CIDR.

To provision a cluster without NAT mode, deselect NAT mode and refer to the examples in Provisioning TKG Service Clusters.
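As a rough summary of the difference between the two modes (hypothetical field names, not an actual API), the following sketch lists the CIDR fields each mode calls for.

def required_cidr_fields(nat_mode):
    # Illustrative only: which override CIDR fields each mode calls for.
    if nat_mode:
        # NAT mode: the namespace network is non-routable; SNAT needs an egress pool.
        return ["namespace_network", "ingress", "egress"]
    # No-NAT (routed) mode: the namespace network must be routable; no egress CIDR is needed.
    return ["namespace_network (routable)", "ingress"]

print(required_cidr_fields(nat_mode=True))
print(required_cidr_fields(nat_mode=False))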

Namespace Network CIDR

The Namespace Network CIDR is a subnet that operates as an IP pool. The Namespace Subnet Prefix determines the size of each CIDR block that is carved out of that pool.

Each time a vSphere Namespace is created, a subnet is allocated from the Namespace Network. The size of that subnet is determined by the Namespace Subnet Prefix; for example, a /24 prefix provides a block of 256 IP addresses per vSphere Namespace. Refer to Config Max for details.

The Namespace Network CIDR is used to allocate IP addresses to TKG clusters attached to vSphere Namespace segments.

If NAT mode is selected, the CIDR is expected to be non-routable. If NAT mode is deselected, the Namespace Network CIDR must be routable.

Namespace Subnet Prefix

Enter the subnet prefix that specifies the size of the subnet reserved for namespace segments. The default is /28.

The Namespace Subnet Prefix defines the size of the IP subnet created for each vSphere Namespace segment. For example, a /24 prefix results in a vSphere Namespace segment with 254 usable IP addresses to allocate to the TKG clusters deployed there.

Additional example:

Namespace Network CIDR = 192.168.1.0/24

Namespace Subnet Prefix = /28

In this case, TKGS can carve 16 192.168.1.x/28 CIDR blocks out of the 192.168.1.0/24 subnet. This allows the instantiation of 16 TKGS namespace networks to which your TKG Service-managed workloads (TKCs, VMs, vSphere Pods) are connected. For example, every TKC receives a dedicated namespace CIDR: the first might be 192.168.1.0/28, the next 192.168.1.16/28, and so on.
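You can verify this arithmetic with Python's standard ipaddress module; the following minimal sketch enumerates the /28 blocks carved out of 192.168.1.0/24.

import ipaddress

namespace_network = ipaddress.ip_network("192.168.1.0/24")
namespace_subnet_prefix = 28

# Carve the Namespace Network into /28 blocks, one per namespace segment.
blocks = list(namespace_network.subnets(new_prefix=namespace_subnet_prefix))

print(len(blocks))    # 16 blocks available
print(blocks[0])      # 192.168.1.0/28  (first namespace segment)
print(blocks[1])      # 192.168.1.16/28 (second namespace segment)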

Ingress

The Ingress IP CIDR block is used to allocate IP addresses for Kubernetes services that are published by a service of type LoadBalancer and by an ingress controller across all vSphere Namespaces. TKG cluster services and ingresses get their IP addresses from this CIDR block.

Enter a CIDR that determines the ingress IP range for the virtual IP addresses published by the load balancer service for TKG clusters.

Note: This setting is not applicable for the NSX Advanced Load Balancer.
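As an illustration of how the Ingress CIDR bounds the number of load-balanced endpoints, the following minimal sketch (hypothetical CIDR value) counts the candidate virtual IP addresses in an example ingress block.

import ipaddress

ingress_cidr = ipaddress.ip_network("192.168.102.0/24")   # hypothetical ingress block

# Each service of type LoadBalancer or ingress virtual IP consumes one address from this pool.
print(ingress_cidr.num_addresses)        # 256 addresses in the block
print(list(ingress_cidr.hosts())[:3])    # first few candidate virtual IPs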
Egress

Egress IP CIDR is used to allocate IP addresses for SNAT (Source Network Address Translation) for traffic exiting the vSphere Namespace to access external services.

Enter a CIDR that determines the egress IP range for the SNAT IP addresses.
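The following is a conceptual sketch only (hypothetical CIDRs; NSX performs the actual translation) of how SNAT maps a non-routable namespace address to an egress IP address.

import ipaddress

namespace_network = ipaddress.ip_network("10.245.0.0/20")              # hypothetical non-routable namespace network
egress_pool = list(ipaddress.ip_network("192.168.103.0/27").hosts())   # hypothetical egress (SNAT) pool

def snat(source_ip):
    # Conceptual only: NSX rewrites the source address of traffic leaving the namespace.
    if ipaddress.ip_address(source_ip) in namespace_network:
        return str(egress_pool[0])
    return source_ip

print(snat("10.245.3.17"))   # a node or pod address is rewritten to 192.168.103.1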