This topic provides instructions for installing and configuring NSX-T Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition on vSphere.

Prerequisites for Installing NSX-T Data Center v3.0 for Tanzu Kubernetes Grid Integrated Edition

To perform a new installation of NSX-T Data Center for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.

  1. Verify NSX-T v3.0 support for your TKGI version. For more information, see the Release Notes for the TKGI version you are installing.

  2. Read the topics in the Preparing to Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Data Center section of the documentation.

  3. Read the Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI topic.

Install the NSX-T Management Hosts

Create the NSX-T Management cluster by installing three NSX-T Manager appliances and configuring a VIP address.


Deploy NSX-T Manager 1

Deploy the NSX-T Manager OVA in vSphere. Download the OVA from the VMware software download site.

  1. Using the vSphere Client, right-click the vCenter cluster and select Deploy OVF Template.
  2. At the Select an OVF Template screen, browse to and select the NSX Unified Appliance OVA file.
  3. At the Select a name and folder screen, select the target Datacenter object.
  4. At the Select a compute resource screen, select the target vCenter cluster.
  5. Review the details.
  6. At the Configuration screen, select at least Medium for the configuration size.
  7. At the Select storage screen, choose Thin Provision and the desired datastore.
  8. For Network1, enter the VLAN management network. For example, PG-MGMT-VLAN-1548.
  9. Enter strong passwords for all user types.
  10. Enter the hostname. For example, nsx-manager-1.
  11. Enter the role name. For example, NSX Manager.
  12. Enter the Gateway IP address. For example, 10.173.62.253.
  13. Enter a public IP address for the VM. For example, 10.173.62.44.
  14. Enter the Netmask. For example, 255.255.255.0.
  15. Enter the DNS server. For example, 10.172.40.1.
  16. Enter the NTP server. For example, 10.113.60.176.
  17. Select the Enable SSH checkbox.
  18. Select the Allow SSH root logins checkbox.
  19. Click Finish. The NSX-T Manager 1 starts deploying.
  20. Monitor the deployment in the Recent Tasks pane.
  21. When the deployment completes, select the VM and power it on.
  22. Access the NSX-T Manager 1 web console by navigating to the URL. For example, https://10.173.62.44/.
  23. Log in and verify the installation. Note the system message that a “3 node cluster” is recommended.
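
Optionally, because SSH was enabled during deployment, you can also check the appliance from the NSX CLI. The following is a minimal, supplemental check using the example management IP address from above:

ssh admin@10.173.62.44

nsx-manager-1> get cluster status
nsx-manager-1> get services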

Add vCenter as the Compute Manager

A compute manager is required for NSX-T environments with multiple NSX-T Manager nodes. A compute manager is an application that manages resources such as hosts and VMs. For TKGI, use the vCenter Server as the compute manager.

Complete the following steps to add vCenter as the Compute Manager. For more information, see the NSX-T documentation.

  1. In the NSX Management console, navigate to System > Appliances.
  2. Select Compute Managers.
  3. Click Add.
  4. Enter a Name. For example, vCenter.
  5. Enter an IP address. For example, 10.173.62.43.
  6. Enter the vCenter username. For example, administrator@vsphere.local.
  7. Set the Enable Trust toggle to Yes.
  8. Click Add.
  9. Click Add again at the thumbprint warning.
  10. Verify that the Compute Manager is added and registered.
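
If you prefer to verify from the API instead of the UI, you can list the registered compute managers with curl. This is a supplemental check rather than part of the official procedure; ADMIN-PASSWORD is a placeholder for your NSX-T admin password, and the IP address is the example NSX-T Manager address used above:

# List registered compute managers; the vCenter entry you added should appear.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.44/api/v1/fabric/compute-managers"

# Check registration and connection status for a specific compute manager,
# using the "id" value returned by the previous call.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.44/api/v1/fabric/compute-managers/COMPUTE-MANAGER-ID/status"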

Deploy NSX-T Manager 2

Use the NSX-T Management Console to deploy an additional NSX-T Manager node as part of the NSX-T Management layer. For more information, see the NSX-T documentation.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Select Add NSX Appliance.
  3. Enter a hostname. For example, nsx-manager-2.
  4. Enter the Management IP/netmask. For example, 10.173.62.45/24.
  5. Enter the Gateway. For example, 10.173.62.253.
  6. For the Node size, select medium.
  7. For the Compute Manager, select vCenter.
  8. For the Compute Cluster, enter MANAGEMENT-cluster.
  9. For the Datastore, select the datastore. For example, datastore2.
  10. For the Virtual Disk Format, select thin provision.
  11. For the Network, select the VLAN management network. For example, PG-MGMT-VLAN-1548.
  12. Select Enable SSH.
  13. Select Enable root access.
  14. Enter a strong password.
  15. Click Install Appliance.
  16. Verify that the NSX-T Manager 2 appliance is added.


Deploy NSX-T Manager 3

Use the NSX-T Management Console to deploy a third NSX-T Manager node as part of the NSX-T Management layer. For more information, see the NSX-T documentation.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Select Add NSX Appliance.
  3. Enter a hostname. For example, nsx-manager-3.
  4. Enter the Management IP/netmask. For example, 10.173.62.46/24.
  5. Enter the Gateway. For example, 10.173.62.253.
  6. For the Node size, select medium.
  7. For the Compute Manager, select vCenter.
  8. For the Compute Cluster, enter MANAGEMENT-cluster.
  9. For the Datastore, select the datastore. For example, datastore2.
  10. For the Virtual Disk Format, select thin provision.
  11. For the Network, select the VLAN management network. For example, PG-MGMT-VLAN-1548.
  12. Select Enable SSH.
  13. Select Enable root access.
  14. Enter a strong password.
  15. Click Install Appliance.
  16. Verify that the NSX-T Manager 3 appliance is added.

Configure the NSX-T Management VIP

The NSX-T Management layer includes three NSX-T Manager nodes. To support a single access point, assign a virtual IP address (VIP) to the NSX-T Management layer. Once the VIP is assigned, any UI and API requests to NSX-T are redirected to the virtual IP address of the cluster, which is owned by the leader node. The leader node then forwards the request to the other components of the appliance.

Using a VIP makes the NSX Management Cluster highly available. If you need to scale, an alternative to the VIP is to provision a load balancer for the NSX-T Management Cluster. Provisioning a load balancer requires that NSX-T be fully installed and configured. VMware recommends that you configure the VIP now, then install a load balancer after NSX-T is installed and configured, if needed.

Complete the following instructions to create a VIP for the NSX Management Cluster. The IP address you use for the VIP must be part of the same subnet as the NSX-T Management nodes.

  1. In the NSX Management Console, navigate to System > Appliances.
  2. Click the Set Virtual IP button.
  3. Enter a Virtual IP address. For example, 10.173.62.47.
  4. Verify that the VIP is added.

  5. Access the NSX-T Management console using the VIP. For example, https://10.173.62.47/.
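
As an additional check, the cluster API reports the configured virtual IP. The following curl call is a supplemental sketch using the example VIP from above; ADMIN-PASSWORD is a placeholder for your NSX-T admin password:

# Returns the virtual IP address configured for the NSX-T Management cluster.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/cluster/api-virtual-ip"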

Enable the NSX-T Manager Interface

The NSX Management Console provides two user interfaces: Policy and Manager. TKGI requires the Manager interface for configuring networking and security objects. Do NOT use the Policy interface for TKGI objects.

  1. In the NSX-T Manager console, navigate to System > User Interface Settings.
  2. Click Edit.
  3. For the Toggle Visibility field, select Visible to all Users.
  4. For the Default Mode field, select Manager.
  5. Click Save.

  6. Refresh the NSX-T Manager Console and navigate to an area of the console that is not listed under System.
  7. In the upper-right area of the console, verify that the Manager option is enabled.

Add the NSX-T Manager License

If you do not add the proper NSX-T license, you will receive an error when you try to deploy an Edge Node VM.

  1. In the NSX-T Manager console, navigate to System > Licenses.
  2. Add the NSX Data Center Advanced (CPU) license.
  3. Verify that the license is added.

Generate and Register the NSX-T Management SSL Certificate and Private Key

An SSL certificate is automatically created for each NSX-T Manager. You can verify this by SSHing to one of the NSX-T Manager nodes and running the following command.

nsx-manager-1> get certificate cluster

You will see that the Subject Alternative Name (SAN) listed in the certificate is the hostname of the appliance, for example SAN=nsx-manager-1. This means the cluster certificate is linked to a particular NSX-T Manager, in this case NSX-T Manager 1.

If you examine System > Certificates, you will see that there is no certificate for the NSX-T Manager VIP. You must generate a new SSL certificate that uses the NSX-T Management VIP address so that the cluster certificate contains SAN=VIP-ADDRESS.

Complete the following steps to generate and register an SSL certificate and private key that uses the VIP address. The following steps assume that you are working on a Linux host where OpenSSL is installed.

Generate the SSL Certificate and Private Key

  1. Create a certificate signing request file named nsx-cert.cnf and populate it with the contents below.

    [ req ]
    default_bits = 2048
    default_md = sha256
    prompt = no
    distinguished_name = req_distinguished_name
    x509_extensions = SAN
    req_extensions = v3_ca
    
    [ req_distinguished_name ]
    countryName = US
    stateOrProvinceName = California
    localityName = CA
    organizationName = NSX
    commonName = VIP-ADDRESS  #CAN ONLY USE IF SAN IS ALSO USED
    
    [ SAN ]
    basicConstraints = CA:false
    subjectKeyIdentifier = hash
    authorityKeyIdentifier=keyid:always,issuer:always
    
    [ v3_ca ]
    subjectAltName = DNS:NSX-VIP-FQDN,IP:VIP-ADDRESS  #MUST USE
    

    Where:

    • NSX-VIP-FQDN is your NSX VIP FQDN.
    • VIP-ADDRESS is the VIP address for the NSX-T Management cluster.

    Note: At a minimum you must use the SAN field for identifying the NSX Management VIP. You can also use the CN field, as long as the SAN field is populated. If you use only the CN field, the certificate will not be valid for TKGI.

  2. Copy the nsx-cert.cnf file to a machine with openssl if yours does not have it.

  3. Use OpenSSL to generate the SSL certificate and private key.

    openssl req -newkey rsa -nodes -days 1100 -x509 -config nsx-cert.cnf -keyout nsx.key -out nsx.crt
    
  4. Verify that you see the following:

    Generating a 2048 bit RSA private key
    ...............+++
    ................+++
    writing new private key to 'nsx.key'
    
  5. Verify certificate and key generation by running the ls command.

    You should see three files: the certificate signing request configuration file, plus the certificate and private key generated from it.

    nsx-cert.cnf  nsx.crt  nsx.key
    
  6. Run the following command to verify the certificate and private key.

    openssl x509 -in nsx.crt -text -noout
    

    You should see that the Subject Alternative Name (SAN) and common name (CN) (if used) are both the VIP address. For example:

    Subject: C=US, ST=California, L=CA, O=NSX, CN=myvip.mydomain.com
    ...
    X509v3 extensions:
        X509v3 Subject Alternative Name:
            DNS:myvip.mydomain.com, IP Address:10.11.12.13
    

Import the SSL Certificate and Private Key to the NSX-T Management Console

Import the certificate and private key into NSX-T by completing the following steps. These steps require populating the NSX-T Management Console fields with the certificate and private key. You can copy/paste the contents, or if you save the nsx.crt and nsx.key files to your local machine, you can import them.

  1. In the NSX-T Management Console, navigate to the System > Certificates page.
  2. Click Import > Import Certificate. The Import Certificate screen is displayed.

    Note: Be sure to select Import Certificate and not Import CA Certificate.

  3. Enter a Name. For example, CERT-NSX-T-VIP.
  4. Copy and paste the Certificate Contents from the nsx.crt file. Or, import the nsx.crt file by clicking Browse and selecting it.
  5. Copy and paste the Private Key from the nsx.key file. Or, import the nsx.key file by clicking Browse and selecting it.
  6. For the Service Certificate option, make sure to select No.
  7. Click Import.
  8. Verify that you see the certificate in the list of Certificates.

Register the SSL Certificate and Private Key with the NSX-T API Server

To register the imported VIP certificate with the NSX-T Management Cluster Certificate API, complete the following steps:

  1. In the NSX-T Management Console, navigate to the System > Certificates page.
  2. View the UUID of the certificate from the NSX-T Management Console > Certificates screen.
  3. Copy the UUID to the clipboard. For example, 170a6d52-5c61-4fef-a9e0-09c6229fe833.
  4. Create the following environment variables. Replace the IP address with your VIP address and the UUID with the UUID of the imported certificate.

    export NSX_MANAGER_IP_ADDRESS=10.173.62.47
    export CERTIFICATE_ID=170a6d52-5c61-4fef-a9e0-09c6229fe833
    
  5. Post the certificate to the NSX-T Manager API.

    curl --insecure -u admin:'VMware1!VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=$CERTIFICATE_ID"
    {
      "certificate_id": "170a6d52-5c61-4fef-a9e0-09c6229fe833"
    }
    
  6. (Optional) If you are running TKGI in a test environment and you are not using a multi-node NSX Management cluster, then you must also post the certificate to the Nodes API.

    curl --insecure -u admin:'VMware1!VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/node/services/http?action=apply_certificate&certificate_id=$CERTIFICATE_ID"
    {
      "certificate_id": "170a6d52-5c61-4fef-a9e0-09c6229fe833"
    }
    

    Note: Using a single-node NSX Management cluster is an unsupported configuration.

  7. Verify by SSHing to one of the NSX-T Management nodes and running the following command.

    The certificate that is returned should match the generated one.

    nsx-manager-1> get certificate cluster
    
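
You can also confirm from your workstation that the VIP now presents the new certificate. The following openssl commands are a supplemental check; replace the VIP address if yours differs:

# Retrieve the certificate presented on the VIP and print its subject and SANs.
openssl s_client -connect 10.173.62.47:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -E -A1 "Subject:|Subject Alternative Name"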

Create an IP Pool for TEP

Tunnel endpoints (TEPs) are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and end the NSX-T encapsulation of overlay frames. The TEP addresses do not need to be routable, so you can use any IP addressing scheme you want. For more information, see the NSX-T Data Center documentation.

  1. In the NSX-T Management Console, select the Manager interface (upper right).
  2. Navigate to Networking > IP Address Pool.
  3. Click Add.
  4. Enter a Name. For example, TEP-IP-POOL.
  5. Enter an IP range. For example, 192.23.213.1 - 192.23.213.10.
  6. Enter a CIDR address. For example, 192.23.213.0/24.
  7. Click Add.
  8. Verify that the pool is added.
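
If you want to confirm the pool through the Manager API as well, the IP pool endpoint lists it. ADMIN-PASSWORD and the VIP address are placeholders for your own values:

# Lists all IP pools; TEP-IP-POOL should appear with its range and CIDR.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/pools/ip-pools"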

Configure Transport Zones

See Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.

Configure vSphere Networking for ESXi Hosts

In this section, you configure the vSphere networking and port groups for ESXi hosts (the vSwitch). If you have created separate vSphere clusters for Management and Compute, perform this operation on each ESXi host in the Management cluster. If you have not created separate vSphere clusters, perform this operation on each ESXi host in the cluster.

The following instructions describe how to configure a vSphere Standard Switch (VSS). For production environments, it is recommended that you configure a vSphere Distributed Switch (VDS). You configure the VDS from the vCenter Networking tab and then add the ESXi hosts to the VDS. The configuration settings for the VDS are similar to the VSS configuration described below. For instructions on configuring the VDS, see Create a vSphere Distributed Switch in the vSphere 7 documentation.

See the Release Notes for details about TKGI support for the vSphere 7 VDS for NSX-T transport node traffic.

Create vSwitch Port-Groups for Edge Nodes

Create vSwitch Port-Groups for the Edge Nodes on the ESXi hosts in the MANAGEMENT-cluster.

For each ESXi host in the MANAGEMENT-cluster, create the following vSwitch Port Groups:

  • EDGE-VTEP-PG: VLAN 3127
  • EDGE-UPLINK-PG: VLAN trunk (All (4095))

  1. Log in to the vCenter Server.
  2. Select the ESXi host in the MANAGEMENT-cluster.
  3. Select Configure > Virtual switches.
  4. Select Add Networking (upper right).
  5. Select the option Virtual Machine Port Group for a Standard Switch and click Next.
  6. Select the existing standard switch named vSwitch0 and click Next.
  7. Enter a Network Label. For example, EDGE-VTEP-PG.
  8. Enter a VLAN ID. For example, 3127.
  9. Click Finish.
  10. Verify that you see the newly created port group.
  11. Select Add Networking (upper right).
  12. Select the option Virtual Machine Port Group for a Standard Switch and click Next.
  13. Select the existing standard switch named vSwitch0 and click Next.
  14. Enter a Network Label. For example, EDGE-UPLINK-PG.
  15. For the VLAN ID, select All (4095) from the drop-down.
  16. Click Finish.
  17. Verify that you see the newly created port group.

Set vSwitch0 with MTU at 9000

For each ESXi host in the MANAGEMENT-cluster, or each ESXi host in the vCenter cluster if you have not created separate Management and Compute clusters, you must enable jumbo frames on the virtual switch, that is, set vSwitch0 with MTU=9000. If you do not, overlay network traffic will fail: the TEP interfaces for the NSX-T Edge Nodes must be connected to a port group that supports frames larger than 1600 bytes, and the default MTU is 1500.

  1. Select the Virtual Switch on each ESXi host in the MANAGEMENT-cluster, or each host in the vCenter cluster.
  2. Click Edit.
  3. For the MTU (bytes) setting, enter 9000.
  4. Click OK to complete the operation.
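
If you prefer to make or verify this change from the ESXi command line instead of the vSphere Client, esxcli can set and display the vSwitch MTU. This is a minimal sketch run over SSH on each host, and it assumes the standard switch is named vSwitch0 as above:

# Set the MTU on vSwitch0 to 9000 (jumbo frames).
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Confirm the new MTU value.
esxcli network vswitch standard list -v vSwitch0 | grep MTU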

Deploy NSX-T Edge Nodes

In this section you deploy two NSX-T Edge Nodes.

NSX-T Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers. See Load Balancers in Tanzu Kubernetes Grid Integrated Edition for more information.

In NSX-T, a load balancer is deployed on the Edge Nodes as a virtual server. The following virtual servers are required for Tanzu Kubernetes Grid Integrated Edition:

  • 1 TCP Layer 4 virtual server for each Kubernetes service of type:LoadBalancer
  • 2 Layer 7 global virtual servers for Kubernetes pod ingress resources (HTTP and HTTPS)
  • 1 global virtual server for the TKGI API

The number of virtual servers that can be run depends on the size of the load balancer, which in turn depends on the size of the Edge Node. Tanzu Kubernetes Grid Integrated Edition supports the medium and large VM Edge Node form factors, as well as the bare metal Edge Node. The default size of the load balancer deployed by NSX-T for a Kubernetes cluster is small. The size of the load balancer can be customized using Network Profiles.

For this installation, we use the Large VM form factor for the Edge Node. See VMware Configuration Maximums for more information.

Install and Configure Edge Node 1

Deploy the Edge Node 1 VM using the NSX-T Manager interface.

  1. From your browser, log in with admin privileges to NSX-T Manager at https://NSX-MANAGER-IP-ADDRESS.

  2. In NSX-T Manager, go to System > Fabric > Nodes > Edge Transport Nodes.

  3. Click Add Edge VM.

  4. Configure the Edge VM as follows:

    • Name: edge-node-1
    • Host name/FQDN: edge-node-1.lab.com
    • Form Factor: Large
  5. Configure Credentials as follows:

    • CLI User Name: admin
    • CLI Password: Enter a strong password for the admin user that complies with the NSX-T requirements.
    • Enable SSH Login: Yes
    • System Root Password: Enter a strong password for the root user that complies with the NSX-T requirements.
    • Enable Root SSH Login: Yes
    • Audit Credentials: Enter an audit user name and password.
  6. Configure the deployment as follows:

    • Compute Manager: vCenter
    • Cluster: MANAGEMENT-Cluster
    • Datastore: Select the datastore
  7. Configure the node settings as follows:

    • IP Assignment: Static
    • Management IP: 10.173.62.49/24, for example
    • Default Gateway: 10.173.62.253, for example
    • Management Interface: PG-MGMT-VLAN-1548, for example

Configure the N-VDS Switch or Switches for Edge Node 1

Configure the N-VDS switch or switches and transport zones for NSX Edge Node 1.

To configure the N-VDS switch and transport zones:

  • If you are using the default Transport Zones, use a single N-VDS switch.
  • If you are using custom Transport Zones, use multiple N-VDS switches.

For more information, see Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.

Complete the Edge Node 1 Installation

  1. Click Finish to complete the configuration. The installation begins.

  2. In vCenter, use the Recent Tasks panel at the bottom of the page to verify that you see the Edge Node 1 VM being deployed.

  3. Once the process completes, you should see the Edge Node 1 deployed successfully in NSX-T Manager.

  4. Click the N-VDS link and verify that you see the switch or switches.

  5. In vCenter verify that the Edge Node is created.

Install and Configure Edge Node 2

Deploy the Edge Node 2 VM using the NSX-T Manager interface.

  1. In NSX-T Manager, go to System > Fabric > Nodes > Edge Transport Nodes.

  2. Click Add Edge VM.

  3. Configure the Edge VM as follows:

    • Name: edge-node-2
    • Host name/FQDN: edge-node-2.lab.com
    • Form Factor: Large
  4. Configure Credentials as follows:

    • CLI User Name: admin
    • CLI Password: Enter a strong password for the admin user that complies with the NSX-T requirements.
    • Enable SSH Login: Yes
    • System Root Password: Enter a strong password for the root user that complies with the NSX-T requirements.
    • Enable Root SSH Login: Yes
    • Audit Credentials: Enter an audit user name and password.
  5. Configure the deployment as follows:

    • Compute Manager: vCenter
    • Cluster: MANAGEMENT-Cluster
    • Datastore: Select the datastore
  6. Configure the node settings as follows:

    • IP Assignment: Static
    • Management IP: 10.173.62.58/24, for example
    • Default Gateway: 10.173.62.253, for example
    • Management Interface: PG-MGMT-VLAN-1548, for example

Configure the N-VDS Switch or Switches for Edge Node 2

Configure the N-VDS switch or switches and transport zones for NSX Edge Node 2.

To configure the N-VDS switch and transport zones:

  • If you are using the default Transport Zones, use a single N-VDS switch.
  • If you are using custom Transport Zones, use multiple N-VDS switches.

For more information, see Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.

Complete the Installation of Edge Node 2

  1. Click Finish to complete the configuration. The installation begins.

  2. In vCenter, use the Recent Tasks panel at the bottom of the page to verify that you see the Edge Node 2 VM being deployed.

  3. Once the process completes, you should see the Edge Node 2 deployed successfully in NSX-T Manager.

  4. Click the N-VDS link and verify that you see the N-VDS switch or switches.

  5. In vCenter verify that Edge Node 2 is created.

  6. In NSX-T Manager, verify that you see both Edge Nodes.

Create Uplink Profile for ESXi Transport Node

To configure the TEP, we used the default profile named nsx-default-uplink-hostswitch-profile. However, because the TEP is on VLAN 3127, you must modify the uplink profile for the ESXi Transport Node (TN). NSX-T does not allow you to edit settings for the default uplink profile, so we create a new one.

  1. Go to System > Fabric > Profiles > Uplink Profiles.

  2. Click Add.

  3. Configure the New Uplink Profile as follows:

    • Name: nsx-esxi-uplink-hostswitch-profile
    • Teaming Policy: Failover Order
    • Active Uplinks: uplink-1
    • Transport VLAN: 3127
  4. Click Add.

  5. Verify that the Uplink Profile is created.
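
To double-check the profile settings, for example that the transport VLAN is 3127, you can also read the uplink profiles back from the Manager API. This is a supplemental check; ADMIN-PASSWORD and the VIP address are placeholders:

# Lists host switch (uplink) profiles; nsx-esxi-uplink-hostswitch-profile
# should appear with transport VLAN 3127 and the failover teaming policy.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/host-switch-profiles"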

Deploy ESXi Host Transport Nodes Using VDS

Deploy each ESXi host in the COMPUTE-cluster as an ESXi host transport node (TN) in NSX-T. If you have not created a separate COMPUTE-cluster for ESXi hosts, deploy each ESXi host in the vSphere cluster as a host transport node in NSX-T.

  1. Go to System > Fabric > Nodes > Host Transport Nodes.

  2. Expand the Compute Manager and select the ESXi host in the COMPUTE-cluster, or each ESXi host in the vSphere cluster.

  3. Click Configure NSX.

  4. In the Host Details tab, enter a name. For example, 10.172.210.57.

  5. In the Configure NSX tab, configure the transport node as follows:

    • Type: VDS (do not select the N-VDS option)
    • Name: switch-overlay (you must use the same switch name that was configured for tz-overlay transport zone)
    • Transport Zone: tz-overlay
    • NIOC Profile: nsx-default-nioc-hostswitch-profile
    • Uplink Profile: nsx-esxi-uplink-hostswitch-profile
    • LLDP Profile: LLDP [Send Packet Disabled]
    • IP Assignment: Use IP Pool
    • IP Pool: TEP-IP-POOL
    • Teaming Policy Switch Mapping
      • Uplinks: uplink-1
      • Physical NICs: vmnic1
  6. Click Finish.

  7. Verify that the host TN is configured.

Verify TEP to TEP Connectivity

To avoid overlay communication problems later caused by MTU issues, test TEP-to-TEP connectivity and verify that it is working.

  1. SSH to edge-node-1 and get the local TEP IP address. For example, 192.23.213.1. Use the command get vteps to get the IP.

  2. SSH to edge-node-2 and get the local TEP IP address, such as 192.23.213.2. Use the command get vteps to get the IP.

  3. SSH to the ESXi host and get the TEP IP address. For example, 192.23.213.3. Use the command esxcfg-vmknic -l to get the IP. The interface will be vmk10 and the NetStack will be vxlan.

  4. From each ESXi transport node, test the connections to each NSX-T Edge Node, for example:

    # vmkping ++netstack=vxlan 192.23.213.1 -d -s 1572 -I vmk10: OK
    # vmkping ++netstack=vxlan 192.23.213.2 -d -s 1572 -I vmk10: OK
    
  5. Test the connection from NSX-T Edge Node 1 and Edge Node 2 to the ESXi TN:

    > vrf 0
    > ping 192.23.213.3 size 1572 dfbit enable: OK

  6. Test the connection from NSX-T Edge Node 1 to NSX-T Edge Node 2:

    > vrf 0
    > ping 192.23.213.2 size 1572 dfbit enable: OK


Create NSX-T Edge Cluster

  1. Go to System > Fabric > Nodes > Edge Clusters.

  2. Click Add.

    • Enter a name. For example, edge-cluster-1.
    • Add members, including edge-node-1 and edge-node-2.
  3. Click Add.

  4. Verify that the Edge Cluster is created.
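
You can also list Edge Clusters through the Manager API to confirm that both Edge Nodes are members. Placeholders as before:

# Lists Edge Clusters; edge-cluster-1 should show two members.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/edge-clusters"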

Create Uplink Logical Switch

Create an uplink Logical Switch to be used for the Tier-0 Router.

  1. At upper-right, select the Manager tab.

  2. Go to Networking > Logical Switches.

  3. Click Add.

  4. Configure the new logical switch as follows:

    • Name: LS-T0-uplink
    • Transport Zone: tz-vlan
    • VLAN: 1548
  5. Click Add.

  6. Verify that the logical switch is created.

Create Tier-0 Router

  1. Select Networking from the Manager tab.

  2. Select Tier-0 Logical Router.

  3. Click Add.

  4. Configure the new Tier-0 Router as follows:

    • Name: T0-router
    • Edge Cluster: edge-cluster-1
    • HA mode: Either Active-Active or Active-Standby
    • Failover mode: Non-Preemptive

    Note: Configuring Failover mode is optional if HA mode is configured as Active-Active. For more information on NSX-T HA mode configuration, see Add a Tier-0 Gateway in the VMware NSX-T Data Center documentation.


  5. Click Save and verify.

  6. Select the T0 router.

  7. Select Configuration > Router Ports.

  8. Click Add.

  9. Configure a new router port as follows:

    • Name: T0-uplink-1
    • Type: uplink
    • Transport Node: edge-node-1
    • Logical Switch: LS-T0-uplink
    • Logical Switch Port: Attach to a new switch port
    • Subnet: 10.173.62.50 / 24
  10. Click Add and verify.

  11. Select the T0 router.

  12. Select Configuration > Router Ports.

  13. Add a second uplink by creating a second router port for edge-node-2:

    • Name: T0-uplink-2
    • Type: uplink
    • Transport Node: edge-node-2
    • Logical Switch: LS-T0-uplink
    • Logical Switch Port: Attach to a new switch port
    • Subnet: 10.173.62.51 / 24
  14. Once completed, verify that you have two connected router ports.

Configure and Test the Tier-0 Router

Create an HA VIP for the T0 router, and a default route for the T0 router. Then test the T0 router.

  1. Select the Tier-0 Router you created.

  2. Select Configuration > HA VIP.

  3. Click Add.

  4. Configure the HA VIP as follows:

    • VIP address: 10.173.62.52/24, for example
    • Uplink ports: T0-uplink-1 and T0-uplink-2
  5. Click Add and verify.

  6. Select Routing > Static Routes.

  7. Click Add.

    • Network: 0.0.0.0/0
    • Next Hop: 10.173.62.253
  8. Click Add and verify.

  9. Verify the Tier 0 router by making sure the T0 uplinks and HA VIP are reachable from your laptop.

For example:

> ping 10.173.62.50
PING 10.173.62.50 (10.173.62.50): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.50: icmp_seq=1 ttl=58 time=71.741 ms
64 bytes from 10.173.62.50: icmp_seq=0 ttl=58 time=1074.679 ms

> ping 10.173.62.51
PING 10.173.62.51 (10.173.62.51): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.51: icmp_seq=0 ttl=58 time=1156.627 ms
64 bytes from 10.173.62.51: icmp_seq=1 ttl=58 time=151.413 ms

> ping 10.173.62.52
PING 10.173.62.52 (10.173.62.52): 56 data bytes
64 bytes from 10.173.62.52: icmp_seq=0 ttl=58 time=6.864 ms
64 bytes from 10.173.62.52: icmp_seq=1 ttl=58 time=7.776 ms

Create IP Blocks and Pool for Compute Plane

TKGI requires a Floating IP Pool for NSX-T load balancer assignment and the following two IP blocks for Kubernetes pods and nodes:

  • TKGI-POD-IP-BLOCK: 172.16.0.0/16
  • TKGI-NODE-IP-BLOCK: 172.23.0.0/16

  1. In the Manager interface, go to Networking > IP Address Pools > IP Block.

  2. Click Add.

  3. Configure the Pod IP Block as follows:

    • Name: TKGI-POD-IP-BLOCK
    • CIDR: 172.16.0.0/16
  4. Click Add and verify.

  5. Repeat the same operation for the Node IP Block:

    • Name: TKGI-NODE-IP-BLOCK
    • CIDR: 172.23.0.0/16
  6. Click Add and verify.

  7. Select the IP Pools tab.

  8. Click Add.

  9. Configure the IP pool as follows:

    • Name: TKGI-FLOATING-IP-POOL
    • IP ranges: 10.173.62.111 - 10.173.62.150
    • CIDR: 10.173.62.0/24
  10. Click Add and verify.
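
As an optional cross-check before moving on, the Manager API can list the IP blocks you just created. Placeholders as before:

# TKGI-POD-IP-BLOCK and TKGI-NODE-IP-BLOCK should appear in the response;
# TKGI-FLOATING-IP-POOL appears under the /api/v1/pools/ip-pools endpoint shown earlier.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/pools/ip-blocks"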

Create Management Plane

Networking for the TKGI Management Plane consists of a Tier-1 Router and Switch with NAT Rules for the Management Plane VMs.

Create Tier-1 Router and Switch

Create Tier-1 Logical Switch and Router for TKGI Management Plane VMs. Complete the configuration by enabling Route Advertisement on the T1 router.

  1. In the NSX Management console, navigate to Networking > Logical Switches.

  2. Click Add.

  3. Create the LS for TKGI Management plane VMs:

    • Name: LS-TKGI-MGMT
    • Transport Zone: tz-overlay
  4. Click Add and verify creation of the T1 logical switch.

  5. Go to Networking > Tier-1 Logical Router.

  6. Click Add.

  7. Configure the Tier-1 logical router as follows:

    • Name: T1-TKGI-MGMT
    • To router: T0-router
    • Edge Cluster: edge-cluster-1
    • Edge Cluster Members: edge-node-1 and edge-node-2
  8. Click Add and verify.

  9. Select the T1 router and go to Configuration > Router port.

  10. Click Add.

  11. Configure the T1 router port as follows:

    • Name: T1-TKGI-MGMT-port
    • Logical Switch: LS-TKGI-MGMT
    • Subnet: 10.1.1.1/24
  12. Click Add and verify.

  13. Select Routing tab.

  14. Click Edit and configure route advertisement as follows:

    • Status: Enabled
    • Advertise All Connected Routes: Yes
  15. Click Save and verify.
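
If you want to confirm route advertisement outside the UI, the Manager API exposes the advertisement configuration per logical router. This is a supplemental check; LOGICAL-ROUTER-ID is a placeholder for the UUID of T1-TKGI-MGMT, which you can find by listing the logical routers first:

# Find the UUID of T1-TKGI-MGMT in the returned list.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/logical-routers"

# The advertisement configuration is returned; confirm that it is enabled
# and that connected routes are advertised.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/logical-routers/LOGICAL-ROUTER-ID/routing/advertisement"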

Create NAT Rules

You need to create the following NAT rules on the Tier-0 router for the TKGI Management Plane VMs.

  • DNAT: 10.173.62.220 (for example) to access Ops Manager
  • DNAT: 10.173.62.221 (for example) to access Harbor
  • SNAT: 10.173.62.222 (for example) for all TKGI management plane VM traffic destined to the outside world

  1. In the NSX Management console, navigate to Networking > NAT.

  2. In the Logical Router field, select the T0-router you defined for TKGI.

  3. Click Add.

  4. Configure the Ops Manager DNAT rule as follows:

    • Priority: 1000
    • Action: DNAT
    • Protocol: Any Protocol
    • Destination IP: 10.173.62.220, for example
    • Translated IP: 10.1.1.2, for example
  5. Click Add and verify.

  6. Add a second DNAT rule for Harbor by repeating the same operation:

    • Priority: 1000
    • Action: DNAT
    • Protocol: Any Protocol
    • Destination IP: 10.173.62.221, for example
    • Translated IP: 10.1.1.6, for example
  7. Verify the creation of the DNAT rules.

  8. Create the SNAT rule for the management plane traffic as follows:

    • Priority: 9024
    • Action: SNAT
    • Protocol: Any Protocol
    • Source IP: 10.1.1.0/24, for example
    • Translated IP: 10.173.62.222, for example
  9. Verify the creation of the SNAT rule.
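
You can also list the NAT rules on the Tier-0 router through the Manager API to confirm the two DNAT rules and the SNAT rule. This is a supplemental check; T0-ROUTER-ID is a placeholder for the UUID of the T0-router:

# The response should include the Ops Manager and Harbor DNAT rules
# and the management plane SNAT rule created above.
curl --insecure -u admin:'ADMIN-PASSWORD' -X GET "https://10.173.62.47/api/v1/logical-routers/T0-ROUTER-ID/nat/rules"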

Configure the NSX-T Password Interval (Optional)

The default NSX-T password expiration interval is 90 days. After this period, the NSX-T passwords will expire on all NSX-T Manager Nodes and all NSX-T Edge Nodes. To avoid this, you can extend or remove the password expiration interval, or change the password if needed.

Note: For existing Tanzu Kubernetes Grid Integrated Edition deployments, anytime the NSX-T password is changed you must update the BOSH and TKGI tiles with the new passwords. See Adding Infrastructure Password Changes to the Tanzu Kubernetes Grid Integrated Edition Tile for more information.

Update the NSX-T Manager Password and Password Interval

To update the NSX-T Manager password, perform the following actions on one of the NSX-T Manager nodes. The changes will be propagated to all NSX-T Manager nodes.

SSH into the NSX-T Manager Node

To manage user password expiration, you use the CLI on one of the NSX-T Manager nodes.

To access an NSX-T Manager node from a Unix host, use the command ssh USERNAME@IP_ADDRESS_OF_NSX_MANAGER.

For example:

ssh admin@10.173.62.44

On Windows, use PuTTY and provide the IP address for NSX-T Manager. Enter the user name and password that you defined during the installation of NSX-T.

Retrieve the Password Expiration Interval

To retrieve the password expiration interval, use the following command:

get user USERNAME password-expiration

For example:

NSX CLI (Manager, Policy, Controller 3.0.0.0.0.15946739). Press ? for command list or enter: help
nsx-mgr-1> get user admin password-expiration
Password expires 90 days after last change

Update the Admin Password

To update the user password, use the following command:

set user USERNAME password NEW-PASSWORD old-password OLD-PASSWORD

For example:

set user admin password my-new-pwd old-password my-old-pwd

Set the Admin Password Expiration Interval

To set the password expiration interval, use the following command:

set user USERNAME password-expiration PASSWORD-EXPIRATION

For example, the following command sets the password expiration interval to 120 days:

set user admin password-expiration 120

Remove the Admin Password Expiration Interval

To remove password expiration, use the following command:

clear user USERNAME password-expiration

For example:

clear user admin password-expiration

To verify:

nsx-mgr-1> clear user admin password-expiration
nsx-mgr-1> get user admin password-expiration
Password expiration not configured for this user

Update the Password for NSX-T Edge Nodes

To update the NSX-T Edge Node password, perform the following actions on each NSX-T Edge Node.

Note: Unlike the NSX-T Manager nodes, you must update the password or password interval on each Edge Node.

Enable SSH

SSH is disabled by default on the Edge Node. Enable SSH on the Edge Node using the VM console in vSphere:

start service ssh
set service ssh start-on-boot
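
To confirm that SSH is now running and set to start on boot, you can query the service from the same console. A minimal check:

nsx-edge> get service ssh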

SSH to the NSX-T Edge Node

For example:

ssh admin@10.173.62.49

Get the Password Expiration Interval for the Edge Node

For example:

nsx-edge> get user admin password-expiration
Password expires 90 days after last change

Update the User Password for the Edge Node

For example:

nsx-edge> set user admin password my-new-pwd old-password my-old-pwd

Set the Password Expiration Interval

For example, the following command sets the password expiration interval to 120 days:

nsx-edge> set user admin password-expiration 120

Remove the Password Expiration Interval

For example:

NSX CLI (Edge 3.0.0.0.0.15946012). Press ? for command list or enter: help
nsx-edge-2> get user admin password-expiration
Password expires 90 days after last change. Current password will expire in 7 days.

nsx-edge-2> clear user admin password-expiration
nsx-edge-2> get user admin password-expiration
Password expiration not configured for this user

Next Steps

Once you have completed the installation of NSX-T v3.0, return to the TKGI installation workflow and proceed with the next phase of the process. See Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Using Ops Manager.
