This topic describes how to install and configure NSX Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) on vSphere.
Before completing this section, make sure you have completed the preceding preparation sections.
To perform a new installation of VMware NSX for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.
Deploy each ESXi host in the COMPUTE-cluster as an ESXi host transport node (TN) in NSX. If you have not created a separate COMPUTE-cluster for ESXi hosts, deploy each ESXi host in the vSphere cluster as a host transport node in NSX.
Go to System > Fabric > Nodes > Host Transport Nodes.
Expand the Compute Manager and select the ESXi host in the COMPUTE-cluster, or each ESXi host in the vSphere cluster.
Click Configure NSX.
In the Host Details tab, enter a name, such as 10.172.210.57.
In the Configure NSX tab, configure the transport node as follows:
Type: VDS (do not select the N-VDS option)
Name: switch-overlay (you must use the same switch name that was configured for the tz-overlay transport zone)
Transport Zone: tz-overlay
NIOC Profile: nsx-default-nioc-hostswitch-profile
Uplink Profile: nsx-esxi-uplink-hostswitch-profile
LLDP Profile: LLDP [Send Packet Disabled]
IP Assignment: Use IP Pool
IP Pool: TEP-IP-POOL
Uplinks: uplink-1 mapped to physical NIC vmnic1
Click Finish.
Verify that the host TN is configured.
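As an optional cross-check outside the UI, you can query the NSX Manager API for the transport node state. This is a sketch; the manager address, credentials, and node ID are placeholders for your environment:
# curl -k -u admin:'<password>' https://<nsx-manager>/api/v1/transport-nodes
# curl -k -u admin:'<password>' https://<nsx-manager>/api/v1/transport-nodes/<node-id>/state
The state call should report "state": "success" for a correctly configured host.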
To avoid overlay communication failures caused by an MTU mismatch, test TEP-to-TEP connectivity and verify that it is working.
SSH to edge-node-1 and get the local TEP IP address, such as 192.23.213.1. Use the command get vteps to get the IP.
SSH to edge-node-2 and get the local TEP IP address, such as 192.23.213.2. Use the command get vteps to get the IP.
SSH to the ESXi host and get the TEP IP address, such as 192.23.213.3. Use the command esxcfg-vmknic -l to get the IP. The interface will be vmk10 and the NetStack will be vxlan.
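If the host has many VMkernel interfaces, you can filter the output for the overlay NetStack, for example:
# esxcfg-vmknic -l | grep vxlan
The matching line shows vmk10 with its TEP IP address, netmask, and MTU (typically 1600, the minimum NSX requires for Geneve overlay traffic).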
From each ESXi transport node, test the connections to each NSX Edge Node, for example:
# vmkping ++netstack=vxlan 192.23.213.1 -d -s 1572 -I vmk10: OK
# vmkping ++netstack=vxlan 192.23.213.2 -d -s 1572 -I vmk10: OK
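The packet size of 1572 bytes is deliberate: 1572 bytes of ICMP payload plus 28 bytes of IP and ICMP headers equals a 1600-byte packet, matching the minimum MTU NSX requires for Geneve overlay traffic. The -d flag sets the don't-fragment bit, so the ping fails if any hop between the TEPs has a smaller MTU.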
Test the connection from NSX edge node 1 and edge node 2 to the ESXi TN TEP (192.23.213.3):
> vrf 0
> ping 192.23.213.3 size 1572 dfbit enable: OK
Test the connection from NSX edge node 1 to NSX edge node 2:
> vrf 0
> ping 192.23.213.2 size 1572 dfbit enable: OK
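If the ping fails with an unreachable error, confirm that you are in the correct VRF: on the edge node CLI, get logical-routers lists the available VRFs, and in a standard deployment the TEP interfaces belong to VRF 0 (the default VRF).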
Go to System > Fabric > Nodes > Edge Clusters.
Click Add.
Configure the new edge cluster as follows:
Name: edge-cluster-1
Members: edge-node-1 and edge-node-2
Click Add.
Verify that the edge cluster is created.
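As with the transport nodes, you can optionally confirm the edge cluster through the NSX Manager API; the manager address and credentials are placeholders:
# curl -k -u admin:'<password>' https://<nsx-manager>/api/v1/edge-clusters
The response should list edge-cluster-1 with two members.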
Create an uplink logical switch to be used for the Tier-0 router:
At the upper right, select the Manager tab.
Go to Networking > Logical Switches.
Click Add.
Configure the new logical switch as follows:
Name: LS-T0-uplink
Transport Zone: tz-vlan
VLAN: 1548
Click Add.
Verify that the logical switch is created.
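Optionally, confirm the logical switch through the Manager API (placeholders as above):
# curl -k -u admin:'<password>' https://<nsx-manager>/api/v1/logical-switches
The response should include LS-T0-uplink with "vlan": 1548 and the transport zone ID of tz-vlan.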
Select Networking from the Manager tab.
Select Tier-0 Logical Router.
Click Add.
Configure the new Tier-0 router as follows:
Name: T0-router
Edge Cluster: edge-cluster-1
High Availability Mode: Active-Active or Active-Standby
Failover Mode: Non-Preemptive
Note: Configuring the failover mode is optional if the HA mode is configured as Active-Active. For more information on NSX HA mode configuration, see Add a Tier-0 Gateway in the VMware NSX-T Data Center documentation.
Click Save and verify.
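Optionally, confirm the router through the Manager API (placeholders as above):
# curl -k -u admin:'<password>' 'https://<nsx-manager>/api/v1/logical-routers?router_type=TIER0'
The response should show T0-router with the high_availability_mode and failover_mode values you selected.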
Select the T0 router.
Select Configuration > Router Ports.
Click Add.
Configure a new uplink router port for edge-node-1, attached to the LS-T0-uplink logical switch. Click Add and verify.
Select the T0 router.
Select Configuration > Router Ports.
Add a second uplink by creating a second router port for edge-node-2. Once completed, verify that you have two connected router ports.
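Optionally, list the T0 router ports through the Manager API; the router ID below is a placeholder you can read from the logical-routers response shown earlier:
# curl -k -u admin:'<password>' 'https://<nsx-manager>/api/v1/logical-router-ports?logical_router_id=<t0-router-id>'
The response should include two ports of resource_type LogicalRouterUpLinkPort, one per edge node.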
Next, proceed to Create NSX Objects for Kubernetes Clusters Provisioned by TKGI.