This topic provides instructions for installing and configuring NSX-T Data Center v3.0 for use with VMware Tanzu Kubernetes Grid Integrated Edition on vSphere.
To perform a new installation of NSX-T Data Center for Tanzu Kubernetes Grid Integrated Edition, complete the following steps in the order presented.
Verify NSX-T v3.0 support for your TKGI version. For more information, see the Release Notes for the TKGI version you are installing.
Read the topics in the Preparing to Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Data Center section of the documentation.
Read the Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI topic.
Create the NSX-T Management cluster by installing three NSX-T Manager appliances and configuring a VIP address.
Deploy the NSX-T Manager OVA in vSphere. Download the OVA from the VMware software download site.
Configure the NSX-T Manager OVA deployment with the following example values:
- Network: PG-MGMT-VLAN-1548
- Hostname: nsx-manager-1
- Rolename: NSX Manager
- Default Gateway: 10.173.62.253
- Management IP: 10.173.62.44
- Netmask: 255.255.255.0
- DNS Server: 10.172.40.1
- NTP Server: your NTP server, for example 10.113.60.176

Once the appliance is deployed, log in to the NSX-T Manager at https://10.173.62.44/.
A compute manager is required for NSX-T environments with multiple NSX-T Manager nodes. A compute manager is an application that manages resources such as hosts and VMs. For TKGI, use the vCenter Server as the compute manager.
Complete the following steps to add vCenter as the Compute Manager. For more information, see the NSX-T documentation.
Configure the compute manager as follows:
- FQDN or IP Address: 10.173.62.43
- Username: [email protected]
Use the NSX-T Management Console to deploy an additional NSX-T Manager node as part of the NSX-T Management layer. For more information, see the NSX-T documentation.
Configure the second NSX-T Manager node as follows:
- Name: nsx-manager-2
- Management IP/Netmask: 10.173.62.45/24
- Gateway: 10.173.62.253
- Node Size: medium
- Compute Manager: vCenter
- Compute Cluster: MANAGEMENT-cluster
- Datastore: datastore2
- Virtual Disk Format: thin provision
- Network: PG-MGMT-VLAN-1548
Use the NSX-T Management Console to deploy a third NSX-T Manager node as part of the NSX-T Management layer. For more information, see the NSX-T documentation.
Configure the third NSX-T Manager node as follows:
- Name: nsx-manager-3
- Management IP/Netmask: 10.173.62.46/24
- Gateway: 10.173.62.253
- Node Size: medium
- Compute Manager: vCenter
- Compute Cluster: MANAGEMENT-cluster
- Datastore: datastore2
- Virtual Disk Format: thin provision
- Network: PG-MGMT-VLAN-1548
The NSX-T Management layer includes three NSX-T Manager nodes. To provide a single access point, assign a virtual IP address (VIP) to the NSX-T Management layer. Once the VIP is assigned, any UI and API requests to NSX-T are redirected to the virtual IP address of the cluster, which is owned by the leader node. The leader node then forwards the request to the other components of the appliance.
Using a VIP makes the NSX Management Cluster highly available. If you need to scale, an alternative to the VIP is to provision a load balancer for the NSX-T Management Cluster. Provisioning a load balancer requires that NSX-T be fully installed and configured. VMware recommends that you configure the VIP now, then install a load balancer after NSX-T is installed and configured, if needed.
Complete the following instructions to create a VIP for the NSX Management Cluster. The IP address you use for the VIP must be part of the same subnet as the NSX-T Management nodes.
Configure the VIP as follows:
- Virtual IP Address: 10.173.62.47

Once the VIP is assigned, access the NSX-T Management cluster at https://10.173.62.47/.
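If you prefer to verify the VIP assignment from the command line, the NSX-T Manager API exposes the cluster virtual IP. This is a sketch using the example VIP and admin credentials from this topic; replace them with your own values.

```shell
# Sketch: confirm the cluster VIP assignment (example address and credentials).
curl --insecure -u admin:'VMware1!VMware1!' \
  "https://10.173.62.47/api/v1/cluster/api-virtual-ip"
# The response should include the assigned VIP address.
```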
The NSX Management Console provides two user interfaces: Policy and Manager. TKGI requires the Manager interface for configuring networking and security objects. Do NOT use the Policy interface for TKGI objects.
If you do not add the proper NSX-T license, you will receive an error when you try to deploy an Edge Node VM.
An SSL certificate is automatically created for each NSX-T Manager. You can verify this by SSHing to one of the NSX-T Manager nodes and running the following command.
nsx-manager-1> get certificate cluster
The Subject Alternative Name (SAN) listed in the certificate is the hostname of the appliance, for example SAN=nsx-manager-1. This means the cluster certificate is linked to a particular NSX-T Manager, in this case NSX-T Manager 1.
If you examine System > Certificates, you will see that there is no certificate for the NSX-T Manager VIP. You must generate a new SSL certificate that uses the NSX-T Management VIP address so that the cluster certificate contains SAN=VIP-ADDRESS.
Complete the following steps to generate and register an SSL certificate and private key that uses the VIP address. The following steps assume that you are working on a Linux host where OpenSSL is installed.
Create a certificate signing request file named nsx-cert.cnf and populate it with the contents below.
[ req ]
default_bits = 2048
default_md = sha256
prompt = no
distinguished_name = req_distinguished_name
x509_extensions = SAN
req_extensions = v3_ca
[ req_distinguished_name ]
countryName = US
stateOrProvinceName = California
localityName = CA
organizationName = NSX
commonName = VIP-ADDRESS #CAN ONLY USE IF SAN IS ALSO USED
[ SAN ]
basicConstraints = CA:false
subjectKeyIdentifier = hash
authorityKeyIdentifier=keyid:always,issuer:always
[ v3_ca ]
subjectAltName = DNS:NSX-VIP-FQDN,IP:VIP-ADDRESS #MUST USE
Where:
- NSX-VIP-FQDN is your NSX VIP FQDN.
- VIP-ADDRESS is the VIP address for the NSX-T Management cluster.

Note: At a minimum, you must use the SAN field to identify the NSX Management VIP. You can also use the CN field, as long as the SAN field is populated. If you use only the CN field, the certificate will not be valid for TKGI.
If your current machine does not have OpenSSL, copy the nsx-cert.cnf file to a machine where OpenSSL is installed.
Use OpenSSL to generate the SSL certificate and private key.
openssl req -newkey rsa -nodes -days 1100 -x509 -config nsx-cert.cnf -keyout nsx.key -out nsx.crt
Verify that you see the following:
Generating a 2048 bit RSA private key
...............+++
................+++
writing new private key to 'nsx.key'
Verify certificate and key generation by running the ls command.
You should see three files: the initial signing request, and the certificate and private key generated by running the signing request.
nsx-cert.cnf nsx.crt nsx.key
Run the following command to verify the certificate and private key.
openssl x509 -in nsx.crt -text -noout
You should see that the Subject Alternative Name (SAN) and common name (CN) (if used) are both the VIP address. For example:
Subject: C=US, ST=California, L=CA, O=NSX, CN=myvip.mydomain.com
...
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:myvip.mydomain.com, IP Address:10.11.12.13
Import the certificate and private key to NSX-T by completing the following steps. These steps require populating the NSX-T Management Console fields with the certificate and private key. You can copy and paste the contents, or if you save the nsx.crt and nsx.key files to your local machine, you can import them.
Note: Be sure to select Import Certificate and not Import CA Certificate.
Configure the certificate import as follows:
- Name: CERT-NSX-T-VIP
- Certificate Contents: paste the contents of the nsx.crt file, or import the nsx.crt file by clicking Browse and selecting it.
- Private Key: paste the contents of the nsx.key file, or import the nsx.key file by clicking Browse and selecting it.

To register the imported VIP certificate with the NSX-T Management Cluster Certificate API, complete the following steps:
Copy the ID of the imported certificate, for example 170a6d52-5c61-4fef-a9e0-09c6229fe833.
Create the following environment variables. Replace the IP address with your VIP address and the UUID with the UUID of the imported certificate.
export NSX_MANAGER_IP_ADDRESS=10.173.62.47
export CERTIFICATE_ID=170a6d52-5c61-4fef-a9e0-09c6229fe833
Post the certificate to the NSX-T Manager API.
curl --insecure -u admin:'VMware1!VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/cluster/api-certificate?action=set_cluster_certificate&certificate_id=$CERTIFICATE_ID"
{
"certificate_id": "170a6d52-5c61-4fef-a9e0-09c6229fe833"
}
(Optional) If you are running TKGI in a test environment and you are not using a multi-node NSX Management cluster, then you must also post the certificate to the Nodes API.
curl --insecure -u admin:'VMware1!VMware1!' -X POST "https://$NSX_MANAGER_IP_ADDRESS/api/v1/node/services/http?action=apply_certificate&certificate_id=$CERTIFICATE_ID"
{
"certificate_id": "170a6d52-5c61-4fef-a9e0-09c6229fe833"
}
Note: Using a single-node NSX Management cluster is an unsupported configuration.
Verify by SSHing to one of the NSX-T Management nodes and running the following command.
The certificate that is returned should match the generated one.
nsx-manager-1> get certificate cluster
Tunnel endpoints (TEPs) are the source and destination IP addresses used in the external IP header to identify the ESXi hosts that originate and terminate the NSX-T encapsulation of overlay frames. The TEP addresses do not need to be routable, so you can use any IP addressing scheme you want. For more information, see the NSX-T Data Center documentation.
Configure the TEP IP pool as follows:
- Name: TEP-IP-POOL
- IP Ranges: 192.23.213.1 - 192.23.213.10
- CIDR: 192.23.213.0/24
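The TEP IP pool can also be created through the NSX-T Manager API rather than the UI. The following is a minimal sketch, assuming the example pool name, allocation range, and CIDR above, and the example VIP and admin credentials from this topic.

```shell
# Sketch: create the TEP IP pool via the NSX-T Manager API (example values).
curl --insecure -u admin:'VMware1!VMware1!' \
  -H "Content-Type: application/json" \
  -X POST "https://10.173.62.47/api/v1/pools/ip-pools" \
  -d '{
    "display_name": "TEP-IP-POOL",
    "subnets": [
      {
        "allocation_ranges": [
          { "start": "192.23.213.1", "end": "192.23.213.10" }
        ],
        "cidr": "192.23.213.0/24"
      }
    ]
  }'
```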
See Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.
In this section, you configure the vSphere networking and port groups for ESXi hosts (the vSwitch). If you have created separate vSphere clusters for Management and Compute, perform this operation on each ESXi host in the Management cluster. If you have not created separate vSphere clusters, perform this operation on each ESXi host in the cluster.
The following instructions describe how to configure a vSphere Standard Switch (VSS). For production environments, it is recommended that you configure a vSphere Distributed Switch (VDS). You configure the VDS from the vCenter Networking tab and then add the ESXi hosts to the VDS. The configuration settings for the VDS are similar to the VSS configuration described below. For instructions on configuring the VDS, see Create a vSphere Distributed Switch in the vSphere 7 documentation.
For more information, see the Release Notes for details about TKGI support for vSphere 7 VDS for NSX-T transport node traffic.
Create vSwitch Port-Groups for the Edge Nodes on the ESXi hosts in the MANAGEMENT-cluster.
For each ESXi host in the MANAGEMENT-cluster, create the following vSwitch port groups:
- EDGE-VTEP-PG: VLAN 3127
- EDGE-UPLINK-PG: VLAN trunk (All (4095))

To create the port groups:
1. Log in to the vCenter Server.
2. Select vSwitch0 and click Next.
3. Name the port group EDGE-VTEP-PG.
4. Set the VLAN ID to 3127.
5. Select vSwitch0 and click Next.
6. Name the port group EDGE-UPLINK-PG.
7. For the VLAN ID, select All (4095) from the drop-down.

For each ESXi host in the MANAGEMENT-cluster, or each ESXi host in the vCenter cluster if you have not created separate Management and Compute clusters, you must enable jumbo frames on the virtual switch, that is, set vSwitch0 with MTU=9000. If you do not, overlay network traffic will fail. The TEP interface for the NSX-T Edge Nodes must be connected to a port group that supports frames larger than 1600 bytes. The default MTU is 1500.
Set the MTU to 9000.
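On each host, you can set and verify the vSwitch MTU from the ESXi shell instead of the vSphere Client. This is a sketch, assuming the standard vSwitch is named vSwitch0 as in the steps above.

```shell
# Sketch: raise the MTU on the standard vSwitch to support overlay traffic.
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Verify the change; the output should show MTU: 9000.
esxcli network vswitch standard list --vswitch-name=vSwitch0
```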
In this section, you deploy two NSX-T Edge Nodes.
NSX-T Edge Nodes provide the bridge between the virtual network environment implemented using NSX-T and the physical network. Edge Nodes for Tanzu Kubernetes Grid Integrated Edition run load balancers for TKGI API traffic, Kubernetes load balancer services, and ingress controllers. See Load Balancers in Tanzu Kubernetes Grid Integrated Edition for more information.
In NSX-T, a load balancer is deployed on the Edge Nodes as a virtual server. The following virtual servers are required for Tanzu Kubernetes Grid Integrated Edition:
- LoadBalancer

The number of virtual servers that can be run depends on the size of the load balancer, which in turn depends on the size of the Edge Node. Tanzu Kubernetes Grid Integrated Edition supports the medium and large VM Edge Node form factors, as well as the bare metal Edge Node. The default size of the load balancer deployed by NSX-T for a Kubernetes cluster is small. The size of the load balancer can be customized using Network Profiles.
For this installation, we use the Large VM form factor for the Edge Node. See VMware Configuration Maximums for more information.
Deploy the Edge Node 1 VM using the NSX-T Manager interface.
From your browser, log in with admin privileges to NSX-T Manager at https://NSX-MANAGER-IP-ADDRESS.
In NSX-T Manager, go to System > Fabric > Nodes > Edge Transport Nodes.
Click Add Edge VM.
Configure the Edge VM as follows:
- Name: edge-node-1
- Host name/FQDN: edge-node-1.lab.com
- Form Factor: Large
Configure credentials as follows:
- CLI User Name: admin
- CLI Password: a password for the admin user that complies with the NSX-T requirements
- System Root Password: a password for the root user that complies with the NSX-T requirements
- Audit Credentials: an audit user name and password

Configure the deployment as follows:
Configure the node settings as follows:
You can configure the N-VDS switch and transport zones for NSX Edge Node 1.

To configure the N-VDS switch and transport zones:
- If you are using the default transport zones, use a single N-VDS switch.
- If you are using custom transport zones, use multiple N-VDS switches.
For more information, see Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.
Click Finish to complete the configuration. The installation begins.
In vCenter, use the Recent Tasks panel at the bottom of the page to verify that you see the Edge Node 1 VM being deployed.
Once the process completes, you should see the Edge Node 1 deployed successfully in NSX-T Manager.
Click the N-VDS link and verify that you see the switch or switches.
In vCenter verify that the Edge Node is created.
Deploy the Edge Node 2 VM using the NSX-T Manager interface.
In NSX-T Manager, go to System > Fabric > Nodes > Edge Transport Nodes.
Click Add Edge VM.
Configure the Edge VM as follows:
- Name: edge-node-2
- Host name/FQDN: edge-node-2.lab.com
Configure credentials as follows:
- CLI User Name: admin
- CLI Password: a password for the admin user that complies with the NSX-T requirements
- System Root Password: a password for the root user that complies with the NSX-T requirements
- Audit Credentials: an audit user name and password

Configure the deployment as follows:
Configure the node settings as follows:
You can configure the N-VDS switch and transport zones for NSX Edge Node 2.
To configure the N-VDS switch and transport zones:
- If you are using the default transport zones, use a single N-VDS switch.
- If you are using custom transport zones, use multiple N-VDS switches.
For more information, see Configuring NSX-T Data Center v3.1 Transport Zones and Edge Node Switches for TKGI.
Click Finish to complete the configuration. The installation begins.
In vCenter, use the Recent Tasks panel at the bottom of the page to verify that you see the Edge Node 2 VM being deployed.
Once the process completes, you should see the Edge Node 2 deployed successfully in NSX-T Manager.
Click the N-VDS link and verify that you see the N-VDS switch or switches.
In vCenter verify that Edge Node 2 is created.
In NSX-T Manager, verify that you see both Edge Nodes.
To configure the TEP, we used the default profile named nsx-default-uplink-hostswitch-profile. However, because the TEP is on VLAN 3127, you must modify the uplink profile for the ESXi Transport Node (TN). NSX-T does not allow you to edit settings for the default uplink profile, so we create a new one.
Go to System > Fabric > Profiles > Uplink Profiles.
Click Add.
Configure the new uplink profile as follows:
- Name: nsx-esxi-uplink-hostswitch-profile
- Teaming Policy: Failover Order
- Active Uplinks: uplink-1
- Transport VLAN: 3127
Click Add.
Verify that the Uplink Profile is created.
Deploy each ESXi host in the COMPUTE-cluster as an ESXi host transport node (TN) in NSX-T. If you have not created a separate COMPUTE-cluster for ESXi hosts, deploy each ESXi host in the vSphere cluster as a host transport node in NSX-T.
Go to System > Fabric > Nodes > Host Transport Nodes.
Expand the Compute Manager and select the ESXi host in the COMPUTE-cluster, or each ESXi host in the vSphere cluster.
Click Configure NSX.
In the Host Details tab, enter a name. For example, 10.172.210.57.
In the Configure NSX tab, configure the transport node as follows:
- Type: VDS (do not select the N-VDS option)
- Name: switch-overlay (you must use the same switch name that was configured for the tz-overlay transport zone)
- Transport Zone: tz-overlay
- NIOC Profile: nsx-default-nioc-hostswitch-profile
- Uplink Profile: nsx-esxi-uplink-hostswitch-profile
- LLDP Profile: LLDP [Send Packet Disabled]
- IP Assignment: Use IP Pool
- IP Pool: TEP-IP-POOL
- Uplinks: map uplink-1 to vmnic1
Click Finish.
Verify that the host TN is configured.
To avoid overlay communication failures due to MTU issues, test TEP-to-TEP connectivity and verify that it is working.
SSH to edge-node-1 and get the local TEP IP address, for example 192.23.213.1. Use the command get vteps to get the IP.
SSH to edge-node-2 and get the local TEP IP address, such as 192.23.213.2. Use the command get vteps to get the IP.
SSH to the ESXi host and get the TEP IP address, for example 192.23.213.3. Use the command esxcfg-vmknic -l to get the IP. The interface will be vmk10 and the NetStack will be vxlan.
From each ESXi transport node, test the connections to each NSX-T Edge Node, for example:
# vmkping ++netstack=vxlan 192.23.213.1 -d -s 1572 -I vmk10: OK
# vmkping ++netstack=vxlan 192.23.213.2 -d -s 1572 -I vmk10: OK
Test the connection from NSX-T Edge Node 1 and Edge Node 2 to ESXi TN:
> vrf 0
> ping 192.23.213.1 size 1572 dfbit enable: OK
Test the connection from NSX-T Edge Node 1 to NSX-T Edge Node 2:
> vrf 0
> ping 192.23.213.2 size 1572 dfbit enable: OK
Go to System > Fabric > Nodes > Edge Clusters.
Click Add.
Configure the Edge cluster as follows:
- Name: edge-cluster-1
- Transport Nodes: edge-node-1 and edge-node-2
.Click Add.
Verify.
Create an uplink Logical Switch to be used for the Tier-0 Router.
At upper-right, select the Manager tab.
Go to Networking > Logical Switches.
Click Add.
Configure the new logical switch as follows:
- Name: LS-T0-uplink
- Transport Zone: tz-vlan
- VLAN: 1548
Click Add.
Verify.
Select Networking from the Manager tab.
Select Tier-0 Logical Router.
Click Add.
Configure the new Tier-0 Router as follows:
- Name: T0-router
- Edge Cluster: edge-cluster-1
- High Availability Mode: Active-Active or Active-Standby
- Failover Mode: Non-Preemptive

Note: Configuring the Failover mode is optional if the HA mode is configured as Active-Active. For more information on NSX-T HA mode configuration, see Add a Tier-0 Gateway in the VMware NSX-T Data Center documentation.
Click Save and verify.
Select the T0 router.
Select Configuration > Router Ports.
Click Add.
Configure a new router port as follows:
Click Add and verify.
Select the T0 router.
Select Configuration > Router Ports.
Add a second uplink by creating a second router port for edge-node-2:
Once completed, verify that you have two connected router ports.
Create an HA VIP for the T0 router, and a default route for the T0 router. Then test the T0 router.
Select the Tier-0 Router you created.
Select Configuration > HA VIP.
Click Add.
Configure the HA VIP as follows:
Click Add and verify.
Select Routing > Static Routes.
Click Add.
Configure the static route as follows:
- Network: 0.0.0.0/0
- Next Hop: 10.173.62.253
Click Add and verify.
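The default route can also be created through the NSX-T Manager API. This is a sketch only: T0-ROUTER-UUID is a placeholder for your Tier-0 logical router's ID, and the VIP, credentials, and next hop are the example values from this topic.

```shell
# Sketch: add the 0.0.0.0/0 static route to the T0 router via the Manager API.
# T0-ROUTER-UUID is a placeholder for the logical router's ID.
curl --insecure -u admin:'VMware1!VMware1!' \
  -H "Content-Type: application/json" \
  -X POST "https://10.173.62.47/api/v1/logical-routers/T0-ROUTER-UUID/routing/static-routes" \
  -d '{
    "network": "0.0.0.0/0",
    "next_hops": [ { "ip_address": "10.173.62.253" } ]
  }'
```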
Verify the Tier 0 router by making sure the T0 uplinks and HA VIP are reachable from your laptop.
For example:
> ping 10.173.62.50
PING 10.173.62.50 (10.173.62.50): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.50: icmp_seq=1 ttl=58 time=71.741 ms
64 bytes from 10.173.62.50: icmp_seq=0 ttl=58 time=1074.679 ms
> ping 10.173.62.51
PING 10.173.62.51 (10.173.62.51): 56 data bytes
Request timeout for icmp_seq 0
64 bytes from 10.173.62.51: icmp_seq=0 ttl=58 time=1156.627 ms
64 bytes from 10.173.62.51: icmp_seq=1 ttl=58 time=151.413 ms
> ping 10.173.62.52
PING 10.173.62.52 (10.173.62.52): 56 data bytes
64 bytes from 10.173.62.52: icmp_seq=0 ttl=58 time=6.864 ms
64 bytes from 10.173.62.52: icmp_seq=1 ttl=58 time=7.776 ms
TKGI requires a Floating IP Pool for NSX-T load balancer assignment and the following two IP blocks for Kubernetes pods and nodes:
- TKGI-NODE-IP-BLOCK: 172.23.0.0/16
In the Manager interface, go to Networking > IP Address Pools > IP Block.
Click Add.
Configure the Pod IP Block as follows:
Click Add and verify.
Repeat the same operation for the Node IP Block.
Click Add and verify.
Select the IP Pools tab.
Click Add.
Configure the IP pool as follows:
Click Add and verify.
Networking for the TKGI Management Plane consists of a Tier-1 Router and Switch with NAT Rules for the Management Plane VMs.
Create Tier-1 Logical Switch and Router for TKGI Management Plane VMs. Complete the configuration by enabling Route Advertisement on the T1 router.
In the NSX Management console, navigate to Networking > Logical Switches.
Click Add.
Create the LS for TKGI Management plane VMs:
Click Add and verify creation of the T1 logical switch.
Go to Networking > Tier-1 Logical Router.
Click Add.
Configure the Tier-1 logical router as follows:
Click Add and verify.
Select the T1 router and go to Configuration > Router Ports.
Click Add.
Configure the T1 router port as follows:
Click Add and verify.
Select the Routing tab.
Click Edit and configure route advertisement as follows:
Click Save and verify.
You need to create the following NAT rules on the Tier-0 router for the TKGI Management Plane VMs:
- DNAT: 10.173.62.220 (for example) to access Ops Manager
- DNAT: 10.173.62.221 (for example) to access Harbor
- SNAT: 10.173.62.222 (for example) for all TKGI management plane VM traffic destined for the outside world
In the NSX Management console, navigate to Networking > NAT.
In the Logical Router field, select the T0-router you defined for TKGI.
Click Add.
Configure the Ops Manager DNAT rule as follows:
- Priority: 1000
- Action: DNAT
- Protocol: Any Protocol
- Destination IP: 10.173.62.220, for example
- Translated IP: 10.1.1.2, for example

Click Add and verify.
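The same DNAT rule can equivalently be created through the NSX-T Manager API. This is a sketch only: T0-ROUTER-UUID is a placeholder for your Tier-0 logical router's ID, and the VIP, credentials, and addresses are the example values from this topic.

```shell
# Sketch: create the Ops Manager DNAT rule via the NSX-T Manager API.
# T0-ROUTER-UUID is a placeholder; IP addresses are the examples from this topic.
curl --insecure -u admin:'VMware1!VMware1!' \
  -H "Content-Type: application/json" \
  -X POST "https://10.173.62.47/api/v1/logical-routers/T0-ROUTER-UUID/nat/rules" \
  -d '{
    "action": "DNAT",
    "match_destination_network": "10.173.62.220",
    "translated_network": "10.1.1.2",
    "rule_priority": 1000
  }'
```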
Add a second DNAT rule for Harbor by repeating the same operation:
- Priority: 1000
- Action: DNAT
- Protocol: Any Protocol
- Destination IP: 10.173.62.221, for example
- Translated IP: 10.1.1.6, for example

Verify the creation of the DNAT rules.
Create the SNAT rule for the management plane traffic as follows:
- Priority: 9024
- Action: SNAT
- Protocol: Any Protocol
- Source IP: 10.1.1.0/24, for example
- Translated IP: 10.173.62.222, for example

Verify the creation of the SNAT rule.
The default NSX-T password expiration interval is 90 days. After this period, the NSX-T passwords will expire on all NSX-T Manager Nodes and all NSX-T Edge Nodes. To avoid this, you can extend or remove the password expiration interval, or change the password if needed.
Note: For existing Tanzu Kubernetes Grid Integrated Edition deployments, anytime the NSX-T password is changed you must update the BOSH and TKGI tiles with the new passwords. See Adding Infrastructure Password Changes to the Tanzu Kubernetes Grid Integrated Edition Tile for more information.
To update the NSX-T Manager password, perform the following actions on one of the NSX-T Manager nodes. The changes will be propagated to all NSX-T Manager nodes.
To manage user password expiration, you use the CLI on one of the NSX-T Manager nodes.
To access an NSX-T Manager node from a Unix host, use the command ssh USERNAME@IP_ADDRESS_OF_NSX_MANAGER.
For example:
ssh [email protected]
On Windows, use PuTTY and provide the IP address for NSX-T Manager. Enter the user name and password that you defined during the installation of NSX-T.
To retrieve the password expiration interval, use the following command:
get user USERNAME password-expiration
For example:
NSX CLI (Manager, Policy, Controller 3.0.0.0.0.15946739). Press ? for command list or enter: help
nsx-mgr-1> get user admin password-expiration
Password expires 90 days after last change
To update the user password, use the following command:
set user USERNAME password NEW-PASSWORD old-password OLD-PASSWORD
For example:
set user admin password my-new-pwd old-password my-old-pwd
To set the password expiration interval, use the following command:
set user USERNAME password-expiration PASSWORD-EXPIRATION
For example, the following command sets the password expiration interval to 120 days:
set user admin password-expiration 120
To remove password expiration, use the following command:
clear user USERNAME password-expiration
For example:
clear user admin password-expiration
To verify:
nsx-mgr-1> clear user admin password-expiration
nsx-mgr-1> get user admin password-expiration
Password expiration not configured for this user
To update the NSX-T Edge Node password, perform the following actions on each NSX-T Edge Node.
Note: Unlike the NSX-T Manager nodes, you must update the password or password interval on each Edge Node.
SSH on the Edge Node is disabled by default. You must enable SSH on the Edge Node using the console from vSphere.
start service ssh
set service ssh start-on-boot
For example:
ssh [email protected]
For example:
nsx-edge> get user admin password-expiration
Password expires 90 days after last change
For example:
nsx-edge> set user admin password my-new-pwd old-password my-old-pwd
For example, the following command sets the password expiration interval to 120 days:
nsx-edge> set user admin password-expiration 120
For example:
NSX CLI (Edge 3.0.0.0.0.15946012). Press ? for command list or enter: help
nsx-edge-2> get user admin password-expiration
Password expires 90 days after last change. Current password will expire in 7 days.
nsx-edge-2> clear user admin password-expiration
nsx-edge-2> get user admin password-expiration
Password expiration not configured for this user
Once you have completed the installation of NSX-T v3.0, return to the TKGI installation workflow and proceed with the next phase of the process. See Install Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T Using Ops Manager.