To install the NSX Application Platform successfully and to activate the NSX features that it hosts, you must prepare the deployment environment so that it meets the minimum required resources.
You must satisfy the prerequisites listed in the following sections before you start deploying the NSX Application Platform.
NSX version requirement
Confirm that the NSX product version you are using is compatible with the NSX Application Platform version that you plan to deploy, along with its related NSX features (NSX Intelligence, NSX Network Detection and Response, NSX Malware Prevention, and NSX Metrics).
The versioning of the NSX features that are hosted on the NSX Application Platform matches the NSX Application Platform version number, and not the NSX product version number.
In an NSX Federation environment, you can deploy the NSX Application Platform on Local Managers only. You cannot deploy the NSX Application Platform using Global Managers. You can access the NSX Application Platform using a Local Manager only.
To determine which NSX Application Platform version you can deploy with which NSX version, use the following compatibility matrix.
NSX version | Compatible NSX Application Platform version
---|---
3.2.x | 3.2.0, 3.2.1, 4.0.1
4.0.0.1 | 3.2.1, 4.0.1
4.0.1 | 4.0.1
- If you need a new NSX installation, see the NSX Installation Guide for version 3.2 or later in the VMware NSX Documentation set for installation instructions.
- For information about upgrading from an NSX Application Platform 3.2.x installation, see Upgrade the NSX Application Platform.
- If you are upgrading from NSX 3.1.x or earlier without NSX Intelligence installed, see the NSX Upgrade Guide in the VMware NSX Documentation set.
- If you are upgrading from NSX 3.1.x or earlier with NSX Intelligence 1.2.x or earlier installed, you must prepare your current NSX Intelligence installation before you upgrade to NSX Intelligence 3.2.x or later and NSX 3.2.x or later. See the Activating and Upgrading VMware NSX Intelligence documentation for version 3.2 or later in the VMware NSX Intelligence Documentation set.
Valid NSX or NSX Data Center license requirement
To deploy the NSX Application Platform, the NSX Manager session in use must have a valid license in effect during the deployment.
See License Requirement for NSX Application Platform Deployment for the list of valid licenses.
Valid NSX user role
To deploy the NSX Application Platform, you must have Enterprise Admin role privileges.
Valid CA-signed certificates
- If your NSX Manager appliance uses CA-signed certificates with a partial chain on the NSX Manager Unified Appliance cluster, you must replace the certificate with one that carries the full certificate chain. See VMware Knowledge Base article 78317 for more information.
- When using multiple NSX Manager appliances, your environment must meet one of the following certificate prerequisites:
  - All the appliances share the same SSL certificate.
  - A dedicated SSL certificate is issued for each appliance, and the certificate Common Name (CN) is unique across all nodes.
- When using a virtual IP (VIP), the cluster certificate must either be the same certificate shared by all individual appliances or be unique from those of all the nodes.
Required resources for the Kubernetes cluster
- VMware supports an NSX Application Platform deployment on a Tanzu Kubernetes Grid (TKG) Cluster on Supervisor or an Upstream Kubernetes cluster.

  Important: Upstream Kubernetes refers to the vanilla, open-source Kubernetes maintained by the Cloud Native Computing Foundation and does not cover any distributions or releases of Kubernetes that are not explicitly listed in the following table.
For TKG Cluster on Supervisor installation and configuration information, see Installing and Configuring vSphere with Tanzu (version 8.0) or VMware vSphere Documentation website for other versions.
VMware tested and supports the following versions.

NSX Application Platform version | TKG Cluster on Supervisor version | Upstream Kubernetes cluster version
---|---|---
3.2.0, 3.2.1 | 1.17 - 1.21 | 1.17 - 1.21
4.0.1 | 1.20 - 1.22 | 1.20 - 1.24
- Your infrastructure administrator must configure the Kubernetes cluster on which you deploy the NSX Application Platform and the NSX features that the platform hosts. Enough resources must be allocated to that Kubernetes cluster for the NSX Application Platform pods. Because each supported NSX feature has specific resource requirements, determine which of the hosted NSX features you plan to use. See the NSX Application Platform System Requirements topic for details about the supported form factors and their resource requirements.
- Important: The Kubernetes guest cluster used for the NSX Application Platform must use the default Service Domain of cluster.local. This default value is defined in the cluster configuration:

  ```yaml
  settings:
    network:
      serviceDomain: cluster.local
  ```

  For the NSX Application Platform, do not change this value or set a non-default service domain.
- Your infrastructure administrator must also install and configure the following infrastructure in advance:
  - A Container Network Interface (CNI), such as Antrea, Calico, or Flannel.
  - A Container Storage Interface (CSI). To provision dynamic volumes, you must have an available storage class in the Kubernetes cluster you are using. To scale up the data storage volume, the storage class must support volume resizing.
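To verify the CSI prerequisite before starting the deployment, a quick check like the following (assuming `kubectl` access to the cluster; the storage class name is a hypothetical placeholder) shows whether a storage class exists and whether it allows volume expansion:

```shell
# Verify the CSI prerequisite: an available storage class whose
# provisioner permits volume expansion (needed to scale data volumes).
STORAGE_CLASS="my-napp-storage"   # hypothetical class name; use your own

if command -v kubectl >/dev/null 2>&1; then
  # The EXPANSION column must show "true" for the class you plan to use;
  # otherwise, scaling up the data storage volume later fails.
  kubectl get storageclass "${STORAGE_CLASS}" \
    -o custom-columns=NAME:.metadata.name,EXPANSION:.allowVolumeExpansion
else
  echo "kubectl not found; run this from a host with access to the cluster"
fi
```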
Internet access requirement
Ensure that your NSX system can access the public VMware-hosted registry and repository from which you obtain the packaged NSX Application Platform Helm chart and Docker images. Direct Internet access is required only during installation and upgrade operations, and is limited to outbound access on TCP port 443 (HTTPS) to https://projects.registry.vmware.com for the purpose of accessing the NSX Application Platform Helm charts and Docker images. No inbound access or permanent outbound access is required.
Outbound Internet access is required for both the NSX Unified Appliance VMs and NSX Application Platform guest cluster worker nodes.
If you configured your NSX environment to use an Internet proxy server, note that the NSX Application Platform cannot be deployed through an Internet proxy server. If your Kubernetes cluster does not have access to the Internet or you have security restrictions, see the next optional requirement for a private container registry with a chart repository service.
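One way to confirm the outbound path before deploying is a simple HTTPS probe, run from both the NSX Unified Appliance VMs and the guest cluster worker nodes; any HTTP status code in the response proves the TCP 443 path is open, while a timeout or refusal points to a firewall or routing issue:

```shell
# Confirm outbound HTTPS (TCP 443) to the VMware-hosted registry.
REGISTRY_HOST="projects.registry.vmware.com"

curl -sS -o /dev/null -w "%{http_code}\n" --max-time 10 \
  "https://${REGISTRY_HOST}/" \
  || echo "no outbound HTTPS path to ${REGISTRY_HOST}"
```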
(Optional Requirement) Private container registry with chart repository service
To simplify the NSX Application Platform deployment process, use the VMware-hosted registry and repository. This deployment process uses an outbound connection only and does not retain customer data.
(Optional) If your Kubernetes cluster does not have access to the Internet or you have security restrictions, your infrastructure administrator must set up a private container registry with a chart repository service. Use this private container registry to upload the NSX Application Platform Helm charts and Docker images required to deploy the NSX Application Platform. VMware used Harbor to validate the deployment process that uses a private container registry; however, the NSX Application Platform deployment is standards-based. See Upload the NSX Application Platform Docker Images and Helm Charts to a Private Container Registry for details.
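The upload procedure itself is documented in the topic referenced above; as a rough sketch, mirroring an image into a Harbor-style private registry follows the usual pull/tag/push pattern. All names below (registry URL, project, image, and tag) are hypothetical placeholders, not the actual NSX Application Platform artifact names:

```shell
# Generic sketch of mirroring one image into a private registry.
PRIVATE_REGISTRY="harbor.example.com"   # assumed private registry URL
PROJECT="nsx-app-platform"              # assumed Harbor project name
IMAGE="example/image:tag"               # placeholder image reference

# Pull from the public VMware-hosted registry, retag for the private
# registry, and push; "|| true" keeps the sketch safe to paste on hosts
# without Docker or network access.
docker pull "projects.registry.vmware.com/${IMAGE}" || true
docker tag  "projects.registry.vmware.com/${IMAGE}" \
            "${PRIVATE_REGISTRY}/${PROJECT}/${IMAGE#*/}" || true
docker push "${PRIVATE_REGISTRY}/${PROJECT}/${IMAGE#*/}" || true
```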
(Optional Requirement) URL for a private container registry
If you are using a private container registry, obtain from your infrastructure administrator the URL for that registry. You use this URL during the deployment process.
Required Kubernetes configuration file
You must also obtain the Kubernetes configuration file (kubeconfig) from your infrastructure administrator. NSX Manager uses this kubeconfig file during the NSX Application Platform deployment to access the Kubernetes cluster securely. The kubeconfig file must grant sufficient privileges to access all the resources of the Kubernetes cluster you are using.
The default kubeconfig file in a VMware vSphere® with Tanzu Kubernetes guest cluster contains a token that expires after ten hours by default. While an expired token does not impact functionality, it results in a warning message about out-of-date credentials. To avoid the warning, before you deploy the NSX Application Platform on a TKG Cluster on Supervisor, work with your infrastructure administrator to create a long-lived token that you can use during the NSX Application Platform deployment. See Generate a TKG Cluster on Supervisor Configuration File with a Non-Expiring Token for details on how to extract the token.
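For context, on clusters where the `kubectl create token` subcommand is available (Kubernetes 1.24 or later), a longer-lived token can be requested for a dedicated service account, as a rough sketch of what the referenced procedure accomplishes. The service account name here is a hypothetical placeholder, and the official steps are in the linked TKG topic:

```shell
# Sketch: request a roughly one-year token for a pre-created
# service account (placeholder name).
TOKEN_SA="napp-admin-sa"
TOKEN_TTL="8760h"

if command -v kubectl >/dev/null 2>&1; then
  kubectl create token "${TOKEN_SA}" --duration="${TOKEN_TTL}"
else
  echo "kubectl not found; run this from a host with access to the cluster"
fi
```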
Required Service Name or Interface Service Name (FQDN)
During the NSX Application Platform deployment, you provide a fully qualified domain name (FQDN) for the Service Name text box in an NSX 3.2.0 deployment or for the Interface Service Name text box in an NSX 3.2.1 or later deployment.
The Service Name or Interface Service Name value is used as the HTTPS endpoint to connect to the NSX Application Platform.
To obtain the FQDN value, use one of the following workflows.
- You must configure the FQDN with a static IP address in the DNS server before the NSX Application Platform deployment. The Kubernetes cluster infrastructure you are using must be able to assign a static IP address. The following Kubernetes environments are supported:
  - MetalLB, an external load balancer for an Upstream Kubernetes cluster.
  - VMware Tanzu® Kubernetes for VMware vSphere® 7.0 U2 and VMware vSphere 7.0 U3 with NSX.
  - VMware vSphere with Tanzu using vSphere networking. See the VMware vSphere document Enable Workload Management with vSphere Networking for more information.
- If you have External DNS installed (see the Kubernetes SIGs - External DNS webpage), you only have to provide the FQDN when prompted for a Service Name. Your Kubernetes infrastructure automatically configures the FQDN with a dynamic IP address in the DNS server. The Workload Control Plane (WCP) with External DNS installed is the supported Kubernetes environment for this workflow.
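In the static-IP workflow, a quick lookup confirms that the FQDN you plan to enter as the Service Name (or Interface Service Name) resolves before the deployment starts; the FQDN below is a placeholder for the name your DNS administrator configured:

```shell
# Confirm the Service Name FQDN resolves to the expected static IP.
NAPP_FQDN="napp.example.com"   # placeholder; substitute your own FQDN

nslookup "${NAPP_FQDN}" \
  || echo "${NAPP_FQDN} does not resolve yet; verify the DNS record"
```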
Required Messaging Service Name (for NSX 3.2.1 or later deployments)
The Messaging Service Name value is an FQDN for the HTTPS endpoint that receives the streamed data from the NSX data sources.
Required ports and protocols
Verify that the required ports on your Kubernetes cluster host are open for the NSX Application Platform to access. See the VMware Ports and Protocols webpage.
Required communication from your Kubernetes cluster nodes
Confirm that the Kubernetes cluster nodes you are using can reach the NSX Manager appliance.
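A simple probe, run from a Kubernetes worker node (or from a debug pod scheduled on one), can confirm this reachability; the address below is a placeholder for your NSX Manager FQDN or IP:

```shell
# Confirm a cluster node can reach the NSX Manager appliance over HTTPS.
NSX_MANAGER="nsx-manager.example.com"   # placeholder FQDN or IP

curl -k -sS -o /dev/null -w "%{http_code}\n" --max-time 10 \
  "https://${NSX_MANAGER}/" \
  || echo "cannot reach ${NSX_MANAGER} over TCP 443"
```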
System times synchronization requirement
Synchronize the system times on the Kubernetes cluster nodes and the NSX Manager appliance that you are using.
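A spot check like the following, run on each Kubernetes node and on the NSX Manager appliance, helps confirm the clocks agree; if they drift, point all hosts at a common NTP source:

```shell
# Capture the current UTC time on this host for comparison across hosts.
NOW_UTC=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
echo "UTC time on this host: ${NOW_UTC}"

# On systemd-based hosts, report whether NTP synchronization is active;
# guarded so the snippet is safe where timedatectl is unavailable.
timedatectl show -p NTPSynchronized 2>/dev/null || true
```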