To install the NSX Application Platform successfully and to activate the NSX features that it hosts, you must prepare the deployment environment so that it meets the minimum required resources.

Requirements List

The following table lists the prerequisites that you must satisfy before you start deploying the NSX Application Platform.

Requirement

Details

NSX-T Data Center 3.2 or later, with a valid license

  • The NSX Application Platform is available beginning with the NSX-T Data Center 3.2 release.

  • For a brand new NSX-T Data Center installation, see the NSX-T Data Center Installation Guide for version 3.2 or later at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html for installation instructions.

  • If you are upgrading from NSX-T Data Center 3.1.x or earlier without NSX Intelligence installed, see the NSX-T Data Center Upgrade Guide at https://docs.vmware.com/en/VMware-NSX-T-Data-Center/index.html for the upgrade information.

  • If you are upgrading from NSX-T Data Center 3.1.x or earlier with an installation of NSX Intelligence 1.2.x or earlier, you must prepare your current NSX Intelligence installation before you upgrade to NSX Intelligence 3.2 and NSX-T Data Center 3.2. See the Activating and Upgrading VMware NSX Intelligence document for version 3.2 at https://docs.vmware.com/en/VMware-NSX-Intelligence/index.html.

  • Confirm that the NSX-T Data Center version is compatible with the NSX Application Platform version that you plan to deploy. See the VMware Product Interoperability Matrices at https://interopmatrix.vmware.com.

Valid NSX-T or NSX Data Center license

To deploy the NSX Application Platform, the NSX Manager session in use must have a valid license in effect during the deployment.

See License Requirement for NSX Application Platform Deployment for the list of valid licenses.

Valid NSX-T Data Center user role

To deploy the NSX Application Platform, you must have Enterprise Admin role privileges.

Certificate

  • If your NSX Manager appliance uses CA-signed certificates with partial chain on the NSX Manager Unified Appliance cluster, you must replace the certificate with a full certificate chain. See VMware Knowledge Base article 78317 for more information.

  • When using multiple NSX Manager appliances, your environment must meet one of the following certificate prerequisites.

    • All the appliances must share the same SSL certificate.

    • A dedicated SSL certificate must be issued for each appliance, where the certificate Common Name (CN) must be unique across all nodes.

    • When using a Virtual IP (VIP), the cluster certificate must either be the same certificate shared by all individual appliances or a certificate distinct from those of all the nodes.

Resources for Tanzu Kubernetes cluster (TKC) or upstream Kubernetes cluster

  • VMware supports an NSX Application Platform deployment on a TKC or an upstream Kubernetes cluster. The following lists the versions that VMware supports and has tested.

    • TKC versions:

      • 1.17.17 (See the following Note)

      • 1.18.19 (See the following Note)

      • 1.19.11

      • 1.20.7

      • 1.21.2

      • 1.21.6

    • Upstream Kubernetes cluster versions:

      • 1.17

      • 1.18

      • 1.19

      • 1.20

      • 1.21

    Note:

    The NSX Application Platform Scale Up operation is not supported on TKC versions 1.17.x and 1.18.x. See Failed to Increase the Volume Size of the Data Storage Disk for details.

  • Your infrastructure administrator must configure the TKC or upstream Kubernetes cluster on which you deploy the NSX Application Platform and the NSX features that the platform hosts. Enough resources must be allocated to the TKC or upstream Kubernetes cluster to deploy the NSX Application Platform pods. Because each supported NSX feature has specific resource requirements, determine which of the hosted NSX features you plan to use.

    See the NSX Application Platform System Requirements topic for details about the supported form factors and their resource requirements.

  • Your infrastructure administrator must also install and configure the following infrastructures in advance.

    • Container Network Interface (CNI), such as Antrea, Calico, or Flannel.

    • Container Storage Interface (CSI). You must have an available storage class in the TKC or upstream Kubernetes cluster to provision dynamic volumes. To scale up the data storage volume, the storage class must support volume resize.
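As an illustrative sketch of these cluster prerequisites (the helper function and storage-class name below are placeholders, not VMware tooling), the checks can be run with kubectl against the target cluster:

```shell
# 1. Confirm the cluster version is in the supported 1.17–1.21 range.
#    Query the live version (requires cluster access):
#      kubectl version -o json
#    Then classify it; Scale Up is not supported on 1.17.x and 1.18.x.
supported_minor() {
  minor=$(echo "$1" | cut -d. -f2)
  if [ "$minor" -lt 17 ] || [ "$minor" -gt 21 ]; then
    echo "unsupported"
  elif [ "$minor" -le 18 ]; then
    echo "supported-no-scaleup"
  else
    echo "supported"
  fi
}
supported_minor "1.19.11"   # → supported

# 2. Confirm a CNI is running (Antrea, Calico, or Flannel pods usually
#    run in kube-system):
#      kubectl get pods -n kube-system

# 3. Confirm a storage class exists and allows volume resize, which the
#    data-storage Scale Up operation requires (<name> is a placeholder):
#      kubectl get storageclass
#      kubectl get storageclass <name> -o jsonpath='{.allowVolumeExpansion}'
```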

(Optional) Private container registry with chart repository service

To simplify the NSX Application Platform deployment process, use the VMware-hosted registry and repository. This deployment process uses an outbound connection only and does not retain customer data.

(Optional) If your Kubernetes cluster does not have access to the Internet or you have security restrictions, your infrastructure administrator must set up a private container registry with a chart repository service. Use this private container registry to upload the NSX Application Platform Helm charts and Docker images required to deploy the NSX Application Platform. VMware used Harbor to validate the deployment process that uses a private container registry; however, the NSX Application Platform deployment is standards-based. See Upload the NSX Application Platform Docker Images and Helm Charts to a Private Container Registry for details.
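As a sketch of the upload flow against a Harbor-style registry (the registry host, project, and image names below are placeholders, and the helper function is illustrative, not part of any VMware tooling):

```shell
# Hypothetical helper that builds the fully qualified image reference
# for a private registry: registry/project/image:tag.
target_ref() {
  echo "$1/$2/$3"
}

# Typical flow (requires network access and registry credentials):
#   docker login harbor.example.com
#   docker tag some-napp-image:1.0 "$(target_ref harbor.example.com nsx-app some-napp-image:1.0)"
#   docker push "$(target_ref harbor.example.com nsx-app some-napp-image:1.0)"
target_ref harbor.example.com nsx-app some-napp-image:1.0   # → harbor.example.com/nsx-app/some-napp-image:1.0
```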

(Optional) URL for a private container registry

If you are using a private container registry, obtain from your infrastructure administrator the URL for that registry. You use this URL during the deployment process.

Kubernetes configuration file

You must also obtain the Kubernetes configuration file from your infrastructure administrator. You need the kubeconfig file during the NSX Application Platform deployment for the NSX Manager to securely access your TKC or upstream Kubernetes cluster. The kubeconfig file must have sufficient privileges to access all the resources of the TKC or upstream Kubernetes cluster.

Important:

The default kubeconfig file in a VMware vSphere® with Tanzu Guest Kubernetes Cluster contains a token which expires after ten hours by default. While this expired token does not impact functionality, it results in a warning message regarding out-of-date credentials. To avoid the warning, before you deploy the NSX Application Platform on a TKC, work with your infrastructure administrator to create a long-lived token you can use during the platform deployment. See Generate a Tanzu Kubernetes Cluster Configuration File with a Non-Expiring Token for details on how to extract the token.
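Before deployment, it can help to sanity-check the kubeconfig. The commands and helper below are an illustrative sketch (the file name is a placeholder, and the helper is not VMware tooling):

```shell
# Verify the kubeconfig grants broad access (requires cluster access):
#   kubectl --kubeconfig napp-kubeconfig.yaml auth can-i '*' '*' --all-namespaces

# Hypothetical helper: flag a kubeconfig that still carries an inline
# token, which on vSphere with Tanzu expires after ten hours by default.
has_inline_token() {
  if grep -q 'token:' "$1"; then echo "token-present"; else echo "no-token"; fi
}

# Example against a throwaway file:
cfg=$(mktemp)
printf 'users:\n- user:\n    token: abc123\n' > "$cfg"
has_inline_token "$cfg"   # → token-present
rm -f "$cfg"
```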

Service Name or Interface Service Name (FQDN)

During the NSX Application Platform deployment, you provide a fully qualified domain name (FQDN) for the Service Name text box in an NSX-T 3.2.0 deployment or for the Interface Service Name text box in an NSX-T 3.2.1 or later deployment. The Service Name or Interface Service Name is used as the HTTPS endpoint to connect to the NSX Application Platform.

Use one of the following workflows to obtain the FQDN value.

  • You must configure the FQDN with a static IP address in the DNS server before the NSX Application Platform deployment. The TKC or upstream Kubernetes cluster infrastructure must be able to assign a static IP address. The following are the supported Kubernetes environments.

    • MetalLB, an external load balancer for upstream Kubernetes clusters.

    • VMware Tanzu® Kubernetes for VMware vSphere® 7.0 U2 and VMware vSphere 7.0 U3 with NSX-T Data Center.

    • VMware vSphere with Tanzu using vSphere networking. See the VMware vSphere document, Enable Workload Management with vSphere Networking, for more information.

  • If you have External DNS installed (see https://github.com/kubernetes-sigs/external-dns/), you only need to provide the FQDN when prompted for a Service Name. Your Kubernetes infrastructure automatically configures the FQDN with a dynamic IP address in the DNS server.

    The Workload Control Plane (WCP) with External DNS installed is the supported Kubernetes environment.
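Whichever workflow you use, it is worth confirming the value before entering it. The sketch below uses a placeholder FQDN, and the helper is an illustrative sanity check, not VMware tooling:

```shell
# Verify the FQDN resolves to the expected IP address before deployment
# (napp.example.com is a placeholder):
#   nslookup napp.example.com

# Hypothetical check that the value entered in the Service Name text box
# is a fully qualified name rather than a bare hostname.
looks_like_fqdn() {
  case "$1" in
    *.*) echo "ok" ;;
    *)   echo "not-fqdn" ;;
  esac
}
looks_like_fqdn "napp.example.com"   # → ok
looks_like_fqdn "napp"               # → not-fqdn
```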

Messaging Service Name (for NSX-T 3.2.1 or later deployments)

The Messaging Service Name value is an FQDN for the HTTPS endpoint that is used to receive the streaming data from the NSX data sources.

Ports and Protocols

Verify that the required ports on your TKC or upstream Kubernetes cluster host are open for the NSX Application Platform to access. See https://ports.esp.vmware.com/.

Communication from your TKC or upstream Kubernetes cluster nodes

Confirm that the TKC or upstream Kubernetes cluster nodes are able to reach the NSX Manager appliance.
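A quick way to check both the open ports and the reachability requirement is to probe the NSX Manager endpoint from a cluster node or debug pod. The FQDN below is a placeholder, and the exit-code helper is an illustrative sketch, not VMware tooling:

```shell
# From a cluster node, confirm the NSX Manager endpoint answers:
#   curl -sk -o /dev/null https://nsx-manager.example.com; echo $?

# Hypothetical helper mapping common curl exit codes to a verdict
# (0 = success, 6 = DNS resolution failed, 7 = connection refused).
reachability() {
  case "$1" in
    0) echo "reachable" ;;
    6) echo "dns-failure" ;;
    7) echo "connection-refused" ;;
    *) echo "unreachable" ;;
  esac
}
reachability 0   # → reachable
reachability 7   # → connection-refused
```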

System time synchronization

Synchronize the system times on the TKC or upstream Kubernetes cluster nodes and the NSX Manager appliance.
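A minimal drift check can confirm the synchronization, assuming you have shell access to both systems (the host names are placeholders and the helper is illustrative, not VMware tooling):

```shell
# Hypothetical helper: compare epoch seconds from two systems and flag
# drift beyond a small tolerance (default 5 seconds).
clock_drift_ok() {
  a="$1"; b="$2"; tol="${3:-5}"
  d=$((a - b)); [ "$d" -lt 0 ] && d=$((-d))
  if [ "$d" -le "$tol" ]; then echo "in-sync"; else echo "drift:${d}s"; fi
}

# Typical use (requires SSH access to a cluster node and NSX Manager):
#   t1=$(ssh node1 date +%s); t2=$(ssh nsx-manager date +%s)
#   clock_drift_ok "$t1" "$t2"
clock_drift_ok 1700000000 1700000003   # → in-sync
clock_drift_ok 1700000000 1700000100   # → drift:100s
```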