Manual installation of the NSX Application Platform requires your deployment environment to meet the following minimum resource requirements.

Required Kubernetes Cluster Resources

  • For TKG Cluster on Supervisor installation and configuration information, see Installing and Configuring vSphere with Tanzu (version 8.0), or see the VMware vSphere Documentation website for other versions. For tested and supported versions, see the Interoperability Matrix.

  • Your infrastructure administrator must configure the Kubernetes cluster on which you can deploy the NSX Application Platform and the NSX features that the platform hosts. Enough resources must be allocated to the Kubernetes cluster you are using to deploy the NSX Application Platform pods. Because each supported NSX feature has specific resource requirements, determine which of the hosted NSX features you plan to use.

    See the NSX Application Platform System Requirements topic for details about the supported form factors and their resource requirements.

  • Important: The Kubernetes guest cluster used for the NSX Application Platform must use a default Service Domain of cluster.local. This is the default value and is defined in the cluster configuration:
    settings:
      network:
        serviceDomain: cluster.local
    For NSX Application Platform, do not change this value or set a non-default service domain.
  • Your infrastructure administrator must also install and configure the following infrastructure components in advance.

    • Container Network Interface (CNI), such as Antrea, Calico, or Flannel.

    • Container Storage Interface (CSI). To provision dynamic volumes, you must have an available storage class in the Kubernetes cluster you are using. The storage class must also support volume resizing to accommodate scaling or a system upgrade.
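
    For example, a minimal storage class sketch that satisfies the resizing requirement when you use the vSphere CSI driver might look like the following. The class name and the storage policy parameter are illustrative assumptions, not values from the product documentation:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: napp-storage                                 # illustrative name
      provisioner: csi.vsphere.vmware.com                  # vSphere CSI driver
      allowVolumeExpansion: true                           # required so volumes can be resized for scaling or a system upgrade
      parameters:
        storagepolicyname: "vSAN Default Storage Policy"   # illustrative storage policy; a vSAN-backed policy also satisfies the availability best practice later in this topic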

(Optional Requirement) Private Container Registry with Chart Repository Service

To simplify the NSX Application Platform deployment process, use the VMware-hosted registry and repository.

(Optional) If your Kubernetes cluster does not have access to the Internet or you have security restrictions, your infrastructure administrator must set up a private container registry with a chart repository service. Use this private container registry to upload the NSX Application Platform Helm charts and Docker images required to deploy the NSX Application Platform. VMware uses Harbor to validate the deployment process with a private container registry; however, the NSX Application Platform deployment is standards-based. See Upload the NSX Application Platform Docker Images and Helm Charts to a Private Container Registry for details.

(Optional Requirement) URL for a Private Container Registry

If you are using a private container registry, obtain from your infrastructure administrator the URL for that registry. You use this URL during the deployment process.
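
As an illustration only (the hostname and project name below are assumptions, not values from the product documentation), a Harbor registry that includes the ChartMuseum-backed chart repository service typically exposes URLs of the following form:

  Docker images: harbor.example.com/nsx-app-platform
  Helm charts:   https://harbor.example.com/chartrepo/nsx-app-platform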

Required Kubernetes Configuration File

You must also obtain the Kubernetes configuration file (kubeconfig) from your infrastructure administrator. The NSX Manager uses this kubeconfig file during the NSX Application Platform deployment to securely access the Kubernetes cluster that you are using. The kubeconfig file must have full privileges so that it can access all the resources of that Kubernetes cluster.

Important:

The default kubeconfig file in a VMware vSphere® with Tanzu Guest Kubernetes Cluster contains a token that expires after ten hours by default. While an expired token does not impact functionality, it results in a warning message about out-of-date credentials. To avoid the warning, before you deploy the NSX Application Platform on a TKG Cluster on Supervisor, work with your infrastructure administrator to create a long-lived token that you can use during the NSX Application Platform deployment. See Generate a TKG Cluster on Supervisor Configuration File with a Non-Expiring Token for details on how to extract the token.
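
The exact procedure is in the referenced topic. As a rough sketch only, a long-lived token is commonly obtained by creating a dedicated service account, binding it to the required privileges, and requesting a service-account token Secret for it. All names below are illustrative assumptions:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: napp-serviceaccount          # illustrative name
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: napp-serviceaccount-crb      # illustrative name
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin                # the kubeconfig must have full privileges on the cluster
  subjects:
    - kind: ServiceAccount
      name: napp-serviceaccount
      namespace: kube-system
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: napp-serviceaccount-token    # illustrative name
    namespace: kube-system
    annotations:
      kubernetes.io/service-account.name: napp-serviceaccount
  type: kubernetes.io/service-account-token   # Kubernetes populates this Secret with a non-expiring token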

Required Service Name or Interface Service Name (FQDN)

During the NSX Application Platform deployment, you provide a fully qualified domain name (FQDN) for the Service Name text box in an NSX 3.2.0 deployment or for the Interface Service Name text box in an NSX 3.2.1 or later deployment.

The Service Name or Interface Service Name value is used as the HTTPS endpoint to connect to the NSX Application Platform.

To obtain the FQDN value, use one of the following workflows.

  • You must configure the FQDN with a static IP address in the DNS server before the NSX Application Platform deployment. The Kubernetes cluster infrastructure you are using must be able to assign a static IP address. The following are the supported Kubernetes environments.

    • MetalLB - an external load balancer for an Upstream Kubernetes cluster.

    • VMware Tanzu® Kubernetes for VMware vSphere® 7.0 U2 and VMware vSphere 7.0 U3 with NSX.

    • VMware vSphere with Tanzu using vSphere networking. See the VMware vSphere document, Enable Workload Management with vSphere Networking, for more information.

  • If you have External DNS installed (see the Kubernetes SIGs - External DNS web page), you only have to provide the FQDN when prompted for a Service Name. Your Kubernetes infrastructure automatically configures the FQDN with a dynamic IP address in the DNS server.

    Workload Control Plane (WCP) with External DNS installed is the supported Kubernetes environment for this workflow.
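
    As an illustration of this workflow only, External DNS publishes a DNS record for a load-balanced Kubernetes service based on a hostname annotation. The service name, selector, ports, and FQDN below are illustrative assumptions; the NSX Application Platform creates its own services during deployment:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-service                                          # illustrative name
        annotations:
          external-dns.alpha.kubernetes.io/hostname: napp.example.com  # FQDN that External DNS publishes
      spec:
        type: LoadBalancer
        selector:
          app: example                                                 # illustrative selector
        ports:
          - port: 443
            targetPort: 8443                                           # illustrative ports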

Required Messaging Service Name (for NSX 3.2.1 or Later Deployments)

The Messaging Service Name value is an FQDN for the HTTPS endpoint that is used to receive the streaming data from the NSX data sources.

Required Communication from your Kubernetes Cluster Nodes

Confirm that the Kubernetes cluster nodes you are using can reach the NSX Manager appliance.

System Times Synchronization Requirement

Synchronize the system times on the Kubernetes cluster nodes and the NSX Manager appliance that you are using.

Availability and Resiliency Best Practices

To avoid data loss when worker nodes fail, do not use storage classes that keep persistent volumes local to the worker nodes. Consider using a remote, independent, and distributed storage class, such as one backed by VMware vSAN volumes (see the storage class sketch earlier in this topic).

Important Note for Production Deployments:
The following best practices apply to any production deployment for either Advanced form factor.
  • For increased availability and resiliency, 3 control plane nodes are required.
  • VM classes used for the NSX Application Platform control plane nodes and worker nodes must use a guaranteed reservation of 100% for CPU and Memory resources to avoid any resource overcommitment.

There must be no resource contention for storage I/O operations per second (IOPS) or network bandwidth. vSphere Storage I/O Control and Network I/O Control are platform features that can help prioritize resources for the NSX Application Platform.

Load Balancing Requirement

When deploying the NSX Application Platform, you must configure your TKG Cluster on Supervisor or Upstream Kubernetes cluster to have a load balancer (LB) IP pool with at least five IP addresses. To finish an NSX Application Platform deployment successfully, the platform requires at least five available IP addresses. If you plan to scale out your NSX Application Platform deployment later, your TKG Cluster on Supervisor or Kubernetes cluster LB IP pool must contain one more IP address per Kubernetes node used by the platform. Consider configuring your TKG Cluster on Supervisor or Upstream Kubernetes cluster LB IP pool with a total of 15 IP addresses, since VMware only supports a maximum of 10 additional Kubernetes nodes after scaling out the platform.
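
For the MetalLB case, a minimal sketch of an address pool with 15 IP addresses might look like the following (the resource kinds are from MetalLB v0.13 and later; the pool name, namespace, and address range are illustrative assumptions):

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: napp-ip-pool            # illustrative name
    namespace: metallb-system
  spec:
    addresses:
      - 192.0.2.10-192.0.2.24     # 15 addresses: 5 for the deployment plus headroom for scale-out
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: napp-l2                 # illustrative name
    namespace: metallb-system
  spec:
    ipAddressPools:
      - napp-ip-pool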

Additional Volume Requirement

For NSX Application Platform, your guest cluster worker nodes require an additional volume of at least 64 GiB for ephemeral storage.

To specify the disk and storage parameters for each node type, use the information in the following table. For more information, see the v1alpha3 Example: TKC with Default Storage and Node Volumes topic in the Using Tanzu Kubernetes Grid 2.0 on Supervisor with vSphere with Tanzu 8 documentation.

Table 2: Volume Requirement

Node Type      Volume Name   Volume Capacity   Volume mountPath
Worker Node    containerd    64 GiB            /var/lib/containerd
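
A minimal sketch of the corresponding node volume specification in a v1alpha3 TanzuKubernetesCluster manifest follows. It is a fragment that shows only the relevant fields, and the cluster, namespace, node pool, and VM class names are illustrative assumptions:

  apiVersion: run.tanzu.vmware.com/v1alpha3
  kind: TanzuKubernetesCluster
  metadata:
    name: napp-cluster              # illustrative name
    namespace: napp-namespace       # illustrative vSphere Namespace
  spec:
    topology:
      nodePools:
        - name: workers             # illustrative node pool name
          replicas: 3
          vmClass: guaranteed-large # illustrative VM class
          volumes:
            - name: containerd
              mountPath: /var/lib/containerd
              capacity:
                storage: 64Gi       # additional ephemeral storage volume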

Control Plane Node Size Requirement

If you are deploying the NSX Application Platform and NSX features, like Security Intelligence, on a TKG Cluster on Supervisor, the default virtual machine class type of guaranteed-small (2 vCPUs and 4 GB RAM) might not be sufficient for the TKG Cluster on Supervisor control plane node. Consider using the following larger class type for the Upstream Kubernetes cluster or TKG Cluster on Supervisor control plane node.

  • guaranteed-medium (2 vCPUs and 8 GB RAM)

Consult the Virtual Machine Classes for Tanzu Kubernetes Clusters documentation for more information.
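
As a sketch only, the control plane class is set in the same v1alpha3 manifest fragment shown earlier; the replica count of 3 follows the production best practice described above:

  spec:
    topology:
      controlPlane:
        replicas: 3
        vmClass: guaranteed-medium   # 2 vCPUs and 8 GB RAM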