Before you start implementing the components of the Advanced Load Balancing for VMware Cloud Foundation validated solution, you must set up an environment that has a specific compute, storage, and network configuration, and that provides external services to the components of the solution.

Review the Planning and Preparation of Advanced Load Balancing for VMware Cloud Foundation documentation before you deploy NSX Advanced Load Balancer to avoid costly rework and delays.

Hardware Requirements

To implement the NSX Advanced Load Balancer from this design, your hardware must meet certain requirements.

Component            Requirement per Region
------------------   ----------------------------------------------------------------------------------
Servers              BIOS configuration: Advanced Encryption Standard-New Instructions (AES-NI) enabled
Network Interfaces   Minimum of 10 GbE

Software Requirements

To implement the VMware NSX Advanced Load Balancer from this design, your software must meet the requirements specified in the Solution Interoperability of Advanced Load Balancing for VMware Cloud Foundation section.

SCP Backup Target

You can choose to set up a Secure Copy Protocol (SCP) service for remote backups of NSX Advanced Load Balancer before you deploy the components of this design.

Dedicate space on a remote server to save data backups for NSX Advanced Load Balancer over SCP. A connectivity check sketch follows the table below.

Requirement     Description
-------------   ---------------------------------------------------------------------------------------------
Backup Target   A backup target for the Controller VMs in the SDDC. The server must support SCP connections.
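
Before you configure Controller backups against this target, it can be worth confirming that the server actually accepts SCP connections from your management network. The following is a minimal sketch, not part of the validated solution; the host backup01.example.com, user avibackup, and path /backups/avi are hypothetical placeholders, and the script shells out to a locally installed scp client.

    import subprocess

    # Hypothetical SCP backup target details; replace with your environment's values.
    SCP_HOST = "backup01.example.com"
    SCP_USER = "avibackup"
    SCP_PATH = "/backups/avi"

    def verify_scp_target() -> bool:
        """Copy a small probe file to the backup target over SCP.

        A successful copy suggests the target is reachable and the
        destination path is writable by the backup user.
        """
        probe = "/tmp/avi-scp-probe.txt"
        with open(probe, "w") as f:
            f.write("NSX Advanced Load Balancer SCP backup probe\n")

        # BatchMode=yes fails fast instead of prompting for a password,
        # which matches how an unattended backup job would behave.
        result = subprocess.run(
            ["scp", "-o", "BatchMode=yes", probe,
             f"{SCP_USER}@{SCP_HOST}:{SCP_PATH}/"],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode != 0:
            print(f"SCP probe failed: {result.stderr.strip()}")
        return result.returncode == 0

    if __name__ == "__main__":
        print("SCP target OK" if verify_scp_target() else "SCP target not usable")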

VLANs and IP Subnets

This validated solution requires that you allocate certain VLAN IDs and IP subnets for the traffic types in the SDDC.

For the Controllers, it is recommended to share the port group used for core VMware Cloud Foundation management services, that is, the Controller VMs should use the same port group as the vCenter Server and NSX Manager instances.

For the Service Engines, VLAN-backed NSX segments can be used for the following (a subnet-planning sketch follows this list):

  • The management network for the Service Engines, for both types of NSX-T Cloud Connector integration (overlay-backed and VLAN-backed) on the NSX Advanced Load Balancer.

  • The data networks for the Service Engines, for NSX-T Cloud Connector integration of type VLAN on the NSX Advanced Load Balancer.
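
Because overlapping allocations are easy to introduce when several data networks are added, you can sanity-check the planned subnets programmatically. The following is a minimal sketch using Python's standard ipaddress module; the subnet values shown are placeholders for illustration, not values prescribed by this design.

    import ipaddress
    from itertools import combinations

    # Placeholder allocations; substitute the subnets planned for your SDDC.
    planned_subnets = {
        "se-management": ipaddress.ip_network("172.16.31.0/24"),
        "se-data-01": ipaddress.ip_network("172.16.32.0/24"),
        "se-data-02": ipaddress.ip_network("172.16.33.0/24"),
    }

    # Flag any pair of planned subnets that overlaps.
    for (name_a, net_a), (name_b, net_b) in combinations(planned_subnets.items(), 2):
        if net_a.overlaps(net_b):
            print(f"Overlap detected: {name_a} ({net_a}) and {name_b} ({net_b})")

    # Report usable host capacity per subnet (excludes network and broadcast addresses).
    for name, net in planned_subnets.items():
        print(f"{name}: {net} provides {net.num_addresses - 2} usable host addresses")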

Overlay-backed NSX Segments and IP Subnets

If overlay-backed NSX segments are used in the VI workload domains, this design requires that you allocate overlay-backed NSX segments connected to a Tier-1 logical router, and corresponding IP subnets, for the Service Engines to service traffic.

Cluster              Overlay-backed NSX Segment Function                                    Logical Segment Name              Subnet
------------------   --------------------------------------------------------------------   -------------------------------   ------
VI workload domain   Management network for the Service Engines                             sfo-w01-cl01-vds01-pg-avimgmt     -
VI workload domain   Data network for the Service Engines (more can be added as required)   sfo-w01-cl01-vds01-pg-avidata01   -

Note: Alternatively, an NSX VLAN-backed segment can be used as the management network for the Service Engines.

Host Names and IP Addresses

Before you deploy the NSX Advanced Load Balancer by following this design, you must define the host names and IP addresses for the Controller VMs and configure them in DNS with fully qualified domain names (FQDNs) that map the host names to their IP addresses. A DNS verification sketch follows the example table.

Table 1. Example

Component                    Host Name   DNS Zone   IP Address                                  Description
--------------------------   ---------   --------   -----------------------------------------   ---------------------------------------------------
The Controller cluster VIP   -           -          10.10.10.100                                The Controller cluster VIP interface
The Controller instances     -           -          10.10.10.101, 10.10.10.102, 10.10.10.103    The Controller instances for the management cluster
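
Because deployment depends on these records resolving correctly, a quick lookup check can catch missing or mismatched DNS entries early. The following is a minimal sketch using Python's standard socket module; the FQDNs under example.com are hypothetical stand-ins for the names you define, mapped to the example IP addresses from the table above.

    import socket

    # Hypothetical FQDN-to-IP plan; replace with your environment's records.
    expected_records = {
        "avi-cluster.example.com": "10.10.10.100",
        "avi-ctrl-01.example.com": "10.10.10.101",
        "avi-ctrl-02.example.com": "10.10.10.102",
        "avi-ctrl-03.example.com": "10.10.10.103",
    }

    for fqdn, expected_ip in expected_records.items():
        try:
            resolved_ip = socket.gethostbyname(fqdn)  # forward (A record) lookup
        except socket.gaierror as err:
            print(f"FAIL {fqdn}: forward lookup failed ({err})")
            continue
        if resolved_ip != expected_ip:
            print(f"FAIL {fqdn}: resolves to {resolved_ip}, expected {expected_ip}")
            continue
        try:
            reverse_name, _, _ = socket.gethostbyaddr(expected_ip)  # reverse (PTR) lookup
            status = "OK" if reverse_name.rstrip(".").lower() == fqdn else f"PTR mismatch ({reverse_name})"
        except socket.herror:
            status = "no PTR record"
        print(f"{fqdn} -> {resolved_ip}: {status}")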

Workload Footprint

Before you deploy the NSX Advanced Load Balancer, you must provide sufficient compute and storage resources to meet the footprint requirements of the Controller cluster and the Service Engines. A sizing arithmetic sketch follows the tables below.

Note: The Controller VMs must be created in the management domain of VMware Cloud Foundation.

Workload Footprint for Management Domain

Workload                                        vCPUs   vRAM (GB)   Storage (GB)
---------------------------------------------   -----   ---------   ------------
NSX Advanced Load Balancer Controller cluster   -       -           -
Total                                           -       -           -
Total with 30% free storage capacity            -       -           -

Workload Footprint for VI Workload Domain

Workload                               vCPUs   vRAM (GB)   Storage (GB)
------------------------------------   -----   ---------   ------------
Service Engines                        -       -           -
Total                                  -       -           -
Total with 30% free storage capacity   -       -           -
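
The "Total with 30% free storage capacity" rows account for keeping 30% of the datastore capacity free. As a hedged illustration of that arithmetic, the sketch below computes both common readings: provisioning so the workload consumes at most 70% of capacity (total / 0.7), and adding headroom equal to 30% of the workload total (total * 1.3). The per-VM footprint is a placeholder, not a size prescribed by this design; the three-node count matches the three Controller instances in the example above.

    # Placeholder per-VM storage footprint in GB; substitute your actual sizing.
    controller_vm_storage_gb = 128.0  # hypothetical disk size of one Controller VM
    controller_vm_count = 3           # matches the three Controller instances above

    total_gb = controller_vm_storage_gb * controller_vm_count

    # Reading 1: 30% of the provisioned capacity stays free, so the workload
    # may consume at most 70% of it; provision total / 0.7.
    provisioned_capacity_gb = total_gb / 0.7

    # Reading 2: add headroom equal to 30% of the workload total.
    provisioned_additive_gb = total_gb * 1.3

    print(f"Workload total:                      {total_gb:.0f} GB")
    print(f"Provisioned (30% of capacity free):  {provisioned_capacity_gb:.0f} GB")
    print(f"Provisioned (workload total + 30%):  {provisioned_additive_gb:.0f} GB")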