VMware NSX Advanced Load Balancer (formerly Avi Networks) is a software-defined platform that provides a centrally managed dynamic pool of load-balancing resources on commodity x86 servers, VMs, or containers to deliver granular services close to individual applications.
NSX Advanced Load Balancer has three core components:
Admin Console
Controller
Service Engines
NSX Advanced Load Balancer Admin Console
The NSX Advanced Load Balancer Admin Console is a modern web-based user interface that provides role-based access to control, manage, and monitor applications. Its capabilities are also available through the CLI and a REST API, which can be integrated with other systems such as Tanzu Kubernetes Grid.
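As an illustration, the following sketch drives the Controller through its REST API with Python. The controller address, credentials, and API version shown are placeholders, and the login flow (session cookie plus CSRF token) follows the documented Avi API pattern; verify the details against your controller version.

```python
import requests

CONTROLLER = "https://avi-controller.example.com"  # hypothetical address
session = requests.Session()

# Authenticate against the Controller; a successful login returns a
# session cookie and a CSRF token used on subsequent requests.
resp = session.post(f"{CONTROLLER}/login",
                    json={"username": "admin", "password": "REPLACE_ME"})
resp.raise_for_status()

HEADERS = {
    "X-Avi-Version": "22.1.3",  # assumed controller version
    "X-CSRFToken": session.cookies.get("csrftoken", ""),
    "Referer": CONTROLLER,
}

# List the configured Virtual Services.
vs = session.get(f"{CONTROLLER}/api/virtualservice", headers=HEADERS)
vs.raise_for_status()
for item in vs.json().get("results", []):
    print(item["name"])
```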
NSX Advanced Load Balancer Controller
The NSX Advanced Load Balancer Controller is the single point of management and control that serves as the "brain" of the platform. It supports high availability and is deployed as a three-node cluster. The leader node performs load-balancing configuration management for the cluster. The follower nodes collaborate with the leader node to perform data collection from Service Engines and process analytic data.
The NSX Advanced Load Balancer Controllers continually exchange information securely with the Service Engines and with one another. The health of servers, client connection statistics, and client-request logs collected by the Service Engines are regularly offloaded to the Controllers, which share the work of processing the logs and aggregating analytics. The Controllers also send commands, such as configuration changes, to the Service Engines. Controllers and Service Engines communicate using their management IP addresses.
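To make the leader/follower split concrete, the cluster state can be read from the Controller API. A sketch continuing the authenticated session from the previous example; the endpoint and field names follow the Avi API but should be checked against your controller version.

```python
# Continues the authenticated `session` and HEADERS from the previous sketch.
runtime = session.get(f"{CONTROLLER}/api/cluster/runtime", headers=HEADERS)
runtime.raise_for_status()

# Each node reports its cluster role (leader or follower) and state.
for node in runtime.json().get("node_states", []):
    print(node.get("name"), node.get("role"), node.get("state"))
```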
NSX Advanced Load Balancer Service Engine
NSX Advanced Load Balancer Service Engines (Service Engines) are VM-based applications that handle all data plane operations by receiving and executing instructions from the Controller. The Service Engines perform load balancing for all client- and server-facing network interactions. They also collect real-time application telemetry from application traffic flows.
The hardware resources allocated to a Service Engine can be customized in its Service Engine Group.
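For example, per-Service Engine sizing lives on the Service Engine Group object. A sketch continuing the API session from the first example; the field names follow the Avi ServiceEngineGroup schema, and the values and group name are assumptions.

```python
# Fetch the Service Engine Group by name (Default-Group is the stock group).
seg = session.get(f"{CONTROLLER}/api/serviceenginegroup?name=Default-Group",
                  headers=HEADERS).json()["results"][0]

seg.update({
    "vcpu_per_se": 2,       # vCPUs allocated to each Service Engine
    "memory_per_se": 4096,  # memory per Service Engine, in MB
    "disk_per_se": 20,      # disk per Service Engine, in GB
    "max_se": 10,           # upper bound on Service Engines in this group
})

# Avi objects carry their own URL; a full-object PUT applies the change.
session.put(seg["url"], json=seg, headers=HEADERS).raise_for_status()
```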
Virtual Service
Within NSX Advanced Load Balancer, each load balancer service is associated with one or more Virtual Services (VSs) that serve it. Virtual Services are placed on Service Engines, and their placement depends on the Virtual Service placement setting of the corresponding Service Engine Group.
Virtual Service placement mechanisms:
Compact: In this mechanism, NSX Advanced Load Balancer uses the minimum number of Service Engines. New Virtual Services are placed on Service Engines that are already running before additional Service Engines are spun up.
Distributed: In this mechanism, NSX Advanced Load Balancer maximizes Virtual Service performance by avoiding placement on existing Service Engines. Instead, it places Virtual Services on newly spun-up Service Engines, up to the maximum number of Service Engines per group.
If a Virtual Service is scaled out across multiple Service Engines, the Virtual Service placement setting determines which Service Engines are selected.
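In the API, this choice maps to a placement-algorithm field on the Service Engine Group. A sketch continuing the `seg` object from the sizing example; the enum names PLACEMENT_ALGO_PACKED (Compact) and PLACEMENT_ALGO_DISTRIBUTED follow the Avi schema but should be verified against your controller version.

```python
# Continues the `seg` object from the sizing sketch above.
seg["algo"] = "PLACEMENT_ALGO_DISTRIBUTED"  # PLACEMENT_ALGO_PACKED = Compact
session.put(seg["url"], json=seg, headers=HEADERS).raise_for_status()
```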
Recommended NSX Advanced Load Balancer Design
| Design Recommendation | Design Justification | Design Implication | Domain Applicability |
|---|---|---|---|
| Deploy the NSX ALB Controller cluster with three Controllers. | Provides high availability on the control plane. | Requires additional networking and resource capacity. | Management domain; compute clusters for VNF, CNF, C-RAN, Near/Far Edge, and NSX Edge. Not applicable to RAN sites. |
| Deploy an NSX ALB Controller cluster with three Controllers in each domain in a multi-site deployment. | Provides control plane services in the local site even during a primary site failure. | Resources are required to deploy the NSX ALB Controller cluster in each site. | |
| Use the Write access mode so that NSX ALB can dynamically create and scale Service Engines. | Enables automation of Service Engine deployments. | A vCenter SSO user with sufficient permissions must be provided. Sufficient resources must be available for dynamic scaling. | |
| Set the Virtual Service placement across Service Engines to Distributed. | Provides more efficient use of Service Engines. Optimizes the throughput performance of the load balancer. | Requires more resources. | |
| Use a minimum of two Virtual Services for each load balancer service. | Increases the availability of the Virtual Service. | Requires more resources. | |
| Use BGP with ECMP and RHI to advertise VIPs. | BGP and RHI are required for dynamic advertisement when the load balancer scales. | Specific configuration is needed on both NSX ALB and the peer routers to set up BGP. | |
| Use N+M mode for Service Engine Groups. | Provides a good compromise between availability and scaling for the Virtual Services. | The minimum number of Service Engines per Service Engine Group must be three. | |
| Set M to 1 (M=1) to tolerate one Service Engine failure. The equivalent resources of one Service Engine in the Service Engine Group are reserved for an HA event. | Provides better availability and performance during a Service Engine failure event. | Additional resources are required to host the HA Service Engine. | |
| Do not enable Service Engine data-path failure detection; use BGP with BFD instead. | Provides faster failure detection and load distribution using ECMP. | BGP must be used in the environment, and BFD timers must be aligned. | |
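Several of these recommendations land on the same Service Engine Group object. A consolidated sketch continuing the earlier session; the field names (ha_mode, buffer_se, min_scaleout_per_vs, enable_rhi) follow the Avi schema, the Virtual Service name is hypothetical, and the BGP/BFD peering itself is configured separately on the VRF context and the peer routers.

```python
# Continues the `seg` object and session from the earlier sketches.
seg.update({
    "ha_mode": "HA_MODE_SHARED",           # elastic HA with an N+M buffer
    "buffer_se": 1,                        # M=1: reserve one SE's capacity for HA
    "algo": "PLACEMENT_ALGO_DISTRIBUTED",  # distributed Virtual Service placement
    "min_scaleout_per_vs": 2,              # place each Virtual Service on >= 2 SEs
})
session.put(seg["url"], json=seg, headers=HEADERS).raise_for_status()

# RHI-based VIP advertisement is enabled per Virtual Service
# ("my-vs" is a hypothetical name).
vs_obj = session.get(f"{CONTROLLER}/api/virtualservice?name=my-vs",
                     headers=HEADERS).json()["results"][0]
vs_obj["enable_rhi"] = True
session.put(vs_obj["url"], json=vs_obj, headers=HEADERS).raise_for_status()
```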