The NSX Edge load balancer provides high availability and distributes the network traffic load among multiple servers. It distributes incoming service requests across those servers in a way that is transparent to users. Load balancing thus helps achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. NSX Edge provides load balancing up to Layer 7.

You map an external, or public, IP address to a set of internal servers for load balancing. The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the external IP address and decides which internal server to use. Port 80 is the default port for HTTP and port 443 is the default port for HTTPS.
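As a minimal sketch of this mapping (illustrative Python only, not the NSX Edge configuration schema; all names and addresses are hypothetical), a virtual server pairs an external VIP and port with a pool of internal servers, falling back to the protocol's default port when none is given:

```python
# Illustrative sketch of a VIP-to-pool mapping; the dict layout and
# function name are hypothetical, not the NSX Edge configuration format.
DEFAULT_PORTS = {"HTTP": 80, "HTTPS": 443}

def make_virtual_server(vip, protocol, members, port=None):
    """Map an external VIP to a pool of internal servers."""
    return {
        "vip": vip,
        "protocol": protocol,
        # Default to 80 for HTTP and 443 for HTTPS when no port is given.
        "port": port if port is not None else DEFAULT_PORTS.get(protocol),
        "pool": list(members),
    }

vs = make_virtual_server("203.0.113.10", "HTTPS", ["10.0.0.11", "10.0.0.12"])
# vs["port"] is 443, the HTTPS default
```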

You must have a working NSX Edge instance before you can configure load balancing. For information on setting up NSX Edge, see NSX Edge Configuration.

For information on configuring an NSX Edge certificate, see Working with Certificates.

NSX load balancing features are as follows:

  • Protocols: TCP, UDP, HTTP, HTTPS

  • Algorithms: Weighted round robin, IP hash, URI, least connection

  • SSL termination with AES-NI acceleration

  • SSL bridging (client-side SSL + server-side SSL)

  • SSL certificate management

  • X-header forwarding for client identification

  • L4/L7 transparent mode

  • Connection throttling

  • Enable/disable individual servers (pool members) for maintenance

  • Health check methods (TCP, UDP, HTTP, HTTPS)

  • Enhanced health check monitor

  • Persistence/sticky methods: SourceIP, MSRDP, COOKIE, SSLSESSIONID

  • One-arm mode

  • Inline mode

  • URL rewrite and redirection

  • Application Rules for advanced traffic management

  • HA session sticky support for L7 proxy load balancing

  • IPv6 support

  • Enhanced load balancer CLI for troubleshooting

  • Available on all sizes of an NSX Edge services gateway, with X-Large or Quad Large recommended for production traffic
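To make two of the listed algorithms concrete, the following is a simplified Python sketch (the actual NSX scheduler may interleave members differently; server names and IPs are hypothetical). Weighted round robin serves members in proportion to their weights, and IP hash deterministically pins a client IP to one pool member:

```python
import itertools
import zlib

def weighted_round_robin(members):
    """Yield pool members in proportion to their weights.
    members: list of (server, weight) pairs.
    Naive expansion; real schedulers interleave more smoothly."""
    expanded = [server for server, weight in members for _ in range(weight)]
    return itertools.cycle(expanded)

def ip_hash(client_ip, pool):
    """Map a client IP to the same pool member on every request.
    CRC32 is used here only for a stable, illustrative hash."""
    return pool[zlib.crc32(client_ip.encode()) % len(pool)]

# Hypothetical pool: web1 is weighted to take twice web2's share.
picker = weighted_round_robin([("web1", 2), ("web2", 1)])
first_six = [next(picker) for _ in range(6)]
# first_six == ["web1", "web1", "web2", "web1", "web1", "web2"]
```

Note that IP-hash distribution gives an implicit form of persistence, while the explicit persistence methods listed above (SourceIP, COOKIE, and so on) work with any algorithm.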

Topologies

There are two load-balancing topologies to configure in NSX: one-armed mode, also known as proxy mode, and inline mode, also known as transparent mode.

NSX Logical Load Balancing: Inline Topology

Inline (transparent) mode deploys the NSX Edge inline with the traffic destined for the server farm. Transparent mode traffic flow is processed as follows:

  • The external client sends traffic to the virtual IP address (VIP) exposed by the load balancer.

  • The load balancer (a centralized NSX Edge) performs only destination NAT (DNAT) to replace the VIP with the IP address of one of the servers deployed in the server farm.

  • The server in the server farm replies to the original client IP address. The traffic is received again by the load balancer since it is deployed inline, usually as the default gateway for the server farm.

  • The load balancer performs source NAT to send traffic to the external client, using its VIP as the source IP address.
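The address handling in the steps above can be sketched as follows (illustrative Python only; all IP addresses are hypothetical). The key property of inline mode is that the client's source address is preserved on the way in, so the server farm sees the real client IP and replies through the load balancer, its default gateway:

```python
# Sketch of inline (transparent) mode address translation; hypothetical IPs.
VIP = "203.0.113.10"     # virtual IP exposed by the load balancer
SERVER = "10.0.0.11"     # chosen server-farm member

def inline_forward(pkt):
    """Ingress: only the destination is rewritten (DNAT, VIP -> server).
    The client's source address is preserved."""
    return {"src": pkt["src"], "dst": SERVER}

def inline_reply(pkt):
    """Return path: the load balancer restores the VIP as the source
    address before forwarding the reply to the client."""
    return {"src": VIP, "dst": pkt["dst"]}

client_pkt = {"src": "198.51.100.5", "dst": VIP}
to_server = inline_forward(client_pkt)
# Server replies to the original client IP; the reply transits the LB
# because the LB is the server farm's default gateway.
server_reply = {"src": SERVER, "dst": to_server["src"]}
to_client = inline_reply(server_reply)
```

Because the servers see the true client address, no X-header forwarding is needed in this topology.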

NSX Logical Load Balancing: One-Armed Topology

One-armed (proxy) mode deploys an NSX Edge directly connected to the logical network where load-balancing services are required. Traffic flow is processed as follows:

  • The external client sends traffic to the Virtual IP address (VIP) exposed by the load balancer.

  • The load balancer performs two address translations on the original packets received from the client: destination NAT (DNAT) to replace the VIP with the IP address of one of the servers deployed in the server farm, and source NAT (SNAT) to replace the client IP address with an IP address identifying the load balancer itself. SNAT forces the return traffic from the server farm back through the load balancer.

  • The server in the server farm replies to the load balancer, because SNAT made the load balancer's address the source of the connection.

  • The load balancer reverses both translations to send traffic to the external client, using its VIP as the source IP address.
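The one-arm flow above can be sketched the same way (illustrative Python only; all IP addresses are hypothetical). Unlike inline mode, both the source and destination are rewritten on ingress, so the load balancer must remember the real client address to undo the translations on the reply:

```python
# Sketch of one-arm (proxy) mode address translation; hypothetical IPs.
VIP = "203.0.113.10"     # virtual IP exposed by the load balancer
LB_IP = "10.0.0.2"       # load balancer's own interface address
SERVER = "10.0.0.11"     # chosen server-farm member

def one_arm_forward(pkt):
    """Ingress: DNAT (VIP -> server) plus SNAT (client -> LB), so the
    server's reply is forced back through the load balancer.
    Returns the rewritten packet and the saved client address."""
    return {"src": LB_IP, "dst": SERVER}, pkt["src"]

def one_arm_reply(server_pkt, client_ip):
    """Return path: reverse both translations, restoring the VIP as
    source and the real client as destination."""
    return {"src": VIP, "dst": client_ip}

client_pkt = {"src": "198.51.100.5", "dst": VIP}
to_server, saved_client = one_arm_forward(client_pkt)
server_reply = {"src": SERVER, "dst": to_server["src"]}  # server replies to LB
to_client = one_arm_reply(server_reply, saved_client)
```

A consequence of the SNAT step is that servers never see the real client IP, which is why the X-header forwarding feature listed earlier (for example, inserting the client address into an HTTP header) matters in this topology.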