In general, a load balancer such as NSX Advanced Load Balancer performs address translation for both incoming and outgoing requests. Return packets flow back through the load balancer, and the source and destination addresses are changed according to the configuration on the load balancer.

Note:

Layer 2 and Layer 3 Direct Server Return (DSR) are supported.

The following is the packet flow when Direct Server Return (DSR) is enabled:

  • The load balancer does not perform any address translation for the incoming requests.

  • The traffic is passed to the pool members without any changes to the source and destination addresses.

  • The packet arrives at the server with the virtual IP address as the destination address.

  • The server responds with the virtual IP address as the source address. The return path to the client does not flow back through the load balancer, hence the term Direct Server Return (see the capture sketch after this list).
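As a quick sanity check on a pool member, a packet capture on the data interface should show both directions keyed on the VIP: requests arriving with the VIP as the destination and responses leaving with the VIP as the source. The interface name eth0 and the VIP 10.0.0.100 below are placeholders only:

# Watch VIP traffic on the pool member's data interface
# (eth0 and 10.0.0.100 are assumed values; substitute your own).
tcpdump -ni eth0 host 10.0.0.100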

Note:

This feature is only supported for IPv4.

Use Case

DSR is often applicable to audio and video applications as these applications are sensitive to latency.

Supported Modes

Refer to the following for the supported modes for DSR:

  • Layer 2 DSR (encapsulation: MAC-based translation): The NSX Advanced Load Balancer Service Engine rewrites the source MAC address with the Service Engine interface MAC address and the destination MAC address with the server MAC address.

  • Layer 3 DSR (encapsulation: IP-in-IP): An IP-in-IP tunnel is created from the NSX Advanced Load Balancer Service Engine to the pool members, which can be one or more router hops away. The incoming packets from clients are encapsulated in IP-in-IP, with the source as the Service Engine's interface IP address and the destination as the back-end server IP address.
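For the IP-in-IP mode, each pool member must terminate the tunnel and accept packets addressed to the VIP. The exact steps depend on the server operating system; the following is a minimal sketch for a Linux pool member, analogous to an LVS-TUN real server, with 10.0.0.100 assumed as the VIP:

# Load the IP-in-IP module and bring up the tunnel device.
modprobe ipip
ip link set tunl0 up
# Accept decapsulated packets addressed to the VIP (10.0.0.100 assumed).
ip addr add 10.0.0.100/32 dev tunl0
# Relax reverse-path filtering so decapsulated packets are not dropped.
sysctl -w net.ipv4.conf.tunl0.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0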

Refer to the following for the specification of supported features for DSR:

  • Encapsulation: IP-in-IP and MAC-based translation

  • Ecosystem: VMware write access, VMware no-access, and Linux server cloud

  • Dataplane drivers: DPDK and PCAP support for Linux server cloud

  • BGP: VIP placement using BGP in the front end

  • Load balancing algorithm: Only consistent hash is supported for L2 and L3 DSR

  • TCP/UDP: Both TCP Fast Path and UDP Fast Path are supported in L2 and L3 DSR

  • High availability (SE): N+M, active-active, and active-standby

Layer 2 DSR

  • The destination MAC address of the incoming packets is changed to the server MAC address.

  • Supported modes: DSR over TCP and UDP.

  • Health monitoring of TCP Layer 2 DSR is also supported.

Packet Flow Diagram

The following diagram shows the packet flow for Layer 2 DSR:



Packet Flow
  • Clients send requests to a virtual IP (VIP) served by the load balancer (Step 1).

  • The load balancer determines the real server to forward the request to.

  • The load balancer performs MAC address translation (Step 2).

  • The server responds directly to the client, bypassing the load balancer (Step 3).

Layer 2 DSR Requirements
  • Servers must be on networks directly attached to the load balancer.

  • The load balancer and the servers must be on the same Layer 2 network segment.

  • The VIP must be configured on the server's loopback interface (see the check after this list).
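One way to verify these constraints before attaching the servers to the pool is to confirm that only the load balancer answers ARP for the VIP and that the VIP is present on each server's loopback. The interface name eth0 and the VIP 10.0.0.100 are placeholders only:

# From another host on the same L2 segment: only the Service Engine's MAC
# should answer; the pool members must stay silent for the VIP.
arping -I eth0 -c 3 10.0.0.100
# On each pool member: confirm the VIP is configured on the loopback alias.
ip addr show dev lo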

Configuring Network Profile for Layer 2 DSR

Log in to the NSX Advanced Load Balancer CLI and use the configure networkprofile <profile name> command to enter the network profile configuration mode, then navigate to the TCP fast path profile. For Layer 2 DSR, set the DSR type to dsr_type_l2.

[admin:10-X-X-X]: > configure networkprofile <profile name>
[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> tcp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile> dsr_profile dsr_type dsr_type_l2
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save

Once the network profile is created, create an L4 application virtual service with the DSR network profile created above and attach DSR-capable servers to the pool associated with the virtual service.
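As an illustration only, attaching the new profile to a virtual service from the CLI could look like the following. The object and field names shown here (such as network_profile_ref) follow the NSX Advanced Load Balancer object model, but verify the exact syntax against your release:

[admin:10-X-X-X]: > configure virtualservice <virtual service name>
[admin:10-X-X-X]: virtualservice> network_profile_ref <profile name>
[admin:10-X-X-X]: virtualservice> save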

Configuring Server

Run the following commands on each back-end server in the pool:

# Configure the VIP on a loopback alias with ARP disabled.
ifconfig lo:0 <VIP IP> netmask 255.255.255.255 -arp up
# Answer ARP only for addresses on the incoming interface (hides the loopback VIP).
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# Never use the VIP as the source address in outgoing ARP requests.
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# Relax reverse-path filtering on the interface carrying the pool server IP.
echo 2 > /proc/sys/net/ipv4/conf/<interface of the pool server IP>/rp_filter

# Enable IP forwarding.
sysctl -w net.ipv4.ip_forward=1
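These settings do not persist across a reboot on their own. One common approach is to place the sysctl values in a drop-in file; the path and file name below are assumptions, and the loopback alias plus the interface-specific rp_filter setting still need to be persisted through your distribution's network configuration:

cat <<'EOF' >> /etc/sysctl.d/90-dsr.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
EOF
sysctl --system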