In general, a load balancer (NSX Advanced Load Balancer) performs address translation for both incoming and outgoing requests. Return packets flow back through the load balancer, which rewrites the source and destination addresses according to its configuration.
The following is the packet flow when Direct Server Return (DSR) is enabled:
- The load balancer does not perform any address translation on incoming requests.
- Traffic is passed to the pool members without any change to the source or destination address.
- The packet arrives at the server with the virtual IP address as the destination address.
- The server responds with the virtual IP address as the source address. The return path to the client does not flow back through the load balancer, hence the term Direct Server Return.
This feature is only supported for IPv4.
Using pool sharing for DSR is not supported.
Use Case
DSR is often used for latency-sensitive workloads such as audio and video applications, because return traffic bypasses the load balancer.
Supported Modes
The supported modes for DSR are as follows:
DSR Type | Encapsulation | How it works
---|---|---
Layer 2 DSR | MAC-based translation | NSX Advanced Load Balancer rewrites the source MAC address with the Service Engine interface MAC address and the destination MAC address with the server MAC address.
Layer 3 DSR | IP-in-IP | An IP-in-IP tunnel is created from NSX Advanced Load Balancer to the pool members, which can be one or more router hops away. Incoming packets from clients are encapsulated in IP-in-IP with the Service Engine's interface IP as the source and the back-end server IP address as the destination.
Layer 3 DSR | GRE | A Generic Routing Encapsulation (GRE) tunnel is supported for Layer 3 DSR. In this case, incoming packets from clients are encapsulated in a GRE header, followed by the outer IP header (delivery header).
The following table lists feature support for DSR:

Feature | Support
---|---
Encapsulation | IP-in-IP and MAC-based translation
Ecosystem | VMware write access, VMware no-access, and Linux server cloud
Dataplane drivers | DPDK and PCAP support for Linux server cloud
BGP | VIP placement using BGP in the front end
Load balancing algorithm | Only Consistent Hash is supported for L2 and L3 DSR
TCP/UDP | Both TCP Fast Path and UDP Fast Path are supported in L2 and L3 DSR
High Availability (SE) | N+M, active-active, and active-standby
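Because only Consistent Hash is supported as the load balancing algorithm for DSR, the pool attached to the DSR virtual service must be configured with it. The following is a minimal CLI sketch, assuming a pool named dsr-pool and the source IP address hash key (both illustrative):

[admin:10-X-X-X]: > configure pool dsr-pool
[admin:10-X-X-X]: pool> lb_algorithm lb_algorithm_consistent_hash
[admin:10-X-X-X]: pool> lb_algorithm_hash lb_algorithm_consistent_hash_source_ip_address
[admin:10-X-X-X]: pool> save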
Layer 2 DSR
The destination MAC address of incoming packets is changed to the server's MAC address.
Supported modes: DSR over TCP and UDP.
Health monitoring is also supported for TCP Layer 2 DSR.
The following diagram shows the packet flow for Layer 2 DSR:
Packet Flow

- Clients send requests to a virtual IP (VIP) served by the load balancer (Step 1).
- The load balancer determines the real server to forward the request to.
- The load balancer performs MAC address translation (Step 2).
- The server responds directly to the client, bypassing the load balancer (Step 3).

Layer 2 DSR Requirements

- Servers must be on networks directly attached to the load balancer.
- The load balancer and the servers must be on the same Layer 2 network segment.
- The VIP must be configured on the server's loopback interface.
Configuring Network Profile for Layer 2 DSR
Log in to the NSX Advanced Load Balancer CLI and use the configure networkprofile <profile name> command. Within the profile, enter the TCP fast path profile mode and, for Layer 2 DSR, set the DSR type to dsr_type_l2.
[admin:10-X-X-X]: > configure networkprofile <profile name>
[admin:10-X-X-X]: networkprofile> profile
[admin:10-X-X-X]: networkprofile profile> tcp_fast_path_profile
[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile> dsr_profile dsr_type dsr_type_l2
[admin:10-X-X-X]: networkprofile profile:dsr_profile> save
[admin:10-X-X-X]: networkprofile> save
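If the Layer 3 mode from the Supported Modes table is required instead, the same dsr_profile object selects it. A minimal sketch of the differing line, assuming the dsr_type_l3 value and a dsr_encapsulation field with encap_ipinip; verify the exact field names against your release:

[admin:10-X-X-X]: networkprofile profile:tcp_fast_path_profile> dsr_profile dsr_type dsr_type_l3 dsr_encapsulation encap_ipinip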
Once the network profile is created, create an L4 application virtual service with the DSR network profile created above and attach DSR-capable servers to the pool associated with the virtual service.
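A minimal CLI sketch of this step, assuming a pool named dsr-pool and the default System-L4-Application profile; the virtual service name is illustrative, and the VIP and service port are configured as usual and omitted here:

[admin:10-X-X-X]: > configure virtualservice vs-dsr
[admin:10-X-X-X]: virtualservice> application_profile_ref System-L4-Application
[admin:10-X-X-X]: virtualservice> network_profile_ref <profile name>
[admin:10-X-X-X]: virtualservice> pool_ref dsr-pool
[admin:10-X-X-X]: virtualservice> save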
Configuring Server
On each pool server, configure the VIP on the loopback interface and adjust the ARP and reverse-path settings so that the server accepts traffic destined to the VIP without advertising it:

# Add the VIP to the loopback interface with ARP disabled for the alias
ifconfig lo:0 <VIP ip> netmask 255.255.255.255 -arp up
# Reply to ARP only for addresses configured on the receiving interface
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# Use the best local address when sending ARP requests
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# Relax reverse-path filtering (loose mode) on the interface carrying the pool server IP
echo 2 > /proc/sys/net/ipv4/conf/<Interface of pool server ip configured>/rp_filter
# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
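These runtime settings do not survive a reboot. A minimal sketch of persisting the kernel parameters, assuming a distribution that reads /etc/sysctl.d/ (the file name is illustrative); add the per-interface rp_filter entry with the actual interface name, and make the loopback VIP alias persistent through the distribution's own network configuration:

# /etc/sysctl.d/90-dsr.conf (illustrative file name)
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1

Apply the file with sysctl --system.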