The Edge Services Gateway (ESG) can be thought of as a proxy for the incoming client traffic.

In proxy mode, the load balancer uses its own IP address as the source address to send requests to a back-end server. The back-end server views all traffic as being sent from the load balancer and responds to the load balancer directly. This mode is also called SNAT mode or non-transparent mode. For more information, refer to the NSX Administration Guide.

A typical NSX one-armed load balancer is deployed on the same subnet as its back-end servers, separate from the logical router. The NSX load balancer virtual server listens on a virtual IP for incoming requests from clients and dispatches the requests to the back-end servers. For the return traffic, reverse NAT is required to change the source IP address from the back-end server address to the virtual IP (VIP) address before the response is sent back to the client. Without this operation, the connection to the client can break.

After the ESG receives the traffic, it performs the following two operations:
  • Destination Network Address Translation (DNAT) to change the VIP address to the IP address of one of the load balanced machines.
  • Source Network Address Translation (SNAT) to replace the client IP address with the ESG IP address.

The ESG then sends the traffic to the load balanced server, the load balanced server sends its response back to the ESG, and the ESG returns the response to the client. This option is much easier to configure than the inline mode, but it has two potential caveats. The first is that this mode requires a dedicated ESG, and the second is that the load balanced servers are not aware of the original client IP address. One workaround for HTTP or HTTPS applications is to enable the Insert X-Forwarded-For option in the HTTP application profile so that the client IP address is carried in the X-Forwarded-For HTTP header in the request that is sent to the back-end server.
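
For example, with Insert X-Forwarded-For enabled, a back-end web application can recover the original client address from that header. The following minimal sketch of a hypothetical Flask back end (not part of NSX, and assuming the Edge load balancer is the only proxy in the path) illustrates the idea:

    # Minimal sketch of a hypothetical back-end application (not part of NSX).
    # It recovers the original client IP from the X-Forwarded-For header that
    # the ESG inserts when the Insert X-Forwarded-For option is enabled.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def whoami():
        # In non-transparent mode, request.remote_addr is the ESG (SNAT) address.
        # If the load balancer is the only proxy in the path, the first entry in
        # X-Forwarded-For is the original client address.
        forwarded = request.headers.get("X-Forwarded-For", "")
        client_ip = forwarded.split(",")[0].strip() if forwarded else request.remote_addr
        return f"client={client_ip} via={request.remote_addr}\n"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)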

If client IP address visibility is required on the back-end server for applications other than HTTP or HTTPS, you can configure the IP pool to be transparent. If the clients are not on the same subnet as the back-end servers, inline mode is recommended. Otherwise, you must set the load balancer IP address as the default gateway of the back-end servers so that the return traffic passes through the load balancer.

Note:
Usually, there are three methods to guarantee connection integrity:
  • Inline/transparent mode
  • SNAT/proxy/non-transparent mode (discussed above)
  • Direct server return (DSR) - currently unsupported
In DSR mode, the back-end server responds directly to the client. The NSX load balancer does not support this mode.

The following procedure explains how to configure a one-armed load balancer with an HTTPS offloading (SSL offloading) application profile.

Procedure

  1. Log in to the vSphere Web Client.
  2. Click Networking & Security > NSX Edges.
  3. Double-click an NSX Edge.
  4. Click Manage > Settings > Certificate.
    For this scenario, add a self-signed certificate.
  5. Enable the load balancer service.
    1. Click Manage > Load Balancer > Global Configuration.
    2. Click Edit and enable the load balancer.
  6. Create an HTTPS application profile.
    1. Click Manage > Load Balancer > Application Profiles.
    2. Click Add and specify the application profile parameters.
      NSX 6.4.5 and later:
      1. In the Application Profile Type drop-down menu, select HTTPS Offloading.
      2. In the Name text box, enter the name of the profile. For example, enter Web-SSL-Profile.
      3. Click Client SSL > Service Certificates.
      4. Select the self-signed certificate that you added earlier.
      NSX 6.4.4 and earlier:
      1. In the Type drop-down menu, select HTTPS.
      2. In the Name text box, enter the name of the profile. For example, enter Web-SSL-Profile.
      3. Select the Configure Service Certificate check box.
      4. Select the self-signed certificate that you added earlier.
  7. (Optional) Click Manage > Load Balancer > Service Monitoring. Edit the default service monitor to change it from a basic HTTP or HTTPS check to a specific URL or URI, as required.
  8. Create a server pool.
    1. Click Manage > Load Balancer > Pools, and then click Add.
    2. In the Name text box, enter a name for the server pool. For example, enter Web-Tier-Pool-01.
    3. In the Algorithm drop-down menu, select Round-Robin.
    4. In the Monitors drop-down menu, select default_https_monitor.
    5. Add two members to the pool.
      For example, specify the following configuration settings.
      State    Name     IP Address    Weight  Monitor Port  Port  Max Connections  Min Connections
      Enabled  web-01a  172.16.10.11  1       443           443   0                0
      Enabled  web-02a  172.16.10.12  1       443           443   0                0
    6. To use the SNAT mode, ensure that the Transparent option is not enabled.
  9. Click Show Status or Show Pool Statistics and verify that the status of the Web-Tier-Pool-01 pool is UP.
    Select the pool and ensure that the status of both members in this pool is UP.
  10. Create a virtual server.
    1. Click Manage > Load Balancer > Virtual Servers, and then click Add.
    2. Specify the virtual server parameters.

      For example, specify the following configuration settings.

      Option                 Description
      Virtual Server         Enable the virtual server.
      Acceleration           If you want to use the L4 load balancer for UDP or higher-performance TCP, enable acceleration. If you enable this option, ensure that the firewall is enabled on the NSX Edge load balancer, because a firewall is required for L4 SNAT.
      Application Profile    Select the application profile that you created earlier. For example, Web-SSL-Profile.
      IP Address             Select 172.16.10.110.
      Protocol               Select HTTPS.
      Port                   Enter 443.
      Default Pool           Select the Web-Tier-Pool-01 server pool that you created earlier.
      Connection Limit       Enter 0.
      Connection Rate Limit  Enter 0.
    3. (Optional) Click the Advanced tab, and associate an application rule with the virtual server.
      For supported examples, see: https://communities.vmware.com/docs/DOC-31772.
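
The procedure above uses the vSphere Web Client, but the same pool and virtual server can also be created through the NSX Manager REST API, which is useful for automation. The following Python sketch is illustrative only: the endpoint paths and XML element names follow the NSX-v 6.4 API but should be verified against the NSX API Guide for your version, and the NSX Manager address, credentials, edge ID, object IDs, and virtual server name are placeholders.

    # Illustrative sketch only: creates the server pool and virtual server from
    # this scenario through the NSX Manager REST API. Verify the endpoint paths
    # and XML element names against the NSX API Guide for your NSX version. The
    # NSX Manager address, credentials, edge ID, object IDs, and virtual server
    # name below are placeholders.
    import requests

    NSX_MGR = "https://nsx-mgr.example.com"   # placeholder NSX Manager address
    AUTH = ("admin", "changeme")              # placeholder credentials
    EDGE_ID = "edge-1"                        # placeholder NSX Edge identifier
    HEADERS = {"Content-Type": "application/xml"}
    BASE = f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/loadbalancer/config"

    POOL_XML = """
    <pool>
      <name>Web-Tier-Pool-01</name>
      <algorithm>round-robin</algorithm>
      <transparent>false</transparent>
      <member>
        <name>web-01a</name>
        <ipAddress>172.16.10.11</ipAddress>
        <port>443</port>
        <monitorPort>443</monitorPort>
        <weight>1</weight>
      </member>
      <member>
        <name>web-02a</name>
        <ipAddress>172.16.10.12</ipAddress>
        <port>443</port>
        <monitorPort>443</monitorPort>
        <weight>1</weight>
      </member>
    </pool>
    """

    VIRTUAL_SERVER_XML = """
    <virtualServer>
      <name>Web-Tier-VIP-01</name>  <!-- placeholder virtual server name -->
      <enabled>true</enabled>
      <ipAddress>172.16.10.110</ipAddress>
      <protocol>https</protocol>
      <port>443</port>
      <applicationProfileId>applicationProfile-1</applicationProfileId>  <!-- ID of the profile created earlier -->
      <defaultPoolId>pool-1</defaultPoolId>  <!-- ID returned when the pool is created -->
    </virtualServer>
    """

    # Create the pool; NSX Manager returns the new object ID in the Location
    # header. verify=False is used because NSX Manager often runs with a
    # self-signed certificate; do not disable verification outside of a lab.
    resp = requests.post(f"{BASE}/pools", data=POOL_XML, headers=HEADERS,
                         auth=AUTH, verify=False)
    resp.raise_for_status()
    print("Pool created:", resp.headers.get("Location"))

    # Create the virtual server that listens on the VIP and front-ends the pool.
    resp = requests.post(f"{BASE}/virtualservers", data=VIRTUAL_SERVER_XML,
                         headers=HEADERS, auth=AUTH, verify=False)
    resp.raise_for_status()
    print("Virtual server created:", resp.headers.get("Location"))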

    In non-transparent mode, the back-end server cannot see the client IP address, but it can see the internal IP address of the load balancer. As a workaround for HTTP or HTTPS traffic, select the Insert X-Forwarded-For HTTP header option in the application profile. With this option selected, the Edge load balancer adds the X-Forwarded-For header with the value of the client source IP address to the request.
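
After the configuration is complete, you can verify it end to end by sending a few test requests to the virtual IP from a client machine and confirming that they succeed and are distributed across both pool members (for example, by checking the back-end access logs, which also show the X-Forwarded-For header when that option is enabled). A minimal sketch using Python and the requests library, assuming the VIP 172.16.10.110 from this scenario and its self-signed certificate:

    # Minimal verification sketch: sends a few HTTPS requests to the virtual IP
    # configured in this scenario. verify=False is used because the scenario
    # uses a self-signed certificate; do not disable certificate verification
    # outside of a lab.
    import requests
    import urllib3

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

    VIP = "https://172.16.10.110/"   # virtual server address from this scenario

    for i in range(4):
        resp = requests.get(VIP, verify=False, timeout=5)
        # With the round-robin algorithm, consecutive requests should be served
        # alternately by web-01a and web-02a; confirm the distribution in the
        # back-end access logs or from a page element that identifies the server.
        print(i, resp.status_code, len(resp.content))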