This topic describes how you can configure the NSX firewall, load balancing, and NAT/SNAT services for VMware Tanzu Operations Manager on vSphere installations. These NSX-provided services take the place of an external device or the bundled HAProxy VM in VMware Tanzu Application Service for VMs.

This document presents the reader with fundamental configuration options of an Edge Services Gateway (ESG) with Tanzu Operations Manager and vSphere NSX. Its purpose is not to dictate the settings required on every deployment, but instead to empower the NSX Administrator with the ability to establish a known good “base” configuration and apply specific security configurations as required.

If you are using NSX, the specific configurations described here supersede any general recommendations in Preparing your firewall.

Assumptions

This topic assumes that the reader has the level of skill required to install and configure these products:

  • VMware vSphere v5.5 or later
  • NSX v6.1.x or later
  • Tanzu Operations Manager v1.6 or later

For detailed installation and configuration information, see the documentation for each of these products.

Overview

This cookbook follows a three-step recipe to deploy Tanzu Operations Manager, TAS for VMs, and services behind an ESG:

  1. Configure Firewall
  2. Configure Load Balancer
  3. Configure NAT/SNAT

The ESG can scale to accommodate very large deployments as needed.

This cookbook focuses on a single-site deployment and makes these design assumptions:

  • There are five non-routable networks on the tenant (inside) side of the ESG.

    • The infra network is used to deploy Tanzu Operations Manager and BOSH Director.
    • The deployment network is used exclusively by TAS for VMs to deploy Diego Cells that host apps and related elements.
    • The tiles network is used for all other deployed tiles in a Tanzu Operations Manager installation.
    • The services network is used by BOSH Director for service tiles.
    • The container-to-container network is used for container to container communication in the Diego Cells.
  • There is a single service provider (outside) interface on the ESG that provides firewall, load balancing, and NAT/SNAT services.

  • The service provider (outside) interface is connected appropriately to the network backbone of the environment, as either routed or non-routed depending on the design. This cookbook does not cover provisioning of the uplink interface.

  • Routable IP addresses should be applied to the service provider (outside) interface of the ESG. VMware recommends that you apply 10 consecutive routable IP addresses to each ESG:

    • One reserved for NSX use (Controller to Edge I/F)
    • One for NSX Load Balancer to Gorouters
    • One for NSX Load Balancer to Diego Brains for SSH to apps
    • One routable IP address, used to access the Tanzu Operations Manager front end
    • One routable IP address, used with SNAT egress
    • Five for future use

VMware recommends that operators deploy the ESGs as high availability (HA) pairs in vSphere. VMware also recommends that they be sized “large” or greater for any pre-production or production use. The deployed size of the ESG impacts its overall performance, including how many SSL tunnels it can terminate.

The ESGs have an interface in each port group used by the foundation, as well as an interface in a port group on the service provider (outside) side, often called the “transit network”. Each installation has its own set of port groups in a vSphere DVS to support connectivity, so the ESG arrangement is repeated for every installation. You do not need to build a separate DVS for each ESG/foundation, but you do not re-use an ESG across deployments. NSX Logical Switches (VXLAN vWires) are ideal candidates for use with this architecture.

The following diagram illustrates an example of port groups used with an ESG.

The following diagram illustrates an example of a network architecture deployment.

The following diagram illustrates container-to-container networking. The overlay addresses are wrapped and transported using the underlay deployment subnet.

Prep step: Configure DNS and network prerequisites

As a prerequisite, create wildcard DNS entries for system and apps domains in TAS for VMs. Map these domains to the selected IP address on the uplink (outside) interface of the ESG in your DNS server.

The wildcard DNS A record must resolve to an IP address associated with the outside interface of the ESG for it to function as a load balancer. You can either use a single IP address to resolve both the system and apps domain, or one IP address for each.
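To sanity-check the wildcard records before moving on, you can resolve a few names under each domain and confirm that they return the ESG uplink IP address. The following is a minimal sketch; the hostnames and VIP are placeholders for your own values.

```python
# Minimal sketch: confirm wildcard DNS entries resolve to the ESG uplink VIP.
# The hostnames and ESG_VIP below are placeholders; substitute your own values.
import socket

ESG_VIP = "203.0.113.10"                    # assumed routable IP on the ESG uplink
HOSTNAMES = [
    "login.system.example.com",             # any host under *.system.<your-domain>
    "doppler.system.example.com",
    "myapp.apps.example.com",               # any host under *.apps.<your-domain>
]

for name in HOSTNAMES:
    try:
        resolved = socket.gethostbyname(name)
        status = "OK" if resolved == ESG_VIP else f"unexpected address {resolved}"
    except socket.gaierror as err:
        status = f"does not resolve ({err})"
    print(f"{name}: {status}")
```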

Additionally, assign these IP addresses and address ranges within your network:

  1. Assign IP Addresses to the “Uplink” (outside) interface.

    • Typically, you have one SNAT and three DNATs per ESG.
    • IP associated for SNAT use: All Tanzu Operations Manager internal IP addresses appear to be coming from this IP address at the ESG.
    • IP associated with Tanzu Operations Manager DNAT: This IP address is the publicly routable interface for Tanzu Operations Manager UI and SSH access.
  2. Assign “Internal” Interface IP Address Space to the Edge Gateway.

    • 192.168.10.0/26 = Tanzu Operations Manager deployment network (logical switch or port group)
    • 192.168.20.0/22 = Deployment network for the TAS for VMs tile
    • 192.168.24.0/22 = Tiles network for all tiles besides TAS for VMs
    • 192.168.28.0/22 = Dynamic services network for BOSH Director-managed service tiles.
    • 10.255.0.0/16 = Container-to-container network for intercontainer communication.
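A quick way to validate the address plan above is to check that the tenant-side subnets parse cleanly and do not overlap. This sketch uses only the Python standard library and the example CIDRs from this topic; adjust the values to match your own design.

```python
# Minimal sketch: confirm the tenant-side subnets are valid and non-overlapping.
from itertools import combinations
import ipaddress

subnets = {
    "infra":                  ipaddress.ip_network("192.168.10.0/26"),
    "deployment":             ipaddress.ip_network("192.168.20.0/22"),
    "tiles":                  ipaddress.ip_network("192.168.24.0/22"),
    "services":               ipaddress.ip_network("192.168.28.0/22"),
    "container-to-container": ipaddress.ip_network("10.255.0.0/16"),
}

for (name_a, net_a), (name_b, net_b) in combinations(subnets.items(), 2):
    if net_a.overlaps(net_b):
        print(f"WARNING: {name_a} ({net_a}) overlaps {name_b} ({net_b})")

for name, net in subnets.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")
```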

You must update the security group and load balancer information for your TAS for VMs deployments using NSX-V on vSphere through the Tanzu Operations Manager API. For more information, see vSphere with NSX-T or NSX-V in Configuring Load Balancing for TAS for VMs.

Step 1: Configure firewall

This procedure populates the ESG internal firewall with rules to protect a foundation.

These rules provide granular control on what can be accessed within an installation. For example, rules can be used to allow or deny another installation behind a different ESG access to apps published within the installation you are protecting.

If the firewall feature is deactivated or set to Allow All, this step is not required for the installation to function properly.

To configure the ESG firewall:

  1. Select Edge.

  2. Select Manage.

  3. Click Firewall and set these rules:

    | Name | Source | Destination | Service | Action |
    |------|--------|-------------|---------|--------|
    | Allow Ingress -> Tanzu Operations Manager VM | Any | OPS-MANAGER-IP | SSH, HTTP, HTTPS | Accept |
    | Allow Ingress -> TAS for VMs | Any | NSX-LOAD-BALANCER-IP | HTTP, HTTPS | Accept |
    | Allow Ingress -> SSH for Apps | Any | tcp:DIEGO-BRAIN-IP:2222 | Any | Accept |
    | Allow Ingress -> TCProuter | Any | tcp:NSX-TCP-LOAD-BALANCER-IP:5000 | Any | Accept |
    | Allow Inside <-> Inside (internal component communications) | 192.168.10.0/26, 192.168.20.0/22, 192.168.24.0/22, 192.168.28.0/22 | 192.168.10.0/26, 192.168.20.0/22, 192.168.24.0/22, 192.168.28.0/22 | Any | Accept |
    | Allow Egress -> IaaS | 192.168.10.0/26 | VCENTER-IP, ESXI-SVRS-IPS | HTTP, HTTPS | Accept |
    | Allow Egress -> DNS | 192.168.0.0/16 | DNS-IP | DNS, DNS-UDP | Accept |
    | Allow Egress -> NTP | 192.168.0.0/16 | NTP-IP | NTP | Accept |
    | Allow Egress -> SYSLOG | 192.168.0.0/16 | SYSLOG-IP:514 | SYSLOG | Accept |
    | Allow ICMP | 192.168.10.0/26 | * | ICMP | Accept |
    | Allow Egress -> LDAP | 192.168.10.0/26, 192.168.20.0/22 | LDAP-IP:389 | LDAP, LDAP-over-SSL | Accept |
    | Allow Egress -> All Outbound | 192.168.0.0/16 | Any | Any | Accept |
    | Default Rule | Any | Any | Any | Deny |
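If you want to keep the rule set under version control or compare it across foundations, you can also read the configuration back through the NSX-V REST API instead of the UI. The sketch below is hedged: the NSX Manager address, edge ID, and credentials are placeholders, and the endpoint path reflects the NSX-V API as commonly documented; confirm it against the NSX API guide for your NSX version.

```python
# Hedged sketch: fetch the ESG firewall configuration over the NSX-V REST API so the
# rules set in the UI can be reviewed, diffed, or stored. The endpoint path is an
# assumption; verify it against the NSX API guide for your version.
import xml.dom.minidom
import requests

NSX_MANAGER = "https://nsx-manager.example.com"   # placeholder NSX Manager address
EDGE_ID = "edge-1"                                # placeholder ESG identifier
AUTH = ("admin", "REPLACE-ME")                    # placeholder credentials

resp = requests.get(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/firewall/config",
    auth=AUTH,
    verify=False,   # only if NSX Manager presents a self-signed certificate
)
resp.raise_for_status()

# Pretty-print the XML so the rule set is easy to review or keep under version control.
print(xml.dom.minidom.parseString(resp.text).toprettyxml(indent="  "))
```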

Step 2: Configure load balancer

The ESG provides software load balancing functionality, equivalent to the bundled HAProxy that is included with TAS for VMs, or hardware appliances such as an F5 or A10 load balancer.

This step is required for the installation to function properly.

These are the high-level steps for this procedure:

  1. Import SSL certificates to the Edge for SSL termination.

  2. Enable the load balancer.

  3. Create Application Profiles in the Load Balancing tab of NSX.

  4. Create Application Rules in the load balancer.

  5. Create Service Monitors for each pool type.

  6. Create Application Pools for the multiple groups needing load balancing.

  7. Create Virtual Servers (also known as VIPs) that map to the load-balanced pools.

To do this procedure, you need PEM files of SSL certificates provided by the certificate supplier for only this installation of TAS for VMs, or the self-signed SSL certificates generated during TAS for VMs installation.

In this procedure, you combine the ESG’s IP address used for load balancing with a series of internal IP addresses provisioned for Gorouters in TAS for VMs. It is important to know the IP addresses used for the Gorouters beforehand.

You can pre-select or reserve IP addresses prior to deployment. Otherwise, you can discover them after deployment by looking them up in BOSH Director, which lists them in the release information of your TAS for VMs installation. VMware recommends using IP addresses that you pre-select or reserve prior to deployment.
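If you do discover the addresses after deployment, one option is to script the lookup with the BOSH CLI rather than reading them from the UI. The sketch below is assumption-laden: the deployment name cf and the exact JSON layout returned by bosh vms --json can differ between foundations and CLI versions.

```python
# Hedged sketch: list Gorouter IPs by shelling out to `bosh vms --json`.
# The deployment name and the JSON field names are assumptions; adjust for your CLI.
import json
import subprocess

DEPLOYMENT = "cf"   # assumed TAS for VMs deployment name

out = subprocess.run(
    ["bosh", "-d", DEPLOYMENT, "vms", "--json"],
    check=True, capture_output=True, text=True,
).stdout

router_ips = []
for table in json.loads(out).get("Tables", []):
    for row in table.get("Rows", []):
        # Gorouter instances are typically named `router/<guid>`.
        if row.get("instance", "").startswith("router/"):
            router_ips.extend(row.get("ips", "").split())

print("Gorouter IPs to enter into the ESG pool:", router_ips)
```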

You do all of the procedures in these sections through the ESG UI.

Step 2.1: Import SSL certificate

TAS for VMs requires SSL termination at the load balancer.

If you intend to pass SSL termination through the load balancer directly to the Gorouters, you can skip the following step and select Enable SSL Passthru in your PCF-HTTP Application Profile.

To enable SSL termination at the load balancer in ESG:

  1. Select Edge.

  2. Click the Manage tab.

  3. Select Settings.

  4. Select Certificates.

  5. Click the green + button to add a certificate.

  6. Enter the PEM file contents from the Networking pane of the TAS for VMs tile.

  7. Click Save.

Step 2.2: Enable the load balancer

To enable the load balancer:

  1. Select Load Balancer.

  2. Select Global Configuration.

  3. Edit the load balancer global configuration.

  4. Enable the load balancer.

  5. Enable acceleration.

  6. Set logging to your desired level.

Step 2.3: Create application profiles

The Application Profiles allow advanced X-Forwarded options as well as linking to the SSL certificate. You must create three profiles: PCF-HTTP, PCF-HTTPS, and PCF-TCP.

  1. Select Load Balancer.

  2. Select Global Application Profiles.

  3. To create the PCF-HTTP rule:

    1. Click the green + button.
    2. Click the pencil icon to edit the profile.
    3. In the Name field, enter PCF-HTTP.
    4. Select the Insert X-Forwarded-For HTTP header check box.
  4. To create the PCF-HTTPS rule:

    1. Click the green + button.
    2. Click the pencil icon to edit the profile.
    3. In the Name field, enter PCF-HTTPS.
    4. Select the Insert X-Forwarded-For HTTP header check box.
    5. Select the service certificate you imported previously.
    6. If encrypting TLS traffic to the Gorouters, select the Enable Pool Side SSL check box. Otherwise, leave the check box deselected.
  5. To create the PCF-TCP rule:

    1. Click the green + button.
    2. Click the pencil icon to edit the profile.
    3. In the Name field, enter PCF-TCP.
    4. From the Type drop-down menu, select TCP.

Step 2.4: Create application rules

For the ESG to insert the proper X-Forwarded headers into requests, you must add HAProxy directives to the ESG Application Rules. NSX supports most directives that HAProxy supports.

To create the Application Rules:

  1. Select Load Balancer.

  2. Select Application Rules.

  3. To create the option httplog rule:

    1. Click the green + button.
    2. For Name, enter option httplog.
    3. For Script, enter option httplog.
  4. To create the reqadd X-Forwarded-Proto:\ https rule:

    1. Click the green + button.
    2. For Name, enter reqadd X-Forwarded-Proto:\ https.
    3. For Script, enter reqadd X-Forwarded-Proto:\ https.
  5. To create the reqadd X-Forwarded-Proto:\ http rule:

    1. Click the green + button.
    2. For Name, enter reqadd X-Forwarded-Proto:\ http.
    3. For Script, enter reqadd X-Forwarded-Proto:\ http.
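These rules cause the ESG to stamp X-Forwarded-For and X-Forwarded-Proto onto every request before it reaches the Gorouters. If you want to confirm the headers arrive as expected, a throwaway listener such as the one below can echo them back; the listener itself, and the port it uses, are test-only assumptions rather than part of the NSX configuration.

```python
# Minimal sketch: a throwaway HTTP backend that echoes the X-Forwarded-* headers the
# ESG Application Rules inject, useful when verifying that the rules are attached to
# the right Virtual Servers. Port 8080 is an arbitrary test port.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoForwardedHeaders(BaseHTTPRequestHandler):
    def do_GET(self):
        body = "\n".join(
            f"{name}: {self.headers.get(name, '<absent>')}"
            for name in ("X-Forwarded-For", "X-Forwarded-Proto")
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EchoForwardedHeaders).serve_forever()
```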

Step 2.5: Create monitors for pools

NSX ships with several load balancing monitoring types pre-defined. These types are for HTTP, HTTPS, and TCP. For this installation, operators build new monitors matching the needs of each pool to ensure correct 1:1 monitoring for each pool type.

To create monitors for pools:

  1. Select Load Balancer.

  2. Select Service Monitoring.

  3. To create a new monitor for http-routers:

    1. Click the green + button. Keep all defaults.
    2. For Name, enter http-routers.
    3. From the Type drop-down menu, select HTTP.
    4. From the Method drop-down menu, select GET.
    5. From the URL drop-down menu, select /health.
  4. To create a new monitor for tcp-routers:

    1. Click the green + button. Keep all defaults.
    2. For Name, enter tcp-routers.
    3. From the Type drop-down menu, select HTTP.
    4. From the Method drop-down menu, select GET.
    5. From the URL drop-down menu, select /health.
  5. To create a new monitor for diego-brains:

    1. Click the green + button. Keep all defaults.
    2. For Name, enter diego-brains.
    3. From the Type drop-down menu, select TCP.

These monitors are selected during the next step when pools are created. A pool and a monitor are matched 1:1.
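Because the HTTP monitors issue GET /health against the monitor port, you can run the same probe by hand to confirm a member is healthy before adding it to a pool. The IP addresses and ports in this sketch are examples drawn from this topic, not fixed values.

```python
# Minimal sketch: issue the same probe the http-routers and tcp-routers monitors use
# (GET /health on the monitor port). Example member IPs are placeholders.
import urllib.request

CHECKS = [
    ("gorouter",   "192.168.20.11", 8080),   # Gorouter monitor port from Step 2.6a
    ("tcp-router", "192.168.20.21", 80),     # TCP router monitor port from Step 2.6b
]

for name, ip, port in CHECKS:
    url = f"http://{ip}:{port}/health"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name} {url}: HTTP {resp.status}")
    except Exception as err:
        print(f"{name} {url}: unhealthy ({err})")
```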

Step 2.6: Create pools of multi-element Tanzu Operations Manager targets

This procedure describes creating the pools of resources that ESG load-balances to: the Gorouter, TCP router, and Diego Brain jobs deployed by BOSH Director. If the IP addresses specified in the configuration do not exactly match the IP addresses reserved or used for the resources, then the pool does not effectively load-balance.

Step 2.6a: Create pool for http-routers

To create a pool for http-routers:

  1. Select Load Balancer.

  2. Select Pools.

  3. Click the green + button to create a new pool.

  4. Click the pencil icon to edit the pool.

  5. For Name, enter http-routers.

  6. From the Algorithm drop-down menu, select ROUND-ROBIN.

  7. From the Monitors drop-down menu, select http-routers.

  8. Under Members, click the green + button to enter all of the IP addresses reserved for the Gorouters into this pool. If you reserved more addresses than you have Gorouters, enter the addresses anyway. The load balancer marks the missing resources as “down” and ignores them.

    If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.

  9. Enter the ports the Gorouters use:

    1. For Port, enter 80.

      ESG assumes that internal traffic from the ESG load balancer to the Gorouters is trusted because it is on a VXLAN secured within NSX. If using encrypted TLS traffic to the Gorouter inside the VXLAN, enter 443 for Port.

    2. For Monitor Port, enter 8080.

Step 2.6b: Create pool for tcp-routers

To create a pool for tcp-routers:

  1. Select Load Balancer.

  2. Select Pools.

  3. Click the green + button to create a new pool.

  4. Click the pencil icon to edit the pool.

  5. For Name, enter tcp-routers.

  6. From the Algorithm drop-down menu, select ROUND-ROBIN.

  7. From the Monitors drop-down menu, select tcp-routers.

  8. Under Members, click the green + button to enter all of the IP addresses reserved for the TCP routers into this pool. If you reserved more addresses than you have TCP routers, enter the addresses anyway. The load balancer marks the missing resources as “down” and ignores them.

    If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.

  9. Enter the ports the TCP routers use:

    1. Leave Port empty.
    2. For Monitor Port, enter 80.

Step 2.6c: Create pool for diego-brains

To create a pool for diego-brains:

  1. Select Load Balancer.

  2. Select Pools.

  3. Click the green + button to create a new pool.

  4. Click the pencil icon to edit the pool.

  5. For Name, enter diego-brains.

  6. From the Algorithm drop-down menu, select ROUND-ROBIN.

  7. From the Monitors drop-down menu, select diego-brains.

  8. Under Members, click the green + button to enter all of the IP addresses reserved for the Diego Brains into this pool. If you reserved more addresses than you have Diego Brains, enter the addresses anyway. The load balancer marks the missing resources as “down” and ignores them.

    If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.

  9. Enter the ports the Diego Brains use:

    1. For Port, enter 2222.
    2. For Monitor Port, enter 2222.

Step 2.7: Create virtual servers

The Virtual Server is the Virtual IP (VIP) that the load balancer uses to represent the pool of Gorouters to the outside world. The Virtual Server also links the Application Policy, Application Rules, and back end pools to provide TAS for VMs load balancing services. The Virtual Server is the interface that the load balancer balances from. You create four Virtual Servers.

To create the Virtual Servers:

  1. Select Load Balancer.

  2. Select Virtual Servers.

  3. Select an IP address from the available routable address space allocated to the ESG. For information about reserved IP addresses, see the Overview section above.

  4. To create the GoRtr-HTTP Virtual Server:

    1. Click the green + button.
    2. Click the pencil icon to edit the Virtual Server.
    3. From the Application Profile drop-down menu, select PCF-HTTP.
    4. For Name, enter GoRtr-HTTP.
    5. For IP Address, click Select IP Address to select the IP address to use as a VIP on the uplink interface.
    6. From the Protocol drop-down menu, select HTTP.
    7. For Port, enter 80.
    8. From the Default Pool drop-down menu, select http-routers. This connects this VIP to the pool of resources being load-balanced to.
    9. (Optional) If you want to configure a connection limit, enter an integer in Connection Limit.
    10. (Optional) If you want to configure a connection rate limit, enter an integer in Connection Rate Limit.
    11. Select the Advanced tab.
    12. Click the green + button to add the option httplog, reqadd X-Forwarded-Proto:\ http, and reqadd X-Forwarded-Proto:\ https Application Rules to this Virtual Server.

    Ensure that you match the X-Forwarded-Proto rule to the protocol of the Virtual Server: the http rule for the HTTP VIP and the https rule for the HTTPS VIP.

  5. To create the GoRtr-HTTPS Virtual Server:

    1. Click the green + button.
    2. Click the pencil icon to edit the Virtual Server.
    3. From the Application Profile drop-down menu, select PCF-HTTPS.
    4. For Name, enter GoRtr-HTTPS.
    5. For IP Address, click Select IP Address to select the IP address to use as a VIP on the uplink interface.
    6. From the Protocol drop-down menu, select HTTPS.
    7. For Port, enter 443.
    8. From the Default Pool drop-down menu, select http-routers. This connects this VIP to the pool of resources being load-balanced to.
    9. (Optional) If you want to configure a connection limit, enter an integer in Connection Limit.
    10. (Optional) If you want to configure a connection rate limit, enter an integer in Connection Rate Limit.
    11. Select the Advanced tab.
    12. Click the green + button to add the option httplog, reqadd X-Forwarded-Proto:\ http, and reqadd X-Forwarded-Proto:\ https Application Rules to this Virtual Server.

    Ensure that you match the X-Forwarded-Proto rule to the protocol of the Virtual Server: the http rule for the HTTP VIP and the https rule for the HTTPS VIP.

  6. To create the TCPRtrs Virtual Server:

    1. Click the green + button.
    2. Click the pencil icon to edit the Virtual Server.
    3. From the Application Profile drop-down menu, select PCF-TCP.
    4. For Name, enter TCPRtrs.
    5. For IP Address, click Select IP Address to select the IP address to use as a VIP on the uplink interface.
    6. From the Protocol drop-down menu, select TCP.
    7. For Port, enter 5000.
    8. From the Default Pool drop-down menu, select tcp-routers. This connects this VIP to the pool of resources being load-balanced to.
    9. (Optional) If you want to configure a connection limit, enter an integer in Connection Limit.
    10. (Optional) If you want to configure a connection rate limit, enter an integer in Connection Rate Limit.
  7. To create the SSH-DiegoBrains Virtual Server:

    1. Click the green + button.
    2. Click the pencil icon to edit the Virtual Server.
    3. From the Application Profile drop-down menu, select PCF-HTTPS.
    4. For Name, enter SSH-DiegoBrains.
    5. For IP Address, click Select IP Address. If you want to use this IP address for SSH access to apps, select the same IP address to use as a VIP on the uplink interface. If not, select a different IP address to use as the VIP.
    6. From the Protocol drop-down menu, select TCP.
    7. For Port, enter 2222.
    8. From the Default Pool drop-down menu, select diego-brains. This connects this VIP to the pool of resources being load-balanced to.
    9. (Optional) If you want to configure a connection limit, enter an integer in Connection Limit.
    10. (Optional) If you want to configure a connection rate limit, enter an integer in Connection Rate Limit.
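Once the Virtual Servers exist, a quick external check confirms that each VIP answers on the expected port. The hostnames and IP addresses below are placeholders for your system domain and ESG uplink addresses, and the TCP routing and SSH VIPs may be different addresses in your design.

```python
# Minimal sketch: verify each Virtual Server answers on its expected port.
# Hostnames and addresses are placeholders; adjust to your foundation.
import socket
import ssl

SYSTEM_FQDN = "login.system.example.com"   # resolves to the GoRtr-HTTP/HTTPS VIP
TCP_VIP     = "203.0.113.11"               # assumed TCPRtrs VIP
SSH_VIP     = "203.0.113.12"               # assumed SSH-DiegoBrains VIP

def tcp_check(host, port):
    with socket.create_connection((host, port), timeout=5):
        return "open"

def tls_check(host, port=443):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE        # self-signed certificates are common pre-production
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print("HTTP  80  :", tcp_check(SYSTEM_FQDN, 80))
print("HTTPS 443 :", tls_check(SYSTEM_FQDN))
print("TCP   5000:", tcp_check(TCP_VIP, 5000))
print("SSH   2222:", tcp_check(SSH_VIP, 2222))
```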

Step 3: Configure NAT/SNAT

The ESG obfuscates the installation through network translation. The installation is placed entirely on non-routable RFC-1918 network address space, so you must translate routable IP addresses to non-routable IP addresses to make connections.

Correct NAT/SNAT configuration is required for the installation to function as expected.

To translate routable IP addresses to non-routable IP addresses, see the following table:

| Action | Applied on Interface | Original IP | Original/Translated Port | Translated IP | Protocol | Description |
|--------|----------------------|-------------|--------------------------|---------------|----------|-------------|
| SNAT | Uplink | 192.168.0.0/16 | Any | PCF-IP | Any | All Nets Egress |
| DNAT | Uplink | OPS-MANAGER-IP | Any | 192.168.10.OpsMgr | TCP | OpsMgr Mask for external use |
| SNAT | Infra | 192.168.10.OpsMgr | Any | OPS-MANAGER-IP | TCP | OpsMgr Mask for internal use |
| DNAT | Infra | OPS-MANAGER-IP | Any | 192.168.10.OpsMgr | TCP | OpsMgr Mask for internal use |

The NAT/SNAT on the infra network in this table is an example of an optional Hairpin NAT rule to allow VMs within the Infrastructure network to access the Tanzu Operations Manager API. This is because the Tanzu Operations Manager hostname and the API HTTPS endpoint are registered to the Tanzu Operations Manager VM external IP address. A pair of Hairpin NAT rules are necessary on each internal network interface that requires API access to Tanzu Operations Manager. You should create these rules only if the network must access the Tanzu Operations Manager API.
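One way to confirm the hairpin rules behave as intended is to run a short check from a VM on the infrastructure network: the Tanzu Operations Manager FQDN should resolve to the external OPS-MANAGER-IP and still be reachable from inside. The hostname below is a placeholder for your own Ops Manager FQDN.

```python
# Minimal sketch: run from a VM on the infra network to confirm the hairpin NAT rules.
# The FQDN is a placeholder for your own Tanzu Operations Manager hostname.
import socket
import ssl

OPSMAN_FQDN = "opsman.example.com"   # assumed Tanzu Operations Manager hostname

external_ip = socket.gethostbyname(OPSMAN_FQDN)
print(f"{OPSMAN_FQDN} resolves to {external_ip}")

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE      # Ops Manager often presents a self-signed certificate
with socket.create_connection((OPSMAN_FQDN, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=OPSMAN_FQDN) as tls:
        print("TLS handshake completed through the hairpin NAT:", tls.version())
```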

NAT/SNAT functionality is not required if routable IP address space is used on the Tenant Side of the ESG. At that point, the ESG simply performs routing between the address segments.

NSX generates a number of DNAT rules based on the load balancer configuration. You can safely ignore these rules.

Additional notes

The ESG also supports scenarios where Private RFC subnets and NAT are not utilized for Deployment or Infrastructure networks, and the guidance in this topic can be modified to meet those scenarios.

Additionally, the ESG supports up to 10 Interfaces allowing for more Uplink options if necessary.

This topic describes using Private RFC-1918 subnets with deployment networks because they are commonly used. ESG devices are capable of leveraging ECMP, OSPF, BGP, and IS-IS to handle dynamic routing of customer and public L3 IP space. That design is out of scope for this topic, but is supported by VMware NSX and Tanzu Operations Manager.
