This topic describes how you can configure the NSX firewall, load balancing, and NAT/SNAT services for VMware Tanzu Operations Manager on vSphere installations. These NSX-provided services take the place of an external device or the bundled HAProxy VM in VMware Tanzu Application Service for VMs.
This document presents the reader with fundamental configuration options of an Edge Services Gateway (ESG) with Tanzu Operations Manager and vSphere NSX. Its purpose is not to dictate the settings required on every deployment, but instead to empower the NSX Administrator with the ability to establish a known good “base” configuration and apply specific security configurations as required.
If you are using NSX, the specific configurations described here supersede any general recommendations in Preparing your firewall.
This topic assumes that the reader has the level of skill required to install and configure these products:
For detailed installation and configuration information about these products, see:
This cookbook follows a three-step recipe to deploy Tanzu Operations Manager, TAS for VMs, and services behind an ESG:
The ESG can scale to accommodate very large deployments as needed.
This cookbook focuses on a single-site deployment and makes these design assumptions:
There are five non-routable networks on the tenant (inside) side of the ESG.
There is a single service provider (outside) interface on the ESG that provides firewall, load balancing, and NAT/SNAT services.
The service provider (outside) interface is connected appropriately to the network backbone of the environment, as either routed or non-routed depending on the design. This cookbook does not cover provisioning of the uplink interface.
Routable IP addresses should be applied to the service provider (outside) interface of the ESG. VMware recommends that you apply 10 consecutive routable IP addresses to each ESG:
VMware recommends that operators deploy the ESGs as high availability (HA) pairs in vSphere. VMware also recommends that they be sized “large” or greater for any pre-production or production use. The deployed size of the ESG impacts its overall performance, including how many SSL tunnels it can terminate.
The ESGs have an interface in each port group used by the foundation, as well as a port group on the service provider (outside) side, often called the “transit network”. Each installation has a set of port groups in a vSphere DVS to support connectivity, so the ESG arrangement is repeated for every install. It is not necessary to build a DVS for each ESG/foundation, but do not re-use an ESG across deployments. NSX Logical Switches (VXLAN vWires) are ideal candidates for use with this architecture.
The following diagram illustrates an example of port groups used with an ESG:
The following diagram illustrates an example of a network architecture deployment.
The following diagram illustrates container-to-container networking. The overlay addresses are wrapped and transported using the underlay deployment subnet.
As a prerequisite, create wildcard DNS entries for system and apps domains in TAS for VMs. Map these domains to the selected IP address on the uplink (outside) interface of the ESG in your DNS server.
The wildcard DNS A record must resolve to an IP address associated with the outside interface of the ESG for it to function as a load balancer. You can either use a single IP address to resolve both the system and apps domains, or one IP address for each.
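To confirm the wildcard records behave as expected before you continue, you can probe a few names under each domain. The following is a minimal sketch; the domain names and uplink IP address are placeholders for your own values.

```python
# A minimal sketch to verify that wildcard records resolve to the ESG uplink.
# The uplink IP and domain names below are placeholders -- substitute your own.
import socket

ESG_UPLINK_IP = "203.0.113.10"     # routable IP on the ESG outside interface
PROBES = [
    "api.system.example.com",      # any host under *.system.example.com
    "login.system.example.com",
    "anyapp.apps.example.com",     # any host under *.apps.example.com
]

for name in PROBES:
    try:
        resolved = socket.gethostbyname(name)
    except socket.gaierror as err:
        print(f"FAIL {name}: does not resolve ({err})")
        continue
    status = "OK  " if resolved == ESG_UPLINK_IP else "FAIL"
    print(f"{status} {name} -> {resolved} (expected {ESG_UPLINK_IP})")
```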
Additionally, assign these IP addresses and address ranges within your network:
Assign IP Addresses to the “Uplink” (outside) interface.
Assign “Internal” Interface IP Address Space to the Edge Gateway.
You must update the security group and load balancer information for your TAS for VMs deployments using NSX-V on vSphere through the Tanzu Operations Manager API. For more information, see vSphere with NSX-T or NSX-V in Configuring Load Balancing for TAS for VMs.
This procedure populates the ESG internal firewall with rules to protect a foundation.
These rules provide granular control on what can be accessed within an installation. For example, rules can be used to allow or deny another installation behind a different ESG access to apps published within the installation you are protecting.
This step is not required for the installation to function properly when the firewall feature is deactivated or set to Allow All.
To configure the ESG firewall:
Select Edge.
Select Manage.
Click Firewall and set these rules:
Name | Source | Destination | Service | Action |
---|---|---|---|---|
Allow Ingress -> Tanzu Operations Manager VM | Any | OPS-MANAGER-IP | SSH, HTTP, HTTPS | Accept |
Allow Ingress -> TAS for VMs | Any | NSX-LOAD-BALANCER-IP | HTTP, HTTPS | Accept |
Allow Ingress -> SSH for Apps | Any | tcp:DIEGO-BRAIN-IP:2222 | Any | Accept |
Allow Ingress -> TCP router | Any | tcp:NSX-TCP-LOAD-BALANCER-IP:5000 | Any | Accept |
Allow Inside <-> Inside (internal component communications) | 192.168.10.0/26, 192.168.20.0/22, 192.168.24.0/22, 192.168.28.0/22 | 192.168.10.0/26, 192.168.20.0/22, 192.168.24.0/22, 192.168.28.0/22 | Any | Accept |
Allow Egress -> IaaS | 192.168.10.0/26 | VCENTER-IP, ESXI-SVRS-IPS | HTTP, HTTPS | Accept |
Allow Egress -> DNS | 192.168.0.0/16 | DNS-IP | DNS, DNS-UDP | Accept |
Allow Egress -> NTP | 192.168.0.0/16 | NTP-IP | NTP | Accept |
Allow Egress -> Syslog | 192.168.0.0/16 | SYSLOG-IP:514 | SYSLOG | Accept |
Allow ICMP | 192.168.10.0/26 | * | ICMP | Accept |
Allow Egress -> LDAP | 192.168.10.0/26, 192.168.20.0/22 | LDAP-IP:389 | LDAP, LDAP-over-SSL | Accept |
Allow Egress -> All Outbound | 192.168.0.0/16 | Any | Any | Accept |
Default Rule | Any | Any | Any | Deny |
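If you manage the ESG programmatically, rules like the ones in this table can also be created through the NSX-V REST API rather than the UI. The following Python sketch shows one rule from the table; the manager URL, edge ID, credentials, endpoint path, and XML shape are assumptions based on the NSX-V 6.x API and should be verified against your NSX version.

```python
# Sketch of creating one rule from the table above via the NSX-V REST API.
# The endpoint path and XML shape are based on the NSX-V 6.x API; verify
# against your NSX version. Manager URL, edge ID, credentials, and
# NSX-LOAD-BALANCER-IP are placeholders.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")       # use a least-privilege account

RULE_XML = """
<firewallRules>
  <firewallRule>
    <name>Allow Ingress - TAS for VMs</name>
    <action>accept</action>
    <source><ipAddress>any</ipAddress></source>
    <destination><ipAddress>NSX-LOAD-BALANCER-IP</ipAddress></destination>
    <application>
      <service><protocol>tcp</protocol><port>80</port></service>
      <service><protocol>tcp</protocol><port>443</port></service>
    </application>
  </firewallRule>
</firewallRules>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/firewall/config/rules",
    data=RULE_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,                  # lab only; validate certificates in production
)
resp.raise_for_status()
print("Rule created:", resp.headers.get("Location"))
```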
The ESG provides software load balancing functionality, equivalent to the bundled HAProxy that is included with TAS for VMs, or hardware appliances such as an F5 or A10 load balancer.
This step is required for the installation to function properly.
These are the high-level steps for this procedure:
Import SSL certificates to the Edge for SSL termination.
Enable the load balancer.
Create Application Profiles in the Load Balancing tab of NSX.
Create Application Rules in the load balancer.
Create Service Monitors for each pool type.
Create Application Pools for the multiple groups needing load balancing.
Create a Virtual Server (also known as a VIP) that maps to the load-balanced pools.
To do this procedure, you need PEM files of SSL certificates provided by the certificate supplier for only this installation of TAS for VMs, or the self-signed SSL certificates generated during TAS for VMs installation.
In this procedure, you combine the ESG’s IP address used for load balancing with a series of internal IP addresses provisioned for Gorouters in TAS for VMs. It is important to know the IP addresses used for the Gorouters beforehand.
You can pre-select or reserve IP addresses prior to deployment. Otherwise, you can discover them after deployment by looking them up in BOSH Director, which lists them in the release information of your TAS for VMs installation. VMware recommends using IP addresses that you pre-select or reserve prior to deployment.
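If you must discover the addresses after deployment, the BOSH CLI can list them for you. A minimal sketch, assuming the `bosh` CLI is installed, targeted, and logged in; the deployment name is a placeholder:

```python
# Sketch: look up Gorouter IPs after deployment using the BOSH CLI's JSON
# output. Assumes the bosh CLI is installed, targeted, and logged in; the
# deployment name below is a placeholder for your TAS for VMs deployment.
import json
import subprocess

DEPLOYMENT = "cf-0123456789abcdef01234"

out = subprocess.run(
    ["bosh", "-d", DEPLOYMENT, "vms", "--json"],
    check=True, capture_output=True, text=True,
).stdout

for table in json.loads(out)["Tables"]:
    for row in table["Rows"]:
        # Gorouter instances are named "router/<uuid>" in TAS for VMs.
        if row["instance"].startswith("router/"):
            print(row["instance"], row["ips"])
```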
You do all of the procedures in these sections through the ESG UI.
TAS for VMs requires SSL termination at the load balancer.
If you intend to pass SSL termination through the load balancer directly to the Gorouters, you can skip the following step and select Enable SSL Passthru in your `PCF-HTTP` Application Profile.
To enable SSL termination at the load balancer in ESG:
Select Edge.
Click the Manage tab.
Select Settings.
Select Certificates.
Click the green + button to add a certificate.
Enter the PEM file contents from the Networking pane of the TAS for VMs tile.
Click Save.
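Before pasting PEM contents into the ESG, it can help to confirm that the certificate and private key actually pair up and have not expired. A minimal sketch using the third-party Python `cryptography` package, with placeholder file names:

```python
# Sketch: confirm a certificate/key PEM pair match before importing them.
# Uses the third-party "cryptography" package; file names are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("tas-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("tas-key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# The public key embedded in the certificate must match the private key
# (this comparison works for RSA and EC keys).
matches = cert.public_key().public_numbers() == key.public_key().public_numbers()
print("Subject:", cert.subject.rfc4514_string())
print("Expires:", cert.not_valid_after)
print("Key matches certificate:", matches)
```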
To enable the load balancer:
Select Load Balancer.
Select Global Configuration.
Edit the load balancer global configuration.
Enable the load balancer.
Enable acceleration.
Set logging to your desired level.
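The same three global settings can also be applied through the NSX-V REST API. A sketch, assuming the NSX-V 6.x `loadbalancer/config` endpoint and placeholder connection details:

```python
# Sketch: the same global settings (enable, acceleration, logging) applied
# via the NSX-V 6.x REST API. Endpoint path, XML schema, and connection
# details are assumptions to verify against your NSX version.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")

CONFIG_XML = """
<loadBalancer>
  <enabled>true</enabled>
  <accelerationEnabled>true</accelerationEnabled>
  <logging>
    <enable>true</enable>
    <logLevel>info</logLevel>
  </logging>
</loadBalancer>
"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config",
    data=CONFIG_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print("Load balancer global config updated:", resp.status_code)
```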
The Application Profiles allow advanced `X-Forwarded` options as well as linking to the SSL certificate. You must create three profiles: `PCF-HTTP`, `PCF-HTTPS`, and `PCF-TCP`.
Select Load Balancer.
Select Global Application Profiles.
To create the `PCF-HTTP` profile: for Name, enter `PCF-HTTP`.

To create the `PCF-HTTPS` profile: for Name, enter `PCF-HTTPS`.

To create the `PCF-TCP` profile: for Name, enter `PCF-TCP`.

In order for the ESG to insert proper `X-Forwarded` headers, you must add HAProxy directives to the ESG Application Rules. NSX supports most directives that HAProxy supports.
To create the Application Rules:
Select Load Balancer.
Select Application Rules.
To create the `option httplog` rule: for Name, enter `option httplog`; for Script, enter `option httplog`.

To create the `reqadd X-Forwarded-Proto:\ https` rule: for Name, enter `reqadd X-Forwarded-Proto:\ https`; for Script, enter `reqadd X-Forwarded-Proto:\ https`.

To create the `reqadd X-Forwarded-Proto:\ http` rule: for Name, enter `reqadd X-Forwarded-Proto:\ http`; for Script, enter `reqadd X-Forwarded-Proto:\ http`.
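Because the Application Rule scripts are plain HAProxy directives, they are also straightforward to create programmatically. A sketch, assuming the NSX-V 6.x `applicationrules` endpoint and placeholder connection details:

```python
# Sketch: create the three Application Rules programmatically. The scripts
# are the HAProxy directives named above; the REST path and XML shape are
# assumptions based on the NSX-V 6.x load balancer API.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")

# Name and script match the UI steps above.
SCRIPTS = [
    "option httplog",
    "reqadd X-Forwarded-Proto:\\ https",
    "reqadd X-Forwarded-Proto:\\ http",
]

for script in SCRIPTS:
    xml = (
        "<applicationRule>"
        f"<name>{script}</name>"
        f"<script>{script}</script>"
        "</applicationRule>"
    )
    resp = requests.post(
        f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/applicationrules",
        data=xml,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,   # lab only
    )
    resp.raise_for_status()
    print("Created rule:", script, "->", resp.headers.get("Location"))
```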
NSX ships with several pre-defined load balancing monitor types for HTTP, HTTPS, and TCP. For this installation, operators build new monitors matching the needs of each pool to ensure correct 1:1 monitoring for each pool type.
To create monitors for pools:
Select Load Balancer.
Select Service Monitoring.
To create a new monitor for `http-routers`: for Name, enter `http-routers`.

To create a new monitor for `tcp-routers`: for Name, enter `tcp-routers`.

To create a new monitor for `diego-brains`: for Name, enter `diego-brains`.

These monitors are selected during the next step when pools are created. A pool and a monitor are matched 1:1.
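Functionally, the `http-routers` monitor performs an HTTP health check against each Gorouter. Gorouters serve a health endpoint on port 8080 by default, which you can also probe out-of-band when troubleshooting pool membership. A minimal sketch with placeholder router IPs:

```python
# Sketch: what the http-routers monitor effectively does -- an HTTP GET
# against each Gorouter's health check port. Gorouters serve a health
# endpoint on port 8080 by default; the router IPs below are placeholders.
import requests

GOROUTER_IPS = ["192.168.20.11", "192.168.20.12", "192.168.20.13"]

for ip in GOROUTER_IPS:
    try:
        resp = requests.get(f"http://{ip}:8080/health", timeout=5)
        state = "UP" if resp.status_code == 200 else f"DOWN ({resp.status_code})"
    except requests.RequestException as err:
        state = f"DOWN ({err.__class__.__name__})"
    print(f"{ip}: {state}")
```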
This procedure describes creating the pools of resources that ESG load-balances to: the Gorouter, TCP router, and Diego Brain jobs deployed by BOSH Director. If the IP addresses specified in the configuration do not exactly match the IP addresses reserved or used for the resources, then the pool does not effectively load-balance.
To create a pool for `http-routers`:
Select Load Balancer.
Select Pools.
Click the green + button to create a new pool.
Click the pencil icon to edit the pool.
For Name, enter `http-routers`.
From the Algorithm drop-down menu, select ROUND-ROBIN.
From the Monitors drop-down menu, select http-routers.
Under Members, click the green + button to enter all of the IP addresses reserved for the Gorouters into this pool. If you reserved more addresses than you have Gorouters, enter the addresses anyway. The load balancer ignores the missing resources as “down”.
If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.
Enter the ports the Gorouters use:

For Port, enter `80`. ESG assumes that internal traffic from the ESG load balancer to the Gorouters is trusted because it is on a VXLAN secured within NSX. If you are using encrypted TLS traffic to the Gorouters inside the VXLAN, enter `443` for Port.

For Monitor Port, enter `8080`.
To create a pool for `tcp-routers`:
Select Load Balancer.
Select Pools.
Click the green + button to create a new pool.
Click the pencil icon to edit the pool.
For Name, enter `tcp-routers`.
From the Algorithm drop-down menu, select ROUND-ROBIN.
From the Monitors drop-down menu, select tcp-routers.
Under Members, click the green + button to enter all of the IP addresses reserved for the TCP routers into this pool. If you reserved more addresses than you have TCP routers, enter the addresses anyway. The load balancer ignores the missing resources as “down”.
If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.
Enter the ports the TCP routers use:

Do not enter a value for Port. For Monitor Port, enter `80`.
To create a pool for `diego-brains`:
Select Load Balancer.
Select Pools.
Click the green + button to create a new pool.
Click the pencil icon to edit the pool.
For Name, enter `diego-brains`.
From the Algorithm drop-down menu, select ROUND-ROBIN.
From the Monitors drop-down menu, select diego-brains.
Under Members, click the green + button to enter all of the IP addresses reserved for the Diego Brains into this pool. If you reserved more addresses than you have Diego Brains, enter the addresses anyway. The load balancer ignores the missing resources as “down”.
If your deployment matches the vSphere Reference Architecture, these IP addresses are in the 192.168.20.0/22 address space.
Enter the ports the Diego Brains use:

For Port, enter `2222`.

For Monitor Port, enter `2222`.
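Pools, too, can be created through the NSX-V REST API instead of the UI. The following sketch builds the `http-routers` pool; the edge ID, credentials, monitor ID, member IPs, and XML shape are assumptions to adapt to your environment:

```python
# Sketch: the http-routers pool expressed against the NSX-V 6.x REST API.
# The edge ID, credentials, monitor ID, member IPs, and exact XML shape are
# assumptions -- adjust to your environment and NSX version.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")
GOROUTER_IPS = ["192.168.20.11", "192.168.20.12", "192.168.20.13"]

members = "".join(
    f"<member><ipAddress>{ip}</ipAddress>"
    "<port>80</port><monitorPort>8080</monitorPort></member>"
    for ip in GOROUTER_IPS
)
POOL_XML = (
    "<pool>"
    "<name>http-routers</name>"
    "<algorithm>round-robin</algorithm>"
    "<monitorId>monitor-1</monitorId>"   # ID of the http-routers monitor
    + members
    + "</pool>"
)

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/pools",
    data=POOL_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print("Pool created:", resp.headers.get("Location"))
```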
The Virtual Server is the Virtual IP (VIP) that the load balancer uses to represent the pool of Gorouters to the outside world. The Virtual Server also links the Application Profile, Application Rules, and back-end pools to provide TAS for VMs load balancing services. The Virtual Server is the interface that the load balancer balances from. You create four Virtual Servers.
To create the Virtual Servers:
Select Load Balancer.
Select Virtual Servers.
Select an IP address from the available routable address space allocated to the ESG. For information about reserved IP addresses, see General Overview.
To create the `GoRtr-HTTP` Virtual Server: for Name, enter `GoRtr-HTTP`; for Port, enter `80`; and attach the `option httplog` and the appropriate `reqadd X-Forwarded-Proto` Application Rules to this Virtual Server. Ensure that you match the `reqadd X-Forwarded-Proto:\ http` rule to the HTTP Virtual Server and the `reqadd X-Forwarded-Proto:\ https` rule to the HTTPS Virtual Server.
To create the `GoRtr-HTTPS` Virtual Server: for Name, enter `GoRtr-HTTPS`; for Port, enter `443`; and attach the `option httplog` and the appropriate `reqadd X-Forwarded-Proto` Application Rules to this Virtual Server. Again, match the `reqadd X-Forwarded-Proto:\ https` rule to this HTTPS Virtual Server.
To create the `TCPRtrs` Virtual Server: for Name, enter `TCPRtrs`; for Port, enter `5000`.
To create the `SSH-DiegoBrains` Virtual Server: for Name, enter `SSH-DiegoBrains`; for Port, enter `2222`.
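Seen through the API, a Virtual Server is simply the object that binds the VIP, Application Profile, default pool, and Application Rules together. A sketch for `GoRtr-HTTPS`, with all IDs and the VIP address as placeholders against the assumed NSX-V 6.x schema:

```python
# Sketch: the GoRtr-HTTPS Virtual Server via the NSX-V 6.x REST API, tying
# the VIP, Application Profile, pool, and Application Rules together. All
# IDs and the VIP address are placeholders; verify the schema against your
# NSX version.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")

VS_XML = """
<virtualServer>
  <name>GoRtr-HTTPS</name>
  <enabled>true</enabled>
  <ipAddress>203.0.113.11</ipAddress>                  <!-- ESG uplink VIP -->
  <protocol>https</protocol>
  <port>443</port>
  <defaultPoolId>pool-1</defaultPoolId>                <!-- http-routers pool -->
  <applicationProfileId>applicationProfile-2</applicationProfileId>  <!-- PCF-HTTPS -->
  <applicationRuleId>applicationRule-1</applicationRuleId>  <!-- option httplog -->
  <applicationRuleId>applicationRule-2</applicationRuleId>  <!-- X-Forwarded-Proto https -->
</virtualServer>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/loadbalancer/config/virtualservers",
    data=VS_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print("Virtual Server created:", resp.headers.get("Location"))
```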
The ESG obfuscates the installation through network translation. The installation is placed entirely on non-routable RFC-1918 network address space, so you must translate routable IP addresses to non-routable IP addresses to make connections.
Correct NAT/SNAT configuration is required for the installation to function as expected.
To translate routable IP addresses to non-routable IP addresses, see the following table:
Action | Applied on Interface | Original IP | Original/Translated Port | Translated IP | Protocol | Description |
---|---|---|---|---|---|---|
SNAT | Uplink | 192.168.0.0/16 | Any | PCF-IP | Any | All Nets Egress |
DNAT | Uplink | OPS-MANAGER-IP | Any | 192.168.10.OpsMgr | TCP | OpsMgr Mask for external use |
SNAT | Infra | 192.168.10.OpsMgr | Any | OPS-MANAGER-IP | TCP | OpsMgr Mask for internal use |
DNAT | Infra | OPS-MANAGER-IP | Any | 192.168.10.OpsMgr | TCP | OpsMgr Mask for internal use |
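The Uplink SNAT/DNAT pair from the table can likewise be appended through the NSX-V REST API. In the following sketch, `PCF-IP`, `OPS-MANAGER-IP`, and `192.168.10.OpsMgr` stand in for real addresses exactly as they do in the table, and the endpoint and XML shape are assumptions to verify against your NSX version:

```python
# Sketch: the Uplink SNAT/DNAT pair from the table, appended via the NSX-V
# 6.x REST API. PCF-IP, OPS-MANAGER-IP, and 192.168.10.OpsMgr stand in for
# real addresses exactly as they do in the table; the XML shape is an
# assumption to verify against your NSX version.
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"
EDGE_ID = "edge-1"
AUTH = ("admin", "password")

NAT_XML = """
<natRules>
  <natRule>
    <action>snat</action>
    <vnic>0</vnic>                       <!-- uplink interface index -->
    <originalAddress>192.168.0.0/16</originalAddress>
    <translatedAddress>PCF-IP</translatedAddress>
    <protocol>any</protocol>
    <description>All Nets Egress</description>
  </natRule>
  <natRule>
    <action>dnat</action>
    <vnic>0</vnic>
    <originalAddress>OPS-MANAGER-IP</originalAddress>
    <translatedAddress>192.168.10.OpsMgr</translatedAddress>
    <protocol>tcp</protocol>
    <description>OpsMgr Mask for external use</description>
  </natRule>
</natRules>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/nat/config/rules",
    data=NAT_XML,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,   # lab only
)
resp.raise_for_status()
print("NAT rules appended:", resp.status_code)
```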
The SNAT/DNAT pair on the Infra network in this table is an example of an optional Hairpin NAT rule that allows VMs within the Infrastructure network to access the Tanzu Operations Manager API. This is necessary because the Tanzu Operations Manager hostname and the API HTTPS endpoint are registered to the Tanzu Operations Manager VM external IP address. A pair of Hairpin NAT rules is necessary on each internal network interface that requires API access to Tanzu Operations Manager. Create these rules only if the network must access the Tanzu Operations Manager API.
NAT/SNAT functionality is not required if routable IP address space is used on the Tenant Side of the ESG. At that point, the ESG simply performs routing between the address segments.
NSX generates a number of DNAT rules based on load balancing configs. You can safely ignore these.
The ESG also supports scenarios where private RFC-1918 subnets and NAT are not used for the Deployment or Infrastructure networks, and the guidance in this topic can be modified to meet those scenarios.
Additionally, the ESG supports up to 10 interfaces, allowing for more uplink options if necessary.
This topic describes using Private RFC-1918 subnets with deployment networks because they are commonly used. ESG devices are capable of leveraging ECMP, OSPF, BGP, and IS-IS to handle dynamic routing of customer and public L3 IP space. That design is out of scope for this topic, but is supported by VMware NSX and Tanzu Operations Manager.