This topic describes how you can use Global DNS load balancers (GLBs) for multi-foundation environments. This topic also describes concepts such as foundation affinity and health checks.
If you want to configure a load balancer dedicated to one foundation and you are using an F5 LTM, see Configuring an F5 Load Balancer for TAS for VMs.
Multi-foundation environments consist of multiple instances of VMware Tanzu Operations Manager and runtime products, such as VMware Tanzu Application Service for VMs (TAS for VMs) or VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), that can communicate with each other. Each foundation can run on a different infrastructure to fit your needs.
Multi-foundation environments are commonly deployed in an active-active or active-passive pattern. In an active-active pattern, instances of an app run on two foundations and both foundations actively serve traffic. In an active-passive pattern, instances of an app run on two foundations, but only one foundation actively serves traffic; the other foundation becomes active only in the event of a failover.
The typical setup for foundation failover requires that the GLB be authoritative for a wildcard app domain. This wildcard app domain is not the same as the default apps domain for each foundation.
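You can verify that the GLB's nameservers are authoritative for the wildcard app domain by querying the domain's NS records. The following sketch assumes a hypothetical wildcard app domain, `apps.example.com`, and hypothetical GLB nameservers:

```
# Query the NS records for the wildcard app domain (example names):
dig +noall +answer NS apps.example.com

# If the GLB is authoritative, the answer lists its nameservers:
#   apps.example.com.  3600  IN  NS  gslb-1.example.com.
#   apps.example.com.  3600  IN  NS  gslb-2.example.com.
```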
To configure your GLB:

1. Find and record your wildcard app domain. For TAS for VMs, your wildcard app domain is typically the Apps domain you configure in the Domains pane of the TAS for VMs tile. For more information, see the TAS for VMs documentation.
2. Add the domain you recorded to both TAS for VMs foundations as a shared domain. You can create a shared domain using the Cloud Foundry Command Line Interface (cf CLI), as shown in the example after these steps. For more information about the cf CLI command that creates shared domains, see the Cloud Foundry CLI Reference Guide.
3. To support failover, set the time-to-live (TTL) in the wildcard DNS record to between 30 and 180 seconds. When you choose a TTL, consider the tradeoff: a lower TTL shortens the time failover takes, but increases DNS lookups and their impact on app performance.
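For example, the following sketch creates the shared domain on each foundation and then verifies the TTL of the wildcard record. The API endpoints and the `apps.example.com` domain are hypothetical placeholders:

```
# Create the shared domain on foundation A (example endpoint):
cf login -a https://api.sys.foundation-a.example.com
cf create-shared-domain apps.example.com

# Repeat on foundation B (example endpoint):
cf login -a https://api.sys.foundation-b.example.com
cf create-shared-domain apps.example.com

# Verify the TTL (the second field of the answer) for a record
# in the wildcard app domain:
dig +noall +answer test-app.apps.example.com
```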
Foundation affinity occurs when the GLB favors one foundation over another during a route request. For example, users experience less latency if they are routed to a foundation that is geographically closer, so the GLB can favor that foundation.
Different GLBs achieve foundation affinity through their own mechanisms. The following table describes common foundation affinity concepts:
Term | Description |
---|---|
Topology / geographically-based affinity | The GLB attempts to direct traffic to the geographically nearest foundation, based on either the IP geolocation of the LDNS server performing the lookup or topology records that you provide for private networks. |
Static-persist / member-hashing affinity | The Static Persist load balancing method applies a persist mask to the source IP address of the LDNS server and uses the result in a deterministic hash algorithm that determines the order of the pool members. Requests from a given LDNS are therefore always sent to the first available pool member (virtual server) in that order. For an illustration, see the sketch after this table. |
Active-passive foundations | If one foundation is usually idle, the GLB can always return the active foundation's IP address, as long as that foundation remains available. |
Microservice-to-microservice affinity | For microservice apps, you typically use a Services Registry to manage traffic between microservices. |
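To illustrate the static-persist concept, the following sketch hashes an LDNS source IP address to deterministically select one pool member. The addresses are hypothetical, and the hash is a simplification of what a real GLB does:

```
#!/bin/bash
# Deterministic "static persist" selection: the same LDNS address
# always maps to the same foundation VIP while the pool is unchanged.
LDNS_IP="203.0.113.10"                  # example LDNS source address
POOL=("198.51.100.1" "198.51.100.2")    # example foundation VIPs

# Hash the LDNS address and map it to an index into the pool:
HASH=$(printf '%s' "$LDNS_IP" | cksum | cut -d' ' -f1)
INDEX=$(( HASH % ${#POOL[@]} ))
echo "Resolve *.apps.example.com to ${POOL[$INDEX]}"
```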
In TAS for VMs, health checks determine whether the foundation that hosts an app is healthy.
For the levels at which you can check the health of your foundation, see the following table:
Level | Description |
---|---|
Foundation | The GLB relies on a local load balancer in front of the Gorouters to determine the overall health of the foundation. The GLB can perform health checks against the local load balancer on TCP port 443 or 80. The local load balancer, in turn, checks the health of its back-end pool of Gorouters on port 8080. For an example, see the sketch after this table. |
App | (Not recommended) Health checks are set only for apps that have instances on both foundations. Each app instance has canary DNS records, and these records are the same for the app instances on each foundation. You must also add VIPs dedicated to these canary apps to use for the health checks. |
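For example, you can approximate the foundation-level checks described in the table with simple HTTP requests. The addresses below are hypothetical placeholders; the Gorouter serves its health endpoint on port 8080:

```
# GLB-style check against the local load balancer VIP (example address):
curl -k -s -o /dev/null -w '%{http_code}\n' https://local-lb.example.com:443/

# Local load balancer-style check against a Gorouter's health endpoint:
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.16.12:8080/health
```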
VMware does not recommend setting health checks at the app level. Configuring health checks in this way can cause more frequent failover and delays while pushing apps. Complete failover of a foundation affects all apps on the platform. Health checks on a per-app basis can require additional overhead beyond the control of an app developer.