This topic describes two Google Cloud Platform (GCP) reference architectures for VMware Tanzu Operations Manager and any runtime products, including VMware Tanzu Application Service for VMs (TAS for VMs) and VMware Tanzu Kubernetes Grid Integrated Edition (TKGI): one on a shared virtual private cloud (VPC) and one on a single-project VPC. It also outlines several networking variants for VPC deployments.

These architectures are validated for production-grade Tanzu Operations Manager deployments using multiple availability zones (AZs).

For general requirements for running Tanzu Operations Manager, and for requirements specific to running it on GCP, see Tanzu Operations Manager on GCP Requirements.

A Tanzu Operations Manager reference architecture describes a proven approach for deploying Tanzu Operations Manager on a specific IaaS, such as GCP.

A Tanzu Operations Manager reference architecture must meet these requirements:

  • Be secure

  • Be publicly accessible

  • Include common Tanzu Operations Manager-managed services such as VMware Tanzu SQL, VMware Tanzu RabbitMQ, and Spring Cloud Services for VMware Tanzu

  • Be able to host at least 100 app instances

VMware provides reference architectures to help you determine the best configuration for your Tanzu Operations Manager deployment.

Shared vs. single-project VPCs

A shared VPC installation is more difficult to configure than a Tanzu Operations Manager deployment on a single-project VPC, because the required account privileges and resource allocations are more granular and complex. But the shared VPC architecture allows network assets to be centrally located, which simplifies auditing and security. VMware recommends the shared VPC model for:

  • Deployments with deep auditing and security requirements
  • Deployments in which the networks hosting the foundation must connect back to an internal network through a VPN or interconnect

A single-project VPC lets the platform architect give Tanzu Operations Manager full access to the VPC and its resources, which makes configuration easier. VMware recommends single-project VPC architecture for:

  • Standalone deployments that do not connect to an internal network
  • Test and experimental deployments, and projects that do not belong to an organization

Shared VPC GCP reference architecture

The following diagram illustrates a reference architecture for a deployment of Tanzu Operations Manager on a shared VPC on GCP. This architecture requires a GCP organization that contains a host project and a service project.

Diagram: High-level system and routing components. The large groupings in the Organization are Host Project, Service Project (Foundation), and Region.

NAT network topology

To expose a minimal number of public IP addresses, set up your NAT as shown in the following diagram:

Diagram: NAT network topology, with detailed routing shown for the Region block.
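
If you want to verify this behavior after deployment, one option is to compare a VM's apparent egress address with your NAT addresses. The following Python sketch illustrates the idea; the NAT IP addresses and the IP echo service are placeholder assumptions, not part of the reference architecture.

```python
# Minimal sketch: confirm a VM's outbound traffic egresses through NAT.
# Run from a VM with no public IP. EXPECTED_NAT_IPS lists the external
# addresses assigned to your NAT instances or gateway (hypothetical values).
import urllib.request

EXPECTED_NAT_IPS = {"203.0.113.10", "203.0.113.11", "203.0.113.12"}  # one per AZ

def apparent_public_ip() -> str:
    # Any plain-text IP echo service works; ipify is used here as an example.
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    ip = apparent_public_ip()
    if ip in EXPECTED_NAT_IPS:
        print(f"OK: egress through NAT address {ip}")
    else:
        print(f"WARNING: egress address {ip} is not a known NAT IP")
```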

Cloud Interconnect

To speed communication between data centers, use Google Cloud Interconnect, as shown in the following diagram:

Diagram: Cloud Interconnect topology, described in the following text.

The diagram shows two customer networks connecting to GCP. One shows the traditional setup of linking to the VPN Gateway through the Customer Gateway. The other shows the Customer Gateway connecting to Cloud Interconnect instead of the VPN Gateway. In both cases, the gateway connects to the Cloud Router, which routes to each of the three subnetworks.

Host and service architecture

GCP allocates resources using a hierarchy that centers around projects. To create a VPC, architects define a host project that allocates network resources for the VPC, such as address space and firewall rules. Then they can define one or more service projects to run within the VPC, which share the network resources allocated by the host project and include their own non-network resources, such as VMs and storage buckets.

To install Tanzu Operations Manager, TAS for VMs, and services in a shared VPC on GCP, you create a host project for the VPC and a service project dedicated to running Tanzu Operations Manager, TAS for VMs, and services.
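
To confirm that the host project is sharing the expected subnets with the service project, you can query the Compute API. The following Python sketch is a minimal illustration that assumes the google-api-python-client library, application default credentials, and a placeholder project ID.

```python
# Minimal sketch: list the shared VPC subnets usable from the service project.
# Requires: pip install google-api-python-client google-auth
# Assumes application default credentials with permission to list subnetworks.
from googleapiclient import discovery

SERVICE_PROJECT = "my-service-project"  # placeholder service project ID

compute = discovery.build("compute", "v1")
result = compute.subnetworks().listUsable(project=SERVICE_PROJECT).execute()

for subnet in result.get("items", []):
    # Each entry names the subnet, its parent network, and its CIDR range.
    print(subnet["subnetwork"], subnet["network"], subnet["ipCidrRange"])
```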

Host project resources

The host project centrally manages these shared VPC network resources:

  • Infra subnet (Tanzu Operations Manager VM and BOSH Director)
  • TAS for VMs subnet
  • Services subnet
  • Isolation segments
  • Firewall rules (see the sketch after this list)
  • NAT instances and gateway
  • VPN or Google Cloud Interconnect
  • Routes, such as egress to the internet through NAT or egress to on-premises networks through a VPN
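
As an illustration of the tag-based firewall rules that the host project manages, the following Python sketch creates a single ingress rule through the Compute API. The project, network, tag, and port values are hypothetical placeholders, not values prescribed by the reference architecture.

```python
# Minimal sketch: create a tag-based firewall rule in the host project.
# Requires: pip install google-api-python-client google-auth
# All names, tags, and ports below are hypothetical placeholders.
from googleapiclient import discovery

HOST_PROJECT = "my-host-project"

rule_body = {
    "name": "allow-gorouter-http",
    "network": f"projects/{HOST_PROJECT}/global/networks/my-vpc",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    # Match traffic by instance tag instead of by IP range:
    "targetTags": ["gorouter"],
    "sourceRanges": ["0.0.0.0/0"],
}

compute = discovery.build("compute", "v1")
operation = compute.firewalls().insert(project=HOST_PROJECT, body=rule_body).execute()
print("Started operation:", operation["name"])
```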

Service project resources

The service project manages these resources:

  • Google Cloud Compute instances (VMs)

    • BOSH Director
    • Tanzu Operations Manager VM
    • VMs deployed by BOSH, such as runtime and service components
  • Google Cloud Storage buckets for blobstore

    • BOSH Director
    • Resources
    • Buildpacks
    • Droplets
    • Packages
  • A service account and service account key that TAS for VMs uses to access the storage buckets (see the sketch after this list)

  • A service account for Tanzu Operations Manager

  • Load balancers

  • Google Cloud SQL instances, if using external databases
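
As a sketch of how the service account key from the list above might be exercised, the following Python example uses the google-cloud-storage client to verify that the blobstore buckets exist and are reachable with that key. The bucket names and key path are hypothetical placeholders.

```python
# Minimal sketch: verify the four blobstore buckets with the TAS for VMs
# service account key. Requires: pip install google-cloud-storage
# Bucket names and the key path are hypothetical placeholders.
from google.cloud import storage

KEY_PATH = "tas-blobstore-key.json"  # service account key for TAS for VMs
BUCKETS = [
    "my-foundation-buildpacks",
    "my-foundation-droplets",
    "my-foundation-packages",
    "my-foundation-resources",
]

client = storage.Client.from_service_account_json(KEY_PATH)
for name in BUCKETS:
    # lookup_bucket returns None instead of raising when the bucket is absent.
    bucket = client.lookup_bucket(name)
    status = "OK" if bucket is not None else "MISSING"
    print(f"{status}: {name}")
```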

Single-project VPC base GCP reference architecture

The following diagram illustrates a reference architecture for a deployment of Tanzu Operations Manager on a single-project VPC on GCP:

Diagram: Single-project VPC GCP reference architecture. Its components are listed in the following section.

Base reference architecture components

The following list describes the components that are part of a reference architecture deployment with three AZs.

  • Domains and DNS: Domain zones and routes in use by the reference architecture include: domains for *.apps and *.system (required), a route for Tanzu Operations Manager (required), a route for Doppler (required), a route for Loggregator (required), a route for SSH access to app containers (optional), and a route for TCP routing to apps (optional). The reference architecture uses GCP Cloud DNS as the DNS provider. A resolution check sketch follows this list.
  • Tanzu Operations Manager VM: Deployed on the infrastructure subnet and accessible by fully qualified domain name (FQDN) or through an optional jumpbox.
  • BOSH Director: Deployed on the infrastructure subnet.
  • Gorouters: Accessed through the HTTP and TCP WebSockets load balancers. Deployed on the TAS for VMs subnet, one job per AZ.
  • Diego Brains: Required. However, the SSH container access functionality is optional and enabled through the SSH Proxy load balancer. Deployed on the TAS for VMs subnet, one job per AZ.
  • TCP routers: Optional feature for TCP routing. Deployed on the TAS for VMs subnet, one job per AZ.
  • Database: The reference architecture uses GCP Cloud SQL rather than internal databases. Configure your database with a strong password and limit access to only the components that require database access.
  • Blob storage and buckets: For buildpacks, droplets, packages, and resources. The reference architecture uses Google Cloud Storage rather than internal file storage.
  • Services: Deployed on the managed services subnet. Each service is deployed to each AZ.
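
After the zones and records are in place, you can verify that the wildcard domains resolve to the load balancer's global IP address. The following Python sketch uses only the standard library; the hostnames and expected IP address are placeholders.

```python
# Minimal sketch: confirm that system and apps wildcard records resolve
# to the HTTP(S) load balancer's global IP. All values are placeholders.
import socket

EXPECTED_LB_IP = "203.0.113.20"
HOSTNAMES = [
    "login.system.example.com",  # covered by *.system
    "myapp.apps.example.com",    # covered by *.apps
]

for host in HOSTNAMES:
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(host, 443)}
    except socket.gaierror as err:
        print(f"FAIL: {host} did not resolve ({err})")
        continue
    status = "OK" if EXPECTED_LB_IP in addrs else "UNEXPECTED"
    print(f"{status}: {host} -> {sorted(addrs)}")
```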

Alternative GCP network layouts

This section describes the possible network layouts for TAS for VMs deployments covered by the Tanzu Operations Manager on GCP reference architecture.

At a high level, there are currently two possible ways of granting public internet access to TAS for VMs as described by the reference architecture:

  • NAT provides connectivity from TAS for VMs internals to the public internet.

  • Every VM receives its own public IP address (no NAT).

Public IP addresses solution

If you prefer not to use a NAT solution, you can configure TAS for VMs on GCP to assign public IP addresses to all components. This type of deployment might be more performant, because most of the network traffic between TAS for VMs components is routed through the front-end load balancer and the Gorouter.

Network objects

The following list describes the network objects expected for a reference architecture deployment with three AZs, with the minimum number of each object for a NAT-based deployment and for a deployment that assigns public IP addresses to all components.

  • External IPs: For a NAT solution, use a global IP address for apps and system access, and for Tanzu Operations Manager or an optional jumpbox. Minimum: 2 NAT-based; 30+ with public IP addresses.
  • NAT: One NAT per AZ. Minimum: 3 NAT-based; 0 with public IP addresses.
  • Network: One per deployment. GCP Network objects allow multiple subnets with multiple CIDRs, so a typical deployment probably only ever requires one GCP Network object. Minimum: 1 in either case.
  • Subnets: Separate subnets for infrastructure (Tanzu Operations Manager VM, BOSH Director, jumpbox), TAS for VMs, and services. Using separate subnets allows you to configure different firewall rules to meet your needs. Minimum: 3 in either case.
  • Routes: Routes are typically created dynamically by GCP when subnets are created, but you might need to create additional routes to force outbound communication to dedicated SNAT nodes. These objects are required to deploy TAS for VMs without public IP addresses. Minimum: 3+ NAT-based; 3 with public IP addresses.
  • Firewall rules: GCP firewall rules are bound to a Network object and can be created to use IP ranges, subnets, or instance tags to match the source and destination fields in a rule. The preferred method used in the reference architecture deployment is instance tags. Minimum: 6+ in either case.
  • Load balancers: Used to handle requests to Gorouters and infrastructure components. GCP uses two or more load balancers. The HTTP load balancer and the TCP WebSockets load balancer are both required. The TCP router load balancer used for TCP routing and the SSH load balancer that allows SSH access to Diego apps are both optional. The HTTP load balancer provides SSL termination. Minimum: 2+ in either case.
  • Jumpbox: Optional. Provides a way of accessing different network components. For example, you can configure it with your own permissions and then set it up to access the Broadcom Support portal to download tiles. Using a jumpbox is particularly useful in IaaSes where Tanzu Operations Manager does not have a public IP address. In these cases, you can SSH into the Tanzu Operations Manager VM or any other component through the jumpbox. Minimum: 1 (optional) in either case.

Network communication in GCP deployments

These sections provide more background on the reasons behind certain network configuration decisions, specifically for the Gorouter.

Load balancer to Gorouter communications and TLS termination

In a TAS for VMs on GCP deployment, the Gorouter receives two types of traffic:

  • Unencrypted HTTP traffic on port 80, which the HTTP(S) load balancer has already decrypted

  • Encrypted secure WebSocket traffic on port 443, which the TCP WebSockets load balancer passes through

TLS is terminated for HTTPS on the HTTP load balancer and for WebSockets (WSS) traffic on the Gorouter.

TAS for VMs deployments on GCP use two load balancers to handle Gorouter traffic because GCP HTTP(S) load balancers currently do not support WebSockets.
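
One way to observe where TLS terminates is to complete a handshake against an endpoint and inspect the certificate that is presented. For WSS traffic, the certificate comes from the Gorouter, because the TCP WebSockets load balancer passes TLS through. The following Python sketch assumes a placeholder hostname.

```python
# Minimal sketch: complete a TLS handshake and report the peer certificate
# subject, to see which component terminated TLS. The hostname is a placeholder.
import socket
import ssl

HOST = "doppler.system.example.com"  # WSS endpoint behind the TCP WebSockets LB

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate subject:", cert.get("subject"))
```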

ICMP

GCP routers do not respond to ICMP, so VMware recommends deactivating ICMP checks in your BOSH Director network configuration. For more information, see the Step 5: Create Networks section of Configuring BOSH Director on GCP.
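
Because ICMP is not answered, ping is not a useful connectivity probe between GCP subnets. A TCP connect check against a port that is known to be listening, as in the following Python sketch, is a more reliable test; the target address and port are placeholders.

```python
# Minimal sketch: TCP-based reachability probe, since ICMP ping is not
# answered on GCP networks. The target address and port are placeholders.
import socket

TARGET = ("10.0.1.10", 443)  # e.g., a Gorouter on the TAS for VMs subnet

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
result = sock.connect_ex(TARGET)  # 0 means the TCP handshake succeeded
sock.close()
print("reachable" if result == 0 else f"unreachable (errno {result})")
```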
