This topic describes the reference architectures that you can use when installing VMware Tanzu Operations Manager on any infrastructure to support the VMware Tanzu Application Service for VMs (TAS for VMs) and VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) runtime environments. These reference architectures describe proven approaches for deploying Tanzu Operations Manager and Runtimes, such as TAS for VMs or TKGI, on a specific IaaS, such as AWS, Azure, GCP, OpenStack, or vSphere.

These reference architectures meet the following requirements:

  • They are secure.
  • They are publicly accessible.
  • They include common Tanzu Operations Manager-managed services such as VMware Tanzu SQL, VMware Tanzu RabbitMQ, and Spring Cloud Services for VMware Tanzu.
  • They can host at least 100 app instances.
  • They have been deployed and validated by VMware to support Tanzu Operations Manager, TAS for VMs, and TKGI.

You can use the Tanzu Operations Manager reference architectures to help plan the best configuration for deploying Tanzu Operations Manager on your IaaS.

Reference architecture and planning topics

All Tanzu Operations Manager reference architectures start with the base TAS for VMs architecture and the base TKGI architecture.

The following IaaS-specific topics build on these two common base architectures:

The following topics address aspects of platform architecture and planning that the Tanzu Operations Manager reference architectures do not cover:

TAS for VMs architecture

The following diagram illustrates a base architecture for TAS for VMs and how its network topology places and replicates TAS for VMs components across subnets and availability zones (AZs).

TAS for VMs Base Topology. Each AZ includes Infrastructure subnet, PAS Subnet, Services subnet, On-demand services subnet, Isolation Segment subnet.


Internal components

The following table describes the internal component placements shown in the preceding diagram:

Component | Placement and access notes
Tanzu Operations Manager VM | Deployed on one of the three public subnets. Accessible by fully-qualified domain name (FQDN) or through an optional jumpbox.
BOSH Director | Deployed on the infrastructure subnet.
Jumpbox | Optional. Deployed on the infrastructure subnet for accessing TAS for VMs management components such as Tanzu Operations Manager and the Cloud Foundry Command Line Interface (cf CLI).
Gorouters (HTTP routers in Tanzu Operations Manager) | Deployed on all three TAS for VMs subnets, one per AZ. Accessed through the HTTP, HTTPS, and SSL load balancers.
Diego Brain | Deployed on all three TAS for VMs subnets, one per AZ. The Diego Brain component is required, but SSH container access support through the Diego Brain is optional, and enabled through the SSH load balancers.
TCP routers | Optional. Deployed on all three TAS for VMs subnets, one per AZ, to support TCP routing.
Service tiles | Service brokers and shared services instances are deployed to the Services subnet. Dedicated on-demand service instances are deployed to an on-demand services subnet.
Isolation segments | Deployed on an isolation segment subnet. Includes Diego Cells and Gorouters for running and accessing apps hosted within isolation segments.

Networks

The following sections describe VMware's recommendations for defining your TAS for VMs networks and load-balancing their incoming requests:

Required subnets

TAS for VMs requires these statically-defined networks to host its main component systems:

  • Infrastructure subnet - /24 segment
    This subnet contains VMs that require access only for Platform Administrators, such as the Tanzu Operations Manager VM, the BOSH Director VM, and an optional jumpbox.

  • TAS for VMs subnet - /24 segment
    This subnet contains TAS for VMs runtime VMs, such as Gorouters, Diego Cells, and Cloud Controllers.

  • Services subnet - /24 segment
    The services and on-demand services networks support the Tanzu Operations Manager tiles that you might add in addition to TAS for VMs. They are the networks for everything that is not TAS for VMs. Some service tiles can request additional network capacity on demand as they grow. If you use services with this capability, VMware recommends that you add an on-demand services network for each on-demand service.

  • On-demand services subnets - /24 segments
    This is for services that can allocate network capacity on-demand from BOSH for their worker VMs. VMware recommends allocating a dedicated subnet to each on-demand service. For example, you can configure the VMware Tanzu for Valkey on Cloud Foundry tile (formerly Redis for VMware Tanzu) as follows:

    • Network: Enter the existing Services network to host the service broker.
    • Services network: Deploy a new network OD-Services1 to host the Valkey (formerly Redis) worker VMs.
      Another on-demand service tile can then also use Services for its broker and a new OD-Services2 network for its workers, and so on.
  • Isolation segment subnets - /24 segments
    You can add one or more isolation segment tiles to a TAS for VMs installation to compartmentalize hosting and routing resources. For each isolation segment you deploy, you can designate a /24 network for its range of address space.
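
To see how these /24 segments might be laid out, the following is a minimal sketch using only the Python standard library. The parent CIDR block and the number of on-demand service and isolation segment subnets are assumptions for illustration; substitute the address space that your network team allocates to the foundation.

```python
# Sketch: carving the subnets described above out of one parent CIDR block.
# The 10.0.0.0/20 range and the subnet counts are placeholder assumptions.
import ipaddress

parent = ipaddress.ip_network("10.0.0.0/20")
segments = parent.subnets(new_prefix=24)   # consecutive /24 segments

plan = {
    "infrastructure": next(segments),
    "tas": next(segments),
    "services": next(segments),
}

# One dedicated /24 per on-demand service, as recommended above.
for i in (1, 2):
    plan[f"od-services-{i}"] = next(segments)

# One /24 per isolation segment tile.
plan["isolation-segment-1"] = next(segments)

for name, cidr in plan.items():
    print(f"{name:20} {cidr}")
```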

Load balancing

Any TAS for VMs installation needs a suitable load balancer to send incoming HTTP, HTTPS, SSH, and SSL traffic to its Gorouters and app containers. All installations approaching production-level use rely on external load balancing from hardware appliance vendors or other network-layer solutions.

The load balancer can also perform Layer 4 or Layer 7 load balancing functions. SSL can be terminated at the load balancer or used as a pass-through to the Gorouter.

Common deployments of load balancing in TAS for VMs are:

  • HTTP/HTTPS traffic to and from Gorouters

  • TCP traffic to and from TCP routers

  • Traffic from the Diego Brain, when developers access app containers through SSH

To load balance across multiple TAS for VMs foundations, use an IaaS-specific or vendor-specific Global Traffic Manager or Global DNS load balancer.

For more information, see Global DNS load balancers for multi-foundation environments.
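
As a quick operational check of the listeners described above, the following is a minimal sketch that probes the load balancer ports over TCP. The hostnames are placeholders for your system domain, and the port list assumes a common TAS for VMs setup (80 and 443 for Gorouter traffic, 2222 for the SSH load balancer in front of the Diego Brain); adjust both to your environment.

```python
# Sketch: verify that the load balancer listeners accept TCP connections.
# Hostnames and ports below are placeholder assumptions, not fixed values.
import socket

CHECKS = [
    ("api.sys.domain.name", 443),   # HTTPS traffic to Gorouters
    ("api.sys.domain.name", 80),    # HTTP traffic to Gorouters
    ("ssh.sys.domain.name", 2222),  # SSH load balancer to the Diego Brain
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "open" if is_reachable(host, port) else "unreachable"
    print(f"{host}:{port} -> {status}")
```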

High availability

TAS for VMs is not considered highly available (HA) until it runs across at least two AZs. VMware recommends defining three AZs.

On an IaaS with its own HA capabilities, combining the IaaS HA with a TAS for VMs HA topology provides two benefits: multiple AZs give TAS for VMs redundancy, so that losing an AZ is not catastrophic, and the BOSH Resurrector can replace lost VMs as needed to repair a foundation.
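
As an illustration of the AZ requirement above, the following is a minimal sketch that checks whether each component spans at least two AZs. The (job, AZ) pairs are placeholder data; in practice you would collect them from your BOSH Director, for example from the output of `bosh instances`.

```python
# Sketch: confirm each job is spread across at least two AZs.
# The instance list below is placeholder data, not real foundation output.
from collections import defaultdict

instances = [
    ("router", "az1"), ("router", "az2"), ("router", "az3"),
    ("diego_cell", "az1"), ("diego_cell", "az2"), ("diego_cell", "az3"),
    ("diego_brain", "az1"),          # deliberately under-replicated example
]

azs_per_job = defaultdict(set)
for job, az in instances:
    azs_per_job[job].add(az)

for job, azs in sorted(azs_per_job.items()):
    verdict = "OK" if len(azs) >= 2 else "NOT HA"
    print(f"{job:12} spans {len(azs)} AZ(s): {verdict}")
```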

To back up and restore a foundation externally, use BOSH backup and restore (BBR). For more information, see the BOSH backup and restore documentation.

Storage

TAS for VMs requires disk storage for each component, both for persistent data and for ephemeral data. You size these disks in the Resource Config pane of the TAS for VMs tile. For more information about storage configuration and capacity planning, see the corresponding section in the reference architecture for your IaaS.

The platform also requires you to configure file storage for large shared objects. These blobstores can be external or internal. For details, see Configuring File Storage for TAS for VMs.
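
To relate disk sizing to capacity planning, the following is a minimal sketch that totals the persistent and ephemeral disk a Resource Config implies. The jobs, instance counts, and disk sizes are placeholders, not recommendations; read the actual values from the Resource Config pane of the TAS for VMs tile.

```python
# Sketch: add up the disk a Resource Config implies so you can compare it
# against the capacity available on your IaaS. All values are placeholders.
resource_config = {
    # job: (instance count, persistent disk GB, ephemeral disk GB)
    "diego_cell":    (9,   0, 128),
    "router":        (3,   0,   8),
    "mysql":         (3, 100,  16),
    "blobstore_nfs": (1, 500,  16),
}

persistent = sum(n * p for n, p, _ in resource_config.values())
ephemeral = sum(n * e for n, _, e in resource_config.values())

print(f"Persistent disk required: {persistent} GB")
print(f"Ephemeral disk required:  {ephemeral} GB")
print(f"Total:                    {persistent + ephemeral} GB")
```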

Security

For information about how TAS for VMs implements security, see:

Domain names

TAS for VMs requires these domain names to be registered:

  • System domain for TAS for VMs and other tiles: sys.domain.name
  • App domain for your apps: app.domain.name

You must also define these wildcard domain names and include them in the certificates that you create for accessing TAS for VMs and its hosted apps:

  • *.SYSTEM-DOMAIN

  • *.APPS-DOMAIN

  • *.login.SYSTEM-DOMAIN

  • *.uaa.SYSTEM-DOMAIN
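
After registering the domains and creating the wildcard records, a quick way to confirm they resolve is a minimal sketch such as the following. The domain values are placeholders for your registered system and apps domains, and the probe hostname is arbitrary, because a wildcard record resolves any label under its domain.

```python
# Sketch: check that each wildcard DNS record listed above resolves.
# The domain values are placeholders for your registered domains.
import socket

SYSTEM_DOMAIN = "sys.domain.name"   # placeholder
APPS_DOMAIN = "app.domain.name"     # placeholder

wildcards = [
    SYSTEM_DOMAIN,
    APPS_DOMAIN,
    f"login.{SYSTEM_DOMAIN}",
    f"uaa.{SYSTEM_DOMAIN}",
]

for domain in wildcards:
    probe = f"wildcard-check.{domain}"   # any label should match the wildcard
    try:
        address = socket.gethostbyname(probe)
        print(f"*.{domain:24} resolves ({probe} -> {address})")
    except socket.gaierror:
        print(f"*.{domain:24} does not resolve")
```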

Component scaling

For recommendations about scaling TAS for VMs for different deployment scenarios, see Scaling TAS for VMs.

TKGI architecture

The following diagram illustrates a base architecture for VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) and how its network topology places and replicates TKGI components across subnets and AZs:

TKGI Base Deployment Topology. Each AZ includes Infrastructure subnet, Services subnet, and PKS Cluster Subnets, which are dynamically generated.


Internal components

The following table describes the internal component placements shown in the preceding diagram:

Component | Placement and access notes
Tanzu Operations Manager VM | Deployed on one of the subnets. Accessible by fully-qualified domain name (FQDN) or through an optional jumpbox.
BOSH Director | Deployed on the infrastructure subnet.
Jumpbox | Optional. Deployed on the infrastructure subnet for accessing TKGI management components such as Tanzu Operations Manager and the kubectl command line interface.
TKGI API | Deployed as a service broker VM on the TKGI services subnet. Handles TKGI API and service adapter requests, and manages TKGI clusters. For more information, see TKGI Architecture.
Harbor tile | Optional container image registry, typically deployed to the services subnet.
TKGI cluster | Deployed to a dynamically-created, dedicated TKGI cluster subnet. Each cluster consists of worker nodes that run the workloads, or apps, and one or more primary nodes.

Networks

These sections describe VMware’s recommendations for defining your networks and load-balancing incoming requests.

Subnet requirements

TKGI requires these defined networks to host its main components:

  • Infrastructure subnet - /24
    This subnet contains VMs that require access only for Platform Administrators, such as the Tanzu Operations Manager VM, the BOSH Director VM, and an optional jumpbox.

  • TKGI services subnet - /24
    This subnet hosts the TKGI API VM and other optional service tiles, such as VMware Harbor Registry.

  • TKGI cluster subnets - each one a /24 from a pool of pre-allocated IPs
    These subnets host TKGI clusters.
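
To illustrate how the static subnets and the pre-allocated pool of cluster subnets fit together, the following is a minimal sketch using the Python standard library. The CIDR ranges and pool size are assumptions; use the ranges your network team reserves for TKGI.

```python
# Sketch: static TKGI subnets plus a pool carved into /24 cluster subnets.
# All CIDR ranges below are placeholder assumptions.
import ipaddress

infrastructure = ipaddress.ip_network("10.1.0.0/24")
tkgi_services = ipaddress.ip_network("10.1.1.0/24")

# Pool reserved for dynamically created cluster subnets.
cluster_pool = ipaddress.ip_network("10.1.16.0/20")
cluster_subnets = list(cluster_pool.subnets(new_prefix=24))

print(f"infrastructure  {infrastructure}")
print(f"tkgi-services   {tkgi_services}")
print(f"cluster pool    {cluster_pool} -> {len(cluster_subnets)} x /24 cluster subnets")
for i, subnet in enumerate(cluster_subnets[:3], start=1):
    print(f"  cluster-{i}     {subnet}")
```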

Load balancing

You can use load balancers to manage traffic across the primary nodes of a TKGI cluster or to reach deployed workloads. For information about how to configure load balancers for TKGI, see the corresponding section in the reference architecture for your IaaS.

High availability

TKGI has no built-in HA capabilities. Design for HA at the IaaS, storage, power, and access layers to support TKGI.

Storage

TKGI requires shared storage across all AZs so that deployed workloads can allocate the storage they require.

Security

For information about how TKGI implements security, see Enterprise TKGI Security and Firewall Ports.

Domain names

TKGI requires the wildcard domain name *.pks.domain.name to be registered. You use this domain when creating the wildcard certificate and configuring the TKGI tile.

The wildcard certificate covers both the TKGI API domain, such as api.pks.domain.name, and the TKGI cluster domains, such as cluster.pks.domain.name.
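
To verify that the certificate presented by the TKGI API actually covers the wildcard domain, the following is a minimal sketch using the Python standard library. The hostname and port are placeholders: api.pks.domain.name stands in for your registered TKGI API FQDN, and 443 assumes the load balancer serves TLS there; adjust the port if your TKGI API listens elsewhere.

```python
# Sketch: confirm the served certificate's SANs include *.pks.domain.name.
# Assumes the certificate chain is trusted by the local system; the host and
# port below are placeholder assumptions for your environment.
import socket
import ssl

host = "api.pks.domain.name"   # placeholder TKGI API FQDN
port = 443                     # adjust if your TKGI API listens on another port

context = ssl.create_default_context()
with socket.create_connection((host, port), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

sans = [value for key, value in cert.get("subjectAltName", ()) if key == "DNS"]
print("Certificate SANs:", sans)
print("Covers *.pks.domain.name:", "*.pks.domain.name" in sans)
```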

Cluster management

For information about managing TKGI clusters, see Managing clusters.
