You must follow these steps before you install VMware Tanzu Operations Manager on Google Cloud Platform (GCP).
Before you prepare your Tanzu Operations Manager installation, review the requirements for your runtime:
If you are deploying VMware Tanzu Application Service for VMs (TAS for VMs), see Tanzu Operations Manager on GCP Requirements.
If you are deploying Enterprise VMware Tanzu Kubernetes Grid Integrated Edition (TKGI), see GCP Prerequisites and Resource Requirements.
This section outlines high-level infrastructure options for Tanzu Operations Manager on GCP. A Tanzu Operations Manager deployment includes Tanzu Operations Manager and your chosen runtime; for example, both Tanzu Operations Manager with TAS for VMs and Tanzu Operations Manager with TKGI are Tanzu Operations Manager deployments. For more information, review the deployment options and recommendations in Reference Architecture for Tanzu Operations Manager on GCP.
You can deploy Tanzu Operations Manager using one of two main configurations on a GCP virtual private cloud (VPC): a single-project VPC or a shared VPC.
See Shared vs. Single-Project VPCs in Reference Architecture for Tanzu Operations Manager on GCP for a full discussion and recommendations.
When deploying Tanzu Operations Manager on GCP, VMware recommends using the following GCP components:
Tanzu Operations Manager uses IAM service accounts to access GCP resources.
For a single-project installation: Complete the following steps to create a service account for Tanzu Operations Manager.
For a shared-VPC installation: Complete the following steps twice: first to create a host account, and second to create a service account for Tanzu Operations Manager.
From the GCP console, click IAM & Admin, then Roles.
Click Create New Role and complete the form as follows:
- Title: Enter a role title; for example, BOSH Director.
- ID: Enter a unique ID; for example, bosh.director.
Click Add Permissions. Select each of the following permissions, then click Add.
compute.addresses.get
compute.addresses.list
compute.backendServices.get
compute.backendServices.list
compute.diskTypes.get
compute.disks.delete
compute.disks.list
compute.disks.get
compute.disks.createSnapshot
compute.snapshots.create
compute.disks.create
compute.disks.resize
compute.images.useReadOnly
compute.globalOperations.get
compute.images.delete
compute.images.get
compute.images.create
compute.instanceGroups.get
compute.instanceGroups.list
compute.instanceGroups.update
compute.instances.setMetadata
compute.instances.setLabels
compute.instances.setTags
compute.instances.reset
compute.instances.start
compute.instances.list
compute.instances.get
compute.instances.delete
compute.instances.create
compute.subnetworks.use
compute.subnetworks.useExternalIp
compute.instances.detachDisk
compute.instances.attachDisk
compute.disks.use
compute.instances.deleteAccessConfig
compute.instances.addAccessConfig
compute.addresses.use
compute.addresses.useInternal
compute.machineTypes.get
compute.regionOperations.get
compute.zoneOperations.get
compute.networks.get
compute.subnetworks.get
compute.snapshots.delete
compute.snapshots.get
compute.targetPools.list
compute.targetPools.get
compute.targetPools.addInstance
compute.targetPools.removeInstance
compute.instances.use
storage.buckets.create
storage.objects.create
compute.zones.list
resourcemanager.projects.get
compute.subnetworks.list
compute.networks.list
If you intend to use the Tanzu Operations Manager VM Service Account option for authentication instead of AuthJSON, the following are also required:
- The compute.instances.setServiceAccount permission
- The Service Account User role

Click Create.
From the GCP console, click IAM & Admin, then Service accounts.
Click Create Service Account and complete the form as follows:
- Service account name: Enter a name; for example, bosh.
- Role: Use the drop-down menu to select the following roles:
  - Custom, then the name of the role you previously created; for example, BOSH Director
  - Service Account User

  You must scroll down in the pop-up windows to select all required roles.

  The Service Account User role is required only if you plan to use the Tanzu Operations Manager VM Service Account to deploy Tanzu Operations Manager. For more information about the Tanzu Operations Manager VM Service Account, see Step 2: Google Cloud Platform Config in Configuring BOSH Director on GCP.
- Service account ID: The text box automatically generates a unique ID based on the service account name.

Click Create. Your browser automatically downloads a JSON file with a private key for this account. Save this file in a secure location.
You can use this service account to configure file storage for TAS for VMs. For more information, see GCP in Configuring File Storage for TAS for VMs.
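If you prefer to script these steps, the same service account can typically be created with the gcloud CLI. The following is a minimal sketch, assuming a project ID of MY-PROJECT-ID, the custom role ID bosh.director created previously, and a key file name of your choice:

```
# Create the service account and grant it the custom BOSH Director role
gcloud iam service-accounts create bosh --display-name "bosh"
gcloud projects add-iam-policy-binding MY-PROJECT-ID \
  --member "serviceAccount:bosh@MY-PROJECT-ID.iam.gserviceaccount.com" \
  --role "projects/MY-PROJECT-ID/roles/bosh.director"

# Download a JSON private key, equivalent to the file the console downloads
gcloud iam service-accounts keys create bosh-key.json \
  --iam-account bosh@MY-PROJECT-ID.iam.gserviceaccount.com
```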
Tanzu Operations Manager manages GCP resources using the Google Compute Engine and Cloud Resource Manager APIs. To enable these APIs:
Log in to the Google Developers Console at https://console.developers.google.com.
In the console, go to the GCP projects where you want to install Tanzu Operations Manager.
Click API Manager, then Library.
Under Google Cloud APIs, click Compute Engine API.
On the Google Compute Engine API pane, click Enable.
In the search text box, enter Google Cloud Resource Manager API.
On the Google Cloud Resource Manager API pane, click Enable.
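Alternatively, both APIs can typically be enabled from the command line. A sketch using the standard service identifiers, which are not shown in the console steps above:

```
gcloud services enable compute.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
```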
To verify that the APIs have been enabled, complete the following steps:
Log in to GCP using the IAM service account you created in Set up IAM Service Accounts:
$ gcloud auth activate-service-account --key-file JSON_KEY_FILENAME
List your projects:
$ gcloud projects list
PROJECT_ID              NAME                      PROJECT_NUMBER
my-host-project-id      my-host-project-name      ##############
my-service-project-id   my-service-project-name   ##############

This command lists the projects in which you enabled Google Cloud APIs.
Log in to the GCP console.
Go to the GCP project where you want to install Tanzu Operations Manager. For a shared VPC installation, go to the host project.
Click VPC network, then CREATE VPC NETWORK.
In the Name text box, enter a name of your choice for the VPC network. This name helps you identify resources for this deployment in the GCP console. Network names must be lowercase; for example, pcf-virt-net.
Under Subnets, create an infrastructure subnet for Tanzu Operations Manager and NAT instances. Complete the form as follows:
Name | pcf-infrastructure-subnet-GCP-REGION Example: pcf-infrastructure-subnet-us-west1 |
---|---|
Region | A region that supports three availability zones. For help selecting the correct region for your deployment, see the Google documentation about regions and zones. |
IP address range | A CIDR ending in /26 Example: 192.168.101.0/26 |
For deployments that do not use external IP addresses, enable Private Google access to allow your runtime to make API calls to Google services.
Click Add subnet to add a second subnet for the BOSH Director and components specific to your runtime. Complete the form as follows:
Name | pcf-RUNTIME-subnet-GCP-REGION Example: pcf-pas-subnet-us-west1 |
---|---|
Region | The same region you selected for the infrastructure subnet |
IP address range | A CIDR ending in /22 Example: 192.168.16.0/22 |
Click Add subnet to add a third subnet with the following details:
Name | pcf-services-subnet-GCP-REGION Example: pcf-services-subnet-us-west1 |
---|---|
Region | The same region you selected for the previous subnets |
IP address range | A CIDR ending in /22 Example: 192.168.20.0/22 |
Under Dynamic routing mode, leave Regional selected.
Click Create.
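For reference, the same network and subnets can typically be created with gcloud. A sketch assuming region us-west1 and the example names and CIDRs above:

```
# Custom-mode VPC network
gcloud compute networks create pcf-virt-net --subnet-mode custom

# Infrastructure, runtime, and services subnets
gcloud compute networks subnets create pcf-infrastructure-subnet-us-west1 \
  --network pcf-virt-net --region us-west1 --range 192.168.101.0/26
gcloud compute networks subnets create pcf-pas-subnet-us-west1 \
  --network pcf-virt-net --region us-west1 --range 192.168.16.0/22
gcloud compute networks subnets create pcf-services-subnet-us-west1 \
  --network pcf-virt-net --region us-west1 --range 192.168.20.0/22
```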
Use NAT instances when you want to expose only a minimal number of public IP addresses.
Creating NAT instances permits internet access from cluster VMs. You might, for example, need this internet access for pulling Docker images or enabling internet access for your workloads.
For more information, see Reference Architecture for Tanzu Operations Manager on GCP and the GCP documentation.
In the GCP console, with your single project or shared-VPC host project selected, go to Compute Engine, then VM instances.
Click CREATE INSTANCE.
Complete the following text boxes:
- Name: Enter pcf-nat-gateway-pri.
- Zone: Select the first zone from your region; for example, for region us-west1, click the us-west1-a zone.
- Machine type: Click n1-standard-4.
- Boot disk: Click Ubuntu 14.04 LTS.

Expand the additional configuration text boxes by clicking Management, disks, networking, SSH keys.
In the Startup script text box under Automation, enter the following text:
#! /bin/bash
# Enable IP forwarding for the current session and persist it across reboots
sudo sysctl -w net.ipv4.ip_forward=1
sudo sh -c 'echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf'
# Masquerade traffic leaving eth0 so VMs without external IPs can reach the internet
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Click Networking to open additional network configuration text boxes:
- Network tags: Enter nat-traverse and pcf-nat-instance.
- Network: Click pcf-virt-net. You created this network in Step 1: Create a GCP Network with Subnets.
- Subnetwork: Click pcf-infrastructure-subnet-GCP-REGION.
- For Primary internal IP, click Ephemeral (Custom). Enter an IP address; for example, 192.168.101.2, in the Custom ephemeral IP address text box. The IP address must meet the following requirements:
  - It must exist in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
  - It must exist in a reserved IP range set later in BOSH Director. The reserved range is typically the first .1 through .9 addresses in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
  - It cannot be the same as the Gateway IP address set later in Tanzu Operations Manager. The Gateway IP address is typically the first .1 address in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
- For External IP, click Ephemeral.
If you select a static external IP address for the NAT instance, then you can use the static IP to further secure access to your CloudSQL instances.
Set IP forwarding to On.
Click Create to finish creating the NAT instance.
Repeat steps 2 through 6 to create two additional NAT instances with the names and zones specified in the following table. The rest of the configuration remains the same.
| Instance 2 | Name | pcf-nat-gateway-sec |
|---|---|---|
| | Zone | Select the second zone from your region. Example: For region us-west1, select zone us-west1-b. |
| | Internal IP | Select Custom and enter an IP address in the Internal IP address text box. Example: 192.168.101.3. As described previously, this address must exist in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet, must exist in a reserved IP range set later in BOSH Director, and cannot be the same as the Gateway IP address set later in Tanzu Operations Manager. |
| Instance 3 | Name | pcf-nat-gateway-ter |
| | Zone | Select the third zone from your region. Example: For region us-west1, select zone us-west1-c. |
| | Internal IP | Select Custom and enter an IP address in the Internal IP address text box. Example: 192.168.101.4. As described previously, this address must exist in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet, must exist in a reserved IP range set later in BOSH Director, and cannot be the same as the Gateway IP address set later in Tanzu Operations Manager. |
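If you prefer to script the NAT instances, a gcloud sketch for the primary gateway follows; the startup script file name (nat-startup.sh) and the Ubuntu 14.04 LTS image flags are assumptions, and the secondary and tertiary instances differ only in name, zone, and internal IP:

```
# nat-startup.sh contains the startup script shown earlier in this section
gcloud compute instances create pcf-nat-gateway-pri \
  --zone us-west1-a \
  --machine-type n1-standard-4 \
  --image-family ubuntu-1404-lts --image-project ubuntu-os-cloud \
  --network pcf-virt-net \
  --subnet pcf-infrastructure-subnet-us-west1 \
  --private-network-ip 192.168.101.2 \
  --can-ip-forward \
  --tags nat-traverse,pcf-nat-instance \
  --metadata-from-file startup-script=nat-startup.sh
```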
Go to VPC Networks, then Routes.
Click CREATE ROUTE.
Complete the form as follows:
- Name: pcf-nat-pri
- Network: pcf-virt-net
- Destination IP range: 0.0.0.0/0
- Priority: 800
- Instance tags: pcf
- Next hop: Specify an instance
- Next hop instance: pcf-nat-gateway-pri
Click Create to finish creating the route.
Repeat steps 2 through 4 to create two additional routes with the names and next hop instances specified in the following table. The rest of the configuration remains the same.
Route 2 | Name: pcf-nat-sec Next hop instance: pcf-nat-gateway-sec |
---|---|
Route 3 | Name: pcf-nat-ter Next hop instance: pcf-nat-gateway-ter |
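The three routes can also typically be created with gcloud. A sketch for the primary route, assuming the NAT instance zones used earlier; adjust the name, next-hop instance, and zone for the secondary and tertiary routes:

```
gcloud compute routes create pcf-nat-pri \
  --network pcf-virt-net \
  --destination-range 0.0.0.0/0 \
  --priority 800 \
  --tags pcf \
  --next-hop-instance pcf-nat-gateway-pri \
  --next-hop-instance-zone us-west1-a
```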
GCP lets you assign tags to VM instances and create firewall rules that apply to VMs based on their tags. For more information about tags, see Labeling Resources in the GCP documentation. This step assigns tags and firewall rules to Tanzu Operations Manager components and VMs that handle incoming traffic.
With your single project or shared-VPC host project selected, go to Networking, then the VPC network pane, and select Firewall rules.
Apply the firewall rules in the following table:
Firewall Rules | |
---|---|
Rule 1 | This rule allows SSH from public networks. Name: pcf-allow-ssh Network: pcf-virt-net Allowed protocols and ports: tcp:22 Source filter: IP ranges Source IP ranges: 0.0.0.0/0 Target tags: allow-ssh |
Rule 2 | This rule allows HTTP from public networks. Name: pcf-allow-http Network: pcf-virt-net Allowed protocols and ports: tcp:80 Source filter: IP ranges Source IP ranges: 0.0.0.0/0 Target tags: allow-http , router |
Rule 3 | This rule allows HTTPS from public networks. Name: pcf-allow-https Network: pcf-virt-net Allowed protocols and ports: tcp:443 Source filter: IP ranges Source IP ranges: 0.0.0.0/0 Target tags: allow-https , router |
Rule 4 | This rule allows Gorouter health checks. Name: pcf-allow-http-8080 Network: pcf-virt-net Allowed protocols and ports: tcp:8080 Source filter: IP ranges Source IP Ranges: 0.0.0.0/0 Target tags: router |
Rule 5 | This rule allows communication between BOSH-deployed jobs. Name: pcf-allow-pas-all Network: pcf-virt-net Allowed protocols and ports: tcp;udp;icmp Source filter: Source tags Target tags: pcf , pcf-opsman , nat-traverse Source tags: pcf , pcf-opsman , nat-traverse |
Rule 6 (Optional) | This rule allows access to the TCP router. Name: pcf-allow-cf-tcp Network: pcf-virt-net Source filter: IP ranges Source IP ranges: 0.0.0.0/0 Allowed protocols and ports: tcp:1024-65535 Target tags: pcf-cf-tcp |
Rule 7 (Optional) | This rule allows access to the SSH proxy. Name: pcf-allow-ssh-proxy Network: pcf-virt-net Source filter: IP ranges Source IP ranges: 0.0.0.0/0 Allowed protocols and ports: tcp:2222 Target tags: pcf-ssh-proxy , diego-brain |
If you want your firewall rules to permit traffic only within your private network, modify the Source IP Ranges from the table accordingly.
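As one scripted example, Rule 1 from the table corresponds roughly to the following gcloud command; the remaining rules follow the same pattern with their own names, ports, and tags:

```
gcloud compute firewall-rules create pcf-allow-ssh \
  --network pcf-virt-net \
  --allow tcp:22 \
  --source-ranges 0.0.0.0/0 \
  --target-tags allow-ssh
```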
If you are using your GCP project only to deploy Tanzu Operations Manager, then you can delete the following default firewall rules:
default-allow-http
default-allow-https
default-allow-icmp
default-allow-internal
default-allow-rdp
default-allow-ssh
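These default rules can typically be removed in a single command; a sketch:

```
gcloud compute firewall-rules delete \
  default-allow-http default-allow-https default-allow-icmp \
  default-allow-internal default-allow-rdp default-allow-ssh
```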
If you are deploying TKGI only, continue to Next steps.
If you are deploying TAS for VMs or other runtimes, continue to Create database instance and databases.
For a shared-VPC installation, click the service project in the GCP console. This step and the following steps allocate resources to the service project, not the host project.
From the GCP console, click SQL and then click CREATE INSTANCE.
Ensure MySQL is selected and click Next.
Under MySQL, click Second Generation instance type.
Click Configure MySQL under your choice for instance type: Development, Staging, or Production.
Configure the instance as follows:
- Instance ID: pcf-pas-sql
- Authorize Networks: Click Add network and create a network named all that allows traffic from 0.0.0.0/0.
If you assigned static IP addresses to your NAT instances, you can instead limit access to the database instances by specifying the NAT IP addresses.
Click Create.
Go to the Instances page and select the database instance you just created.
Select the Databases tab.
Click Create database to create the following databases:
account
app_usage_service
autoscale
ccdb
console
diego
locket
networkpolicyserver
nfsvolume
notifications
routing
silk
uaa
credhub
Select the USERS tab.
Click Create user account to create a unique user name and password for each database you previously created. For Host name, select Allow any host. You must create a total of fourteen user accounts.
Ensure that the networkpolicyserver database user has the ALL PRIVILEGES permission.
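If you prefer to script database and user creation, gcloud provides equivalents of these console steps. A sketch assuming the instance name pcf-pas-sql; the user names and the CHANGE-ME password are placeholders you must replace with unique credentials:

```
# Create each database, then a matching user allowed to connect from any host
for db in account app_usage_service autoscale ccdb console diego locket \
          networkpolicyserver nfsvolume notifications routing silk uaa credhub; do
  gcloud sql databases create "$db" --instance pcf-pas-sql
  gcloud sql users create "${db}_user" --instance pcf-pas-sql \
    --host % --password CHANGE-ME
done
```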
With your single project or shared-VPC service project selected in the GCP console, click Storage, then Browser.
Click CREATE BUCKET and create buckets with the following names. For Default storage class, click Multi-Regional:
PREFIX-pcf-buildpacks
PREFIX-pcf-droplets
PREFIX-pcf-packages
PREFIX-pcf-resources
PREFIX-pcf-backup
Where PREFIX is a prefix of your choice. It is required to make the bucket names unique.
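A scripted sketch of the bucket creation using gsutil, assuming the Multi-Regional storage class in the US location; replace PREFIX with your chosen prefix:

```
for suffix in buildpacks droplets packages resources backup; do
  gsutil mb -c multi_regional -l US "gs://PREFIX-pcf-${suffix}"
done
```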
For load balancing, you can use a global HTTP load balancer or an internal, regional load balancer with a private IP address.
Single-project, standalone installations typically use a global HTTP load balancer. See Create HTTP Load Balancer for how to set this up.
Shared-VPC installations typically use an internal TCP/UDP load balancer to minimize public IP addresses. See Create Internal Load Balancer for how to set this up.
To create an internal load balancer for Tanzu Operations Manager on GCP, do the following.
Create an internal-facing TCP/UDP load balancer for each region of your Tanzu Operations Manager deployment.
GCP Internal Load Balancer (iLB) is a regional product. Within the same VPC/network, client VMs in a different region from the iLB cannot access the iLB. For more information, see the GCP documentation.
Assign private IP addresses to the load balancers.
After you have deployed Tanzu Operations Manager, follow instructions in Create or Update a VM Extension to add a custom VM extension that applies internal load balancing to all VMs deployed by BOSH.
For example, the following manifest code adds a VM extension backend-pool to Tanzu Operations Manager VMs:
vm_extensions:
- name: backend-pool
  cloud_properties:
    ephemeral_external_ip: true
    backend_service:
      name: name-of-backend-service
      scheme: INTERNAL
To create a global HTTP load balancer for Tanzu Operations Manager on GCP:
Go to Compute Engine, then Instance groups.
Click CREATE INSTANCE GROUP.
Complete the form as follows:
- Name: Enter pcf-http-lb.
- Zone: For region us-west1, click zone us-west1-a.
- Network: Click pcf-virt-net.
- Subnetwork: Click the pcf-pas-subnet-my-gcp-region subnet that you created previously.

Create a second instance group with the following details:
- Name: pcf-http-lb
- Zone: For region us-west1, click zone us-west1-b.
- Network: pcf-virt-net
- Subnetwork: The pcf-pas-subnet-my-gcp-region subnet that you created previously.

Create a third instance group with the following details:
- Name: pcf-http-lb
- Zone: For region us-west1, click zone us-west1-c.
- Network: pcf-virt-net
- Subnetwork: The pcf-pas-subnet-my-gcp-region subnet that you created previously.
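If you script your setup, unmanaged instance groups like these can typically be created with gcloud. A sketch assuming region us-west1:

```
# One unmanaged instance group per zone; names and zones match the forms above
for zone in us-west1-a us-west1-b us-west1-c; do
  gcloud compute instance-groups unmanaged create pcf-http-lb --zone "$zone"
done
```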
Go to Compute Engine, then Health checks.

Click CREATE HEALTH CHECK.
Complete the form as follows:
- Name: pcf-cf-public
- Port: 8080
- Request path: /health
- Check interval: 30
- Timeout: 5
- Healthy threshold: 10
- Unhealthy threshold: 2
Click Create.
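A similar health check can typically be created from the CLI. A sketch assuming the HTTP health check type and units of seconds:

```
gcloud compute health-checks create http pcf-cf-public \
  --port 8080 \
  --request-path /health \
  --check-interval 30s \
  --timeout 5s \
  --healthy-threshold 10 \
  --unhealthy-threshold 2
```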
Go to Network services, then Load balancing.
Click CREATE LOAD BALANCER.
Under HTTP(S) Load Balancing, click Start configuration.
For the Name, enter pcf-global-pcf.
Select Backend configuration.
From the drop-down menu, click Backend services, then Create a backend service.
Complete the form as follows:
- Name: pcf-http-lb-backend
- Protocol: HTTP
- Named port: http
- Timeout: 10 seconds
- Under Backends, then New backend, click the Instance group that corresponds to the first zone of the multi-zone instance group you created; for example, pcf-http-lb (us-west1-a). Click Done.
- Click Add backend, then click the Instance group that corresponds to the second zone of the multi-zone instance group you created; for example, pcf-http-lb (us-west1-b). Click Done.
- Click Add backend, then click the Instance group that corresponds to the third zone of the multi-zone instance group you created; for example, pcf-http-lb (us-west1-c). Click Done.
- Health check: Select the pcf-cf-public health check that you created.

Click Create.
Click Host and path rules to populate the default text boxes and display a green check mark.
Click Frontend configuration, and add the following:
- Name: pcf-cf-lb-http
- Protocol: HTTP
- IP: Create a new static IP address named pcf-global-pcf.
- Port: 80

Click Add Frontend IP and port and add the following:

Skip this step if you do not have either a self-signed or trusted SSL certificate. When you configure the tile for your chosen runtime, you are given the opportunity to create a new self-signed certificate. After creating a certificate, you can complete the Add Frontend IP and port section.

- Name: pcf-cf-lb-https
- Protocol: HTTPS
- IP: Select the pcf-global-pcf address you created for the previous Frontend IP and Port.
- Port: 443
- Certificate: Create a new certificate as follows:
  - In the Name text box, enter a name for the certificate.
  - In the Public key certificate text box, copy in the contents of your public certificate, or upload your certificate as a .pem file. If the certificate is runtime-generated, copy and paste the generated contents from the runtime's Certificate text box into the BOSH Director Public key certificate text box.
Review the completed frontend configuration.
Click Review and finalize to verify your configuration.
Click Create.
The load balancer for tailing logs with WebSockets for Tanzu Operations Manager on GCP operates on TCP port 443.
Click Create load balancer.
Under TCP Load Balancing, click Start configuration.
On the Create a load balancer configuration UI, make the following selections:
- Under Internet facing or internal only, click From Internet to my VMs.
- Under Connection termination, click No (TCP).

Click Continue.
In the New TCP load balancer window, enter pcf-wss-logs in the Name text box.
Click Backend configuration to configure the Backend service.
From the Health check drop-down menu, create a health check with the following details:
- Name: pcf-gorouter
- Port: 8080
- Request path: /health
- Check interval: 30
- Timeout: 5
- Healthy threshold: 10
- Unhealthy threshold: 2
The Backend configuration section shows a green check mark.
Click Frontend configuration, and fill in the text boxes as follows:
- Protocol: TCP
- IP: Create a new static IP address named pcf-gorouter-wss.
- Port: 443
Click Review and finalize to verify your configuration.
Click Create.
Click Create load balancer.
Under TCP Load Balancing, click Start configuration.
Under Internet facing or internal only, click From Internet to my VMs.
Under Connection termination, click No (TCP).
Click Continue.
In the New TCP load balancer window, enter pcf-ssh-proxy in the Name text box.
Click Backend configuration, and enter the following values:
- Backup pool: None
- Failover ratio: 10%
- Health check: No health check
Click Frontend configuration, and add the following:
- Protocol: TCP
- IP: Create a new static IP address named pcf-ssh-proxy.
- Port: 2222
(Optional) Review and finalize your load balancer.
Click Create.
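Under the hood, a console TCP load balancer of this kind corresponds roughly to a target pool plus a forwarding rule. A sketch for the SSH proxy load balancer, assuming region us-west1:

```
# Reserve a regional address, create the target pool, and forward TCP 2222 to it
gcloud compute addresses create pcf-ssh-proxy --region us-west1
gcloud compute target-pools create pcf-ssh-proxy --region us-west1
gcloud compute forwarding-rules create pcf-ssh-proxy \
  --region us-west1 \
  --address pcf-ssh-proxy \
  --target-pool pcf-ssh-proxy \
  --ports 2222
```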
This step is required only if you enable TCP routing in your deployment.
To create a load balancer for TCP routing in GCP:
Click Create load balancer.
Under TCP Load Balancing, click Start configuration.
Under Connection termination, click No (TCP).
Click Continue.
On the New TCP load balancer pane, enter a unique name for the load balancer in the Name text box; for example, pcf-cf-tcp-lb.
Click Backend configuration. From the Health check drop-down menu, create a health check with the following details:
- Name: pcf-tcp-lb
- Port: 80
- Request path: /health
- Check interval: 30
- Timeout: 5
- Healthy threshold: 10
- Unhealthy threshold: 2
Click Frontend configuration, and add the frontend IP and port entry as follows:
- Protocol: TCP
- IP: Create a new static IP address named pcf-cf-tcp-lb.
- Port: 1024-65535
Click Review and finalize to verify your configuration.
Click Create.
In this step, you redirect queries for your domain to the IP addresses of your load balancers.
Locate the static IP addresses of the load balancers you created in Preparing to deploy Tanzu Operations Manager on GCP:
pcf-global-pcf
pcf-wss-logs
pcf-ssh-proxy
pcf-cf-tcp-lb
You can locate the static IP address of each load balancer by clicking its name under Network services, then Load balancing in the GCP console.
Log in to the DNS registrar that hosts your domain. Examples of DNS registrars include Network Solutions, GoDaddy, and Register.com.
Create A records with your DNS registrar that map domain names to the public static IP addresses of the load balancers located previously:
| Create and map this record... | To the IP of this load balancer | Required |
|---|---|---|
| \*.sys.MY-DOMAIN Example: \*.sys.example.com | pcf-global-pcf | Yes |
| \*.apps.MY-DOMAIN Example: \*.apps.example.com | pcf-global-pcf | Yes |
| doppler.sys.MY-DOMAIN Example: doppler.sys.example.com | pcf-wss-logs | Yes |
| loggregator.sys.MY-DOMAIN Example: loggregator.sys.example.com | pcf-wss-logs | Yes |
| ssh.sys.MY-DOMAIN Example: ssh.sys.example.com | pcf-ssh-proxy | Yes, to allow SSH access to apps |
| tcp.MY-DOMAIN Example: tcp.example.com | pcf-cf-tcp-lb | No, only set up if you have enabled the TCP routing feature |
Save your changes within the web interface of your DNS registrar.
Run the following dig command to confirm that you created your A record successfully:

dig SUBDOMAIN.EXAMPLE-URL.com

Where SUBDOMAIN.EXAMPLE-URL.com is the subdomain for your load balancer.
You should see the A record that you just created:
;; ANSWER SECTION:
xyz.EXAMPLE.COM.    1767    IN    A    203.0.113.1
(Optional) To prepare for deploying either a TAS for VMs or TKGI tile on GCP, you can download the required runtime tile in advance.
After initiating the tile download, proceed to the next step, Deploying Tanzu Operations Manager on GCP.