This topic describes the preparation steps required to install VMware Tanzu Operations Manager (Ops Manager) on Google Cloud Platform (GCP).

Prerequisites

Before you prepare your Ops Manager installation, complete the prerequisites for the runtime you intend to deploy.

Configuration and Components

This section outlines high-level infrastructure options for Ops Manager on GCP. An Ops Manager deployment includes Ops Manager and your chosen runtime. For example, both Ops Manager with TAS for VMs and Ops Manager with TKGI are Ops Manager deployments. For more information, review the deployment options and recommendations in Reference Architecture for Ops Manager on GCP.

You can deploy Ops Manager using one of two main configurations on a GCP virtual private cloud (VPC):

  • A single-project configuration that gives Ops Manager full access to VPC resources
  • A shared VPC configuration in which Ops Manager shares VPC resources

See Shared vs Single-Project VPCs in Reference Architecture for Ops Manager on GCP for a full discussion and recommendations.

When deploying Ops Manager on GCP, VMware recommends using the following GCP components:

Step 1: Set up IAM Service Accounts

Ops Manager uses IAM service accounts to access GCP resources.

For a single-project installation: Complete the following steps to create a service account for Ops Manager.

For a shared-VPC installation: Complete the following steps twice to create one service account for the host project and one for the service project.

  1. From the GCP console, select IAM & Admin, then Service accounts.

  2. Click Create Service Account:

    • Service account name: Enter a name. For example, bosh.
    • Role: Using the drop-down menu, select the following roles:

      • Service Accounts > Service Account User
      • Service Accounts > Service Account Token Creator
      • Compute Engine > Compute Instance Admin (v1)
      • Compute Engine > Compute Network Admin
      • Compute Engine > Compute Storage Admin
      • Storage > Storage Admin

      Note: You must scroll down in the pop-up windows to select all required roles.

      The Service Account User role is required only if you plan to use the Ops Manager VM Service Account to deploy Ops Manager. For more information about the Ops Manager VM Service Account, see Step 2: Google Cloud Platform Config in Configuring BOSH Director on GCP.

    • Service account ID: This field automatically generates a unique ID based on the service account name.

    • Furnish a new private key: Select this checkbox and JSON as the Key type.

      alt-text=""

  3. Click Create. Your browser automatically downloads a JSON file with a private key for this account. Save this file in a secure location.

Note: You can use this service account to configure file storage for TAS for VMs. For more information, see GCP in Configuring File Storage for TAS for VMs.
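
If you prefer to script this step, the following gcloud sketch creates an equivalent service account, grants the roles listed above, and downloads a JSON key. It is an example only: the project ID MY-PROJECT-ID, the account name bosh, and the key file name are placeholders you should replace with your own values.

  # Create the service account (the account ID "bosh" is an example).
  gcloud iam service-accounts create bosh --display-name "bosh"

  # Grant each of the required roles to the new account in the target project.
  for role in roles/iam.serviceAccountUser \
              roles/iam.serviceAccountTokenCreator \
              roles/compute.instanceAdmin.v1 \
              roles/compute.networkAdmin \
              roles/compute.storageAdmin \
              roles/storage.admin; do
    gcloud projects add-iam-policy-binding MY-PROJECT-ID \
      --member "serviceAccount:bosh@MY-PROJECT-ID.iam.gserviceaccount.com" \
      --role "$role"
  done

  # Create and download a JSON private key for the account.
  gcloud iam service-accounts keys create bosh-key.json \
    --iam-account bosh@MY-PROJECT-ID.iam.gserviceaccount.com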

Step 2: Enable Google Cloud APIs

Ops Manager manages GCP resources using the Google Compute Engine and Cloud Resource Manager APIs. To enable these APIs:

  1. Log in to the Google Developers Console at https://console.developers.google.com.

  2. In the console, navigate to the GCP projects where you want to install Ops Manager.

    • For a single-project installation, complete the following steps for the Ops Manager project.
    • For a shared-VPC installation, complete the following steps for both the host and service projects to enable access to the Google Cloud APIs.
  3. Select API Manager > Library.

  4. Under Google Cloud APIs, select Compute Engine API.

  5. On the Google Compute Engine API page, click Enable.

  6. In the search field, enter Google Cloud Resource Manager API.

  7. On the Google Cloud Resource Manager API page, click Enable.

  8. To verify that the APIs have been enabled, perform the following steps:

    1. Log in to GCP using the IAM service account you created in Set up IAM Service Accounts:

      $ gcloud auth activate-service-account --key-file JSON_KEY_FILENAME
      
    2. List your projects:

      $ gcloud projects list
      PROJECT_ID              NAME                      PROJECT_NUMBER
      my-host-project-id      my-host-project-name      ##############
      my-service-project-id   my-service-project-name   ##############
      
      This command lists the projects where you enabled Google Cloud APIs.
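
As an alternative to the console steps above, you can enable both APIs with a single gcloud command. This sketch assumes your project ID is MY-PROJECT-ID; for a shared-VPC installation, run it once for the host project and once for the service project.

  # Enable the Compute Engine and Cloud Resource Manager APIs for a project.
  gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com \
    --project MY-PROJECT-ID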

Step 3: Create a GCP Network with Subnets

  1. Log in to the GCP console.

  2. Navigate to the GCP project where you want to install Ops Manager. For a shared-VPC installation, navigate to the host project.

  3. Select VPC network, then CREATE VPC NETWORK.

  4. In the Name field, enter a name of your choice for the VPC network. This name helps you identify resources for this deployment in the GCP console. Network names must be lowercase. For example, pcf-virt-net.

    1. Under Subnets, complete the form as follows to create an infrastructure subnet for Ops Manager and NAT instances:

      Name: pcf-infrastructure-subnet-GCP-REGION
        Example: pcf-infrastructure-subnet-us-west1
      Region: A region that supports three availability zones. For help selecting the correct region for your deployment, see the Google documentation about regions and zones.
      IP address range: A CIDR ending in /26
        Example: 192.168.101.0/26

      Note: For deployments that do not use external IP addresses, enable Private Google access to allow your runtime to make API calls to Google services.

    2. Click Add subnet to add a second subnet for the BOSH Director and components specific to your runtime. Complete the form as follows:

      Name: pcf-RUNTIME-subnet-GCP-REGION
        Example: pcf-pas-subnet-us-west1
      Region: The same region you selected for the infrastructure subnet
      IP address range: A CIDR ending in /22
        Example: 192.168.16.0/22
    3. Click Add subnet to add a third Subnet with the following details:

      Name: pcf-services-subnet-GCP-REGION
        Example: pcf-services-subnet-us-west1
      Region: The same region you selected for the previous subnets
      IP address range: A CIDR ending in /22
        Example: 192.168.20.0/22

  5. Under Dynamic routing mode, leave Regional selected.

  6. Click Create.
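
If you script your infrastructure, the following gcloud sketch creates an equivalent custom-mode VPC network and the three subnets. It uses the example names and CIDRs from this step and assumes the region us-west1; adjust the names, region, and ranges to match your deployment.

  # Create a custom-mode VPC network.
  gcloud compute networks create pcf-virt-net --subnet-mode custom

  # Infrastructure subnet for Ops Manager and NAT instances.
  gcloud compute networks subnets create pcf-infrastructure-subnet-us-west1 \
    --network pcf-virt-net --region us-west1 --range 192.168.101.0/26

  # Runtime subnet for the BOSH Director and runtime-specific components.
  gcloud compute networks subnets create pcf-pas-subnet-us-west1 \
    --network pcf-virt-net --region us-west1 --range 192.168.16.0/22

  # Services subnet.
  gcloud compute networks subnets create pcf-services-subnet-us-west1 \
    --network pcf-virt-net --region us-west1 --range 192.168.20.0/22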

Step 4: Create NAT Instances

Use NAT instances when you want to expose only a minimal number of public IP addresses.

Creating NAT instances permits internet access from cluster VMs. You might, for example, need this internet access for pulling Docker images or enabling internet access for your workloads.

For more information, see Reference Architecture for Ops Manager on GCP and the GCP documentation.

  1. In the GCP console, with your single project or shared-VPC host project selected, navigate to Compute Engine > VM instances.

    alt-text=""

  2. Click CREATE INSTANCE.

  3. Complete the following fields:

    • Name: Enter pcf-nat-gateway-pri.
      This is the first, or primary, of three NAT instances you need. If you use a single AZ, you need only one NAT instance.
    • Zone: Select the first zone from your region.
      Example: For region us-west1, select zone us-west1-a.
    • Machine type: Select n1-standard-4.
    • Boot disk: Click Change and select Ubuntu 14.04 LTS.

  4. Expand the additional configuration fields by clicking Management, disks, networking, SSH keys.

    1. In the Startup script field under Automation, enter the following text:

      #! /bin/bash
      sudo sysctl -w net.ipv4.ip_forward=1
      sudo sh -c 'echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf'
      sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      
  5. Click Networking to open additional network configuration fields:

    1. In the Network tags field, add the following: nat-traverse and pcf-nat-instance.
    2. Click the Networking tab and the pencil icon to edit the Network interface.
    3. For Network, select pcf-virt-net. You created this network in Step 3: Create a GCP Network with Subnets.
    4. For Subnetwork, select pcf-infrastructure-subnet-GCP-REGION.
    5. For Primary internal IP, select Ephemeral (Custom). Enter an IP address, for example, 192.168.101.2, in the Custom ephemeral IP address field. The IP address must meet the following requirements:

      • The IP address must exist in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
      • The IP address must exist in a reserved IP range set later in BOSH Director. The reserved range is typically the first .1 through .9 addresses in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
      • The IP address cannot be the same as the Gateway IP address set later in Ops Manager. The Gateway IP address is typically the first .1 address in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
    6. For External IP, select Ephemeral.

      Note: If you select a static external IP address for the NAT instance, then you can use the static IP to further secure access to your CloudSQL instances.

    7. Set IP forwarding to On.

    8. Click Done.
  6. Click Create to finish creating the NAT instance.

  7. Repeat steps 2–6 to create two additional NAT instances with the names and zones specified in the table below. The rest of the configuration remains the same.

    Instance 2
      Name: pcf-nat-gateway-sec
      Zone: Select the second zone from your region.
        Example: For region us-west1, select zone us-west1-b.
      Internal IP: Select Custom and enter an IP address in the Internal IP address field. Example: 192.168.101.3.

    Instance 3
      Name: pcf-nat-gateway-ter
      Zone: Select the third zone from your region.
        Example: For region us-west1, select zone us-west1-c.
      Internal IP: Select Custom and enter an IP address in the Internal IP address field. Example: 192.168.101.4.

    As described above, each internal IP address must be in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet, must exist in a reserved IP range set later in BOSH Director, and cannot be the same as the Gateway IP address set later in Ops Manager.
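
If you prefer the CLI, a command like the following sketch creates one NAT instance with equivalent settings. It assumes zone us-west1-a, the subnet and internal IP from the example above, and that the startup script shown earlier is saved locally as nat-startup.sh (a placeholder file name). Repeat with the zone, name, and internal IP of each additional NAT instance.

  gcloud compute instances create pcf-nat-gateway-pri \
    --zone us-west1-a \
    --machine-type n1-standard-4 \
    --image-family ubuntu-1404-lts --image-project ubuntu-os-cloud \
    --subnet pcf-infrastructure-subnet-us-west1 \
    --private-network-ip 192.168.101.2 \
    --can-ip-forward \
    --tags nat-traverse,pcf-nat-instance \
    --metadata-from-file startup-script=nat-startup.sh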

Create Routes for NAT Instances

  1. Navigate to VPC Networks > Routes.

  2. Click CREATE ROUTE.

  3. Complete the form as follows:

    • Name: pcf-nat-pri
    • Network: pcf-virt-net
    • Destination IP range: 0.0.0.0/0
    • Priority: 800
    • Instance tags: pcf
    • Next hop: Specify an instance
    • Next hop instance: pcf-nat-gateway-pri
  4. Click Create to finish creating the route.

  5. Repeat steps 2–4 to create two additional routes with the names and next hop instances specified in the table below. The rest of the configuration remains the same.

    Route 2
      Name: pcf-nat-sec
      Next hop instance: pcf-nat-gateway-sec

    Route 3
      Name: pcf-nat-ter
      Next hop instance: pcf-nat-gateway-ter
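
These routes can also be created with gcloud. The following sketch creates the primary route using the values above and assumes the primary NAT instance is in zone us-west1-a; repeat with the other route names and next-hop instances.

  gcloud compute routes create pcf-nat-pri \
    --network pcf-virt-net \
    --destination-range 0.0.0.0/0 \
    --priority 800 \
    --tags pcf \
    --next-hop-instance pcf-nat-gateway-pri \
    --next-hop-instance-zone us-west1-a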

Step 5: Create Firewall Rules for the Network

GCP lets you assign tags to VM instances and create firewall rules that apply to VMs based on their tags. For more information about tags, see Labeling Resources in the GCP documentation. This step assigns tags and firewall rules to Ops Manager components and VMs that handle incoming traffic.

  1. With your single project or shared-VPC host project selected, navigate to the Networking > VPC network pane and select Firewall rules.

  2. Apply the firewall rules in the following table:

    Rule 1: This rule allows SSH from public networks.

    Name: pcf-allow-ssh
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:22
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-ssh
    Rule 2: This rule allows HTTP from public networks.

    Name: pcf-allow-http
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:80
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-http, router
    Rule 3: This rule allows HTTPS from public networks.

    Name: pcf-allow-https
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:443
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-https, router
    Rule 4: This rule allows GoRouter health checks.

    Name: pcf-allow-http-8080
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:8080
    Source filter: IP ranges
    Source IP Ranges: 0.0.0.0/0
    Target tags: router
    Rule 5: This rule allows communication between BOSH-deployed jobs.

    Name: pcf-allow-pas-all
    Network: pcf-virt-net
    Allowed protocols and ports: tcp;udp;icmp
    Source filter: Source tags
    Target tags: pcf, pcf-opsman, nat-traverse
    Source tags: pcf, pcf-opsman, nat-traverse
    Rule 6 (Optional): This rule allows access to the TCP router.

    Name: pcf-allow-cf-tcp
    Network: pcf-virt-net
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Allowed protocols and ports: tcp:1024-65535
    Target tags: pcf-cf-tcp
    Rule 7 (Optional): This rule allows access to the SSH proxy.

    Name: pcf-allow-ssh-proxy
    Network: pcf-virt-net
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Allowed protocols and ports: tcp:2222
    Target tags: pcf-ssh-proxy, diego-brain

    Note: If you want your firewall rules to only permit traffic within your private network, modify the Source IP Ranges from the table accordingly.

  3. If you are only using your GCP project to deploy Ops Manager, then you can delete the following default firewall rules:

    • default-allow-http
    • default-allow-https
    • default-allow-icmp
    • default-allow-internal
    • default-allow-rdp
    • default-allow-ssh
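
If you manage firewall rules from the CLI, each rule in the table above maps to a single gcloud command. The following sketch creates Rule 1 and Rule 5 as examples, using the values from the table; the remaining rules follow the same pattern.

  # Rule 1: allow SSH from public networks.
  gcloud compute firewall-rules create pcf-allow-ssh \
    --network pcf-virt-net \
    --allow tcp:22 \
    --source-ranges 0.0.0.0/0 \
    --target-tags allow-ssh

  # Rule 5: allow communication between BOSH-deployed jobs.
  gcloud compute firewall-rules create pcf-allow-pas-all \
    --network pcf-virt-net \
    --allow tcp,udp,icmp \
    --source-tags pcf,pcf-opsman,nat-traverse \
    --target-tags pcf,pcf-opsman,nat-traverse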

If you are deploying TKGI only, continue to Next Steps.

If you are deploying TAS for VMs or other runtimes, proceed to the following step.

Step 6: Create Database Instance and Databases

Create Database Instance

  1. For a shared-VPC installation, select the service project in the GCP console. This step and the following steps allocate resources to the service project, not the host project.

  2. From the GCP console, select SQL and click CREATE INSTANCE.

  3. Ensure MySQL is selected and click Next.

  4. Under MySQL, select instance type Second Generation.

  5. Click Configure MySQL under your choice for instance type: Development, Staging, or Production.

  6. Configure the instance as follows:

    • Instance ID: pcf-pas-sql
    • Root password: Set a password for the root user.
    • Region: Select the region you specified when creating networks.
    • Zone: Any.
    • Configure machine type and storage:
      • Click Change and then select db-n1-standard-2.
      • Ensure that Enable automatic storage increases is selected. This allows DB storage to grow automatically when space is required.
    • Enable auto backups and high availability: Make the following selections:
      • Leave Automate backups and Enable binary logging selected.
      • Under High availability, select the Create failover replica checkbox.
    • Authorized networks: Click Add network and create a network named all that allows traffic from 0.0.0.0/0.

      Note: If you assigned static IP addresses to your NAT instances, you can instead limit access to the database instances by specifying the NAT IP addresses.

  7. Click Create.
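
You can create a comparable instance from the CLI. The following is a sketch only: it assumes MySQL 5.7, the example values above, and the region us-west1, and it omits the high availability and authorized-network settings, which you can configure in the console after the instance is created.

  gcloud sql instances create pcf-pas-sql \
    --database-version MYSQL_5_7 \
    --tier db-n1-standard-2 \
    --region us-west1 \
    --storage-auto-increase \
    --backup \
    --enable-bin-log \
    --root-password ROOT-PASSWORD   # Replace ROOT-PASSWORD with a secure value.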

Create Databases

  1. Navigate to the Instances page and select the database instance you just created.

  2. Select the Databases tab.

  3. Click Create database to create the following databases:

    • account
    • app_usage_service
    • autoscale
    • ccdb
    • console
    • diego
    • locket
    • networkpolicyserver
    • nfsvolume
    • notifications
    • routing
    • silk
    • uaa
    • credhub
  4. Select the USERS tab.

  5. Click Create user account to create a unique username and password for each database you created above. For Host name, select Allow any host. You must create a total of fourteen user accounts.

Note: Ensure that the networkpolicyserver database user has the ALL PRIVILEGES permission.
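
Creating fourteen databases and user accounts by hand is tedious; a loop like the following sketch does the same work with gcloud. The instance name pcf-pas-sql comes from the previous step, while the DBNAME-user naming scheme and the password placeholder are assumptions for illustration.

  for db in account app_usage_service autoscale ccdb console diego locket \
            networkpolicyserver nfsvolume notifications routing silk uaa credhub; do
    # Create the database, then a user allowed to connect from any host.
    gcloud sql databases create "$db" --instance pcf-pas-sql
    gcloud sql users create "${db}-user" --instance pcf-pas-sql \
      --host % --password SOME-SECURE-PASSWORD
  done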

Step 7: Create Storage Buckets

  1. With your single project or shared-VPC service project selected in the GCP console, select Storage > Browser.

  2. Using CREATE BUCKET, create buckets with the following names. For Default storage class, select Multi-Regional:

    • PREFIX-pcf-buildpacks
    • PREFIX-pcf-droplets
    • PREFIX-pcf-packages
    • PREFIX-pcf-resources
    • PREFIX-pcf-backup

    Where PREFIX is a prefix of your choice, required to make the bucket name unique.
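
You can also create the buckets with gsutil. This sketch assumes the prefix my-prefix and the multi-regional US location; storage class and location names vary between gsutil versions, so confirm them against your installed gsutil documentation.

  for bucket in buildpacks droplets packages resources backup; do
    gsutil mb -c multi_regional -l us "gs://my-prefix-pcf-${bucket}"
  done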

Step 8: Create HTTP Load Balancer

For load balancing, you can use a global HTTP load balancer or an internal, regional load balancer with a private IP address.

Single-project, standalone installations typically use a global HTTP load balancer. To set this up, see Create HTTP Load Balancer below.

Shared-VPC installations typically use an internal TCP/UDP load balancer to minimize the number of public IP addresses. To set this up, see Create Internal Load Balancer below.

Create Internal Load Balancer

To create an internal load balancer for Ops Manager on GCP, do the following.

  1. Create an internal-facing TCP/UDP load balancer for each region of your Ops Manager deployment.

    Note: GCP Internal Load Balancer (iLB) is a regional product. Within the same VPC/network, client VMs in a different region from the iLB cannot access the iLB. For more information, see the GCP documentation.

  2. Assign private IP addresses to the load balancers.

  3. After you have deployed Ops Manager, follow instructions in Create or Update a VM Extension to add a custom VM extension that applies internal load balancing to all VMs deployed by BOSH.

    • For example, the following manifest code adds a VM extension backend-pool to Ops Manager VMs:

      vm_extensions:
      - name: backend-pool
        cloud_properties:
          ephemeral_external_ip: true
          backend_service:
            name: name-of-backend-service
            scheme: INTERNAL
      

Create HTTP Load Balancer

To create a global HTTP load balancer for Ops Manager on GCP:

  1. Create Instance Group
  2. Create Health Check
  3. Configure Back End
  4. Configure Front End

Create Instance Group

  1. Navigate to Compute Engine > Instance groups.

  2. Click CREATE INSTANCE GROUP.

  3. Complete the form as follows:

    • For Name, enter pcf-http-lb.
    • For Location, select Single-zone.
    • For Zone, select the first zone from your region.
      Example: For region us-west1, select zone us-west1-a.
    • Under Group type, select Unmanaged instance group.
    • For Network, select pcf-virt-net.
    • For Subnetwork, select the pcf-pas-subnet-GCP-REGION subnet that you created previously.
    • Click Create.
  4. Create a second instance group with the following details:

    • Name: pcf-http-lb
    • Location: Single-zone
    • Zone: Select the second zone from your region.
      Example: For region us-west1, select zone us-west1-b.
    • Group type: Select Unmanaged instance group.
    • Network: Select pcf-virt-net.
    • Subnetwork: Select the pcf-pas-subnet-GCP-REGION subnet that you created previously.
  5. Create a third instance group with the following details:

    • Name: pcf-http-lb
    • Location: Single-zone
    • Zone: Select the third zone from your region.
      Example: For region us-west1, select zone us-west1-c.
    • Group type: Select Unmanaged instance group.
    • Network: Select pcf-virt-net.
    • Subnetwork: Select the pcf-pas-subnet-GCP-REGION subnet that you created previously.

Create Health Check

  1. Navigate to Compute Engine > Health checks.

  2. Click CREATE HEALTH CHECK.

  3. Complete the form as follows:

    • Name: pcf-cf-public
    • Port: 8080
    • Request path: /health
    • Check interval: 30
    • Timeout: 5
    • Healthy threshold: 10
    • Unhealthy threshold: 2
  4. Click Create.
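
The equivalent health check can be created from the CLI. The following sketch uses the values above; the interval and timeout flags take durations, so the values are expressed in seconds.

  gcloud compute health-checks create http pcf-cf-public \
    --port 8080 \
    --request-path /health \
    --check-interval 30s \
    --timeout 5s \
    --healthy-threshold 10 \
    --unhealthy-threshold 2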

Configure Back End

  1. Navigate to Network services > Load balancing.

  2. Click CREATE LOAD BALANCER.

  3. Under HTTP(S) Load Balancing, click Start configuration.

  4. For the Name, enter pcf-global-pcf.

  5. Select Backend configuration

    1. From the dropdown, select Backend services > Create a backend service.
    2. Complete the form as follows:

      • Name: pcf-http-lb-backend.
      • Protocol: HTTP.
      • Named port: http.
      • Timeout: 10 seconds.
      • Under Backends > New backend, select the Instance group that corresponds to the first zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-a). Click Done.
      • Click Add backend, select the Instance group that corresponds to the second zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-b). Click Done.
      • Click Add backend, select the Instance group that corresponds to the third zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-c). Click Done.
      • Health check: Select the pcf-cf-public health check that you created.
      • Cloud CDN: Ensure Cloud CDN is deactivated.
  6. Click Create.

Configure Front End

  1. Click Host and path rules to populate the default fields and display a green check mark.

  2. Select Frontend configuration, and add the following:

    • Name: pcf-cf-lb-http
    • Protocol: HTTP
    • IP: Perform the following steps:
      1. Select Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-global-pcf.
      3. Click Reserve.
    • Port: 80
  3. Click Add Frontend IP and port and add the following:

    Note: Skip this step if you do not have either a self-signed or trusted SSL certificate. When you configure the tile for your chosen runtime, you are given the opportunity to create a new self-signed certificate. Upon creating a certificate, you can complete the Add Frontend IP and port section.


    • Name: pcf-cf-lb-https
    • Protocol: HTTPS
    • IP address: Select the pcf-global-pcf address you created for the previous Frontend IP and Port.
    • Port: 443
    • Select Create a new certificate. The Create a New Certificate dialog is displayed.
    • In the Name field, enter a name for the certificate.

      alt-text=""

    • In the Public key certificate field, copy in the contents of your public certificate, or upload your certificate as a .pem file. If the certificate is runtime-generated, copy and paste the generated contents from the runtime’s Certificate field into this Public key certificate field.

    • In the Certificate chain field, enter or upload your certificate chain in the .pem format. If you are using a self-signed certificate, such as a TAS for VMs or TKGI-generated certificate, do not enter a value in the Certificate Chain field.
    • In the Private key field, copy in the contents or upload the .pem file of the private key for the certificate. If the certificate is runtime-generated, copy and paste the generated contents from the runtime’s Private Key field into this Private key field.
  4. Review the completed frontend configuration.

  5. Click Review and finalize to verify your configuration.

  6. Click Create.
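
The frontend configurations in this step and in Steps 9 through 11 each reserve a static IP address. If you prefer to reserve these addresses ahead of time with gcloud, the following sketch shows the pattern; the names match the examples used in this topic, and the region us-west1 is an assumption.

  # Global static IP for the HTTP(S) load balancer frontend.
  gcloud compute addresses create pcf-global-pcf --global

  # Regional static IPs for the TCP load balancer frontends.
  gcloud compute addresses create pcf-gorouter-wss --region us-west1
  gcloud compute addresses create pcf-ssh-proxy --region us-west1
  gcloud compute addresses create pcf-cf-tcp-lb --region us-west1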

Step 9: Create TCP WebSockets Load Balancer

The load balancer for tailing logs with WebSockets for Ops Manager on GCP operates on TCP port 443.

  1. From the GCP console, select Network services > Load balancing > Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

  3. In the Create a load balancer configuration screen, make the following selections:

    • Under Internet facing or internal only, select From Internet to my VMs.
    • Under Multiple regions or single region, select Single region only.
    • Under Connection termination, select No (TCP).

      alt-text=""

  4. Click Continue.

  5. In the New TCP load balancer window, enter pcf-wss-logs in the Name field.

  6. Click Backend configuration to configure the Backend service:

    alt-text=""

    • Region: Select the region you used to create the network in Create a GCP Network with Subnets.
    • From the Health check dropdown, create a health check with the following details:
      • Name: pcf-gorouter
      • Port: 8080
      • Request path: /health
      • Check interval: 30
      • Timeout: 5
      • Healthy threshold: 10
      • Unhealthy threshold: 2
      The Backend configuration section now shows a green check mark.
  7. Click Frontend configuration to open its configuration window and complete the fields:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Select Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-gorouter-wss.
      3. Click Reserve.
    • Port: 443
  8. Click Review and finalize to verify your configuration.

  9. Click Create.

Step 10: Create SSH Proxy Load Balancer

  1. From the GCP console, select Network services > Load balancing > Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

  3. Under Internet facing or internal only, select From Internet to my VMs.

  4. Under Connection termination, select No (TCP).

    alt-text=""

  5. Click Continue.

  6. In the New TCP load balancer window, enter pcf-ssh-proxy in the Name field.

  7. Select Backend configuration, and enter the following field values:

    • Region: Select the region you used to create the network in Create a GCP Network with Subnets.
    • Backup pool: None
    • Failover ratio: 10%
    • Health check: No health check

    alt-text=""

  8. Select Frontend configuration, and add the following:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Select Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-ssh-proxy.
      3. Click Reserve.
    • Port: 2222
  9. (Optional) Review and finalize your load balancer.

  10. Click Create.

Step 11: Create Load Balancer for TCP Router

Note: This step is optional and only required if you enable TCP routing in your deployment.

To create a load balancer for TCP routing in GCP:

  1. From the GCP console, select Network services > Load balancing > Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

  3. Under Connection termination, select No (TCP). Click Continue.

  4. On the New TCP load balancer screen, enter a unique name for the load balancer in the Name field. For example, pcf-cf-tcp-lb.

  5. Select Backend configuration, and enter the following field values:

    • Region: Select the region you used to create the network in Create a GCP Network with Subnets.
    • From the Health check dropdown, create a health check with the following details:

      • Name: pcf-tcp-lb
      • Port: 80
      • Request path: /health
      • Check interval: 30
      • Timeout: 5
      • Healthy threshold: 10
      • Unhealthy threshold: 2
      • Click Save and continue.

        alt-text=""

  6. Select Frontend configuration, and add the front end IP and port entry as follows:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Select Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-cf-tcp-lb.
      3. Click Reserve.
    • Port: 1024-65535

      alt-text=""

  7. Click Review and finalize to verify your configuration.

  8. Click Create.

Step 12: Add DNS Records for Your Load Balancers

In this step, you redirect queries for your domain to the IP addresses of your load balancers.

  1. Locate the static IP addresses of the load balancers you created in Preparing to Deploy Ops Manager on GCP:

    • An HTTP(S) load balancer named pcf-global-pcf
    • A TCP load balancer for WebSockets named pcf-wss-logs
    • A TCP load balancer named pcf-ssh-proxy
    • A TCP load balancer named pcf-cf-tcp-lb

    Note: You can locate the static IP address of each load balancer by clicking its name under Network services > Load balancing in the GCP console.

  2. Log in to the DNS registrar that hosts your domain. Examples of DNS registrars include Network Solutions, GoDaddy, and Register.com.

  3. Create A records with your DNS registrar that map domain names to the public static IP addresses of the load balancers located above:

    • *.sys.MY-DOMAIN (Example: *.sys.example.com)
      Map to the IP of: pcf-global-pcf
      Required: Yes
    • *.apps.MY-DOMAIN (Example: *.apps.example.com)
      Map to the IP of: pcf-global-pcf
      Required: Yes
    • doppler.sys.MY-DOMAIN (Example: doppler.sys.example.com)
      Map to the IP of: pcf-wss-logs
      Required: Yes
    • loggregator.sys.MY-DOMAIN (Example: loggregator.sys.example.com)
      Map to the IP of: pcf-wss-logs
      Required: Yes
    • ssh.sys.MY-DOMAIN (Example: ssh.sys.example.com)
      Map to the IP of: pcf-ssh-proxy
      Required: Yes, to allow SSH access to apps
    • tcp.MY-DOMAIN (Example: tcp.example.com)
      Map to the IP of: pcf-cf-tcp-lb
      Required: No, only needed if you have enabled the TCP routing feature
  4. Save changes within the web interface of your DNS registrar.

  5. In a terminal window, run the following dig command to confirm that you created your A record successfully:

    dig SUBDOMAIN.EXAMPLE-URL.com
    

    Where SUBDOMAIN.EXAMPLE-URL.com is one of the domain names that you mapped to a load balancer in the previous step.

    You should see the A record that you just created:

    ;; ANSWER SECTION:
    xyz.EXAMPLE.COM.      1767    IN  A 203.0.113.1
    

Next Steps

(Optional) To prepare for deploying either a TAS for VMs or TKGI tile on GCP, you can download the required runtime tile in advance:

  • To download TAS for VMs, log in to VMware Tanzu Network, select your desired release version, and download VMware Tanzu Application Service for VMs.
  • To download TKGI, log in to VMware Tanzu Network, select your desired release version, and download VMware Tanzu Kubernetes Grid Integrated Edition.

After initiating the tile download, proceed to the next step, Deploying Ops Manager on GCP.
