You must follow these steps before you install VMware Tanzu Operations Manager on Google Cloud Platform (GCP).

Prerequisites

Before you prepare your Tanzu Operations Manager installation, complete the prerequisite tasks for the runtime that you intend to deploy.

Configuration and components

This section outlines high-level infrastructure options for Tanzu Operations Manager on GCP. A Tanzu Operations Manager deployment includes Tanzu Operations Manager and your chosen runtime. For example, both Tanzu Operations Manager with TAS for VMs and Tanzu Operations Manager with TKGI are Tanzu Operations Manager deployments. For more information, review the deployment options and recommendations in Reference architecture for Tanzu Operations Manager on GCP.

You can deploy Tanzu Operations Manager using one of two main configurations on a GCP virtual private cloud (VPC):

  • A single-project configuration that gives Tanzu Operations Manager full access to VPC resources
  • A shared VPC configuration in which Tanzu Operations Manager shares VPC resources

See Shared vs Single-Project VPCs in Reference Architecture for Tanzu Operations Manager on GCP for a full discussion and recommendations.

When deploying Tanzu Operations Manager on GCP, VMware recommends using the GCP components described in Reference architecture for Tanzu Operations Manager on GCP.

Step 1: Set up IAM service accounts

Tanzu Operations Manager uses IAM service accounts to access GCP resources.

For a single-project installation: Complete the following steps to create a service account for Tanzu Operations Manager.

For a shared-VPC installation: Complete the following steps twice to create a host account and a service account for Tanzu Operations Manager.

  1. From the GCP console, click IAM & Admin, then Service accounts.

  2. Click Create Service Account:

  3. Service account name: Enter a name. For example, bosh.

  4. Role: From the drop-down menu, select the following roles:

    • Service Accounts, then Service Account User
    • Service Accounts, then Service Account Token Creator
    • Compute Engine, then Compute Instance Admin (v1)
    • Compute Engine, then Compute Network Admin
    • Compute Engine, then Compute Storage Admin
    • Storage, then Storage Admin

    You must scroll down in the pop-up window to select all required roles.
    The Service Account User role is required only if you plan to use the Tanzu Operations Manager VM service account to deploy Tanzu Operations Manager. For more information about the Tanzu Operations Manager VM service account, see Step 2: Google Cloud Platform Config in Configuring BOSH Director on GCP.

    • Service account ID: The text box automatically generates a unique ID based on the username.
    • Furnish a new private key: Select this check box and JSON as the Key type.

      Create service account form

  5. Click Create. Your browser automatically downloads a JSON file with a private key for this account. Save this file in a secure location. You can use this service account to configure file storage for TAS for VMs. For more information, see GCP in Configuring File Storage for TAS for VMs.
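If you prefer to script service account setup, the console role names above correspond to IAM role IDs, which `gcloud projects add-iam-policy-binding` can apply. The following Python sketch is illustrative only: the role-ID mapping and the example project and account names are assumptions, not taken from this guide, so verify them against your project before running any generated commands.

```python
# Assumed mapping of the console role names above to IAM role IDs.
# Verify these against your GCP project before use.
CONSOLE_ROLE_TO_ID = {
    "Service Account User": "roles/iam.serviceAccountUser",
    "Service Account Token Creator": "roles/iam.serviceAccountTokenCreator",
    "Compute Instance Admin (v1)": "roles/compute.instanceAdmin.v1",
    "Compute Network Admin": "roles/compute.networkAdmin",
    "Compute Storage Admin": "roles/compute.storageAdmin",
    "Storage Admin": "roles/storage.admin",
}

def binding_commands(project_id: str, account: str):
    """Emit one gcloud binding command per required role.

    project_id and account ("bosh" below) are placeholder examples."""
    member = f"serviceAccount:{account}@{project_id}.iam.gserviceaccount.com"
    return [
        f"gcloud projects add-iam-policy-binding {project_id} "
        f"--member {member} --role {role}"
        for role in CONSOLE_ROLE_TO_ID.values()
    ]

for cmd in binding_commands("my-project-id", "bosh"):
    print(cmd)
```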

Step 2: Enable Google Cloud APIs

Tanzu Operations Manager manages GCP resources using the Google Compute Engine and Cloud Resource Manager APIs. To enable these APIs:

  1. Log in to the Google Developers Console at https://console.developers.google.com.

  2. In the console, go to the GCP projects where you want to install Tanzu Operations Manager.

  3. For a single-project installation, complete the following steps for the Tanzu Operations Manager project.

  4. For a shared-VPC installation, complete the following steps for both host and service projects, to enable them to access the Google Cloud API.

  5. Click API Manager, then Library.

  6. Under Google Cloud APIs, click Compute Engine API.

  7. On the Google Compute Engine API pane, click Enable.

  8. In the search text box, enter Google Cloud Resource Manager API.

  9. On the Google Cloud Resource Manager API pane, click Enable.

  10. To verify that the APIs have been enabled, complete the following steps:

    1. Log in to GCP using the IAM service account you created in Set up IAM Service Accounts:

      $ gcloud auth activate-service-account --key-file JSON_KEY_FILENAME
      
    2. List your projects:

      $ gcloud projects list
      PROJECT_ID              NAME                      PROJECT_NUMBER
      my-host-project-id      my-host-project-name      ##############
      my-service-project-id   my-service-project-name   ##############
      
      This command lists the projects where you enabled Google Cloud APIs.

Step 3: Create a GCP network with subnets

  1. Log in to the GCP console.

  2. Go to the GCP project where you want to install Tanzu Operations Manager. For a shared VPC installation, go to the host project.

  3. Click VPC network, then CREATE VPC NETWORK.

    On the GCP console, the VPC Network page has two sections: VPC Networks and External IP Addresses.

  4. In the Name text box, enter a name of your choice for the VPC network. This name helps you identify resources for this deployment in the GCP console. Network names must be lowercase. For example, pcf-virt-net.

    1. Under Subnets, complete the form as follows to create an infrastructure subnet for Tanzu Operations Manager and NAT instances:

      • Name: pcf-infrastructure-subnet-GCP-REGION
        Example: pcf-infrastructure-subnet-us-west1
      • Region: A region that supports three availability zones. For help selecting the correct region for your deployment, see the Google documentation about regions and zones.
      • IP address range: A CIDR ending in /26
        Example: 192.168.101.0/26

      See the following image for an example:

      The New subnet dialog box includes these sections: Name, Region, IP address range, Private Google access, and Flow logs.

      For deployments that do not use external IP addresses, enable Private Google access to allow your runtime to make API calls to Google services.

    2. Click Add subnet to add a second subnet for the BOSH Director and components specific to your runtime. Complete the form as follows:

      • Name: pcf-RUNTIME-subnet-GCP-REGION
        Example: pcf-pas-subnet-us-west1
      • Region: The same region you selected for the infrastructure subnet
      • IP address range: A CIDR ending in /22
        Example: 192.168.16.0/22
    3. Click Add subnet to add a third subnet with the following details:

      • Name: pcf-services-subnet-GCP-REGION
        Example: pcf-services-subnet-us-west1
      • Region: The same region you selected for the previous subnets
      • IP address range: A CIDR ending in /22
        Example: 192.168.20.0/22

      See the following image for an example:

      The VPC networks wizard shows three example subnets.

  5. Under Dynamic routing mode, leave Regional selected.

  6. Click Create.
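Before clicking Create, it can help to sanity-check the subnet plan. This illustrative Python sketch, using the example CIDRs above, confirms the subnet sizes and that no two ranges overlap:

```python
import ipaddress

# The three example subnets from the steps above (illustrative values).
subnets = {
    "pcf-infrastructure-subnet-us-west1": ipaddress.ip_network("192.168.101.0/26"),
    "pcf-pas-subnet-us-west1": ipaddress.ip_network("192.168.16.0/22"),
    "pcf-services-subnet-us-west1": ipaddress.ip_network("192.168.20.0/22"),
}

# A /26 provides 64 addresses; a /22 provides 1024.
for name, net in subnets.items():
    print(f"{name}: {net} ({net.num_addresses} addresses)")

# Verify that no two subnets overlap before creating the VPC.
nets = list(subnets.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
```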

Step 4: Create NAT instances

Use NAT instances when you want to expose only a minimal number of public IP addresses.

Creating NAT instances permits internet access from cluster VMs. You might, for example, need this internet access for pulling Docker images or enabling internet access for your workloads.

For more information, see Reference Architecture for Tanzu Operations Manager on GCP and the GCP documentation.

  1. In the GCP console, with your single project or shared-VPC host project selected, navigate to Compute Engine, then VM instances.

    VM Instances pane

  2. Click CREATE INSTANCE.

    The GCP Console Compute Engine page shows the Create Instance button.

  3. Complete the following text boxes:

    • Name: Enter pcf-nat-gateway-pri.
      This is the first, or primary, of three NAT instances you need. If you use a single AZ, you need only one NAT instance.
    • Zone: Click the first zone from your region.
      Example: For region us-west1, click us-west1-a zone.
    • Machine type: Click n1-standard-4.
    • Boot disk: Click Change and click Ubuntu 14.04 LTS.

    The Create Instance dialog box showing the sections: Name, Zone, Machine Type, and Boot disk.

  4. Expand the additional configuration text boxes by clicking Management, disks, networking, SSH keys.

    The Management, disks, networking, SSH keys drop-down menu.

    1. In the Startup script text box under Automation, enter the following text:

      #! /bin/bash
      sudo sysctl -w net.ipv4.ip_forward=1
      sudo sh -c 'echo net.ipv4.ip_forward=1 >> /etc/sysctl.conf'
      sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      
  5. Click Networking to open additional network configuration text boxes:

    The page includes the tabs: Management, Disks, Networking, and SSH Keys.

    1. In the Network tags text box, add the following: nat-traverse and pcf-nat-instance.
    2. Click the Networking tab and the pencil icon to edit the Network interface.
    3. For Network, click pcf-virt-net. You created this network in Step 3: Create a GCP network with subnets.
    4. For Subnetwork, click pcf-infrastructure-subnet-GCP-REGION.
    5. For Primary internal IP, click Ephemeral (Custom). Enter an IP address, for example, 192.168.101.2, in the Custom ephemeral IP address text box. The IP address must meet the following requirements:

      • The IP address must exist in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
      • The IP address must exist in a reserved IP range set later in BOSH Director. The reserved range is typically the first .1 through .9 addresses in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
      • The IP address cannot be the same as the Gateway IP address set later in Tanzu Operations Manager. The Gateway IP address is typically the first .1 address in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet.
    6. For External IP, click Ephemeral.

      If you select a static external IP address for the NAT instance, then you can use the static IP to further secure access to your CloudSQL instances.

    7. Set IP forwarding to On.

    8. Click Done.
  6. Click Create to finish creating the NAT instance.

  7. Repeat steps 2 through 6 to create two additional NAT instances with the names and zones specified in the following table. The rest of the configuration remains the same.

    Instance 2
    • Name: pcf-nat-gateway-sec
    • Zone: Select the second zone from your region.
      Example: For region us-west1, select zone us-west1-b.
    • Internal IP: Select Custom and enter an IP address in the Internal IP address text box. Example: 192.168.101.3.

    Instance 3
    • Name: pcf-nat-gateway-ter
    • Zone: Select the third zone from your region.
      Example: For region us-west1, select zone us-west1-c.
    • Internal IP: Select Custom and enter an IP address in the Internal IP address text box. Example: 192.168.101.4.

    As described previously, each address must be in the CIDR range you set for the pcf-infrastructure-subnet-GCP-REGION subnet, must exist in a reserved IP range set later in BOSH Director, and cannot be the same as the Gateway IP address set later in Tanzu Operations Manager.
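The internal IP rules above can be checked mechanically. This illustrative Python sketch validates a candidate NAT address; the CIDR and the .1 through .9 reserved range are the examples used in this guide, so substitute your own values:

```python
import ipaddress

INFRA_CIDR = ipaddress.ip_network("192.168.101.0/26")  # example infrastructure subnet
GATEWAY_IP = INFRA_CIDR.network_address + 1            # the .1 address, used as the gateway
# Typical BOSH reserved range: the first .1 through .9 addresses.
RESERVED = {INFRA_CIDR.network_address + i for i in range(1, 10)}

def valid_nat_ip(ip_str: str) -> bool:
    """Check a candidate NAT internal IP against the three rules above:
    in the subnet CIDR, in the reserved range, and not the gateway."""
    ip = ipaddress.ip_address(ip_str)
    return ip in INFRA_CIDR and ip in RESERVED and ip != GATEWAY_IP

# The three example NAT addresses from this step all pass:
for candidate in ("192.168.101.2", "192.168.101.3", "192.168.101.4"):
    print(candidate, valid_nat_ip(candidate))
```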

Create routes for NAT instances

  1. Navigate to VPC Networks, then Routes.

    The Networking menu includes these options: VPC network, Network Services, VPN.

  2. Click CREATE ROUTE.

  3. Complete the form as follows:

    • Name: pcf-nat-pri
    • Network: pcf-virt-net
    • Destination IP range: 0.0.0.0/0
    • Priority: 800
    • Instance tags: pcf
    • Next hop: Specify an instance
    • Next hop instance: pcf-nat-gateway-pri
  4. Click Create to finish creating the route.

  5. Repeat steps 2 through 4 to create two additional routes with the names and next hop instances specified in the following table. The rest of the configuration remains the same.

    Route 2
    • Name: pcf-nat-sec
    • Next hop instance: pcf-nat-gateway-sec

    Route 3
    • Name: pcf-nat-ter
    • Next hop instance: pcf-nat-gateway-ter
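All three routes share the same destination and priority, which lets GCP distribute egress traffic from instances tagged pcf across the three NAT gateways. The following illustrative Python sketch expresses the route set as data and checks that invariant; the field names are for the sketch only, not the GCP API:

```python
# The three routes above, as data (illustrative sketch).
routes = [
    {"name": "pcf-nat-pri", "next_hop": "pcf-nat-gateway-pri",
     "dest": "0.0.0.0/0", "priority": 800, "tags": ["pcf"]},
    {"name": "pcf-nat-sec", "next_hop": "pcf-nat-gateway-sec",
     "dest": "0.0.0.0/0", "priority": 800, "tags": ["pcf"]},
    {"name": "pcf-nat-ter", "next_hop": "pcf-nat-gateway-ter",
     "dest": "0.0.0.0/0", "priority": 800, "tags": ["pcf"]},
]

# Traffic is only spread across routes with identical destination and
# priority, so confirm the three routes agree on both:
assert len({(r["dest"], r["priority"]) for r in routes}) == 1
print("all routes share destination 0.0.0.0/0 at priority 800")
```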

Step 5: Create firewall rules for the network

GCP lets you assign tags to VM instances and create firewall rules that apply to VMs based on their tags. For more information about tags, see Labeling Resources in the GCP documentation. This step assigns tags and firewall rules to Tanzu Operations Manager components and VMs that handle incoming traffic.

  1. With your single project or shared-VPC host project selected, go to the Networking, then VPC network pane and select Firewall rules.

  2. Apply the firewall rules in the following table:

    Firewall Rules
    Rule 1 This rule allows SSH from public networks.

    Name: pcf-allow-ssh
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:22
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-ssh
    Rule 2 This rule allows HTTP from public networks.

    Name: pcf-allow-http
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:80
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-http, router
    Rule 3 This rule allows HTTPS from public networks.

    Name: pcf-allow-https
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:443
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Target tags: allow-https, router
    Rule 4 This rule allows Gorouter health checks.

    Name: pcf-allow-http-8080
    Network: pcf-virt-net
    Allowed protocols and ports: tcp:8080
    Source filter: IP ranges
    Source IP Ranges: 0.0.0.0/0
    Target tags: router
    Rule 5 This rule allows communication between BOSH-deployed jobs.

    Name: pcf-allow-pas-all
    Network: pcf-virt-net
    Allowed protocols and ports: tcp;udp;icmp
    Source filter: Source tags
    Target tags: pcf, pcf-opsman, nat-traverse
    Source tags: pcf, pcf-opsman, nat-traverse
    Rule 6 (Optional) This rule allows access to the TCP router.

    Name: pcf-allow-cf-tcp
    Network: pcf-virt-net
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Allowed protocols and ports: tcp:1024-65535
    Target tags: pcf-cf-tcp
    Rule 7 (Optional) This rule allows access to the SSH proxy.

    Name: pcf-allow-ssh-proxy
    Network: pcf-virt-net
    Source filter: IP ranges
    Source IP ranges: 0.0.0.0/0
    Allowed protocols and ports: tcp:2222
    Target tags: pcf-ssh-proxy, diego-brain

    If you want your firewall rules to only permit traffic within your private network, modify the Source IP Ranges from the table accordingly.

  3. If you are only using your GCP project to deploy Tanzu Operations Manager, then you can delete the following default firewall rules:

    • default-allow-http
    • default-allow-https
    • default-allow-icmp
    • default-allow-internal
    • default-allow-rdp
    • default-allow-ssh
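The firewall rules in the table can be summarized as data. A quick check like this illustrative Python sketch (not part of the official procedure) can catch a missing tag or mistyped port before you enter the rules in the console:

```python
# Summary of the seven firewall rules from the table above.
rules = {
    "pcf-allow-ssh":       {"ports": "tcp:22",         "targets": ["allow-ssh"]},
    "pcf-allow-http":      {"ports": "tcp:80",         "targets": ["allow-http", "router"]},
    "pcf-allow-https":     {"ports": "tcp:443",        "targets": ["allow-https", "router"]},
    "pcf-allow-http-8080": {"ports": "tcp:8080",       "targets": ["router"]},
    "pcf-allow-pas-all":   {"ports": "tcp;udp;icmp",   "targets": ["pcf", "pcf-opsman", "nat-traverse"]},
    "pcf-allow-cf-tcp":    {"ports": "tcp:1024-65535", "targets": ["pcf-cf-tcp"]},
    "pcf-allow-ssh-proxy": {"ports": "tcp:2222",       "targets": ["pcf-ssh-proxy", "diego-brain"]},
}

# Every rule must target at least one tag, and the Gorouter
# health-check rule must cover port 8080:
assert all(r["targets"] for r in rules.values())
assert "8080" in rules["pcf-allow-http-8080"]["ports"]
print(f"{len(rules)} firewall rules defined")
```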

If you are deploying TKGI only, continue to Next steps.

If you are deploying TAS for VMs or other runtimes, proceed to the following step.

Step 6: Create database instance and databases

Create database instance

  1. For a shared-VPC installation, click the service project in the GCP console. This step and the following steps allocate resources to the service project, not the host project.

  2. From the GCP console, click SQL and click CREATE INSTANCE.

  3. Ensure MySQL is selected and click Next.

  4. Under MySQL, click Second Generation instance type.

  5. Click Configure MySQL under your choice for instance type: Development, Staging, or Production.

  6. Configure the instance as follows:

    • Instance ID: pcf-pas-sql
    • Root password: Set a password for the root user.
    • Region: Select the region you specified when creating networks.
    • Zone: Any.
    • Configure machine type and storage:
      • Click Change and then select db-n1-standard-2.
      • Ensure that Enable automatic storage increases is selected. This allows DB storage to grow automatically when space is required.
    • Enable auto backups and high availability: Make the following selections:
      • Leave Automate backups and Enable binary logging selected.
      • Under High availability, select the Create failover replica check box.
    • Authorize Networks: Click Add network and create a network named all that allows traffic from 0.0.0.0/0.

      If you assigned static IP addresses to your NAT instances, you can instead limit access to the database instances by specifying the NAT IP addresses.

  7. Click Create.

Create databases

  1. Go to the Instances page and select the database instance you just created.

  2. Select the Databases tab.

  3. Click Create database to create the following databases:

    • account
    • app_usage_service
    • autoscale
    • ccdb
    • console
    • diego
    • locket
    • networkpolicyserver
    • nfsvolume
    • notifications
    • routing
    • silk
    • uaa
    • credhub
  4. Select the USERS tab.

  5. Click Create user account to create a unique username and password for each database you created above. For Host name, select Allow any host. You must create a total of fourteen user accounts.

Ensure that the networkpolicyserver database user has the ALL PRIVILEGES permission.
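The user accounts can be enumerated directly from the database list above. A short illustrative Python sketch confirming the count of fourteen; the `_user` naming scheme is an assumption, so use your own convention:

```python
# The databases created in this step.
DATABASES = [
    "account", "app_usage_service", "autoscale", "ccdb", "console",
    "diego", "locket", "networkpolicyserver", "nfsvolume",
    "notifications", "routing", "silk", "uaa", "credhub",
]

# One user per database: fourteen user accounts in total.
# (Username scheme below is illustrative only.)
users = {db: f"{db}_user" for db in DATABASES}
print(f"{len(DATABASES)} databases -> {len(users)} user accounts")
```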

Step 7: Create storage buckets

  1. With your single project or shared-VPC service project selected in the GCP console, click Storage, then Browser.

  2. Using CREATE BUCKET, create buckets with the following names. For Default storage class, click Multi-Regional:

    • PREFIX-pcf-buildpacks
    • PREFIX-pcf-droplets
    • PREFIX-pcf-packages
    • PREFIX-pcf-resources
    • PREFIX-pcf-backup

    Where PREFIX is a prefix of your choice, required to make the bucket name unique.
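Because GCP bucket names must be globally unique, every name carries your prefix. An illustrative Python sketch of generating the five names from this step:

```python
# Bucket name suffixes from the list above.
SUFFIXES = ["buildpacks", "droplets", "packages", "resources", "backup"]

def bucket_names(prefix: str):
    """Build the five bucket names for a given unique prefix."""
    return [f"{prefix}-pcf-{suffix}" for suffix in SUFFIXES]

# "example" stands in for your own prefix:
for name in bucket_names("example"):
    print(name)
```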

Step 8: Create HTTP load balancer

For load balancing, you can use a global HTTP load balancer or an internal, regional load balancer with a private IP address.

Single-project, standalone installations typically use a global HTTP load balancer. For how to set this up, see Create HTTP load balancer below.

Shared-VPC installations typically use an internal TCP/UDP load balancer to minimize public IP addresses. For how to set this up, see Create internal load balancer below.

Create internal load balancer

To create an internal load balancer for Tanzu Operations Manager on GCP, do the following.

  1. Create an internal-facing TCP/UDP load balancer for each region of your Tanzu Operations Manager deployment.

    GCP Internal Load Balancer (iLB) is a regional product. Within the same VPC/network, client VMs in a different region from the iLB cannot access the iLB. For more information, see the GCP documentation.

  2. Assign private IP addresses to the load balancers.

  3. After you have deployed Tanzu Operations Manager, follow instructions in Create or Update a VM Extension to add a custom VM extension that applies internal load balancing to all VMs deployed by BOSH.

    • For example, the following manifest code adds a VM extension backend-pool to Tanzu Operations Manager VMs:

      vm_extensions:
      - name: backend-pool
        cloud_properties:
          ephemeral_external_ip: true
          backend_service:
            name: name-of-backend-service
            scheme: INTERNAL
      

Create HTTP load balancer

To create a global HTTP load balancer for Tanzu Operations Manager on GCP:

  1. Create Instance Group.
  2. Create Health Check.
  3. Configure Back End.
  4. Configure Front End.

Create instance group

  1. Go to Compute Engine, then Instance groups.

  2. Click CREATE INSTANCE GROUP.

  3. Complete the form as follows:

    • Name: pcf-http-lb
    • Location: Single-zone
    • Zone: Click the first zone from your region.
      Example: For region us-west1, click zone us-west1-a.
    • Group type: Click Unmanaged instance group.
    • Network: Click pcf-virt-net.
    • Subnetwork: Click the pcf-pas-subnet-GCP-REGION subnet that you created previously.

  4. Click Create.

  5. Create a second instance group with the following details:

    • Name: pcf-http-lb
    • Location: Single-zone
    • Zone: Click the second zone from your region.
      Example: For region us-west1, click zone us-west1-b.
    • Group type: Click Unmanaged instance group.
    • Network: Click pcf-virt-net.
    • Subnetwork: Click the pcf-pas-subnet-GCP-REGION subnet that you created previously.

  6. Create a third instance group with the following details:

    • Name: pcf-http-lb
    • Location: Single-zone
    • Zone: Click the third zone from your region.
      Example: For region us-west1, click zone us-west1-c.
    • Group type: Click Unmanaged instance group.
    • Network: Click pcf-virt-net.
    • Subnetwork: Click the pcf-pas-subnet-GCP-REGION subnet that you created previously.

Create health check

  1. Go to Compute Engine, then Health checks.

  2. Click CREATE HEALTH CHECK.

  3. Complete the form as follows:

    • Name: pcf-cf-public
    • Port: 8080
    • Request path: /health
    • Check interval: 30
    • Timeout: 5
    • Healthy threshold: 10
    • Unhealthy threshold: 2

  4. Click Create.
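The health-check values above imply specific failover timing: a backend is removed from rotation quickly but re-added slowly. A small illustrative calculation:

```python
# Health-check parameters from this step.
CHECK_INTERVAL_S = 30
UNHEALTHY_THRESHOLD = 2
HEALTHY_THRESHOLD = 10

# A backend is marked unhealthy after 2 consecutive failed checks
# (~60 s) but needs 10 consecutive successes (~300 s) to be marked
# healthy again.
time_to_unhealthy = CHECK_INTERVAL_S * UNHEALTHY_THRESHOLD
time_to_healthy = CHECK_INTERVAL_S * HEALTHY_THRESHOLD
print(f"marked unhealthy after ~{time_to_unhealthy} s, "
      f"healthy again after ~{time_to_healthy} s")
```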

Configure back end

  1. Go to Network services, then Load balancing.

  2. Click CREATE LOAD BALANCER.

  3. Under HTTP(S) Load Balancing, click Start configuration.

  4. For the Name, enter pcf-global-pcf.

  5. Click Backend configuration.

  6. From the drop-down menu, click Backend services, then Create a backend service.

  7. Complete the form as follows:

    • Name: pcf-http-lb-backend
    • Protocol: HTTP
    • Named port: http
    • Timeout: 10 seconds
    • Under Backends, then New backend, click the Instance group that corresponds to the first zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-a). Click Done.
    • Click Add backend and click the Instance group that corresponds to the second zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-b). Click Done.
    • Click Add backend and click the Instance group that corresponds to the third zone of the multi-zone instance group you created. For example: pcf-http-lb (us-west1-c). Click Done.
    • Health check: Click the pcf-cf-public health check that you created.
    • Cloud CDN: Ensure Cloud CDN is deactivated.

  8. Click Create.

Configure front end

  1. Click Host and path rules to populate the default text boxes. A green check mark appears when this section is complete.

  2. Click Frontend configuration, and add the following:

    • Name: pcf-cf-lb-http
    • Protocol: HTTP
    • IP: Perform the following steps:
      1. Click Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-global-pcf.
      3. Click Reserve.
    • Port: 80
  3. Click Add Frontend IP and port and add the following:

    Skip this step if you do not have either a self-signed or trusted SSL certificate.

    When you configure the tile for your chosen runtime, you are given the opportunity to create a new self-signed certificate. Upon creating a certificate, you can complete the Add Frontend IP and port section.

    • Name: pcf-cf-lb-https
    • Protocol: HTTPS
    • IP address: Click the pcf-global-pcf address you created for the previous Frontend IP and port.
    • Port: 443
    • Select Create a new certificate. The Create a New Certificate dialog is displayed.
    • In the Name text box, enter a name for the certificate.

      Create a new certificate form

    • In the Public key certificate text box, copy in the contents of your public certificate, or upload your certificate as a .pem file. If the certificate is runtime-generated, copy and paste the generated contents from the runtime’s Certificate text box into the BOSH Director Public key certificate text box.

    • In the Certificate chain text box, enter or upload your certificate chain in the .pem format. If you are using a self-signed certificate, such as a TAS for VMs or TKGI-generated certificate, do not enter a value in the Certificate Chain text box.
    • In the Private key text box, copy in the contents or upload the .pem file of the private key for the certificate. If the certificate is runtime-generated, copy and paste the generated contents from the runtime’s Private Key text box into the BOSH Director Private key text box.
  4. Review the completed frontend configuration.

  5. Click Review and finalize to verify your configuration.

  6. Click Create.

Step 9: Create TCP WebSockets load balancer

The load balancer for tailing logs with WebSockets for Tanzu Operations Manager on GCP operates on TCP port 443.

  1. From the GCP console, click Network services, then Load balancing, followed by Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

    The Create a load balancer page has three sections: HTTP(S) Load Balancing, TCP Load Balancing, and UDP Load Balancing.

  3. On the Create a load balancer configuration UI, make the following selections:

    • Under Internet facing or internal only, click From Internet to my VMs.
    • Under Multiple regions or single region, click Single region only.
    • Under Connection termination, click No (TCP).

      Create a load balancer form

  4. Click Continue.

  5. In the New TCP load balancer window, enter pcf-wss-logs in the Name text box.

  6. Click Backend configuration to configure the Backend service:

    Backend configuration form.

    • Region: Click the region you used to create the network in Create a GCP Network with Subnets.
    • From the Health check drop-down menu, create a health check with the following details:
      • Name: pcf-gorouter
      • Port: 8080
      • Request path: /health
      • Check interval: 30
      • Timeout: 5
      • Healthy threshold: 10
      • Unhealthy threshold: 2

    The Backend configuration section shows a green check mark.
  7. Click Frontend configuration to open its configuration window and complete the text boxes:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Click Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-gorouter-wss.
      3. Click Reserve.
    • Port: 443
  8. Click Review and finalize to verify your configuration.

  9. Click Create.

Step 10: Create SSH proxy load balancer

  1. From the GCP console, click Network services, then Load balancing, followed by Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

  3. Under Internet facing or internal only, click From Internet to my VMs.

  4. Under Connection termination, click No (TCP).

    Create a load balancer pane

  5. Click Continue.

  6. In the New TCP load balancer window, enter pcf-ssh-proxy in the Name text box.

  7. Click Backend configuration, and enter the following values:

    • Region: Click the region you used to create the network in Create a GCP network with subnets.
    • Backup pool: None
    • Failover ratio: 10%
    • Health check: No health check

    Backend configuration form

  8. Click Frontend configuration, and add the following:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Click Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-ssh-proxy.
      3. Click Reserve.
    • Port: 2222
  9. (Optional) Review and finalize your load balancer.

  10. Click Create.

Step 11: Create load balancer for TCP router

This step is optional and only required if you enable TCP routing in your deployment.

To create a load balancer for TCP routing in GCP:

  1. From the GCP console, click Network services, then Load balancing, followed by Create load balancer.

  2. Under TCP Load Balancing, click Start configuration.

  3. Under Connection termination, click No (TCP) and click Continue.

  4. On the New TCP load balancer pane, enter a unique name for the load balancer in the Name text box. For example, pcf-cf-tcp-lb.

  5. Click Backend configuration, and enter the following values:

    • Region: Click the region you used to create the network in Create a GCP network with subnets.
    • From the Health check drop-down menu, create a health check with the following details:

      • Name: pcf-tcp-lb
      • Port: 80
      • Request path: /health
      • Check interval: 30
      • Timeout: 5
      • Healthy threshold: 10
      • Unhealthy threshold: 2
      • Click Save and continue.

        Backend configuration form

  6. Click Frontend configuration, and add the front end IP and port entry as follows:

    • Protocol: TCP
    • IP: Perform the following steps:
      1. Click Create IP address.
      2. Enter a Name for the new static IP address and an optional description. For example, pcf-cf-tcp-lb.
      3. Click Reserve.
    • Port: 1024-65535

      New TCP load balancer Frontend configuration pane with Create button

  7. Click Review and finalize to verify your configuration.

  8. Click Create.

Step 12: Add DNS records for your load balancers

In this step, you redirect queries for your domain to the IP addresses of your load balancers.

  1. Locate the static IP addresses of the load balancers you created in Preparing to deploy Tanzu Operations Manager on GCP:

    • An HTTP(S) load balancer named pcf-global-pcf
    • A TCP load balancer for WebSockets named pcf-wss-logs
    • A TCP load balancer named pcf-ssh-proxy
    • A TCP load balancer named pcf-cf-tcp-lb

    You can locate the static IP address of each load balancer by clicking its name under Network services, then Load balancing in the GCP console.

  2. Log in to the DNS registrar that hosts your domain. Examples of DNS registrars include Network Solutions, GoDaddy, and Register.com.

  3. Create A records with your DNS registrar that map domain names to the public static IP addresses of the load balancers located above:

    • *.sys.MY-DOMAIN (Example: *.sys.example.com): map to pcf-global-pcf. Required.
    • *.apps.MY-DOMAIN (Example: *.apps.example.com): map to pcf-global-pcf. Required.
    • doppler.sys.MY-DOMAIN (Example: doppler.sys.example.com): map to pcf-wss-logs. Required.
    • loggregator.sys.MY-DOMAIN (Example: loggregator.sys.example.com): map to pcf-wss-logs. Required.
    • ssh.sys.MY-DOMAIN (Example: ssh.sys.example.com): map to pcf-ssh-proxy. Required to allow SSH access to apps.
    • tcp.MY-DOMAIN (Example: tcp.example.com): map to pcf-cf-tcp-lb. Required only if you enabled the TCP routing feature.
  4. Save your changes within the web interface of your DNS registrar.

  5. Run the following dig command to confirm that you created your A record successfully:

    dig SUBDOMAIN.EXAMPLE-URL.com
    

    Where SUBDOMAIN.EXAMPLE-URL.com is the domain name that you mapped to your load balancer.

    You should see the A record that you just created:

    ;; ANSWER SECTION:
    xyz.EXAMPLE.COM.      1767    IN  A 203.0.113.1
    

Next steps

(Optional) To prepare for deploying either a TAS for VMs or TKGI tile on GCP, you can download the required runtime tile in advance:

  • To download TAS for VMs, log in to the Broadcom Support portal, select your desired release version, and download VMware Tanzu Application Service for VMs.
  • To download TKGI, log in to the Broadcom Support portal, select your desired release version, and download VMware Tanzu Kubernetes Grid Integrated Edition.

After initiating the tile download, proceed to the next step, Deploying Tanzu Operations Manager on GCP.
