This is an overview of the manual deployment process that you need to follow to set up your provider and tenant Google Cloud projects, configure them, deploy an SDDC, and associate it with VMware Cloud Director service.

The procedures below provide the information that you need to successfully configure VMware Cloud Director service with Google Cloud VMware Engine, but do not include the full set of steps and instructions for working with the Google Cloud Console or with NSX Manager. For detailed instructions, follow the relevant links to the Google Cloud documentation and to the NSX Administration Guide.

Prerequisites

Verify that you have the necessary rights to configure the provider and tenant projects in Google Cloud.

Configure the Provider Project

To start using your Google Cloud VMware Engine resources, you must configure your provider cloud and your provider management network.

Procedure

  1. In the Google Cloud Console, activate the Cloud DNS API in the provider project. See Enable the Cloud DNS API in the Google Cloud VMware Engine documentation.
  2. Access the VMware Engine Portal and, when prompted, activate its API. See Accessing the VMware Engine Portal in the Google Cloud VMware Engine documentation.
  3. In the Google Cloud Console, create a VPC network. See Create and Manage VPC Networks in the Google Cloud Virtual Private Cloud (VPC) documentation.
    • Enter a meaningful name for the network.
    • In the Region text box, select the region where your environment is located.
    • Select the check box to confirm that the subnet configuration includes a range outside of the RFC 1918 address space.
    • Select the radio button to activate private Google access.
    • Select Global dynamic routing mode, and set the maximum MTU to 1500.
  4. Configure a private service connection to Google Cloud Platform, and connect to the network that was created when you assigned an IP range. See Configuring private services access in the Google Cloud Virtual Private Cloud (VPC) documentation.
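The provider network setup above can also be scripted with the gcloud CLI. The following is a minimal sketch; the project ID, network and subnet names, region, and IP ranges are placeholder assumptions, not values from this guide:

```shell
# Create the provider management VPC with global dynamic routing and MTU 1500.
# PROVIDER_PROJECT_ID, REGION, and the address ranges are assumed placeholders.
gcloud compute networks create provider-mgmt-net \
    --project=PROVIDER_PROJECT_ID \
    --subnet-mode=custom \
    --bgp-routing-mode=global \
    --mtu=1500

# Add a subnet with private Google access activated.
gcloud compute networks subnets create provider-mgmt-subnet \
    --project=PROVIDER_PROJECT_ID \
    --network=provider-mgmt-net \
    --region=REGION \
    --range=192.168.100.0/24 \
    --enable-private-ip-google-access

# Reserve an internal range and create the private service connection.
gcloud compute addresses create provider-psa-range \
    --project=PROVIDER_PROJECT_ID \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=16 \
    --network=provider-mgmt-net

gcloud services vpc-peerings connect \
    --project=PROVIDER_PROJECT_ID \
    --service=servicenetworking.googleapis.com \
    --ranges=provider-psa-range \
    --network=provider-mgmt-net
```

These commands require an authenticated gcloud session with rights in the provider project and are equivalent in effect to the console steps above.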

Set Up a Google Cloud VMware Engine SDDC

To start providing resources for tenants to consume, you must create an SDDC.

Procedure

  1. Create a private cloud. See Creating a VMware Engine private cloud in the Google Cloud VMware Engine documentation.
    • As Location for the cloud, select the Google Cloud Platform data center in which to create the SDDC.
    • As Node type, select Multi Node, with a minimum of 4 nodes.
  2. Create a private connection between the SDDC and the provider project. See Complete private connection creation in the VMware Engine portal in the Google Cloud VMware Engine documentation.
    • From the Service drop-down menu, select VPC Network.
    • From the Region drop-down menu, select the region where you created your private cloud.
    • In the Peer Project ID text box, enter the provider project name.
      Tip: Open the Google Cloud Console in a separate tab, and copy the provider project name from the project info.
    • In the Peer Project Number, enter the provider project number.
      Tip: From the Google Cloud Platform tab, copy the provider project number which is below the provider project name in the project info.
    • In the Peer VPC ID text box, enter the ID of the provider management network.
    • In the Tenant Project ID text box, enter the ID of the tenant project.
      Tip: To find the tenant project ID, in the left pane, click VPC Network > VPC Network Peering. From the right pane, copy the Peered Project ID value.
    • In the Routing Mode drop-down menu, select Global.
    In a few minutes, the Region Status displays as Connected.
  3. Update the peering connection of the servicenetworking VPC network to both import and export custom routes. See Updating a peering connection in the Google Cloud Virtual Private Cloud (VPC) documentation.
  4. Activate internet access and public IP network service for your region. See Enable internet access and public IP network service for your region in the Google Cloud VMware Engine documentation.
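Step 3 above can also be performed with the gcloud CLI. In this sketch, servicenetworking-googleapis-com is the peering name that private services access typically creates; the project ID and network name are assumed placeholders:

```shell
# Import and export custom routes on the service networking peering
# so routes propagate between the SDDC and the provider project.
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --project=PROVIDER_PROJECT_ID \
    --network=provider-mgmt-net \
    --import-custom-routes \
    --export-custom-routes
```

If the peering was created with a different name, list it first with `gcloud compute networks peerings list --network=provider-mgmt-net`.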

Configure the Tenant Project

To provide resources to the tenant project, configure the tenant service network and the peering connection.

Procedure

  1. In the Google Cloud Console, navigate to the tenant project and delete its default VPC network.
  2. Create a new VPC network.
    • In the Region text box, select the region where your SDDC is located.
    • Select the check box to confirm that the subnet configuration includes a range outside of the RFC 1918 address space.
    • Select the radio button to activate private Google access.
    • In the Subnet Section, select Done.
    • Select Global dynamic routing mode.
    • Set the maximum MTU to 1500.
  3. Configure a private service connection to Google Cloud Platform, and allocate an internal IP range for the service connection to use. See Configuring private services access in the Google Cloud Virtual Private Cloud (VPC) documentation.
  4. Update the peering connection that you created in step 3 to both import and export custom routes. See Updating a peering connection in the Google Cloud Virtual Private Cloud (VPC) documentation.
  5. Create a private connection in the Google Cloud VMware Engine portal. See Complete private connection creation in the VMware Engine portal in Google Cloud VMware Engine documentation.
    • From the Service drop-down menu, select VPC Network.
    • From the Region drop-down menu, select the region where you created your private cloud.
    • In the Peer Project ID text box, enter the provider project name.
      Tip: Open the Google Cloud Console in a separate tab, and copy the provider project name from the project information.
    • In the Peer Project Number text box, enter the project number.
    • In the Peer VPC ID text box, enter the name of the tenant VPC network that you created in step 2.
    • In the Tenant Project ID text box, enter the ID of the tenant project.
      Note: To find the tenant project ID, in the left pane, click VPC Network > VPC Network Peering. From the right pane, copy the Peered Project ID value.
    • In the Routing Mode drop-down menu, select Global.
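Steps 1 through 4 of the tenant project configuration can also be scripted with the gcloud CLI. This is a minimal sketch; the project ID, network name, and ranges are placeholder assumptions:

```shell
# Remove the default VPC network; its firewall rules must be deleted first.
gcloud compute firewall-rules list --project=TENANT_PROJECT_ID \
    --filter="network:default" --format="value(name)" \
  | xargs -r -n1 gcloud compute firewall-rules delete \
        --project=TENANT_PROJECT_ID --quiet
gcloud compute networks delete default --project=TENANT_PROJECT_ID --quiet

# Create the tenant service VPC with global dynamic routing and MTU 1500.
gcloud compute networks create tenant-svc-net \
    --project=TENANT_PROJECT_ID \
    --subnet-mode=custom \
    --bgp-routing-mode=global \
    --mtu=1500

# Allocate an internal range, connect private services access, and
# update the resulting peering to exchange custom routes.
gcloud compute addresses create tenant-psa-range \
    --project=TENANT_PROJECT_ID --global --purpose=VPC_PEERING \
    --prefix-length=16 --network=tenant-svc-net
gcloud services vpc-peerings connect \
    --project=TENANT_PROJECT_ID \
    --service=servicenetworking.googleapis.com \
    --ranges=tenant-psa-range --network=tenant-svc-net
gcloud compute networks peerings update servicenetworking-googleapis-com \
    --project=TENANT_PROJECT_ID --network=tenant-svc-net \
    --import-custom-routes --export-custom-routes
```

The private connection in the VMware Engine portal (step 5) must still be created in the portal UI.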

What to do next

To configure any additional tenant projects, repeat the steps for each project.

Create a Jump Host in the Provider Project and Allow Network Access

You can use the jump host in the provider project for controlled access to vCenter Server, NSX Manager, and other services in remote networks.

Procedure

  1. In the provider project, create a Windows server VM instance in the same region and zone as your private cloud. See Create a Windows Server VM Instance in the Google Cloud Compute Engine documentation.
  2. Under Networking, disks, security, management, sole-tenancy, edit the network interface to attach it to the provider management network.
  3. From the VM instance details, set a Windows password for the VM, and make a note of it.
  4. Create a firewall rule that allows ingress traffic. See Creating firewall rules in the Google Cloud Virtual Private Cloud (VPC) documentation.
    • Enter a unique and meaningful name for the rule, for example, the service being provided.
    • As Network, select the provider management network.
    • For the Direction of Traffic, select Ingress.
    • As Targets, select All instances in the network.
    • As Source Filter, select IP ranges, and, in the text box, enter 0.0.0.0/0 to allow sources from any network.
    • In the Protocols and ports text box, select TCP 3389.
  5. Create a firewall rule that allows east-west traffic within the provider project.
    • Enter a unique and meaningful name for the rule, for example, east-west.
    • As Network, select the provider management network.
    • For the Direction of Traffic, select Egress.
    • As Targets, select All instances in the network.
    • As Source Filter, select IP ranges, and, in the text box, enter the range of the management network.
    • In the Protocols and ports text box, select Allow all.
  6. Make a note of the external IP address of the VM instance to use for Remote Desktop Protocol (RDP) communication.
  7. Verify that you can log in to the newly created VM with the external IP and the Windows credentials.
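The firewall rules in steps 4 and 5 can also be created with the gcloud CLI. This is a sketch; the rule names, project ID, network name, and management range are assumed placeholders. Note that gcloud expresses the scope of an egress rule with --destination-ranges:

```shell
# Allow inbound RDP to instances on the provider management network
# from any source.
gcloud compute firewall-rules create allow-rdp \
    --project=PROVIDER_PROJECT_ID \
    --network=provider-mgmt-net \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:3389 \
    --source-ranges=0.0.0.0/0

# Allow east-west traffic within the management network range.
gcloud compute firewall-rules create east-west \
    --project=PROVIDER_PROJECT_ID \
    --network=provider-mgmt-net \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=all \
    --destination-ranges=192.168.100.0/24
```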

Associate the SDDC via VMware Reverse Proxy

To use infrastructure resources that are not publicly accessible and have only outbound access to the internet within your VMware Cloud Director service environment, you must set up your VMware Cloud Director instance to use VMware proxy service.

Procedure

  1. On the jump host VM, log in to VMware Cloud Partner Navigator, navigate to VMware Cloud Director service and generate the proxy appliance. See How Do I Configure and Download the VMware Reverse Proxy OVA.
  2. Verify the proxy appliance connectivity.
    1. Log in to the proxy appliance as root.
    2. To verify the appliance has obtained an IP address, run ip a.
    3. To ensure that the service is active and running, run systemctl status transporter-client.service.
      Note: If the command results in an error, verify that DNS resolution works and that the appliance can access the internet.
    4. To verify the proxy appliance's connectivity, run transporter-status.sh.
    5. Use the output of the command to diagnose any issues with the proxy appliance.
  3. In VMware Cloud Director service, navigate to the VMware Cloud Director instance from which you generated the proxy, and associate the data center through VMware Proxy. See How Do I Associate a VMware Cloud Director Instance with an SDDC via VMware Proxy.
    To create a provider VDC during the SDDC association, select the Create Infrastructure Resources check box.
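The verification commands in step 2 can be run in sequence on the appliance console:

```shell
# Run these on the proxy appliance after logging in as root.
ip a                                         # confirm the appliance obtained an IP address
systemctl status transporter-client.service  # the service should be active (running)
transporter-status.sh                        # reports reverse-proxy connectivity
```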

Results

When the task completes, the SDDC shows up as a provider VDC in the VMware Cloud Director instance UI.

Deploy and Configure IPsec Tunnel

Deploy and configure a VPN appliance in the tenant project to connect to the tier-1 gateway in the provider VDC through an IPsec tunnel.

Procedure

  1. In the tenant project, create firewall rules that manage the access to the VPN appliance. See Configuring Firewall Rules in the Google Cloud Virtual Private Cloud (VPC) documentation.
    1. Create an ingress rule.
      • Enter a unique and meaningful name for the rule, for example gcve-transit.
      • As Network, enter tenantname-transit.
      • As Priority, enter 100.
      • For the Direction of Traffic, select Ingress.
      • As Action on match, select Allow.
      • As Targets, select All instances in the network.
      • As Source Filter, select IP ranges, and enter the range for the transit network, for example, 100.64.0.0/16.
      • In the Protocols and ports text box, select Allow all.
    2. Create an egress rule.
      • Enter a unique and meaningful name for the rule, such as ipsec-egress.
      • As Network, select tenantname-transit.
      • As Priority, enter 100.
      • For the Direction of Traffic, select Egress.
      • As Action on match, select Allow.
      • As Targets, select All instances in the network.
      • As Source Filter, select IP ranges, and enter the range for the transit network, for example, 100.64.0.0/16.
      • In the Protocols and ports text box, select IPsec ports.
  2. In the tenant project, deploy a CentOS 7 Linux VM to use for the IPsec VPN tunnel, and connect to it. See Create a Linux VM Instance in Compute Engine in the Google Cloud Compute Engine documentation.
  3. Under Networking, disks, security, management, sole-tenancy, activate IP forwarding, and edit the network interface to attach it to the tenantname-transit network.
  4. Edit the Linux VM network settings to add the network's tag name. See Configuring network tags in the Google Cloud Virtual Private Cloud (VPC) documentation.
    You can choose any tag name, but it must be uniform across all routes that point to the internet, and it must be applied to every VM that might need internet access in the provider-owned customer project.
  5. Install and configure an IPsec implementation on the Linux VM.
  6. In the tenant project, create the IPsec VPN route. See Adding a static route in the Google Cloud Virtual Private Cloud (VPC) documentation.
    • Enter a meaningful name for the route.
    • As Network, select tenantname-transit.
    • As Destination IP Range, enter a range in the SDDC for the tenant.
    • As Priority, enter 100.
    • As Next Hop, select Specify an instance.
    • As Next Hop Instance, enter the VM on which you installed and configured IPsec VPN.
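The firewall rules and the static route from this procedure can also be created with the gcloud CLI. The following sketch assumes placeholder names for the project, VPN VM, zone, and SDDC range; the transit range 100.64.0.0/16 matches the example above:

```shell
# Ingress rule allowing traffic from the transit range.
gcloud compute firewall-rules create gcve-transit \
    --project=TENANT_PROJECT_ID --network=tenantname-transit \
    --priority=100 --direction=INGRESS --action=ALLOW \
    --rules=all --source-ranges=100.64.0.0/16

# Egress rule allowing only IPsec traffic (IKE, NAT-T, and ESP).
gcloud compute firewall-rules create ipsec-egress \
    --project=TENANT_PROJECT_ID --network=tenantname-transit \
    --priority=100 --direction=EGRESS --action=ALLOW \
    --rules=udp:500,udp:4500,esp --destination-ranges=100.64.0.0/16

# Static route that sends SDDC-bound traffic to the VPN VM.
gcloud compute routes create ipsec-vpn-route \
    --project=TENANT_PROJECT_ID --network=tenantname-transit \
    --priority=100 --destination-range=SDDC_TENANT_CIDR \
    --next-hop-instance=ipsec-vpn-vm \
    --next-hop-instance-zone=ZONE
```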

Configure IPsec VPN and Tenant Firewall Rules in NSX Manager

To secure the network connectivity of tenant workloads, configure IPsec VPN and firewall rules.

Procedure

  1. Configure IPsec VPN in the VMware Cloud Director instance that is managing the Google Cloud VMware Engine SDDC. See Configure NSX Policy-Based IPSec VPN.
  2. Through the provider jump host, log in to NSX Manager as admin, and configure firewall rules in the tenant tier-1 gateway. See Add a Gateway Firewall Policy and Rule in the NSX Administration Guide.
    1. Add a firewall rule.
      • In the Source column, add the remote tenant project's CIDR block.
      • In the Destination column, select Any.
      • In the Services column, select Any.
      • In the Action column, select Allow.
    2. Add an outbound firewall rule.
      • In the Source column, select Any to allow any local network, or restrict the rule to a single CIDR block.
      • In the Destination column, enter the CIDR block for the Google Cloud Platform tenant project.
      • In the Action column, select Allow.
    3. Publish both rules.
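If you prefer to automate the firewall configuration instead of using the NSX Manager UI, gateway firewall rules can also be managed through the NSX Policy API. The following curl sketch assumes NSX-T 3.x Policy API paths and entirely hypothetical identifiers (NSX_MANAGER, TENANT_POLICY, the tier-1 path, and the example CIDR); verify the exact gateway policy and rule IDs in your environment before use:

```shell
# Hypothetical sketch: add an inbound allow rule to a tier-1 gateway policy.
# NSX_MANAGER, TENANT_POLICY, tenant-t1, and 203.0.113.0/24 are placeholders.
curl -k -u admin -X PATCH \
  "https://NSX_MANAGER/policy/api/v1/infra/domains/default/gateway-policies/TENANT_POLICY/rules/allow-tenant-inbound" \
  -H "Content-Type: application/json" \
  -d '{
        "action": "ALLOW",
        "source_groups": ["203.0.113.0/24"],
        "destination_groups": ["ANY"],
        "services": ["ANY"],
        "scope": ["/infra/tier-1s/tenant-t1"]
      }'
```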