In this section, you download the Photon OS 3.0 Greenplum Database OVA template from VMware Marketplace, deploy the OVA in vSphere, perform a series of configuration changes to the virtual machine, and create a template from it. Finally, you verify that the virtual machine is configured correctly by running the /etc/gpv/validate utility.

Downloading the Greenplum Database OVA to a Local Machine

The Greenplum Database OVA template is available on VMware Marketplace.

Log in and download the preferred version. Make note of the directory where the file was saved.

Deploying the Greenplum Database Template OVA

  1. Log in to vCenter and navigate to Hosts and Clusters.
  2. Right-click your cluster, then click Deploy OVF Template.
  3. Choose Local file, select the OVA file from your local machine, then click Next.
  4. Set Virtual machine name as greenplum-db-template. Select the desired Datacenter, then click Next.
  5. Select the desired compute resource, then click Next. Wait while vCenter queries the OVA advanced configuration options.
  6. Verify that the Review details section is correct, then click Next.
  7. Select your vSAN storage, then click Next.
  8. Select gp-virtual-external as the Destination Network, then click Next.
  9. Configure the Customize Template section:
    • Set Number of Segments to the number of Greenplum segments you plan to deploy.
    • Set Internal Network:
      ◦ Internal Network MTU Size: the desired MTU size. The recommended size is 9000.
      ◦ Internal Network IP Prefix: the leading octets for the gp-virtual-internal network IP range; for example, 192.168.1.
    • Set Routable Networking:
      ◦ Hostname: greenplum-db-template.
      ◦ NTP Server: a routable NTP server for mdw and smdw to synchronize with.
  10. Click Next.
  11. Review the configuration, then click Finish.
  12. Do not power on the template until you have updated the memory and CPU resources.
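The Internal Network IP Prefix supplies only the leading octets of the gp-virtual-internal range; each virtual machine then receives a host number on that network. The sketch below illustrates what such an expansion could look like. The starting host number (.2) and the sdwN naming are assumptions for illustration only, not the layout the OVA's first-boot scripts guarantee:

```shell
# Sketch: expand the internal prefix into per-segment addresses.
# ASSUMPTIONS: host numbers start at .2 and hosts are named sdwN.
prefix=192.168.1   # "Internal Network IP Prefix" value
segments=4         # "Number of Segments" value
i=1
while [ "$i" -le "$segments" ]; do
  echo "sdw${i}: ${prefix}.$((i + 1))"
  i=$((i + 1))
done
```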

Modifying Resources

Depending on your underlying hardware, you may need to change the default settings for CPU, memory, and data disk size, which are preset to 8 vCPUs, 30 GB, and 16 GB, respectively.

  1. Right-click the greenplum-db-template virtual machine, then click Edit Settings.

  2. To change the memory size, click the number to the right of Memory and enter the desired amount of allocated memory.

  3. To change the number of CPUs, click the number to the right of CPU and select the desired number of CPUs.

  4. To change the size of the data disk as described in VM Sizing, click the number to the right of Hard disk 2 and enter the desired size.

  5. Click OK.

  6. Power on the greenplum-db-template virtual machine.

Validating the Virtual Machine Template

  1. Launch the web console and log in as gpadmin with the password changeme.
  2. Follow the prompt to reset the password of gpadmin.
  3. Use Ctrl+D to log out, then log in as root with the password changeme.
  4. Follow the prompt to reset the password of root.
  5. As root, run /etc/gpv/validate and ensure there are no errors.
  6. If there are no errors, power off the virtual machine.

Provisioning the Virtual Machines

Use the Terraform software you installed in Creating the Jumpbox Virtual Machine to generate copies of the template virtual machine you just created. The following steps guide you through configuring them based on the number of virtual machines in your environment, the IP address ranges, and other settings that you specify in the installation script.

  1. Create a file named main.tf and copy the contents described in OVA Script.

  2. Log in to the jumpbox virtual machine as root.

  3. Use scp to copy the main.tf file to the jumpbox, under the root user home directory.

  4. Update the following variables under the Terraform variables section of the main.tf script with the correct values for your environment. You collected the required information in the Prerequisites section.

    • vsphere_user: The name of the vSphere administrator-level user.
    • vsphere_password: The password of the vSphere administrator-level user.
    • vsphere_server: The IP address or, preferably, the Fully Qualified Domain Name (FQDN) of your vCenter server.
    • vsphere_datacenter: The name of the data center for Greenplum in your vCenter environment.
    • vsphere_compute_cluster: The name of the compute cluster for Greenplum in your data center.
    • vsphere_datastore: The name of the vSAN datastore that will contain your Greenplum data.
    • vsphere_storage_policy: The name of the storage policy defined during Setting Up vSphere Storage or Setting Up vSphere Encryption.
    • gp_virtual_external_ipv4_addresses: The routable IP addresses for mdw and smdw, in that order; for example: ["10.0.0.111", "10.0.0.112"].
    • gp_virtual_external_ipv4_netmask: The number of bits in the netmask for gp-virtual-external; for example: 24.
    • gp_virtual_external_gateway: The gateway IP address for the gp-virtual-external network.
    • dns_servers: The DNS servers for the gp-virtual-external network, listed as an array; for example: ["8.8.8.8", "8.8.4.4"].
    • gp_virtual_etl_bar_ipv4_cidr: The CIDR block for the internal, non-routable network gp-virtual-etl-bar; for example: "192.168.2.0/24".
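Filled in with the example values above, the Terraform variables section of main.tf might look like the following sketch. All values are placeholders for your own environment, and names such as the vCenter FQDN and storage policy are assumptions; the authoritative structure of the section is the one in OVA Script:

```hcl
# Illustrative values only -- substitute the details you collected
# in the Prerequisites section.
vsphere_user                       = "administrator@vsphere.local"
vsphere_password                   = "changeme"
vsphere_server                     = "vcenter.example.com"
vsphere_datacenter                 = "Greenplum-Datacenter"
vsphere_compute_cluster            = "Greenplum-Cluster"
vsphere_datastore                  = "vsanDatastore"
vsphere_storage_policy             = "greenplum-storage-policy"
gp_virtual_external_ipv4_addresses = ["10.0.0.111", "10.0.0.112"]
gp_virtual_external_ipv4_netmask   = 24
gp_virtual_external_gateway        = "10.0.0.1"
dns_servers                        = ["8.8.8.8", "8.8.4.4"]
gp_virtual_etl_bar_ipv4_cidr       = "192.168.2.0/24"
```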


  5. Initialize Terraform:

    $ terraform init
    

    The output will be similar to:

    Terraform has been successfully initialized!
    
    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work.
    
    If you ever set or change modules or backend configuration for Terraform,
    re-run this command to reinitialize your working directory. If you forget, other
    commands will detect it and remind you to do so if necessary.
    
  6. Verify that your Terraform configuration is correct by running:

    $ terraform plan
    
  7. Deploy the cluster:

    $ terraform apply
    

    Answer yes to the following prompt:

    Do you want to perform these actions?
    Terraform will perform the actions described above.
    Only 'yes' will be accepted to approve.
    
    Enter a value: yes
    

You can check the progress of the virtual machine creation under the Recent Tasks panel of your vSphere client.

Once Terraform has completed, it generates a file named terraform.tfstate.
This file must not be deleted, as it keeps a record of all the virtual machines and their states.
Terraform also uses this file when modifying any virtual machines.
VMware recommends retaining a snapshot of the jumpbox virtual machine.
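Since terraform.tfstate must never be lost, it is also worth copying it somewhere safe before any manual change on the jumpbox. A minimal sketch, assuming you run it from the directory that contains main.tf; the timestamped backup naming is an illustration, not part of the official procedure:

```shell
# Keep a timestamped copy of the Terraform state before manual changes.
state=terraform.tfstate
if [ -f "$state" ]; then
  cp "$state" "${state}.backup-$(date +%Y%m%d%H%M%S)"
  echo "backed up $state"
else
  echo "no $state in $(pwd); run terraform apply first"
fi
```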

Monitoring the Greenplum Deployment

The initialization of the Greenplum cluster is fully automated on the mdw virtual machine. Once the mdw virtual machine has been created by the Terraform script, you can log in to mdw as root and monitor the deployment process by running:

$ journalctl -fu gpv-mdw

When the Greenplum cluster is initialized, you should see messages similar to the following:

Dec 02 20:30:32 mdw bash[2228]: 2021-12-02 20:30:32 Starting the gpcc agents and webserver...
Dec 02 20:30:35 mdw bash[2228]: 2021-12-02 20:30:35 Agent successfully started on 6/6 hosts
Dec 02 20:30:35 mdw bash[2228]: 2021-12-02 20:30:35 View Greenplum Command Center at https://mdw:28080
Dec 02 20:30:35 mdw bash[2228]: Greenplum Initialization Complete!

Setting Up the GPCC Login User

After the initialization, the GPCC (Greenplum Command Center) web service is available at:

https://mdw_public_ip:28080

Note: HTTPS is enabled by default.

There are two default GPCC login users: gpmon and gpcc_user. The gpmon user is reserved for the service itself and should not be used; use gpcc_user to log in to the GPCC web page.

Before you can log in, you must reset the password for gpcc_user. As the gpadmin Linux user on the mdw virtual machine, run /etc/gpv/reset-gpcc-passwd; you are prompted to type and retype the new password.

You can now log in to the web page with the username gpcc_user and the new password you just set.

Note: To create a new login user for GPCC, refer to the GPCC documentation.


Next Steps

Now that the Greenplum Database has been deployed, follow the steps provided in Validating the Greenplum Installation to verify that Greenplum is installed correctly.
