Platform Automation Toolkit provides tools to create an Operations Manager VM and configure a BOSH Director.

  1. Operations Manager needs to be deployed with IaaS-specific configuration. Platform Automation Toolkit provides a configuration file format that looks like this:

    Copy and paste the YAML below for your IaaS and save it as opsman-config.yml in your working directory.

    For AWS:

    ---
    opsman-configuration:
      aws:
        access_key_id: ((access_key))
        boot_disk_size: 100
        iam_instance_profile_name: ((ops_manager_iam_instance_profile_name))
        instance_type: m5.large
        key_pair_name: ((ops_manager_key_pair_name))
        public_ip: ((ops_manager_public_ip))
        region: ((region))
        secret_access_key: ((secret_key))
        security_group_ids: [((ops_manager_security_group_id))]
        vm_name: ((environment_name))-ops-manager-vm
        vpc_subnet_id: ((ops_manager_subnet_id))
    

    For Azure:

    ---
    opsman-configuration:
      azure:
        boot_disk_size: "100"
        client_id: ((client_id))
        client_secret: ((client_secret))
        cloud_name: ((iaas_configuration_environment_azurecloud))
        container: ((ops_manager_container_name))
        location: ((location))
        network_security_group: ((ops_manager_security_group_name))
        private_ip: ((ops_manager_private_ip))
        public_ip: ((ops_manager_public_ip))
        resource_group: ((resource_group_name))
        ssh_public_key: ((ops_manager_ssh_public_key))
        storage_account: ((ops_manager_storage_account_name))
        storage_sku: "Premium_LRS"
        subnet_id: ((management_subnet_id))
        subscription_id: ((subscription_id))
        tenant_id: ((tenant_id))
        use_managed_disk: "true"
        vm_name: "((resource_group_name))-ops-manager"
        vm_size: "Standard_DS2_v2"
    

    For GCP:

    ---
    opsman-configuration:
      gcp:
        boot_disk_size: 100
        custom_cpu: 4
        custom_memory: 16
        gcp_service_account: ((service_account_key))
        project: ((project))
        public_ip: ((ops_manager_public_ip))
        region: ((region))
        ssh_public_key: ((ops_manager_ssh_public_key))
        tags: ((ops_manager_tags))
        vm_name: ((environment_name))-ops-manager-vm
        vpc_subnet: ((management_subnet_name))
        zone: ((availability_zones.0))
    

    For vSphere:

    ---
    opsman-configuration:
      vsphere:
        vcenter:
          datacenter: ((vcenter_datacenter))
          datastore: ((vcenter_datastore))
          folder: ((ops_manager_folder))
          url: ((vcenter_host))
          username: ((vcenter_username))
          password: ((vcenter_password))
          resource_pool: /((vcenter_datacenter))/host/((vcenter_cluster))/Resources/((vcenter_resource_pool))
          insecure: ((allow_unverified_ssl))
        disk_type: thin
        dns: ((ops_manager_dns_servers))
        gateway: ((management_subnet_gateway))
        hostname: ((ops_manager_dns))
        netmask: ((ops_manager_netmask))
        network: ((management_subnet_name))
        ntp: ((ops_manager_ntp))
        private_ip: ((ops_manager_private_ip))
        ssh_public_key: ((ops_manager_ssh_public_key))
    

    Where:

    • The ((parameters)) map to outputs in terraform-outputs.yml, which is provided as a vars file for YAML interpolation in a subsequent step. You can check that they all resolve using the command shown after the note below.

    Note: For a supported IaaS not listed above, see the Platform Automation Toolkit docs.
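
    To confirm that every ((parameter)) resolves before you create the VM, you can optionally interpolate the config against terraform-outputs.yml. This is a minimal check, assuming terraform-outputs.yml from the earlier infrastructure step is already in your working directory; om prints the resolved YAML, or an error listing any missing variables.

    om interpolate \
      --config opsman-config.yml \
      --vars-file terraform-outputs.yml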

  2. Import the Platform Automation Toolkit Docker image.

    docker import ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} platform-automation-toolkit-image
    

    Where ${PLATFORM_AUTOMATION_TOOLKIT_IMAGE_TGZ} is set to the filepath of the image downloaded from Pivnet.
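
    To confirm the import succeeded, you can list the image:

    docker images platform-automation-toolkit-image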

  3. Create the Operations Manager VM using the om vm-lifecycle command. This requires the Operations Manager image for your IaaS and the previously created opsman-config.yml to be present in your working directory.

    The following command runs the imported Docker image, mounts the current directory from your local filesystem as /workspace inside the container, and invokes om vm-lifecycle from that directory to create the Operations Manager VM.

    docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-toolkit-image \
      om vm-lifecycle create-vm \
        --config opsman-config.yml \
        --image-file ops-manager*.{yml,ova,raw} \
        --vars-file terraform-outputs.yml
    

    The om vm-lifecycle create-vm command writes a state.yml file that uniquely identifies the created Operations Manager VM. This file is used for long-term management of the VM, so we recommend storing it somewhere durable, as sketched below.
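
    For example, you might keep state.yml alongside your other environment configuration; the destination below is purely illustrative.

    # Illustrative only: copy state.yml into a version-controlled configuration repository
    cp state.yml /path/to/your/config-repo/state.yml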

  4. Create an env.yml file in your working directory to provide the parameters om needs to target the Operations Manager.

    connect-timeout: 30            # default 5
    request-timeout: 1800          # default 1800
    skip-ssl-validation: true      # default false
    
  5. Export the Operations Manager DNS entry created by Terraform as the target Operations Manager for om.

    export OM_TARGET="$(om interpolate -c terraform-outputs.yml --path /ops_manager_dns)"
    

    Alternatively, this value can be included in the env.yml created above as the target attribute, as shown below.
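
    For example, an env.yml that carries the target instead of relying on the OM_TARGET environment variable might look like this; the hostname is a placeholder for the ops_manager_dns value from terraform-outputs.yml.

    target: opsman.example.com     # placeholder: use your Operations Manager DNS entry
    connect-timeout: 30
    request-timeout: 1800
    skip-ssl-validation: true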

  6. Set up authentication on the Operations Manager.

    om --env env.yml configure-authentication \
       --username ${OM_USERNAME} \
       --password ${OM_PASSWORD} \
       --decryption-passphrase ${OM_DECRYPTION_PASSPHRASE}
    

    Where:

    • ${OM_USERNAME} is the desired username for accessing the Operations Manager.
    • ${OM_PASSWORD} is the desired password for accessing the Operations Manager.
    • ${OM_DECRYPTION_PASSPHRASE} is the desired decryption passphrase used for recovering the Operations Manager if the VM is restarted.

    This configures the Operations Manager with the credentials you set; these credentials are required for every subsequent om command.
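
    The command above reads the credentials from shell variables. If you export them, later om commands (such as configure-director below) can also pick them up implicitly. The values shown are placeholders you choose yourself.

    export OM_USERNAME=admin                               # placeholder
    export OM_PASSWORD=example-password                    # placeholder
    export OM_DECRYPTION_PASSPHRASE=example-passphrase     # placeholder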

  7. The Operations Manager can now be used to create a BOSH Director.

    Copy and paste the YAML below for your IaaS and save it as director-config.yml.

    For AWS:

    ---
    az-configuration:
    - name: ((availability_zones.0))
    - name: ((availability_zones.1))
    - name: ((availability_zones.2))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: ((availability_zones.0))
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((management_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.0))
          iaas_identifier: ((management_subnet_ids.0))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((management_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.1))
          iaas_identifier: ((management_subnet_ids.1))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((management_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((management_subnet_gateways.2))
          iaas_identifier: ((management_subnet_ids.2))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges.2))
      - name: services
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          cidr: ((services_subnet_cidrs.0))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.0))
          iaas_identifier: ((services_subnet_ids.0))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.0))
        - availability_zone_names:
          - ((availability_zones.1))
          cidr: ((services_subnet_cidrs.1))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.1))
          iaas_identifier: ((services_subnet_ids.1))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.1))
        - availability_zone_names:
          - ((availability_zones.2))
          cidr: ((services_subnet_cidrs.2))
          dns: 169.254.169.253
          gateway: ((services_subnet_gateways.2))
          iaas_identifier: ((services_subnet_ids.2))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges.2))
    properties-configuration:
      director_configuration:
        ntp_servers_string: 169.254.169.123
      iaas_configuration:
        access_key_id: ((ops_manager_iam_user_access_key))
        secret_access_key: ((ops_manager_iam_user_secret_key))
        iam_instance_profile: ((ops_manager_iam_instance_profile_name))
        vpc_id: ((vpc_id))
        security_group: ((platform_vms_security_group_id))
        key_pair_name: ((ops_manager_key_pair_name))
        ssh_private_key: ((ops_manager_ssh_private_key))
        region: ((region))
    resource-configuration:
      compilation:
        instance_type:
          id: automatic
    
    vmextensions-configuration:
    - name: concourse-lb
      cloud_properties:
        lb_target_groups:
          - ((environment_name))-concourse-tg-tcp
          - ((environment_name))-concourse-tg-ssh
          - ((environment_name))-concourse-tg-credhub
          - ((environment_name))-concourse-tg-uaa
        security_groups:
          - ((environment_name))-concourse-sg
          - ((platform_vms_security_group_id))
    - name: increased-disk
      cloud_properties:
        ephemeral_disk:
          type: gp2
          size: 512000
    

    For Azure:

    ---
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: 'zone-1'
      other_availability_zones:
        name: 'zone-2'
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((management_subnet_name))
          cidr: ((management_subnet_cidr))
          reserved_ip_ranges: ((management_subnet_gateway))-((management_subnet_range))
          dns: 168.63.129.16
          gateway: ((management_subnet_gateway))
      - name: services-1
        service_network: false
        subnets:
        - iaas_identifier: ((network_name))/((services_subnet_name))
          cidr: ((services_subnet_cidr))
          reserved_ip_ranges: ((services_subnet_gateway))-((services_subnet_range))
          dns: 168.63.129.16
          gateway: ((services_subnet_gateway))
    properties-configuration:
      iaas_configuration:
        subscription_id: ((subscription_id))
        tenant_id: ((tenant_id))
        client_id: ((client_id))
        client_secret: ((client_secret))
        resource_group_name: ((resource_group_name))
        bosh_storage_account_name: ((bosh_storage_account_name))
        default_security_group: ((platform_vms_security_group_name))
        ssh_public_key: ((ops_manager_ssh_public_key))
        ssh_private_key: ((ops_manager_ssh_private_key))
        cloud_storage_type: managed_disks
        storage_account_type: Standard_LRS
        environment: ((iaas_configuration_environment_azurecloud))
        availability_mode: availability_sets
      director_configuration:
        ntp_servers_string: 0.pool.ntp.org
        metrics_ip: ''
        resurrector_enabled: true
        post_deploy_enabled: false
        bosh_recreate_on_next_deploy: false
        retry_bosh_deploys: true
        hm_pager_duty_options:
          enabled: false
        hm_emailer_options:
          enabled: false
        blobstore_type: local
        database_type: internal
      security_configuration:
        trusted_certificates: ''
        generate_vm_passwords: true
    
    vmextensions-configuration:
    - name: concourse-lb
      cloud_properties:
        load_balancer: ((environment_name))-concourse-lb
    - name: increased-disk
      cloud_properties:
        ephemeral_disk:
          size: 512000
    

    For GCP:

    ---
    az-configuration:
    - name: ((availability_zones.0))
    - name: ((availability_zones.1))
    - name: ((availability_zones.2))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: ((availability_zones.0))
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: management
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((management_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((management_subnet_gateway))
          iaas_identifier: ((network_name))/((management_subnet_name))/((region))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges))
      - name: services
        subnets:
        - availability_zone_names:
          - ((availability_zones.0))
          - ((availability_zones.1))
          - ((availability_zones.2))
          cidr: ((services_subnet_cidr))
          dns: 169.254.169.254
          gateway: ((services_subnet_gateway))
          iaas_identifier: ((network_name))/((services_subnet_name))/((region))
          reserved_ip_ranges: ((services_subnet_reserved_ip_ranges))
    properties-configuration:
      iaas_configuration:
        project: ((project))
        auth_json: ((ops_manager_service_account_key))
        default_deployment_tag: ((platform_vms_tag))
      director_configuration:
        ntp_servers_string: 169.254.169.254
      security_configuration:
        trusted_certificates: ''
        generate_vm_passwords: true
    resource-configuration:
      compilation:
        instance_type:
          id: automatic
    
    vmextensions-configuration:
    - name: concourse-lb
      cloud_properties:
        target_pool: ((environment_name))-concourse
    - name: increased-disk
      cloud_properties:
        root_disk_size_gb: 500
        root_disk_type: pd-ssd
    

    For vSphere with NSX-T:

    ---
    az-configuration:
      - name: az1
        clusters:
          - cluster: ((vcenter_cluster))
            resource_pool: ((vcenter_resource_pool))
    properties-configuration:
      director_configuration:
        ntp_servers_string: ((ops_manager_ntp))
        retry_bosh_deploys: true
      iaas_configuration:
        vcenter_host: ((vcenter_host))
        vcenter_username: ((vcenter_username))
        vcenter_password: ((vcenter_password))
        datacenter: ((vcenter_datacenter))
        disk_type: thin
        ephemeral_datastores_string: ((vcenter_datastore))
        persistent_datastores_string: ((vcenter_datastore))
        nsx_networking_enabled: true
        nsx_mode: nsx-t
        nsx_address: ((nsxt_host))
        nsx_username: ((nsxt_username))
        nsx_password: ((nsxt_password))
        nsx_ca_certificate: ((nsxt_ca_cert))
        ssl_verification_enabled: ((disable_ssl_verification))
    network-assignment:
      network:
        name: management
      singleton_availability_zone:
        name: az1
    networks-configuration:
      icmp_checks_enabled: false
      networks:
        - name: management
          subnets:
            - availability_zone_names:
                - az1
              cidr: ((management_subnet_cidr))
              dns: ((ops_manager_dns_servers))
              gateway: ((management_subnet_gateway))
              reserved_ip_ranges: ((management_subnet_reserved_ip_ranges))
              iaas_identifier: ((management_subnet_name))
    
    vmextensions-configuration:
      - name: concourse-lb
        cloud_properties:
          nsxt:
            ns_groups:
            - ((environment_name))_concourse_ns_group
      - name: increased-disk
        cloud_properties:
          disk: 512000
    

    For vSphere without NSX-T:

    ---
    az-configuration:
    - name: default
      iaas_configuration_name: az1
      clusters:
      - cluster: ((vcenter_cluster))
        resource_pool: ((vcenter_resource_pool))
    iaas-configurations:
    - datacenter: ((vcenter_datacenter))
      disk_type: thin
      ephemeral_datastores_string: ((vcenter_datastore))
      nsx_networking_enabled: false
      persistent_datastores_string: ((vcenter_datastore))
      ssl_verification_enabled: ((disable_ssl_verification))
      vcenter_host: ((vcenter_host))
      vcenter_password: ((vcenter_password))
      vcenter_username: ((vcenter_username))
    network-assignment:
      network:
        name: az1
      singleton_availability_zone:
        name: az1
    networks-configuration:
      icmp_checks_enabled: false
      networks:
      - name: az1
        subnets:
        - iaas_identifier: ((management_subnet_name))
          cidr: ((management_subnet_cidr))
          dns: ((ops_manager_dns_servers))
          gateway: ((management_subnet_gateway))
          reserved_ip_ranges: ((management_subnet_reserved_ip_ranges))
          availability_zone_names:
          - az1
    properties-configuration:
      director_configuration:
        ntp_servers_string: ((ops_manager_ntp))
        retry_bosh_deploys: true
      security_configuration:
        generate_vm_passwords: true
        opsmanager_root_ca_trusted_certs: false
      syslog_configuration:
        enabled: false
    vmextensions-configuration: [
      # depending on how your routing is set up
      # you may need to create a vm-extension here
      # to route traffic for your Concourse
    ]
    

    Where:

    • The ((parameters)) map to outputs in terraform-outputs.yml, which is provided as a vars file for YAML interpolation in a subsequent step. As before, you can check that they all resolve using the command below.
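
    As with the Operations Manager configuration, an optional interpolation check confirms that all placeholders in director-config.yml resolve against terraform-outputs.yml before you configure the director.

    om interpolate \
      --config director-config.yml \
      --vars-file terraform-outputs.yml
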
  8. Create the BOSH director using the om CLI.

    The previously saved director-config.yml and terraform-outputs.yml files can be used directly with om to configure the director.

Note: The following om commands implicitly use the OM_USERNAME, OM_PASSWORD, and OM_DECRYPTION_PASSPHRASE environment variables. If you are in a fresh shell, re-export them with the values you chose when configuring authentication.

```
om --env env.yml configure-director \
   --config director-config.yml \
   --vars-file terraform-outputs.yml

om --env env.yml apply-changes \
   --skip-deploy-products
```
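
If you want to preview what apply-changes will deploy before running it, om provides a pending-changes command; this is a quick sanity check rather than a required step.

```
om --env env.yml pending-changes
```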

The end result will be a working BOSH director,
which can be targeted for the Concourse deployment.

Upload Releases and the Stemcell to the BOSH Director

  1. Write the private key for connecting to the BOSH director.

    om interpolate \
      -c terraform-outputs.yml \
      --path /ops_manager_ssh_private_key > /tmp/private_key
    
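
    It is good practice to restrict permissions on the private key file before using it:

    chmod 600 /tmp/private_key
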
  2. Export the environment variables required to target the BOSH director/BOSH CredHub and verify you are properly targeted.

    eval "$(om --env env.yml bosh-env --ssh-private-key=/tmp/private_key)"
    
    # Will return a non-error if properly targeted
    bosh curl /info
    
  3. Upload all of the BOSH releases previously downloaded. Note that you'll either need to copy them to your working directory before running these commands, or change directories to wherever you originally downloaded them.

    # upload releases
    bosh upload-release concourse-release*.tgz
    bosh upload-release bpm-release*.tgz
    bosh upload-release postgres-release*.tgz
    bosh upload-release uaa-release*.tgz
    bosh upload-release credhub-release*.tgz
    bosh upload-release backup-and-restore-sdk-release*.tgz
    
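
    You can confirm the uploads with:

    bosh releases
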
  4. Upload the previously downloaded stemcell. (If you changed to your downloads directory, remember to change back after uploading this file.)

    bosh upload-stemcell *stemcell*.tgz
    
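
    Similarly, bosh stemcells lists the uploaded stemcell:

    bosh stemcells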

Next Steps

When you have deployed the BOSH Director and uploaded the releases and stemcell, see Deploying Concourse with BOSH.
