This topic will teach you how to add a product to an existing pipeline. This includes downloading the product from the Broadcom Support portal, extracting the configuration, and installing the configured product.
This guide assumes that you are working from one of the pipelines created in Installing Tanzu Operations Manager or Upgrading an existing Tanzu Operations Manager, but you don't have to have exactly that pipeline. If your pipeline is different, though, you may run into trouble with some of the assumptions made here:
- Resources named config and platform-automation
- A pivnet_token secret, referenced as ((pivnet_token)) in the examples
- A job named apply-director-changes
- An env.yml based on the instructions in Configuring Env; this file exists in the config resource
- A fly target named control-plane, with an existing pipeline called foundation
- A working directory that contains the foundation pipeline's pipeline.yml
You should be able to use the pipeline YAML in this document with any pipeline, as long as you make sure the names in the assumptions list match what's in your pipeline, either by changing the example YAML or your pipeline.
The instructions and examples that follow add the VMware Tanzu Application Service for VMs (TAS for VMs) product.
Before setting the pipeline, create a config file for download-product
to download Tanzu Application Service from the Broadcom Support portal.
Create a download-tas.yml
file for the IaaS you are using.
AWS
---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: aws
Azure
---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: azure
GCP
---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: google
OpenStack
---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: openstack
vSphere
---
pivnet-api-token: ((pivnet_token))
pivnet-file-glob: "*srt*.pivotal" # this guide installs Small Footprint TAS
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.9\..*$
stemcell-iaas: vsphere
Add and commit this file to the same directory used in the previous guides. This file must be accessible from the config resource.
git add download-tas.yml
git commit -m "Add download-tas file for foundation"
git push
Now that you have a config file, you can add a new download-upload-and-stage-tas
job in your pipeline.yml
file.
jobs: # Do not duplicate this if it already exists in your pipeline.yml,
# just add the following lines to the jobs section
- name: download-upload-and-stage-tas
serial: true
plan:
- aggregate:
- get: platform-automation-image
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: download-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/download-product.yml
input_mapping:
config: config
params:
CONFIG_FILE: download-tas.yml
output_mapping:
downloaded-product: tas-product
downloaded-stemcell: tas-stemcell
Commit your changes.
git add pipeline.yml
git commit -m 'download TAS and its stemcell'
Now, set the pipeline:
fly -t control-plane set-pipeline -p foundation -c pipeline.yml
If the pipeline sets without errors, run a git push
of the config.
If fly set-pipeline returns an error, fix any and all errors until the pipeline can be set. When the pipeline can be set properly, run:
git add pipeline.yml
git commit --amend --no-edit
git push
Testing your pipeline: We generally want to try things out right away to see if they're working right. However, in this case, if you have a very slow internet connection and/or multiple Concourse workers, you might want to hold off until we've got the job doing more, so that if it works, you don't have to wait for the download again.
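If you do decide to test it now, you can trigger the new job from the command line and watch it run. This assumes the control-plane target and foundation pipeline named in the assumptions above:
fly -t control-plane trigger-job --job foundation/download-upload-and-stage-tas --watch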
Now that you have a product downloaded and (potentially) cached on a Concourse worker, upload and stage the new product to Tanzu Operations Manager.
jobs:
- name: download-upload-and-stage-tas
serial: true
plan:
- aggregate:
- get: platform-automation-image
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: download-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/download-product.yml
input_mapping:
config: config
params:
CONFIG_FILE: download-tas.yml
output_mapping:
downloaded-product: tas-product
downloaded-stemcell: tas-stemcell
- task: upload-tas-stemcell
image: platform-automation-image
file: platform-automation-tasks/tasks/upload-stemcell.yml
input_mapping:
env: config
stemcell: tas-stemcell
params:
ENV_FILE: env.yml
- task: upload-and-stage-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/upload-and-stage-product.yml
input_mapping:
product: tas-product
env: config
Re-set the pipeline.
fly -t control-plane set-pipeline -p foundation -c pipeline.yml
When this finishes successfully, make a commit and push the changes.
git add pipeline.yml
git commit -m 'upload tas and stemcell to Ops Manager'
git push
Before automating the configuration and installation of the product, add a config file. The simplest way to do this is to choose your config options in the Tanzu Operations Manager UI, and then pull its resulting configuration.
Advanced Tile Config Option: For an alternative that generates the configuration from the product file, using ops files to select options, see Config template.
Configure the product manually, following the installation instructions in the VMware Tanzu Application Service documentation.
After the product is fully configured, apply the changes (Apply Changes) in the Tanzu Operations Manager UI, and then continue this guide.
If you do not click Apply Changes, Tanzu Operations Manager cannot generate credentials. You can still go through this process without applying changes first, but you will not be able to use om staged-config with --include-credentials, and you may have an incomplete configuration at the end of this process.
om has a command called staged-config, which extracts the staged product configuration from Tanzu Operations Manager. om requires an env.yml, which is already available because it was used in the upload-and-stage task.
Most products contain a handful of top-level keys, such as product-properties, network-properties, and resource-config.
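For reference, a staged-config file for this product has roughly the following shape. This is a minimal sketch based on the base.yml example later in this topic, not a complete configuration:
product-name: cf
product-properties:
  # tile property values, for example .cloud_controller.apps_domain
network-properties:
  # network and availability zone assignments
resource-config:
  # instance counts and VM types per job, for example diego_cell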
You can run the om command directly using Docker to pull the staged-config for the Tanzu Application Service product. For more information, see Running commands locally.
To pull the configuration from Tanzu Operations Manager:
Download the image from the Broadcom Support portal.
Import the image.
export ENV_FILE=env.yml
docker import ${PLATFORM_AUTOMATION_IMAGE_TGZ} platform-automation-image
Run om staged-products
to find the name of the product in Tanzu Operations Manager.
docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
om --env ${ENV_FILE} staged-products
The result should be a table that looks like the following:
+---------------------------+-----------------+
| NAME | VERSION |
+---------------------------+-----------------+
| cf | <VERSION> |
| p-bosh | <VERSION> |
+---------------------------+-----------------+
p-bosh is the name of the director. Because cf is the only other product on this Tanzu Operations Manager, you can safely assume that cf is the product name for Tanzu Application Service.
Using the product name cf
, extract the current configuration from Tanzu Operations Manager.
docker run -it --rm -v $PWD:/workspace -w /workspace platform-automation-image \
om --env ${ENV_FILE} staged-config --include-credentials --product-name cf > tas-config.yml
You now have a configuration file for the tile that is almost ready to back up. There are a few more steps before you can commit it.
Look through your tas-config.yml
for any sensitive values. These values should be ((parameterized))
and saved off in a secrets store (in this example, we use CredHub).
Log in to CredHub, if you are not already logged in. Note the space at the beginning of the line; it keeps your secrets out of your shell history.
# note the starting space
credhub login --server example.com \
--client-name your-client-id \
--client-secret your-client-secret
Depending on your credential type, you may need to pass client-name and client-secret, as in the example above, or username and password. This guide uses the client approach because that is the credential type automation should usually work with. Nominally, a username represents a person and a client represents a system; this isn't always exactly how things are in practice. Use whichever type of credential you have. Note that if you exclude either set of flags, CredHub interactively prompts for username and password, and hides the characters of your password as you type them. This method of entry can be better in some situations.
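If you have a username-type credential instead, the login might look like the following sketch, with placeholder values:
# note the starting space
credhub login --server example.com \
  --username your-username \
  --password your-password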
Here is an example list of some sensitive values from tas-config.yml. Note that this list is intentionally incomplete:
product-properties:
.properties.cloud_controller.encrypt_key:
value:
secret: my-super-secure-secret
.properties.networking_poe_ssl_certs:
value:
- certificate:
cert_pem: |-
-----BEGIN CERTIFICATE-----
my-cert
-----END CERTIFICATE-----
private_key_pem: |-
-----BEGIN RSA PRIVATE KEY-----
my-private-key
-----END RSA PRIVATE KEY-----
name: certificate
Start with the Cloud Controller encrypt key because this is a value that you might want to rotate at some point. Store it as a password
type in CredHub.
# note the starting space
credhub set \
--name /concourse/your-team-name/cloud_controller_encrypt_key \
--type password \
--password my-super-secure-secret
To validate that you have set this correctly, run:
# no need for an extra space
credhub get --name /concourse/your-team-name/cloud_controller_encrypt_key
Expect an output like this:
id: <guid>
name: /concourse/your-team-name/cloud_controller_encrypt_key
type: password
value: my-super-secure-secret
version_created_at: "<timestamp>"
In preparation for storing the Networking POE certs as an rsa type in CredHub, save the certificate and private key as plain-text files. In this example, these files are named poe-cert.txt and poe-private-key.txt. There should be no formatting or indentation in these files, only new lines.
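For example, poe-cert.txt contains only the PEM block (shown here with this guide's placeholder values); create poe-private-key.txt the same way with the private key:
cat > poe-cert.txt <<'EOF'
-----BEGIN CERTIFICATE-----
my-cert
-----END CERTIFICATE-----
EOF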
# note the starting space
credhub set \
--name /concourse/your-team-name/networking_poe_ssl_certs \
--type rsa \
--public poe-cert.txt \
--private poe-private-key.txt
Validate that these are set correctly.
# no need for an extra space
credhub get --name /concourse/your-team-name/networking_poe_ssl_certs
The output should look like this:
id: <guid>
name: /concourse/your-team-name/networking_poe_ssl_certs
type: rsa
value:
private_key: |
-----BEGIN RSA PRIVATE KEY-----
my-private-key
-----END RSA PRIVATE KEY-----
public_key: |
-----BEGIN CERTIFICATE-----
my-cert
-----END CERTIFICATE-----
version_created_at: "<timestamp>"
Remove credentials from disk: Once you have validated that the certificates are set correctly in CredHub, remember to delete poe-cert.txt
and poe-private-key.txt
from your working directory. This will prevent a potential security leak or an accidental commit of those credentials.
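For example, from your working directory:
rm poe-cert.txt poe-private-key.txt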
Repeat this process for all sensitive values in your tas-config.yml
.
After this is complete, you can remove those secrets from tas-config.yml
and replace them with ((parameterized-values))
. The parameterized value name should match the name in CredHub. For this example, it looks like this:
product-properties:
.properties.cloud_controller.encrypt_key:
value:
secret: ((cloud_controller_encrypt_key))
.properties.networking_poe_ssl_certs:
value:
- certificate:
cert_pem: ((networking_poe_ssl_certs.public_key))
private_key_pem: ((networking_poe_ssl_certs.private_key))
name: certificate
When tas-config.yml is parameterized to your liking, commit the config file.
git add tas-config.yml
git commit -m "Add tas-config file for foundation"
git push
Now you can configure the product and apply changes.
First, update the pipeline to have a configure-product step.
jobs:
- name: download-upload-and-stage-tas
serial: true
plan:
- aggregate:
- get: platform-automation-image
resource: platform-automation
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
resource: platform-automation
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: download-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/download-product.yml
input_mapping:
config: config
params:
CONFIG_FILE: download-tas.yml
output_mapping:
downloaded-product: tas-product
downloaded-stemcell: tas-stemcell
- task: upload-tas-stemcell
image: platform-automation-image
file: platform-automation-tasks/tasks/upload-stemcell.yml
input_mapping:
env: config
stemcell: tas-stemcell
params:
ENV_FILE: env/env.yml
- task: upload-and-stage-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/stage-product.yml
input_mapping:
product: tas-product
env: config
- name: configure-tas
serial: true
plan:
- aggregate:
- get: platform-automation-image
passed: [download-upload-and-stage-tas]
trigger: true
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
passed: [download-upload-and-stage-tas]
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: configure-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/configure-product.yml
input_mapping:
config: config
env: config
params:
CONFIG_FILE: tas-config.yml
This new job configures the TAS product with the config file you created earlier.
Add an apply-changes job so that Tanzu Operations Manager applies these changes.
- name: configure-tas
serial: true
plan:
- aggregate:
- get: platform-automation-image
trigger: true
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
passed: [download-upload-and-stage-tas]
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: configure-tas
image: platform-automation-image
file: platform-automation-tasks/tasks/configure-product.yml
input_mapping:
config: config
env: config
params:
CONFIG_FILE: tas-config.yml
- name: apply-changes
serial: true
plan:
- aggregate:
- get: platform-automation-image
params:
globs: ["*image*.tgz"]
unpack: true
- get: platform-automation-tasks
params:
globs: ["*tasks*.zip"]
unpack: true
- get: config
passed: [configure-tas]
- task: prepare-tasks-with-secrets
image: platform-automation-image
file: platform-automation-tasks/tasks/prepare-tasks-with-secrets.yml
input_mapping:
tasks: platform-automation-tasks
output_mapping:
tasks: platform-automation-tasks
params:
CONFIG_PATHS: config
- task: apply-changes
image: platform-automation-image
file: platform-automation-tasks/tasks/apply-changes.yml
input_mapping:
env: config
Adding multiple products: When adding multiple products, you can add the configure jobs as passed constraints to the apply-changes job so that they all are applied at once. Tanzu Operations Manager will handle any inter-product dependency ordering. This will speed up your apply changes when compared with running apply changes for each product separately.
Example: passed: [configure-tas, configure-tas-windows, configure-healthwatch]
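For example, in the apply-changes job, the get of config would carry all of the configure jobs as passed constraints. The extra job names here are illustrative and come from the example above:
- get: config
  passed: [configure-tas, configure-tas-windows, configure-healthwatch]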
Set the pipeline one final time, run the job, and confirm that it passes.
fly -t control-plane set-pipeline -p foundation -c pipeline.yml
Commit the final changes to your repository.
git add pipeline.yml
git commit -m "configure-tas and apply-changes"
git push
You have now successfully added a product to your automation pipeline.
An alternative to the staged-config workflow outlined in these examples is config-template
.
config-template
is an om
command that creates a base config file with optional ops files from a given tile or pivnet slug.
This section assumes that you are adding TAS for VMs, as in the procedure above.
# note the leading space
export PIVNET_API_TOKEN='your-vmware-tanzu-network-api-token'
docker run -it -v $HOME/configs:/configs platform-automation-image \
om config-template \
--output-directory /configs/ \
--pivnet-api-token "${PIVNET_API_TOKEN}" \
--pivnet-product-slug elastic-runtime \
--product-version '2.5.0' \
--product-file-glob 'cf*.pivotal' # Only necessary if the product has multiple .pivotal files
This series of commands creates or updates a directory at $HOME/configs/cf/2.5.0/. cd into that directory to get started creating your config.
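For example (the path comes from the command above):
cd "$HOME/configs/cf/2.5.0"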
In the directory, you'll see a product.yml file. This is the template for the product configuration you're about to build. Open it in an editor of your choice and get familiar with its contents. The values are variables intended to be interpolated from other sources, designated with the (( )) syntax.
You can find the value for any property with a default in the product-default-vars.yml
file. This file serves as a good example of a variable source.
Create a vars file of your own for variables without default values. For the base template, you can get a list of required variables by running:
docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
--config product.yml \
-l product-default-vars.yml \
-l resource-vars.yml \
-l errand-vars.yml
Put these vars in a file and give them the appropriate values. After you've included all the variables, the output will be the finished template. The rest of this guide refers to these vars as required-vars.yml
.
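A vars file is a flat YAML map of variable names to values. As a minimal sketch, with an illustrative value (the real variable names come from the om interpolate output above):
# required-vars.yml (illustrative)
cloud_controller_apps_domain: apps.example.com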
There may be situations that call for splitting your vars across multiple files. This can be useful when some vars need to be interpolated at the time you apply the configuration, rather than when you create the final template; keeping those vars in a separate file makes that distinction explicit.
When creating your final template using om interpolate
, you can use the --skip-missing
flag to leave such vars to be rendered later.
If you're having trouble figuring out what the values should be, here are some approaches you can use:
- Look in the template where the variable appears for additional context about its value.
- Look at the tile's online documentation.
- Upload the tile to a Tanzu Operations Manager and visit the tile in the Tanzu Operations Manager UI to see if that provides any hints.
- If you are still struggling, inspect the HTML of the Tanzu Operations Manager web page to help you map the value names to the associated UI elements.
When using the Tanzu Operations Manager docs and UI, be aware that the field names in the UI do not necessarily map directly to property names.
The above process will get you a default installation, with no optional features or variables, that is entirely deployed in a single Availability Zone (AZ).
To provide non-required variables, use multiple AZs, or make non-default selections for some options, use some of the ops files in one of the following four directories:
| Directory | Contents |
| --- | --- |
| features | Allows enabling selectors for a product; for example, enabling or disabling an S3 bucket |
| network | Contains options for enabling 2-3 availability zones for network configuration |
| optional | Contains optional properties without defaults; for optional values that can be provided more than once, there is an ops file for each param count |
| resource | Contains configuration that can be applied to resource configuration; for example, BOSH VM extensions |
For more information on BOSH VM Extensions, see Creating a director config file.
To use an ops file, add -o with the path of the ops file to your interpolate command.
So, to enable TCP routing in Tanzu Application Service, add -o features/tcp_routing-enable.yml
. For the rest of this guide, the vars for this feature are referred to as feature-vars.yml
. If you run your complete command, you should again get a list of any newly-required variables.
docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
--config product.yml \
-l product-default-vars.yml \
-l resource-vars.yml \
-l required-vars.yml \
-o features/tcp_routing-enable.yml \
-l feature-vars.yml \
-l errand-vars.yml
After selecting your ops files and creating your vars files, decide which vars you want in the template and which you want to have interpolated later.
Create a final template and write it to a file, using only the vars you want in the template, and using --skip-missing
to allow the rest to remain as variables.
docker run -it -v $HOME/configs:/configs platform-automation-image \
om interpolate \
--config product.yml \
-l product-default-vars.yml \
-l resource-vars.yml \
-l required-vars.yml \
-o features/tcp_routing-enable.yml \
-l feature-vars.yml \
-l errand-vars.yml \
--skip-missing \
> pas-config-template.yml
You can check the resulting configuration into a git repo. For vars that do not include credentials, you can check those vars files in, too. Handle vars that are secret more carefully. See Using a secrets store to store credentials.
You can then delete the config template directory.
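As a sketch of that workflow, assuming you copy the finished template and any non-secret vars files into your existing config repository (the repository path is a placeholder):
# copy the finished template and non-secret vars into your config repo
cp pas-config-template.yml required-vars.yml /path/to/your/config-repo/
cd /path/to/your/config-repo
git add pas-config-template.yml required-vars.yml
git commit -m "Add TAS config template and vars"
git push
# then remove the generated template directory
rm -rf "$HOME/configs/cf/2.5.0"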
There are two recommended ways to support multiple-foundation workflows. This section explains how to support multiple foundations using ops files.
Starting with an incomplete Tanzu Application Service config from vSphere as an example:
# base.yml
# An incomplete YAML response from om staged-config
product-name: cf
product-properties:
.cloud_controller.apps_domain:
value: ((cloud_controller_apps_domain))
.cloud_controller.encrypt_key:
value:
secret: ((cloud_controller_encrypt_key.secret))
.properties.security_acknowledgement:
value: X
.properties.cloud_controller_default_stack:
value: default
network-properties:
network:
name: DEPLOYMENT
other_availability_zones:
- name: AZ01
singleton_availability_zone:
name: AZ01
resource-config:
diego_cell:
instances: 5
instance_type:
id: automatic
uaa:
instances: 1
instance_type:
id: automatic
For a single foundation deployment, leaving values such as ".cloud_controller.apps_domain"
as they are works fine. For multiple foundations, this value will be different for each deployed foundation. Other values, such as .cloud_controller.encrypt_key
, have a secret that already has a placeholder from om
. If different foundations have different load requirements, the values in resource-config
can also be edited using ops files.
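For example, a hypothetical ops file that scales Diego cells for a heavier-load foundation could look like this; the file name and value are illustrative, and the path matches the resource-config section of base.yml above:
# scale-diego-cells-ops-file.yml (illustrative)
- type: replace
  path: /resource-config/diego_cell/instances
  value: 10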
Using the earlier example, fill in the existing placeholder for cloud_controller.apps_domain
in the first foundation.
# replace-domain-ops-file.yml
- type: replace
path: /product-properties/.cloud_controller.apps_domain/value?
value: unique.foundation.one.domain
To test that the ops file works in your base.yml
, do this locally using bosh int
:
bosh int base.yml -o replace-domain-ops-file.yml
This command returns base.yml with the replaced (interpolated) values:
# interpolated-base.yml
network-properties:
network:
name: DEPLOYMENT
other_availability_zones:
- name: AZ01
singleton_availability_zone:
name: AZ01
product-name: cf
product-properties:
.cloud_controller.apps_domain: unique.foundation.one.domain
.cloud_controller.encrypt_key:
value:
secret: ((cloud_controller_encrypt_key.secret))
.properties.cloud_controller_default_stack:
value: default
.properties.security_acknowledgement:
value: X
resource-config:
diego_cell:
instance_type:
id: automatic
instances: 5
uaa:
instance_type:
id: automatic
instances: 1
Anything that needs to be different per deployment can be replaced using ops files as long as the path:
is correct.
Upgrading products to new patch versions:
Replicating configuration settings from one product to the same product on a different foundation: