This topic tells you how to build an on-demand service tile using the Tile Generator. For an example tile, see the example-kafka-on-demand-tile in GitHub.
For a list of available manifest properties for the broker, see the broker job spec in GitHub.
To build an on-demand tile, you need the following releases:
When using the ODB in a tile with Tanzu Operations Manager v2.0 and earlier, you need at least two private networks:
The network for service instances should be flagged as a Service Network in Tanzu Operations Manager.
Note: For Tanzu Operations Manager v2.1 and later, you do not need separate networks for the on-demand broker and service instances. However, VMware recommends that you have at least two networks, as described above.
There are several methods you can use to build a tile. This topic describes how to build a tile using the Tile Generator.
To use the Tile Generator to build a tile for an on-demand service:
Generate a tile.yml file by doing steps 1 through 4 of Using the Tile Generator.
Add accessors, on-demand broker lifecycle errands, and optional features to the tile.yml file generated in step 1. This provides configuration for the ODB and additional configuration options for operators to select in Tanzu Operations Manager.
For more information about what to add to the tile.yml file, see the sections below:
Build your tile by running the following command:
tile build
The ODB requires tiles to be configured with certain information. You must add accessors to the tile.yml file to provide values that operators cannot configure in Tanzu Operations Manager.
Add the following accessors to your tile.yml file:
Note: The accessors in this section are mandatory. For other accessors, see Tanzu Operations Manager Provided Snippets.
Tanzu Operations Manager uses these accessors to get values relating to the BOSH Director installation. For the on-demand broker to interact with BOSH Director, on-demand service tiles must be configured with credentials for managing BOSH deployments.
The following table lists the accessors you must add:
Accessor | Description |
---|---|
$director.hostname | The director’s hostname or IP address |
$director.ca_public_key | The director’s root CA certificate. Related: Configure SSL Certificates. |
For example:
bosh:
url: https://(( $director.hostname )):25555
root_ca_cert: (( $director.ca_public_key ))
To see this example in context, see the example-kafka-on-demand-tile in GitHub.
Tanzu Operations Manager uses these accessors to get values that are assigned to the tile after installation. To enable $self accessors, set service_broker: true at the top level of your tile.yml file.
Note: Setting service_broker: true causes the BOSH Director to redeploy when installing or uninstalling the tile.
The following table lists the accessors you must add:
Accessor | Description |
---|---|
$self.uaa_client_name | UAA client name that can authenticate with the BOSH Director |
$self.uaa_client_secret | UAA client secret that can authenticate with the BOSH Director |
$self.stemcell_version | The stemcell that the service deployment uses |
$self.service_network | Service network configured for the on-demand instances |
You must create the service network manually. For example, on AWS, create a subnet and then add it to the BOSH Director: in the BOSH Director tile, go to Create Networks > Add Network and fill in the subnet and VPC details.
For example:
bosh:
authentication:
uaa:
url: https://(( $director.hostname )):8443
client_id: (( $self.uaa_client_name ))
client_secret: (( $self.uaa_client_secret ))
To see this example in context, see the example-kafka-on-demand-tile in GitHub.
Tanzu Operations Manager uses these accessors to get values from the VMware Tanzu Application Service for VMs (TAS for VMs) tile. If you want to use TAS for VMs, add these accessors to your tile.yml file.
The following table lists the accessors you must add to use TAS for VMs:
Accessor | Description |
---|---|
..cf.ha_proxy.skip_cert_verify.value | Flag to skip SSL certificate verification for connections to the CF API |
..cf.cloud_controller.apps_domain.value | The application domain configured in the CF installation |
..cf.cloud_controller.system_domain.value | The system domain configured in the CF installation |
..cf.uaa.system_services_credentials.identity | Username of a CF user in the cloud_controller.admin group, to be used by services |
..cf.uaa.system_services_credentials.password | Password of a CF user in the cloud_controller.admin group, to be used by services |
For example:
disable_ssl_cert_verification: (( ..cf.ha_proxy.skip_cert_verify.value ))
cf:
url: https://api.(( ..cf.cloud_controller.system_domain.value ))
authentication:
url: https://uaa.(( ..cf.cloud_controller.system_domain.value ))
user_credentials:
username: (( ..cf.uaa.system_services_credentials.identity ))
password: (( ..cf.uaa.system_services_credentials.password ))
To see this example in context, see the example-kafka-on-demand-tile in GitHub.
The example-kafka-on-demand-tile example in GitHub shows how the errands in the on-demand broker release can be used.
VMware recommends that you add the following errands to your tile, specified in this order:
Post-deploy:
register-broker
upgrade-all-service-instances
Pre-delete:
delete-all-service-instances-and-deregister-broker
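As a sketch, the errand ordering above might appear as follows in the generated product metadata. This assumes the standard Tanzu Operations Manager post_deploy_errands and pre_delete_errands product-template keys; the errand names come from the on-demand-broker release:

```yaml
# Sketch: errand ordering in generated product metadata
# (assumes standard Ops Manager product-template keys)
post_deploy_errands:
- name: register-broker
- name: upgrade-all-service-instances
pre_delete_errands:
- name: delete-all-service-instances-and-deregister-broker
```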
Note: The upgrade-all-service-instances errand can be configured with the number of simultaneous upgrades and the number of canary instances. For more information about these parameters, see Upgrade All Service Instances.
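As an illustration of the parameters mentioned in the note, a manifest snippet for the errand might look like the following. This is a hedged sketch: the property names (canaries, max_in_flight) are assumed from the on-demand-service-broker release's errand job, and the values are examples only:

```yaml
# Illustrative errand configuration (values are examples only)
instance_groups:
- name: upgrade-all-service-instances
  lifecycle: errand
  instances: 1
  jobs:
  - name: upgrade-all-service-instances
    release: on-demand-service-broker
    properties:
      canaries: 1        # number of canary instances upgraded first
      max_in_flight: 3   # number of simultaneous upgrades
```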
For more information about these errands and how to add them, see Broker and Service Management.
The example-kafka-on-demand-tile in GitHub shows how to create a tab with fields to configure the parameters for this errand. The example tile has constraints to ensure that the number of simultaneous upgrades is greater than one and the number of canaries is greater than zero.
Tanzu Operations Manager provides a VM extension called public_ip in the BOSH Director’s cloud config. Use this feature to give Tanzu Operations Manager operators the option to assign a public IP address to instance groups. This IP is only used for outgoing traffic to the internet from VMs with the public_ip extension. All internal traffic and incoming connections use the private IP.
To allow operators to assign public IP addresses to on-demand service instance groups, update your tile.yml file as follows:
Add the following to the form_types section:
For example:
form_types:
- name: example_form
property_inputs:
- reference: .broker.example_vm_extensions
label: VM options
description: List of VM options for Service Instances
Add the following to the job_types section:
For example:
job_types:
- name: broker
templates:
- name: broker
release: on-demand-service-broker
manifest: |
service_catalog:
plans:
- name: example-plan
instance_groups:
- name: example-instance-group
vm_extensions: (( .broker.example_vm_extensions.value )) # add this line
Add the following to the property_blueprints section under the broker job:
For example:
property_blueprints: # add this section
- name: example_vm_extensions
type: multi_select_options
configurable: true
optional: true
options:
- name: "public_ip"
label: "Internet Connected VMs (on supported IaaS providers)"
Tanzu Operations Manager provides a feature called Floating Stemcells that allows Tanzu Operations Manager to quickly propagate a patched stemcell to all VMs in the deployment that have the same compatible stemcell. Both the broker deployment and the service instances deployed by the On-Demand Broker can make use of this feature. Enabling this feature can help ensure that all of your service instances are patched to the latest stemcell.
For the service instances to be installed with the latest stemcell automatically, ensure that the upgrade-all-service-instances errand is selected.
To enable floating stemcells for your tile, update your tile.yml file as follows:
Implement floating stemcells.
For example:
job_types:
templates:
- name: broker
manifest: |
service_deployment:
releases:
- name: release-name
version: 1.0.0
jobs: [job_server]
stemcells:
- os: ubuntu-trusty
version: (( $self.stemcell_version )) # Add this line
Configure the stemcell_criteria.
For example:
---
name: example-on-demand-service
product_version: 1.0.0
stemcell_criteria:
os: ubuntu-trusty
version: '3312'
enable_patch_security_updates: true # Add this line
You can give Tanzu Operations Manager operators the option to enable secure binding. If secure binding is enabled, binding credentials are stored securely in runtime CredHub. When users create bindings or service keys, ODB passes a secure reference to the service credentials through the network instead of plain text.
Important: To use the secure binding credentials feature, you must use Tanzu Operations Manager v2.0 or later.
To include the option to enable secure binding, update your tile.yml file as follows:
Add secure_binding_credentials to the top-level properties block in the on-demand broker manifest.
For example:
secure_binding_credentials:
enabled: true
authentication:
uaa:
client_id: CREDHUB_CLIENT_ID # client ID used by broker when communicating with CredHub
client_secret: CREDHUB_CLIENT_SECRET # client secret used by broker when communicating with CredHub
ca_cert: UAA_CA_CERT
To let users activate and deactivate this feature in the Tanzu Operations Manager UI, make the following changes to your tile’s metadata file:
Add a property_blueprints entry that reads the setting in the form field and exposes the appropriate manifest snippet for CredHub and secure binding. For an example property_blueprints section, see the example-kafka-on-demand-tile in GitHub.
Change the broker job to consume the generated manifest snippet from the property_blueprints section. For an example broker job, see the example-kafka-on-demand-tile in GitHub.
Maintenance information is used to uniquely describe the deployed version of a service instance. The platform uses it to determine when an upgrade is available for that instance, allowing app developers to trigger the upgrade.
In the broker manifest, it is defined in the service_catalog, at global and plan levels, as in the following example:
# broker properties
service_catalog:
maintenance_info: # applies to all plans
public:
stemcell: 1818
docker: v2.4.6
private:
SECRET: PASSWORD
version: 1.4.7-rc.1 # must be semver
description: "OS image update.\nExpect downtime." # optional
plans:
- name: STABLE
maintenance_info: # plan-specific
public:
size: 3
docker: v3.0.0 # overwrites global configuration
version: 7.0.0
- name: EDGE
maintenance_info: {}
When the platform requests the Service Catalog, the broker returns the maintenance_info properties per plan, where:
maintenance_info.public is returned as configured
maintenance_info.version is returned as configured
maintenance_info.description is returned as configured
maintenance_info.private values are aggregated and hashed into a single string
For the previous example manifest, the catalog response is:
{
"services": [{
"name": "MY-SERVICE",
"plans": [{
"name": "STABLE",
"maintenance_info": {
"public": {
"stemcell": "1818",
"docker": "v3.0.0",
"size": 3
},
"private": "HASHED-VALUE",
"version": "7.0.0",
"description": "OS image update.\nExpect downtime."
}
}, {
"name": "EDGE",
"maintenance_info": {
"public": {
"stemcell": "1818",
"docker": "v2.4.6"
},
"private": "HASHED-VALUE",
"version": "1.4.7-rc.1"
}
}]
}]
}
VMware recommends using YAML anchors and references to avoid repeating maintenance information values within the manifest. For instance, the stemcell version can be anchored with the &stemcellVersion annotation and then referenced in the maintenance information with the *stemcellVersion tag.
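The anchor technique described above can be sketched as follows. The surrounding keys echo the earlier stemcell and maintenance_info examples; the anchor name and values are illustrative:

```yaml
# Define the stemcell version once with an anchor...
stemcells:
- os: ubuntu-trusty
  version: &stemcellVersion "1818"
# ...then reference it wherever maintenance information needs it
service_catalog:
  maintenance_info:
    public:
      stemcell: *stemcellVersion
```

With this pattern, bumping the stemcell version in one place updates the maintenance information as well, so the two cannot drift apart.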
Note: The Open Service Broker API only supports maintenance_info.version and maintenance_info.description. VMware discourages the use of public and private if Cloud Foundry is the platform communicating with the broker.