This topic gives Tanzu Operations Manager operators and BOSH operators information about operating the on-demand broker (ODB) provided by the On-Demand Services SDK.
Operators are responsible for:
Requesting appropriate networking rules for on-demand service tiles. See Set Up Networking.
Configuring the BOSH Director. See Configure Your BOSH Director.
Uploading the required releases for the broker deployment and service instance deployments. See Upload Required Releases.
Writing a broker manifest. See Write a Broker Manifest.
Managing brokers and service plans. See Broker and Service Management.
Note VMware recommends that you provide documentation when you make changes to the manifest to inform other operators about the new configurations.
Before deploying a service tile that uses the on-demand service broker (ODB), you must create networking rules to enable components to communicate with ODB. For instructions for creating networking rules, see the documentation for your IaaS.
The following table lists key components and their responsibilities in the on-demand architecture.
Key Components | Component Responsibilities
---|---
BOSH Director | Creates and updates service instances as instructed by ODB.
BOSH Agent | Runs on every VM that BOSH deploys. The agent listens for instructions from the BOSH Director and executes them, receiving job specifications that it uses to assign a role or job to the VM.
BOSH UAA | Issues OAuth2 tokens for clients to use when they act on behalf of BOSH users.
VMware Tanzu Application Service for VMs (TAS for VMs) | Contains the apps that consume services.
ODB | Instructs BOSH to create and update services. Connects to services to create bindings.
Deployed service instance | Runs the given service. For example, a deployed On-Demand Services SDK service instance runs the On-Demand Services SDK service.
Regardless of the specific network layout, you must ensure network rules are set up so that connections are open as described in the table below.
Source Component | Destination Component | Default TCP Port | Notes
---|---|---|---
ODB | BOSH Director<br>BOSH UAA | 25555<br>8443 | The default ports are not configurable.
ODB | Deployed service instances | Specific to the service (such as Redis for VMware Tanzu Application Service). This can be one or more ports. | This connection is for administrative tasks. Avoid opening general-use, app-specific ports for this connection.
ODB | TAS for VMs | 8443 | The default port is not configurable.
Errand VMs | TAS for VMs<br>ODB<br>Deployed service instances | 8443<br>8080<br>Specific to the service. This can be one or more ports. | The default ports are not configurable.
BOSH Agent | BOSH Director | 4222 | The BOSH Agent runs on every VM in the system, including the BOSH Director VM. The BOSH Agent initiates the connection with the BOSH Director. The default port is not configurable. The communication between these components is two-way.
Deployed apps on TAS for VMs | Deployed service instances | Specific to the service. This can be one or more ports. | This connection is for general-use, app-specific tasks. Avoid opening administrative ports for this connection.
TAS for VMs | ODB | 8080 | This port can be different for individual services. The operator can also configure this port if the tile developer allows.
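For example, for the "Deployed apps on TAS for VMs" row above, one common way to open app-to-service traffic is a Cloud Foundry application security group (ASG). The following sketch is illustrative only: the security group name, destination CIDR, and port are placeholders for your service network and service port.

Contents of a hypothetical example-asg.json:

[
  {
    "protocol": "tcp",
    "destination": "10.0.16.0/24",
    "ports": "6379"
  }
]

Create and bind the group with the cf CLI:

$ cf create-security-group my-service-asg example-asg.json
$ cf bind-running-security-group my-service-asg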
See the following topics for how to set up your BOSH Director:
ODB requires:
Note ODB does not support BOSH Windows. Service instance lifecycle errands require BOSH Director v261 on PCF v1.10 or later. For more information, see Service Instance Lifecycle Errands.
There are two kinds of communication in ODB that use transport layer security (TLS) and need to validate certificates using a certificate authority (CA) certificate:

ODB communicating with the BOSH Director

ODB communicating with the Cloud Controller (the Cloud Foundry API)
The CA certificates used to sign the BOSH and Cloud Controller certificates are often generated by BOSH, CredHub, or a customer security team, and so are not publicly trusted certificates. This means you might need to provide the CA certificates to ODB so that it can perform the required validation.
In some rare cases where the BOSH Director is not installed through Tanzu Operations Manager, BOSH can be configured to be publicly accessible with a domain name and a TLS certificate issued by a public certificate authority. In such a case, you can navigate to https://BOSH-DOMAIN-NAME:25555/info in a browser and see a trusted certificate padlock in the browser address bar.
In this case, ODB can be configured to use this address for BOSH, and it does not require a CA certificate to be provided. The public CA certificate is already present on the ODB VM.
By contrast, BOSH is usually only accessible on an internal network. It uses a certificate signed by an internal CA. The CA certificate must be provided in the broker configuration so that ODB can validate the BOSH Director’s certificate. ODB always validates BOSH TLS certificates.
You have two options for providing a CA certificate to ODB for validation of the BOSH certificate. You can add the BOSH Director's root certificate to the ODB manifest, or you can use BOSH's trusted_certs feature to add a self-signed CA certificate to each VM that BOSH deploys.
To add the BOSH Director's root certificate to the ODB manifest, add the following to the broker job properties:
bosh:
root_ca_cert: ROOT-CA-CERT
Where ROOT-CA-CERT is the root certificate authority (CA) certificate. This is the certificate used when following the steps in Configuring SSL Certificates in the BOSH documentation.
For example:
instance_groups:
- name: broker
jobs:
- name: broker
properties:
bosh:
root_ca_cert: |
  -----BEGIN CERTIFICATE-----
  EXAMPLExxOFxxAxxCERTIFICATE
  ...
  -----END CERTIFICATE-----
authentication:
...
To use BOSH's trusted_certs feature to add a self-signed CA certificate to each VM that BOSH deploys, configure BOSH with the trusted_certs feature. For instructions, see Configuring Trusted Certificates in the BOSH documentation.

You can configure a separate root CA certificate that is used when ODB communicates with the Cloud Foundry API (Cloud Controller). This is necessary if the Cloud Controller is configured with a certificate that the broker does not trust.
For an example of how to add a separate root CA certificate to the manifest, see the line containing CA-CERT-FOR-CLOUD-CONTROLLER
in the manifest snippet in Starter Snippet for Your Broker below.
You can use BOSH teams to further control how BOSH operations are available to different clients. For more information about BOSH teams, see Using BOSH Teams in the BOSH documentation.
To use BOSH teams to ensure that your on-demand service broker client can only modify deployments it created:
Run the following UAA CLI (UAAC) command to create the client:
uaac client add CLIENT-ID \
--secret CLIENT-SECRET \
--authorized_grant_types "refresh_token password client_credentials" \
--authorities "bosh.teams.TEAM-NAME.admin"
Where:

CLIENT-ID is your client ID.

CLIENT-SECRET is your client secret.

TEAM-NAME is the name of the team authorized to modify this deployment.

For example:

uaac client add admin \
  --secret 12345679 \
  --authorized_grant_types "refresh_token password client_credentials" \
  --authorities "bosh.teams.my-team.admin"
For more information about using the UAAC, see Creating and Managing Users with the UAA CLI (UAAC).
Configure the broker’s BOSH authentication.
For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
bosh:
url: DIRECTOR-URL
root_ca_cert: CA-CERT-FOR-BOSH-DIRECTOR # optional, see SSL certificates
authentication:
uaa:
client_id: BOSH-CLIENT-ID
client_secret: BOSH-CLIENT-SECRET
Where the BOSH-CLIENT-ID and BOSH-CLIENT-SECRET are the CLIENT-ID and CLIENT-SECRET you provided in step 1.
The broker can then only perform BOSH operations on deployments it has created. For a more detailed manifest snippet, see Starter Snippet for Your Broker below.
For more information about securing how ODB uses BOSH, see Security.
ODB uses the Cloud Controller as a source of truth for service offerings, plans, and instances.
To reach the Cloud Controller, configure ODB with either client or user credentials in the broker manifest. For more information, see Write a Broker Manifest below.
Note The UAA client or user must have the following permissions:

A UAA client must have the cloud_controller.admin and clients.read authorities. If you want ODB to create UAA clients for service instances, the client must also have the clients.write authority. For more information about this feature, see Create a Client on CF UAA.

A UAA user must belong to the scim.read and cloud_controller.admin groups.

The following is an example broker manifest snippet for the client credentials:
uaa:
...
authentication:
client_credentials:
client_id: UAA-CLIENT-ID
secret: UAA-CLIENT-SECRET
The following is an example broker manifest snippet for the user credentials:
uaa:
...
authentication:
user_credentials:
username: CF-ADMIN-USERNAME
password: CF-ADMIN-PASSWORD
Upload the following releases to your BOSH Director:

The on-demand-service-broker (ODB) release

Your service adapter release

Your service release
To upload a release to your BOSH Director, run:
bosh -e BOSH-DIRECTOR-NAME upload-release RELEASE-FILE-NAME.tgz
Example command for ODB:
$ bosh -e lite upload-release on-demand-service-broker-0.22.0.tgz
Example commands for service adapter or service release:
$ bosh -e lite upload-release my-service-release.tgz
$ bosh -e lite upload-release my-service-adapter.tgz
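To confirm that the releases are available to the BOSH Director, you can list them. For example:

$ bosh -e lite releases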
There are two parts to writing your broker manifest. You must:

Configure your broker. See Configure Your Broker below.

Configure your service catalog and compose plans. See Configure Your Service Catalog and Plan Composition below.
If you are unfamiliar with writing BOSH v2 manifests, see Deployment Config.
Here are example manifests:
For a Redis service, see redis-example-service-adapter-release in GitHub.

For a Kafka service, see kafka-example-service-adapter-release in GitHub.
Your manifest must contain exactly one non-errand instance group that is co-located with both:

The broker job from the on-demand-service-broker release

The service adapter job from your service adapter release
The broker is stateless and does not need a persistent disk. It can have a small VM type: a single CPU and 1 GB of memory is sufficient in most cases.
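For example, a small vm_type for the broker might be defined in the global cloud config as in the following sketch. The cloud_properties keys are IaaS-specific; the keys shown here follow the vSphere CPI and are illustrative only:

vm_types:
- name: broker-vm
  cloud_properties:
    cpu: 1
    ram: 1024
    disk: 10240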
Use the following snippet to help you to configure your broker. The snippet uses BOSH v2 syntax as well as global cloud config and job-level properties.
For examples of complete broker manifests, see Write a Broker Manifest.
Caution The disable_ssl_cert_verification option is dangerous and should be set to false in production.
addons:
# Broker uses bpm to isolate co-located BOSH jobs from one another
- name: bpm
jobs:
- name: bpm
release: bpm
instance_groups:
- name: NAME-OF-YOUR-CHOICE
instances: 1
vm_type: VM-TYPE
stemcell: STEMCELL
networks:
- name: NETWORK
jobs:
- name: SERVICE-ADAPTER-JOB-NAME
release: SERVICE-ADAPTER-RELEASE
- name: broker
release: on-demand-service-broker
properties:
# choose a port and basic authentication credentials for the broker:
port: BROKER-PORT
username: BROKER-USERNAME
password: BROKER-PASSWORD
# optional - defaults to false. This should not be set to true in production.
disable_ssl_cert_verification: TRUE|FALSE
# optional - defaults to 60 seconds. This enables the broker to gracefully wait for any open requests to complete before shutting down.
shutdown_timeout_in_seconds: 60
# optional - defaults to false. This enables BOSH operational errors to be displayed for the CF user.
expose_operational_errors: TRUE|FALSE
# optional - defaults to false. If set to true, plan schemas are included in the catalog, and the broker fails if the adapter does not implement generate-plan-schemas.
enable_plan_schemas: TRUE|FALSE
cf:
url: CF-API-URL
# optional - see the Configure CA Certificates section above:
root_ca_cert: CA-CERT-FOR-CLOUD-CONTROLLER
# either client_credentials or user_credentials, not both as shown:
uaa:
url: CF-UAA-URL
authentication:
client_credentials:
# with cloud_controller.admin and clients.read authorities (or clients.write authority if you want ODB to create UAA clients) and client_credentials in the authorized_grant_type:
client_id: UAA-CLIENT-ID
secret: UAA-CLIENT-SECRET
user_credentials:
# in the cloud_controller.admin and scim.read groups:
username: CF-ADMIN-USERNAME
password: CF-ADMIN-PASSWORD
client_definition:
# if set, the client used to authenticate with UAA must have clients.admin authority or higher
scopes: COMMA-SEPARATED-LIST-OF-SCOPES
authorities: COMMA-SEPARATED-LIST-OF-AUTHORITIES
authorized_grant_types: COMMA-SEPARATED-LIST-OF-GRANT-TYPES
resource_ids: COMMA-SEPARATED-LIST-OF-RESOURCE-IDS
bosh:
url: DIRECTOR-URL
# optional - see the Configure CA Certificates section above:
root_ca_cert: CA-CERT-FOR-BOSH-DIRECTOR
# either basic or uaa, not both as shown:
authentication:
basic:
username: BOSH-USERNAME
password: BOSH-PASSWORD
uaa:
client_id: BOSH-CLIENT-ID
client_secret: BOSH-CLIENT-SECRET
service_adapter:
# optional - provided by the service author. Defaults to /var/vcap/packages/odb-service-adapter/bin/service-adapter.
path: PATH-TO-SERVICE-ADAPTER-BINARY
# optional - Filesystem paths to be mounted for use by the service adapter. These should include the paths to any config files.
mount_paths: [ PATH-TO-SERVICE-ADAPTER-CONFIG ]
# There are more broker properties that are discussed below
Use the following sections as a guide to configure the service catalog and compose plans in the properties section of the broker job. For an example snippet, see Starter Snippet for the Service Catalog and Plans below.
When configuring the service catalog, supply:
The release jobs specified by the service author.

Stemcells: you must specify an exact stemcell version. latest and floating stemcells are not supported.

Cloud Foundry service metadata for the service offering.
Service authors do not define plans, but instead expose plan properties. Operators compose plans consisting of combinations of these properties, along with IaaS resources and catalog metadata.
When composing plans, supply:
Cloud Foundry plan metadata for each plan:
You can use other arbitrary field names in addition to the OSBAPI recommended fields. For information about the recommended fields for plan metadata, see the Open Service Broker API Profile in GitHub.
Resource mapping: map the instance groups defined by the service author to IaaS resources. You can also run an instance group as an errand by setting the lifecycle field to errand. For an example, see register-broker in the kafka-example-service-adapter-release in GitHub.

Values for plan properties.

(Optional) Provide an update block for each plan.
Append the snippet below to the properties section of the broker job that you configured in Configure Your Broker. Ensure that you provide the required information listed in Configure Your Service Catalog and Plan Composition.
For examples of complete broker manifests, see Write a Broker Manifest.
service_deployment:
releases:
- name: SERVICE-RELEASE
# exact release version:
version: SERVICE-RELEASE-VERSION
# service author specifies the list of jobs required:
jobs: [RELEASE-JOBS-NEEDED-FOR-DEPLOYMENT-AND-LIFECYCLE-ERRANDS]
# every instance group in the service deployment has the same stemcell:
stemcells:
- os: SERVICE-STEMCELL
# exact stemcell version:
version: &stemcellVersion SERVICE-STEMCELL-VERSION
service_catalog:
id: CF-MARKETPLACE-ID
service_name: CF-MARKETPLACE-SERVICE-OFFERING-NAME
service_description: CF-MARKETPLACE-DESCRIPTION
bindable: TRUE|FALSE
# optional:
plan_updatable: TRUE|FALSE
# optional:
tags: [TAGS]
# optional:
requires: [REQUIRED-PERMISSIONS]
# optional:
dashboard_client:
id: DASHBOARD-OAUTH-CLIENT-ID
secret: DASHBOARD-OAUTH-CLIENT-SECRET
redirect_uri: DASHBOARD-OAUTH-REDIRECT-URI
# optional:
metadata:
display_name: DISPLAY-NAME
image_url: IMAGE-URL
long_description: LONG-DESCRIPTION
provider_display_name: PROVIDER-DISPLAY-NAME
documentation_url: DOCUMENTATION-URL
support_url: SUPPORT-URL
# optional - applied to every plan:
global_properties: {}
# optional:
global_quotas:
# the maximum number of service instances across all plans:
service_instance_limit: INSTANCE-LIMIT
# optional - global resource usage limits:
resources:
# arbitrary hash of resource types:
ips:
# global limit for this resource type - reaching this limit depends on the resource type’s 'cost', which is defined in each plan:
limit: RESOURCE-LIMIT
memory:
limit: RESOURCE-LIMIT
# optional - applied to every plan.
maintenance_info:
# keys under public are visible in service catalog
public:
# reference to stemcellVersion anchor above
stemcell_version: *stemcellVersion
# arbitrary public maintenance_info
kubernetes_version: 1.13 # optional
# arbitrary public maintenance_info
docker_version: 18.06.1
# all keys under private are hashed to single SHA value in service catalog
private:
# example of private data that would require a service update to change
log_aggregator_mtls_cert: *YAML_ANCHOR_TO_MTLS_CERT
# optional - should conform to semver
version: 1.2.3-rc2
description: "OS image update.\nExpect downtime."
plans:
- name: CF-MARKETPLACE-PLAN-NAME
# optional - used by the cf CLI to display whether this plan is "free" or "paid":
free: TRUE|FALSE
plan_id: CF-MARKETPLACE-PLAN-ID
description: CF-MARKETPLACE-DESCRIPTION
# optional - enabled by default.
cf_service_access: ENABLE|DISABLE|MANUAL
# optional - if specified, this takes precedence over the bindable attribute of the service:
bindable: TRUE|FALSE
# optional:
metadata:
display_name: DISPLAY-NAME
bullets: [BULLET1, BULLET2]
costs:
- amount:
CURRENCY-CODE-STRING: CURRENCY-AMOUNT-FLOAT
unit: FREQUENCY-OF-COST
# optional:
quotas:
# the maximum number of service instances for this plan:
service_instance_limit: INSTANCE-LIMIT
# optional - resource usage limits for this plan:
resources:
# arbitrary hash of resource types:
memory:
# optional - overwrites global limit for this resource type:
limit: RESOURCE-LIMIT
# optional - the amount of the quota that each service instance of this plan uses:
cost: RESOURCE-COST
# resource mapping for the instance groups defined by the service author:
instance_groups:
- name: SERVICE-AUTHOR-PROVIDED-INSTANCE-GROUP-NAME
vm_type: VM-TYPE
# optional:
vm_extensions: [VM-EXTENSIONS]
instances: &instanceCount INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
# optional:
persistent_disk_type: DISK
# optional:
- name: SERVICE-AUTHOR-PROVIDED-LIFECYCLE-ERRAND-NAME
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
# valid property key-value pairs are defined by the service author:
properties: {}
# optional
maintenance_info:
# optional - keys merge with catalog level public maintenance_info keys
public:
# refers to anchor in instance group above
instance_count: *instanceCount
# optional
private: {}
# optional - should conform to semver
version: 1.2.3-rc3
# optional:
update:
# optional:
canaries: 1
# required:
max_in_flight: 2
# required:
canary_watch_time: 1000-30000
# required:
update_watch_time: 1000-30000
# optional:
serial: true
# optional:
lifecycle_errands:
# optional:
post_deploy:
- name: ERRAND-NAME
# optional - for co-locating errand:
instances: [INSTANCE-NAME, ...]
- name: ANOTHER_ERRAND_NAME
# optional:
pre_delete:
- name: ERRAND-NAME
# optional - for co-locating errand:
instances: [INSTANCE-NAME, ...]
Existing brokers operate in a secure network environment. By default, brokers communicate with the platform over HTTP, so this communication is usually not encrypted. You can configure the broker to accept only HTTPS connections.
To enable HTTPS, provide a server certificate and private key in the broker manifest. For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
tls:
certificate: |
SERVER-CERTIFICATE
private_key: |
SERVER-PRIVATE-KEY
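For testing outside production, you can generate a self-signed certificate and key pair with a tool such as openssl. In this sketch, BROKER-DOMAIN is a placeholder for the address that clients use to reach the broker:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=BROKER-DOMAIN" \
    -keyout broker.key -out broker.crt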
When HTTPS is enabled, the broker only accepts connections that use TLS v1.2 and later. The broker also accepts only the following cipher suites:
Caution This feature does not work if you have configured use_stdin to be false.
To avoid writing secrets in plaintext in the manifest, you can use ODB-managed secrets to store secrets on BOSH CredHub. When using ODB-managed secrets, the service adapter generates secrets and uses ODB as a proxy to the CredHub config server. For information for service authors about how to store manifest secrets on CredHub, see (Optional) Store Secrets on BOSH CredHub.
Secrets in the manifest can be:

BOSH variables

Literal CredHub references

Plaintext
If you use BOSH variables or literal CredHub references in your manifest, do the following in the ODB manifest so that the service adapter can access the secrets:
Set the enable_secure_manifests flag to true.
For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
enable_secure_manifests: true
...
Supply details for accessing the credentials stored in BOSH CredHub. Replace the placeholder text below with your values for accessing CredHub:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
enable_secure_manifests: true
bosh_credhub_api:
url: https://BOSH-CREDHUB-ADDRESS:8844/
root_ca_cert: BOSH-CREDHUB-CA-CERT
authentication:
uaa:
client_credentials:
client_id: BOSH-CREDHUB-CLIENT-ID
client_secret: BOSH-CREDHUB-CLIENT-SECRET
Caution This feature does not work if you have configured use_stdin to be false.
If you enable secure binding, binding credentials are stored securely in runtime CredHub. When users create bindings or service keys, ODB passes a secure reference to the service credentials through the network instead of in plaintext.
To store service credentials in runtime CredHub, your deployment must meet the following requirements:
It must be able to connect to runtime CredHub v1.6.x or later. This might be provided as part of your Cloud Foundry deployment.
Your instance group must have access to the local DNS provider. This is because the address for runtime CredHub is a local domain name.
Note VMware recommends using BOSH DNS as a DNS provider. If you use TAS for VMs v2.4 or later, you cannot use consul as a DNS provider because consul server VMs have been removed.
To enable secure binding:
Set up a new runtime CredHub client in Cloud Foundry UAA with credhub.write and credhub.read in its list of scopes. For how to do this, see Creating and Managing Users with the UAA CLI (UAAC) in the Cloud Foundry documentation.
Update the broker job in the ODB manifest to consume the runtime CredHub link.
For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
consumes:
credhub:
from: credhub
deployment: cf
Update the broker job in the ODB manifest to include the secure_binding_credentials section. The CA certificate can be a reference to the certificate in the cf deployment or inserted manually.
For example:
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
secure_binding_credentials:
enabled: true
authentication:
uaa:
client_id: NEW-CREDHUB-CLIENT-ID
client_secret: NEW-CREDHUB-CLIENT-SECRET
ca_cert: ((cf.uaa.ca_cert))
Where NEW-CREDHUB-CLIENT-ID and NEW-CREDHUB-CLIENT-SECRET are the runtime CredHub client credentials you created in step 1.
For a more detailed manifest snippet, see Starter Snippet for Your Broker.
The credentials for a given service binding are stored with the following format:
/C/:SERVICE-GUID/:SERVICE-INSTANCE-GUID/:BINDING-GUID/CREDENTIALS
The plaintext credentials are stored in runtime CredHub under this key, and the key is available under the VCAP_SERVICES environment variable for the app.
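For example, for a hypothetical Redis binding, an app might see a reference of the following shape in VCAP_SERVICES instead of the plaintext credentials (all GUIDs here are illustrative):

"credentials": {
  "credhub-ref": "/c/9db54a24-e699-4332-b806-3b89e7f4bb33/0914e91b-3ee2-4b75-8a1a-6e3bb0bd58e6/f855b918-6b20-4312-9e0b-9e62d0f0f2e5/credentials"
}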
As of OSBAPI Spec v2.13, ODB supports enabling plan schemas. For more information, see OSBAPI Spec v2.13 in GitHub.
When this feature is enabled, the broker validates incoming configuration parameters against a schema during the provision, binding, and update of service instances. The broker produces an error if the parameters do not conform.
To enable plan schemas:
Ensure that the service adapter implements the generate-plan-schemas command. If it is not implemented, the broker fails to deploy. For more information about this command, see generate-plan-schemas.
In the manifest, set the enable_plan_schemas flag to true, as shown below. The default is false.
instance_groups:
- name: broker
...
jobs:
- name: broker
...
properties:
...
enable_plan_schemas: true
For a more detailed manifest snippet, see Starter Snippet for Your Broker.
You can register a route to the broker using the route_registrar job from the routing release. For more information, see route_registrar job.
To register the route, co-locate the route_registrar job with on-demand-service-broker:
Download the routing release. See cf-routing release for more information about doing so.
Upload the routing release to your BOSH Director.
Add the route_registrar job to your deployment manifest and configure it with an HTTP route. This creates a URI for your broker.
Important You must use the same port for the broker and the route. The broker defaults to 8080.
For how to configure the route_registrar job, see routing release in GitHub.
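For example, a minimal route_registrar job co-located in the broker instance group might look like the following sketch. It assumes your Cloud Foundry deployment is named cf and exposes a nats link; the route name and URI are placeholders:

- name: route_registrar
  release: routing
  consumes:
    nats:
      from: nats
      deployment: cf
  properties:
    route_registrar:
      routes:
      - name: my-odb-broker
        port: 8080
        registration_interval: 20s
        uris: [my-odb-broker.SYSTEM-DOMAIN]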
If you configure a route, set the broker_uri property in the register-broker errand.
You can set service instance quotas to limit the number of service instances ODB can create.
There are two types of service instance quotas:
Global quotas – limit the number of service instances across all plans
Plan quotas – limit the number of service instances for a given plan
Note These limits do not include orphaned deployments. For more information, see List Orphan Deployments and Delete Orphaned Deployments.
When creating a service instance, ODB checks the global service instance limit. If this limit has not been reached, ODB checks the plan service instance limit. If no limits have been reached, the service instance is created.
To set service instance quotas, do the following in the manifest:
To set global quotas, add a global_quotas section to the service catalog:
service_catalog:
...
global_quotas:
service_instance_limit: INSTANCE-LIMIT
...
To set plan quotas, add a quotas section to the plans that you want to limit:
service_catalog:
...
plans:
- name: CF-MARKETPLACE-PLAN-NAME
quotas:
service_instance_limit: INSTANCE-LIMIT
Where INSTANCE-LIMIT is the maximum number of service instances allowed.
For a more detailed manifest snippet, see the Starter Snippet for the Service Catalog and Plans.
You can set resource quotas to limit the amount of a particular resource that each service instance can use. To limit physical resources, such as memory, persistent disk size, or the number of IP addresses in the network, setting resource quotas can give you more control than service instance quotas.
A resource quota is defined by an arbitrary resource type with two associated keys, limit and cost. The resource limit is the maximum amount of a resource that is permitted. The resource cost represents how much of the resource limit a service instance of a plan consumes.
There are two types of resource quotas:
Global quotas – limit how much of a resource is available for all plans to consume. ODB allows new instances to be created until the sum of resources consumed reaches the global quota, unless a plan quota is reached first. You cannot define resource costs at the global level.
Plan quotas – limit how much of a resource is available for a specific plan to consume. ODB allows new instances of a plan to be created until the resources consumed reach the plan’s quota. If there is no plan limit, then instances can be created until the global quota is reached. You can define resource costs at the plan level.
When creating a service instance, ODB checks the global resource limit for each resource type. If these limits have not been reached, ODB checks the plan resource limits. If no limits have been reached, the service instance is created.
Note When calculating the amount of resources used, ODB does not take orphan deployments into consideration. For more information, see List Orphan Deployments and Delete Orphaned Deployments.
To set resource quotas, do the following in the manifest:
To set global quotas, add a global_quotas section to the service catalog:
global_quotas:
resources:
RESOURCE-NAME:
limit: RESOURCE-LIMIT
Where:
RESOURCE-NAME is a string defining the resource you want to limit.

RESOURCE-LIMIT is a value for the maximum allowed for each resource.

For example:

service_catalog:
...
global_quotas:
resources:
ips:
limit: 50
memory:
limit: 150
To set plan quotas, add a quotas section to the plans in which you want to limit resources:
quotas:
resources:
RESOURCE-NAME:
limit: RESOURCE-LIMIT # optional - if not set the limit defaults to the global limit
cost: RESOURCE-COST
Where:
RESOURCE-NAME is a string defining the resource you want to limit.

RESOURCE-LIMIT is a value for the maximum allowed for each resource.

RESOURCE-COST is a value for how much of the quota a service instance of the plan consumes for that resource.

For example:

service_catalog:
...
plans:
- name: my-plan
quotas:
resources:
ips:
cost: 2 # each service instance consumes 2, up to 50 "ips" from the global resource limit
memory:
limit: 25 # maximum limit of "memory" to be consumed by this plan
cost: 5 # each service instance consumes 5, up to the plan resource limit of 25
For a more detailed manifest snippet, see the Starter Snippet for the Service Catalog and Plans.
The ODB BOSH release contains a metrics job that can be used to emit metrics when co-located with the Service Metrics SDK. To do this, you must include the Loggregator release. For more information, see Loggregator in GitHub.
To download the Service Metrics Release, see VMware Tanzu Network.
Add the following jobs to the broker instance group:
- name: service-metrics
release: service-metrics
properties:
service_metrics:
execution_interval_seconds: INTERVAL-BETWEEN-SUCCESSIVE-METRICS-COLLECTIONS
origin: ORIGIN-TAG-FOR-METRICS
monit_dependencies: [broker] # you should hardcode this
....snip....
# Add Loggregator configurations here. For example, see https://github.com/pivotal-cf/service-metrics-release/blob/master/manifests
....snip....
- name: service-metrics-adapter
release: ODB-RELEASE
properties:
# The broker URI, including http:// or https://, that is valid for the broker certificate
broker_uri: BROKER-URI
tls:
# The CA certificate to use when communicating with the broker
ca_cert: CA-CERT
disable_ssl_cert_verification: TRUE|FALSE # defaults to false
Where:
INTERVAL-BETWEEN-SUCCESSIVE-METRICS-COLLECTIONS is the interval in seconds between successive metrics collections.

ORIGIN-TAG-FOR-METRICS is the origin tag for metrics.

LOGGREGATOR-CONFIGURATION is your Loggregator configuration. For example manifests, see service-metrics-release in GitHub.

ODB-RELEASE is the on-demand broker release.

For an example of how service metrics can be configured for an on-demand broker deployment, see the kafka-example-service-adapter-release manifest in GitHub.
VMware has tested this example configuration with Loggregator v58 and service-metrics v1.5.0.
For more information about service metrics, see Service Metrics for SDK for VMware Tanzu.
Caution When the broker_uri in service-metrics-adapter is not configured, it defaults to a BOSH-provided IP address or a BOSH-provided BOSH DNS address, depending on the BOSH Director configuration. See Impact on links in the BOSH documentation. When the broker uses TLS, the broker certificate must contain this BOSH-provided address in its Subject Alternative Names section; otherwise, Cloud Foundry cannot verify the certificate. For details about how to insert a BOSH DNS address into a config server generated certificate, see BOSH DNS Addresses in Config Server Generated Certs in the BOSH documentation.
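For example, with the CredHub CLI you might generate a broker certificate whose Subject Alternative Names include a BOSH DNS address. The certificate path, CA path, and DNS name below are illustrative; the DNS name follows the BOSH DNS group-address pattern:

$ credhub generate \
    -n /my-broker-cert \
    -t certificate \
    -c BROKER-URI \
    --alternative-name q-s0.broker.default.my-broker-deployment.bosh \
    --ca /my-root-ca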
You can configure ODB to retrieve BOSH DNS addresses for service instances. These addresses are passed to the service adapter when you create or delete a binding.
To enable ODB to obtain BOSH DNS addresses, configure the binding_with_dns property in the manifest as follows on plans that require DNS addresses to create and delete bindings:
binding_with_dns:
- name: ADDRESS-NAME
link_provider: LINK-NAME
instance_group: INSTANCE-GROUP
properties:
azs: [AVAILABILITY-ZONES] # Optional
status: ADDRESS-STATUS # Optional
Where:
ADDRESS-NAME is an arbitrary identifier used to identify the address when creating a binding.

LINK-NAME is the exposed name of the link. You can find this in the documentation for the service and under provides.name in the release spec file. You can override it in the deployment manifest by setting the as property of the link.

INSTANCE-GROUP is the name of the instance group sharing the link. The resultant DNS address resolves to IP addresses of this instance group.

AVAILABILITY-ZONES is a list of availability zone names. When this is provided, the resultant DNS address resolves to IP addresses in these zones.

ADDRESS-STATUS is a filter for link address status. The permitted statuses are healthy, unhealthy, all, or default. When this is provided, the resultant DNS address resolves to IP addresses with this status.

For example:
service_catalog:
...
plans:
...
- name: plan-requiring-dns-addresses
...
binding_with_dns: # add this section
- name: leader-address
link_provider: example-link-1
instance_group: leader-node
- name: follower-address
link_provider: example-link-2
instance_group: follower-node
properties:
azs: [z1, z2]
status: healthy
Each entry in binding_with_dns is converted to a BOSH DNS address that is passed to the service adapter when you create a binding.
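These addresses follow the BOSH DNS group-address pattern. For example, the leader-address entry above might yield an address of roughly the following form, where the deployment name for an on-demand instance is service-instance_GUID and the GUID shown is illustrative:

q-s0.leader-node.default.service-instance_1fb5bd92-6e4a-4a39-8a7d-9f6e2f4c8a11.bosh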
The telemetry program enables VMware to collect data from customer installations to improve your enterprise experience. Collecting data at scale enables VMware to identify patterns and alert you to warning signals in your installation.
For more information about the telemetry program, see Telemetry.
To enable your broker to send telemetry data, add the following to your deployment manifest:
instance_groups:
- name: broker
jobs:
- name: broker
properties:
...
enable_telemetry: true
The on-demand broker can create a client on TAS for VMs UAA during the provisioning of service instances. One client is created per service instance. Any client created by the broker is updated when the service instance is updated and removed when the service instance is deleted. If the service broker is deleted without running the delete-all-service-instances errand, the clients are left in TAS for VMs UAA.
To configure ODB to create UAA clients for service instances:
Ensure the UAA client that ODB uses to communicate with Cloud Foundry has the clients.write and cloud_controller.admin authorities. For more information about this UAA client, see Set Up Cloud Controller.
Configure the cf property in the broker job as follows:
cf:
uaa:
url: UAA-URL
client_definition:
scopes: COMMA-SEPARATED-LIST-OF-SCOPES
authorities: COMMA-SEPARATED-LIST-OF-AUTHORITIES
authorized_grant_types: COMMA-SEPARATED-LIST-OF-GRANT-TYPES
resource_ids: COMMA-SEPARATED-LIST-OF-RESOURCE-IDS
ODB then generates and appends the following properties to the client:
client_id
client_secret
redirect_uri
The service adapter receives the client that ODB generated as part of the generate-manifest input parameters.
The ODB does the following startup checks:
It verifies that the CF and BOSH versions satisfy the minimum versions required. If your service offering includes lifecycle errands, the minimum required version for BOSH is higher. For more information, see Configure Your BOSH Director.
If your system does not meet minimum requirements, you see an insufficient version error. For example:
CF API error: Cloud Foundry API version is insufficient, ODB requires CF v238+.
It verifies that, for the service offering, no plan IDs have changed for plans that have existing service instances. If a plan ID has changed for such a plan, you see the following error:
You cannot change the plan_id of a plan that has existing service instances.
The broker tries to wait for any incomplete HTTPS requests to complete before shutting down. This reduces the risk of leaving orphan deployments if the BOSH Director does not respond to the initial bosh deploy request.
You can determine how long the broker waits before being forced to shut down by using the shutdown_timeout_in_seconds property in the broker job. The default is 60 seconds. For more information, see Write a Broker Manifest.
Starting in ODB version v0.27.0, the broker binary adopted BOSH Process Manager (bpm) for better job isolation and security. Starting in ODB version v0.30.0, all broker management errands also use bpm. For more information, see bpm in the BOSH documentation.
For bpm to work with a broker:
The broker job's service_adapter configuration must specify the mount_paths to the service adapter. An example of this configuration is at the bottom of the manifest snippet in Starter Snippet for Your Broker.

For broker management errands that are not co-located with the broker, the bpm release must be included in each errand job.
Important This feature requires BOSH Director v261 or later.
Service instance lifecycle errands allow additional short-lived jobs to run as part of service instance deployment. A deployment is only considered successful if all lifecycle errands exit successfully.
The service adapter must offer the errands as part of the service instance deployment.
ODB supports the following lifecycle errands:
post_deploy runs after creating or updating a service instance. An example use case is running a health check to ensure the service instance is functioning.

pre_delete runs before the deletion of a service instance. An example use case is cleaning up data before a service shutdown. For more information about the workflow, see Delete a Service Instance with Pre-Delete Errands.

Service instance lifecycle errands are configured on a per-plan basis. Lifecycle errands do not run if you change a plan's lifecycle errand configuration while an existing deployment is in progress.
To enable lifecycle errands, add each errand job in the following places in the manifest:

The service_deployment section

The plan's lifecycle_errands configuration

The plan's instance_groups
Here is an example manifest snippet that configures lifecycle errands for a plan:
service_deployment:
releases:
- name: SERVICE-RELEASE
version: SERVICE-RELEASE-VERSION
jobs:
- SERVICE-RELEASE-JOB
- POST-DEPLOY-ERRAND-JOB
- PRE-DELETE-ERRAND-JOB
- ANOTHER-POST-DEPLOY-ERRAND-JOB
service_catalog:
plans:
- name: CF-MARKETPLACE-PLAN-NAME
lifecycle_errands:
post_deploy:
- name: POST-DEPLOY-ERRAND-JOB
- name: ANOTHER-POST-DEPLOY-ERRAND-JOB
disabled: true
pre_delete:
- name: PRE-DELETE-ERRAND-JOB
instance_groups:
- name: SERVICE-RELEASE-JOB
...
- name: POST-DEPLOY-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
- name: ANOTHER-POST-DEPLOY-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
- name: PRE-DELETE-ERRAND-JOB
lifecycle: errand
vm_type: VM-TYPE
instances: INSTANCE-COUNT
networks: [NETWORK]
azs: [AZ]
Where POST-DEPLOY-ERRAND-JOB is the errand job you want to add.
Important This feature requires BOSH Director v263 or later.
You can run both post-deploy and pre-delete errands as co-located errands. Co-located errands run on an existing service instance group instead of a separate one, which avoids additional resource allocation.

Like other lifecycle errands, co-located errands are deployed on a per-plan basis. Currently, ODB supports co-locating only post-deploy and pre-delete errands.
For more information, see Errands in the BOSH documentation.
To enable co-located errands for a plan, add each co-located errand job to the manifest as follows:

Add the errand job to service_deployment.

Add the errand to the plan's lifecycle_errands configuration.

Specify the instances that the errand runs on in lifecycle_errands.

Below is an example manifest that includes a co-located post-deploy errand:
service_deployment:
releases:
- name: SERVICE-RELEASE
version: SERVICE-RELEASE-VERSION
jobs:
- SERVICE-RELEASE-JOB
- CO-LOCATED-POST-DEPLOY-ERRAND-JOB
service_catalog:
plans:
- name: CF-MARKETPLACE-PLAN-NAME
lifecycle_errands:
post_deploy:
- name: CO-LOCATED-POST-DEPLOY-ERRAND-JOB
instances:
- SERVICE-RELEASE-JOB/0
- name: NON-CO-LOCATED-POST-DEPLOY-ERRAND
instance_groups:
- name: NON-CO-LOCATED-POST-DEPLOY-ERRAND
...
- name: SERVICE-RELEASE-JOB
...
Where CO-LOCATED-POST-DEPLOY-ERRAND-JOB is the co-located errand you want to run and SERVICE-RELEASE-JOB/0 is the instance you want the errand to run on.