
This topic for operators gives you basic troubleshooting techniques and FAQs for on-demand VMware Tanzu RabbitMQ for Tanzu Application Service.

How to Retrieve a Service Instance GUID

You need the GUID of your service instance to run some BOSH commands. To retrieve the GUID, run the command:

cf service SERVICE-INSTANCE-NAME --guid

If you do not know the name of the service instance, run cf services to see a listing of all service instances in the space. The service instances are listed in the name column.
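For example, assuming a hypothetical service instance named my-rabbitmq in the targeted space, the command returns only the GUID (the value shown is illustrative):

$ cf service my-rabbitmq --guid
ae9e232c-0bd5-4684-af27-1b08b0c70089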

Troubleshoot Errors

Start here if you are responding to a specific error or error message.

Common Services Errors

The following errors occur in multiple services:


Failed installation

Symptom: Tanzu RabbitMQ for Tanzu Application Service fails to install.
Cause: Reasons for a failed installation include:
  • Certificate issues: The on-demand broker (ODB) requires valid certificates.
  • Deploy fails. There are multiple possible causes.
  • Networking problems:
    • Cloud Foundry cannot reach the Tanzu RabbitMQ for Tanzu Application Service broker
    • Cloud Foundry cannot reach the service instances
    • The service network cannot access the BOSH Director
  • The register broker errand fails.
  • The smoke test errand fails.
  • Resource sizing issues: These occur when the resource sizes selected for a plan are lower than Tanzu RabbitMQ for Tanzu Application Service requires to function.
  • Other service-specific issues.
Solution: To troubleshoot:
  • Certificate issues: Ensure that your certificates are valid and generate new ones if necessary. To generate new certificates, contact Support.
  • Deploy fails: View the logs using Ops Manager to find out why the deployment is failing.
  • Networking problems: For how to troubleshoot, see Networking problems.
  • Register broker errand fails: For how to troubleshoot, see Register broker errand.
  • Resource sizing issues: Verify your resource configuration in Ops Manager and ensure that the configuration matches that recommended by the service.


Cannot create or delete service instances

Symptom: Developers report errors such as:
Instance provisioning failed: There was a problem completing your request. Please contact your operations team providing the following information: service: redis-acceptance, service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089, broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac, task-id: 442, operation: create
Cause: Reasons include:
  • Problems with the deployment manifest
  • Authentication errors
  • Network errors
  • Quota errors
Solution: To troubleshoot:
  1. If the BOSH error shows a problem with the deployment manifest, open the manifest in a text editor to inspect it.
  2. To continue troubleshooting, log in to BOSH and target the Tanzu RabbitMQ for Tanzu Application Service instance by following the instructions in Parse a Cloud Foundry (CF) Error Message later in this topic.
  3. Retrieve the BOSH task ID from the error message and run:
    bosh task TASK-ID
  4. See Access Broker Logs and VMs and use the broker-request-id from the error message to search the logs for more information. Check for the causes listed above, such as authentication and network errors.


Broker request timeouts

Symptom: Developers report errors such as:
Server error, status code: 504, error code: 10001, message: The request to the service broker timed out: https://BROKER-URL/v2/service_instances/e34046d3-2379-40d0-a318-d54fc7a5b13f/service_bindings/aa635a3b-ef6d-41c3-a23f-55752f3f651b
Cause: Cloud Foundry might not be connected to the service broker, or there might be a large number of queued tasks.
Solution: To troubleshoot:
  1. Confirm that Cloud Foundry (CF) is connected to the service broker.
  2. Verify the BOSH queue size:
    1. Log in to BOSH as an admin.
    2. Run
      bosh tasks
    If there are a large number of queued tasks, the system might be under too much load. BOSH is configured with two workers and one status worker, which might not be enough for the level of load.
  3. If the task queue is long, advise app developers to try again after the system is under less load.
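To check the queue size described in step 2, you can count the queued tasks from a BOSH-authenticated shell. This is a sketch only: the JSON table layout (Tables and Rows keys) is an assumption about your BOSH CLI version, and jq must be installed:

# List tasks that are currently queued or processing
bosh tasks

# Approximate count of those tasks, using the CLI's JSON output
bosh tasks --json | jq '.Tables[0].Rows | length'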


Instance does not exist

Symptom: Developers report errors such as:
Server error, status code: 502, error code: 10001, message: Service broker error: instance does not exist
Cause: The instance might have been deleted.
Solution: To troubleshoot:
  1. Confirm that the Tanzu RabbitMQ for Tanzu Application Service instance exists in BOSH and obtain its GUID by running:
    cf service MY-INSTANCE --guid
  2. Using the GUID you obtained, run:
    bosh -d service-instance_GUID vms
If the BOSH deployment is not found, it was deleted from BOSH. Contact VMware Tanzu Support for help.
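A minimal sketch of these two steps, using a hypothetical instance name and an illustrative GUID:

$ cf service my-rabbitmq --guid
70d30bb6-7f30-441a-a87c-05a5e4afff26

$ bosh -d service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26 vms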


Cannot bind to or unbind from service instances

Symptom: Developers report errors such as:
Server error, status code: 502, error code: 10001, message: Service broker error: There was a problem completing your request. Please contact your operations team providing the following information: service: example-service, service-instance-guid: 8d69de6c-88c6-4283-b8bc-1c46103714e2, broker-request-id: 15f4f87e-200a-4b1a-b76c-1c4b6597c2e1, operation: bind
Cause: This might be due to authentication or network errors.
Solution: To find out the issue with the binding:
  1. Access the service broker logs.
  2. Search the logs for the broker-request-id string listed in the error message above.
  3. Check for the causes listed above, such as authentication or network errors.
  4. Contact VMware Tanzu Support for help if you are unable to resolve the problem.


Cannot connect to a service instance

Symptom: Developers report that their app cannot use service instances that they created and bound.
Cause: The error might originate from the service or be network related.
Solution: To solve this issue, ask the user to send application logs that show the connection error. If the error originates from the service, then follow Tanzu RabbitMQ for Tanzu Application Service-specific instructions. If the issue appears to be network related, then:
  1. Verify that application security groups are configured correctly. Configure access for the service network that the tile is deployed to.
  2. Ensure that the network the TAS for VMs tile is deployed to has network access to the service network. You can find the network definition for this service network in the BOSH Director tile.
  3. In Ops Manager go into the service tile and see the service network that is configured in the networks tab.
  4. In Ops Manager go into the TAS for VMs tile and see the network it is assigned to. Ensure that these networks can access each other.


Cannot update a service instance

Symptom: Developers report errors such as the following when trying to run cf update-service:
Server error, status code: 502, error code: 10001, message: Service broker error: Service cannot be updated at this time, try again later or contact your operator for more information.
Cause: Their service instance might not be running the latest service offering.
Solution: You must run the upgrade-all-service-instances errand after upgrading to ensure that all existing service instances are upgraded to the latest service offering. See Upgrade All Service Instances.

App developers cannot upgrade individual service instances to the latest service offering. They cannot set parameters or change plan until you upgrade their service instances.
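For example, assuming the broker deployment name reported by bosh deployments (the name below is hypothetical), run the errand from the BOSH CLI:

$ bosh -d rabbitmq-on-demand-broker run-errand upgrade-all-service-instances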


Upgrade all service instances errand fails

Symptom: The upgrade-all-service-instances errand fails.
Cause: There might be a problem with a particular instance.
Solution: To troubleshoot:
  1. Look at the errand output in the Ops Manager log.
  2. If an instance has failed to upgrade, debug and fix it before running the errand again to prevent any failure issues from spreading to other on-demand instances.
  3. After the Ops Manager log no longer lists the deployment as failing, re-run the errand to upgrade the rest of the instances.


Missing logs and metrics

Symptom: No logs are being emitted by the on-demand broker.
Cause: Syslog might not be configured correctly, or you might have network access issues.
Solution: To troubleshoot:
  1. Ensure that you have configured syslog for the tile.
  2. Verify that your syslog forwarding address is correct in Ops Manager.
  3. Ensure that you have network connectivity between the networks that the tile is using and the syslog destination. If the destination is external, use the public IP VM extension feature available in your Ops Manager tile configuration settings.
  4. Verify that Loggregator is emitting metrics:
    1. Install the cf log-cache plug-in. For instructions, see the Log Cache CLI Plugin GitHub repository.
    2. Find logs from your service instance by running:
      cf tail -f SERVICE_INSTANCE
    3. If no metrics appear within five minutes, verify that the broker network has access to the Loggregator system on all required ports.
  5. If you are unable to resolve the issue, contact Support.

Tanzu RabbitMQ for Tanzu Application Service-Specific Errors

The following troubleshooting errors are specific to Tanzu RabbitMQ for Tanzu Application Service:


Failed Deployment on Upgrade or after Apply Changes

Symptom: Your deployment fails after editing the Assign AZs and Networks pane in the Tanzu RabbitMQ for Tanzu Application Service tile.
Cause: This error might occur if you change the IP addresses assigned to the RabbitMQ Server job. Tanzu RabbitMQ for Tanzu Application Service requires that you do not change these IP addresses after they are assigned. This applies to changes made to your current installation and to changes made during an upgrade.
Solution: To diagnose and solve this issue, see Changing Network or IP Addresses Results in a Failed Deployment.


Pre-Stop Script Times Out When Waiting for Queue Synchronization

Symptom: A pre-stop script times out with the error message:
Timed out waiting for mirror queue critical node to sync after 3600 seconds
Cause: You have not manually synced your queues, but you selected the Wait for Queue Synchronization checkbox and have mirrored queues with the default policy setting ha-sync-mode: manual.
Solution: To manually sync your queues, run rabbitmqctl sync_queue. To set ha-sync-mode to automatic instead, see Setting or Changing the Policy.
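Both remedies can be applied from a rabbitmq-server VM. The following is a sketch only: the vhost, queue name, and policy name are placeholders, and applying a policy with ha-sync-mode set to automatic is one way to make the change described in Setting or Changing the Policy:

# Manually sync one mirrored queue (repeat per affected queue and vhost)
rabbitmqctl sync_queue -p /my-vhost my-queue

# Or switch mirrored queues in the vhost to automatic synchronization
rabbitmqctl set_policy -p /my-vhost ha-all "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'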

Troubleshoot Components

Guidance on checking for and fixing issues in on-demand service components.

BOSH Problems

Large BOSH Queue

On-demand service brokers add tasks to the BOSH request queue, which can back up and cause delay under heavy loads. An app developer who requests a new Tanzu RabbitMQ for Tanzu Application Service instance sees create in progress in the Cloud Foundry Command Line Interface (cf CLI) until BOSH processes the queued request.

Ops Manager deploys two BOSH workers to process its queue.

Configuration

Service Instances in Failing State

The VM or disk type that you configured in the plan page of the tile in Ops Manager might not be large enough for the Tanzu RabbitMQ for Tanzu Application Service service instance to start. See tile-specific guidance on resource requirements.

Authentication

UAA Changes

If you rotated any UAA user credentials then you might see authentication issues in the service broker logs.

To resolve this, redeploy the Tanzu RabbitMQ for Tanzu Application Service tile in Ops Manager. This provides the broker with the latest configuration.

You must ensure that any changes to UAA credentials are reflected in the Ops Manager credentials tab of the VMware Tanzu Application Service for VMs tile.

Networking

Common issues with networking include:

  • Issue: Latency when connecting to the Tanzu RabbitMQ for Tanzu Application Service service instance to create or delete a binding. Solution: Try again or improve network performance.
  • Issue: Firewall rules are blocking connections from the Tanzu RabbitMQ for Tanzu Application Service service broker to the service instance. Solution: Open the Tanzu RabbitMQ for Tanzu Application Service tile in Ops Manager and verify that the two networks configured in the Networks pane allow access to each other.
  • Issue: Firewall rules are blocking connections from the service network to the BOSH Director network. Solution: Ensure that service instances can access the Director so that the BOSH agents can report in.
  • Issue: Apps cannot access the service network. Solution: Configure Cloud Foundry application security groups to allow runtime access to the service network.
  • Issue: Problems accessing BOSH’s UAA or the BOSH Director. Solution: Follow network troubleshooting and verify that the BOSH Director is online.

Validate Service Broker Connectivity to Service Instances

To validate connectivity:

  1. View the BOSH deployment name for your service broker by running:

    bosh deployments
  2. SSH into the Tanzu RabbitMQ for Tanzu Application Service service broker by running:

    bosh -d DEPLOYMENT-NAME ssh
  3. If no BOSH task-id appears in the error message, search the broker log for the broker-request-id from the error message.
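From the broker VM you can then check basic reachability of a service instance. This is a sketch: the IP address is a placeholder, 5672 is the default AMQP port, and the nc utility might not be present on the stemcell, in which case use any TCP client:

# On the broker VM, after bosh ssh
nc -zv 10.0.8.5 5672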

Validate App Access to Service Instance

Use the cf ssh command to access the app container, then connect to the Tanzu RabbitMQ for Tanzu Application Service service instance using the binding included in the VCAP_SERVICES environment variable.
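A minimal sketch, assuming a hypothetical app named my-app that is bound to the instance:

# Inspect the binding credentials the app receives (look under VCAP_SERVICES)
$ cf env my-app

# Open a shell inside the app container to test connectivity to the service
$ cf ssh my-app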

Quotas

Plan Quota Issues

If developers report errors such as:

Message: Service broker error: The quota for this service plan has been exceeded.
Please contact your Operator for help.
  1. Verify your current plan quota.
  2. To increase the plan quota:
    1. Log in to Ops Manager.
    2. Reconfigure the quota on the plan page.
    3. Deploy the tile.
  3. Alternatively, find out who is using the plan quota and take the appropriate action.

Global Quota Issues

If developers report errors such as:

Message: Service broker error: The quota for this service has been exceeded.
Please contact your Operator for help.
  1. Verify your current global quota.
  2. To increase the global quota:
    1. Log in to Ops Manager.
    2. Reconfigure the quota on the on-demand settings page.
    3. Deploy the tile.
  3. Alternatively, find out who is using the quota and take the appropriate action.

Failing Jobs and Unhealthy Instances

To find out if there is an issue with the Tanzu RabbitMQ for Tanzu Application Service deployment:

  1. Inspect the VMs by running:

    bosh -d service-instance_GUID vms --vitals
  2. For additional information, run:

    bosh -d service-instance_GUID instances --ps --vitals

If the VM is failing, follow the service-specific information. Corrective actions that are not advised for the service (such as running bosh restart on a VM) can cause issues in the service instance.

Techniques for Troubleshooting

This section contains instructions on interacting with the on-demand service broker and on-demand service instance BOSH deployments, and on performing general maintenance and housekeeping tasks.

Parse a Cloud Foundry (CF) Error Message

Failed operations (create, update, bind, unbind, delete) cause an error message. You can retrieve the error message later by running the cf CLI command cf service INSTANCE-NAME.

$ cf service myservice

Service instance: myservice
Service: super-db
Bound apps:
Tags:
Plan: dedicated-vm
Description: Dedicated Instance
Documentation url:
Dashboard:

Last Operation
Status: create failed
Message: Instance provisioning failed: There was a problem completing your request.
     Please contact your operations team providing the following information:
     service: redis-acceptance,
     service-instance-guid: ae9e232c-0bd5-4684-af27-1b08b0c70089,
     broker-request-id: 63da3a35-24aa-4183-aec6-db8294506bac,
     task-id: 442,
     operation: create
Started: 2017-03-13T10:16:55Z
Updated: 2017-03-13T10:17:58Z

Use the information in the Message field to debug further. Provide this information to Support when filing a ticket.

The task-id field maps to the BOSH task ID. For more information about a failed BOSH task, run bosh task TASK-ID.

The broker-request-id maps to the portion of the On-Demand Service Broker log containing the failed step. Access the broker log through your syslog aggregator, or download the BOSH logs for the broker as described in Access Broker Logs and VMs. If you have more than one broker instance, repeat this process for each instance.
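For example, after downloading the broker logs as described in Access Broker Logs and VMs, you can search the extracted files for the broker-request-id from the error message. The deployment name below is hypothetical and the request ID is the one from the example output above:

$ bosh -d rabbitmq-on-demand-broker logs
$ tar xzf *.tgz     # the archive name includes the deployment name and a timestamp
$ grep -r 63da3a35-24aa-4183-aec6-db8294506bac .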

Access Broker and Instance Logs and VMs

Before following these procedures, log in to the cf CLI and the BOSH CLI.

Access Broker Logs and VMs

You can access logs using Ops Manager by clicking on the Logs tab in the tile and downloading the broker logs.

To access logs using the BOSH CLI:

  1. To identify the on-demand broker (ODB) deployment, run:

    bosh deployments
  2. To view VMs in the deployment, run:

    bosh -d DEPLOYMENT-NAME instances
  3. To SSH onto the VM, run:

    bosh -d DEPLOYMENT-NAME ssh
  4. To download the broker logs, run:

    bosh -d DEPLOYMENT-NAME logs

The archive generated by BOSH includes the following logs:

  • broker.stdout.log: Requests to the on-demand broker and the actions the broker performs while orchestrating the request (e.g. generating a manifest and calling BOSH). Start here when troubleshooting.
  • bpm.log: Control script logs for starting and stopping the on-demand broker.
  • post-start.stderr.log: Errors that occur during post-start verification.
  • post-start.stdout.log: Post-start verification output.
  • drain.stderr.log: Errors that occur while running the drain script.

Access Service Instance Logs and VMs

  1. To target an individual service instance deployment, retrieve the GUID of your service instance with the following cf CLI command:

    cf service MY-SERVICE --guid
  2. To view VMs in the deployment, run:

    bosh -d service-instance_GUID instances
  3. To SSH into a VM, run:

    bosh -d service-instance_GUID ssh
  4. To download the instance logs, run:

    bosh -d service-instance_GUID logs

Run Service Broker Errands to Manage Brokers and Instances

From the BOSH CLI, you can run service broker errands that manage the service brokers and perform mass operations on the service instances that the brokers created. These service broker errands include register-broker, deregister-broker, upgrade-all-service-instances, delete-all-service-instances, and orphan-deployments.

To run an errand:

bosh -d DEPLOYMENT-NAME run-errand ERRAND-NAME

For example:

bosh -d my-deployment run-errand deregister-broker

Register Broker

The register-broker errand:

  • Registers the service broker with Cloud Controller.
  • Activates service access for any plans that are enabled on the tile.
  • Deactivates service access for any plans that are deactivated on the tile.
  • Does nothing for any plans that are set to manual on the tile.

Run this errand whenever the broker is re-deployed with new catalog metadata to update the Marketplace.

Plans with deactivated service access are only visible to admin Cloud Foundry users. Non-admin Cloud Foundry users, including Org Managers and Space Managers, cannot see these plans.
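For example, assuming a hypothetical broker deployment name, run the errand and then confirm the catalog change in the Marketplace:

$ bosh -d rabbitmq-on-demand-broker run-errand register-broker
$ cf marketplace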

Deregister Broker

This errand deregisters a broker from Cloud Foundry.

The errand:

  • Deletes the service broker from Cloud Controller
  • Fails if there are any service instances, with or without bindings

Use the Delete All Service Instances errand to delete any existing service instances.

To run the errand:

bosh -d DEPLOYMENT-NAME run-errand deregister-broker

Upgrade All Service Instances

The upgrade-all-service-instances errand:

  • Collects all the service instances that the on-demand broker has registered.
  • Issues an upgrade command and deploys a new manifest to the on-demand broker for each service instance.
  • Adds to a retry list any instances that have ongoing BOSH tasks at the time of upgrade.
  • Retries any instances in the retry list until all instances are upgraded.

When you make changes to the plan configuration, the errand upgrades all the Tanzu RabbitMQ for Tanzu Application Service service instances to the latest version of the plan.

If any instance fails to upgrade, the errand fails immediately. This prevents systemic problems from spreading to the rest of your service instances.

Delete All Service Instances

This errand uses the Cloud Controller API to delete all instances of your broker service offering in every Cloud Foundry org and space. It deletes only instances the Cloud Controller knows about. It does not delete orphan BOSH deployments.

Orphan BOSH deployments do not correspond to a known service instance. While rare, orphan deployments can occur. Use the orphan-deployments errand to identify them.

The delete-all-service-instances errand:

  1. Unbinds all apps from the service instances.
  2. Deletes all service instances sequentially. Each service instance deletion includes:
    1. Running any pre-delete errands
    2. Deleting the BOSH deployment of the service instance
    3. Removing any ODB-managed secrets from BOSH CredHub
    4. Checking for instance deletion failure, which causes the errand to fail immediately
  3. Determines whether any instances were created while the errand was running. If new instances are detected, the errand returns an error. In this case, VMware recommends running the errand again.

Use extreme caution when running this errand. Use it only when you want to destroy all of the on-demand service instances in an environment.

To run the errand:

bosh -d DEPLOYMENT-NAME run-errand delete-all-service-instances

Detect Orphaned Service Instances

A service instance is defined as "orphaned" when the BOSH deployment for the instance is still running, but the service is no longer registered in Cloud Foundry.

The orphan-deployments errand collates a list of service deployments that have no matching service instances in Cloud Foundry and returns the list to the operator. It is then up to the operator to remove the orphaned BOSH deployments.

To run the errand:

bosh -d DEPLOYMENT-NAME run-errand orphan-deployments

If orphan deployments exist, the errand script does the following:

  • Exits with exit code 10
  • Outputs a list of deployment names under a [stdout] header
  • Provides a detailed error message under a [stderr] header

For example:

[stdout]
[{"deployment\_name":"service-instance\_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}]

[stderr]
Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.

Errand 'orphan-deployments' completed with error (exit code 10)

These details are also available through the BOSH /tasks/ API endpoint for use in scripting:

$ curl 'https://bosh-user:bosh-password@bosh-url:25555/tasks/task-id/output?type=result' | jq .
{
  "exit_code": 10,
  "stdout": "[{"deployment_name":"service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b"}]\n",
  "stderr": "Orphan BOSH deployments detected with no corresponding service instance in Cloud Foundry. Before deleting any deployment it is recommended to verify the service instance no longer exists in Cloud Foundry and any data is safe to delete.\n",
  "logs": {
    "blobstore_id": "d830c4bf-8086-4bc2-8c1d-54d3a3c6d88d"
  }
}
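Because the stdout field in this result is itself a JSON-encoded string, a short jq filter can extract just the orphaned deployment names for scripting. This sketch assumes the output shape shown above:

$ curl -s 'https://bosh-user:bosh-password@bosh-url:25555/tasks/task-id/output?type=result' \
    | jq -r '.stdout | fromjson | .[].deployment_name'
service-instance_80e3c5a7-80be-49f0-8512-44840f3c4d1b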

If no orphan deployments exist, the errand script does the following:

  • Exits with exit code 0
  • Outputs an empty list of deployments under the [stdout] header
  • Outputs None under the [stderr] header

For example:
[stdout]
[]

[stderr]
None

Errand 'orphan-deployments' completed successfully (exit code 0)

If the errand encounters an error while running, the errand script does the following:

  • Exits with exit code 1
  • Outputs nothing under the [stdout] header
  • Outputs any error messages under the [stderr] header

To clean up orphaned instances, run the following command on each instance:

Running this command might leave IaaS resources in an unusable state.

bosh -d service-instance_SERVICE-INSTANCE-GUID delete-deployment

Get Admin Credentials for a Service Instance

To retrieve the admin credentials for a service instance from BOSH CredHub:

  1. Use the cf CLI to find the GUID associated with the service instance for which you want to retrieve credentials by running:
    cf service SERVICE-INSTANCE-NAME --guid
    For example:
    $ cf service my-service-instance --guid
    
    12345678-90ab-cdef-1234-567890abcdef
    If you do not know the name of the service instance, you can list service instances in the space with cf services.
  2. Follow the steps in Gather Credential and IP Address information and Log in to the Ops Manager VM with SSH of Advanced Troubleshooting with the BOSH CLI to SSH into the Ops Manager VM.
  3. From the Ops Manager VM, log in to your BOSH Director with the BOSH CLI. See Authenticate with the BOSH Director VM in Advanced Troubleshooting with the BOSH CLI.
  4. Find the values for BOSH_CLIENT and BOSH_CLIENT_SECRET:

    1. In the Ops Manager Installation Dashboard, click the BOSH Director tile.
    2. Click the Credentials tab.
    3. In the BOSH Director section, click the link to the BOSH Commandline Credentials.
    4. Record the values for BOSH_CLIENT and BOSH_CLIENT_SECRET.
  5. Set the API target of the CredHub CLI to your BOSH CredHub server by running:
    credhub api https://BOSH-DIRECTOR-IP:8844 \
          --ca-cert=/var/tempest/workspaces/default/root_ca_certificate
    Where BOSH-DIRECTOR-IP is the IP address of the BOSH Director VM.

    For example:
    $ credhub api https://10.0.0.5:8844 \
          --ca-cert=/var/tempest/workspaces/default/root_ca_certificate
  6. Log in to CredHub by running:
    credhub login \
        --client-name=BOSH-CLIENT \
        --client-secret=BOSH-CLIENT-SECRET

    For example:

    $ credhub login \
          --client-name=credhub \
          --client-secret=abcdefghijklm123456789
  7. Use the CredHub CLI to retrieve the credentials:

    • Retrieve the password for the admin user by running:
      credhub get -n /p-bosh/service-instance_GUID/admin_password
      In the output, the password appears under value. Record the password.
      For example:
      $ credhub get \
        -n /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/admin_password 
      id: d6e5bd10-3b60-4a1a-9e01-c76da688b847
      name: /p-bosh/service-instance_70d30bb6-7f30-441a-a87c-05a5e4afff26/admin_password
      type: password
      value: UMF2DXsqNPPlCNWMdVMcNv7RC3Wi10
      version_created_at: 2018-04-02T23:16:09Z

Reinstall a Tile

To reinstall a tile in the same environment where it was previously uninstalled:

  1. Ensure that the previous tile was correctly uninstalled as follows:
    1. Log in as an admin by running:
      cf login
    2. Confirm that the Marketplace does not list Tanzu RabbitMQ for Tanzu Application Service by running:
      cf m
    3. Log in to BOSH as an admin by running:
      bosh log-in
    4. Display your BOSH deployments to confirm that the output does not show the Tanzu RabbitMQ for Tanzu Application Service deployment by running:
      bosh deployments
    5. Run the "delete-all-service-instances" errand to delete every instance of the service.
    6. Run the "deregister-broker" errand to delete the service broker.
    7. Delete the service broker BOSH deployment by running:
      bosh delete-deployment BROKER-DEPLOYMENT-NAME
  2. Reinstall the tile.

View Resource Saturation and Scaling

To view usage statistics for any service:

  1. Run:

    bosh -d DEPLOYMENT-NAME vms --vitals
  2. To view process-level information, run:

    bosh -d DEPLOYMENT-NAME instances --ps

Identify Apps using a Service Instance

To identify which apps are using a specific service instance using the name of the BOSH deployment:

  1. Take the deployment name and strip the service-instance_ prefix, leaving you with the GUID.
  2. Log in to Cloud Foundry as an admin.
  3. Obtain a list of all service bindings by running:
    cf curl /v2/service_instances/GUID/service_bindings
  4. The output from the curl command gives you a list of resources, with each item referencing a service binding that contains the APP-URL. To find the name, org, and space for the app, run:
    1. cf curl APP-URL and record the app name under entity.name.
    2. cf curl SPACE-URL to obtain the space, using the entity.space_url from the curl. Record the space name under entity.name.
    3. cf curl ORGANIZATION-URL to obtain the org, using the entity.organization_url from the curl. Record the organization name under entity.name.

When running cf curl, ensure that you query all pages, because the responses are limited to a certain number of bindings per page. The default is 50. To find the next page, curl the value under next_url.
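The steps above can be combined into a short script. This is a sketch under a few assumptions: the instance name my-rabbitmq is hypothetical, each binding resource exposes its app under entity.app_url, and only the first page of results is handled (follow next_url for the rest):

GUID="$(cf service my-rabbitmq --guid)"
cf curl "/v2/service_instances/$GUID/service_bindings" \
  | jq -r '.resources[].entity.app_url' \
  | while read -r APP_URL; do
      cf curl "$APP_URL" | jq -r '.entity.name'
    done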

Monitor the Quota Saturation and Service Instance Count

Quota saturation and total number of service instances are available through ODB metrics emitted to Loggregator. These are the metric names:

Metric Name Description
on-demand-broker/SERVICE-NAME-MARKETPLACE/quota_remaining global quota remaining for all instances across all plans
on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/quota_remaining quota remaining for a particular plan
on-demand-broker/SERVICE-NAME-MARKETPLACE/total_instances total instances created across all plans
on-demand-broker/SERVICE-NAME-MARKETPLACE/PLAN-NAME/total_instances total instances created for a given plan

Quota metrics are not emitted if no quota was set.

Drop and Restore AMQP or AMQPS Traffic to a Tanzu RabbitMQ for Tanzu Application Service Service Instance

While debugging a Tanzu RabbitMQ for Tanzu Application Service service instance, you can prevent apps from sending and receiving messages, for example, to decrease the server load. You can use the drop-amqp-traffic and restore-amqp-traffic scripts, which run the necessary iptables commands to achieve this.

To stop and then restore traffic to a Tanzu RabbitMQ for Tanzu Application Service service instance, do the following:

  1. To stop all AMQP or AMQPS traffic to a Tanzu RabbitMQ for Tanzu Application Service service instance, enter the following command:
        bosh -d service-instance_GUID ssh rabbitmq-server \
          "echo y | sudo /var/vcap/packages/rabbitmq-admin/bin/drop-amqp-traffic"
        
  2. After performing the troubleshooting steps, restore the traffic. To do this, enter the following command:
        bosh -d service-instance_GUID ssh rabbitmq-server \
          "echo y | sudo /var/vcap/packages/rabbitmq-admin/bin/restore-amqp-traffic"
        

Alternatively, you can run these scripts on individual nodes:

  1. bosh ssh to a rabbitmq-server instance.
  2. sudo -s to gain root privileges.
  3. Execute drop-amqp-traffic to drop all AMQP or AMQPS traffic to this instance, or restore-amqp-traffic to start accepting traffic again.

Troubleshoot Service-Gateway Access

Service-gateway access leverages the TAS for VMs TCP Router. Each service instance and protocol for which service-gateway access is enabled corresponds to a route in the TCP Router.

When debugging problems with service-gateway access, see Troubleshooting TCP Routing in the TAS for VMs documentation.

Frequently Asked Questions

What should I check before deploying a new version of the tile?

Ensure that all nodes in the cluster are healthy by checking the RabbitMQ Management UI or the health metrics exposed through the Firehose. You cannot rely solely on the BOSH instances output because that reflects the state of the Erlang VM used by RabbitMQ and not the RabbitMQ app.

What is the correct way to stop and start Tanzu RabbitMQ for Tanzu Application Service?

Only BOSH commands should be used by the operator to interact with the RabbitMQ app.

For example:

bosh stop rabbitmq-server and bosh start rabbitmq-server.

There are BOSH job lifecycle hooks which are only fired when rabbitmq-server is stopped through BOSH. You can also stop individual instances by running the stop command and specifying JOB [index].
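With the BOSH CLI v2, these commands are scoped to a deployment with the -d flag. A sketch, using the service-instance deployment naming shown elsewhere in this topic (the GUID is a placeholder):

$ bosh -d service-instance_GUID stop rabbitmq-server
$ bosh -d service-instance_GUID start rabbitmq-server

# Stop a single instance by adding its index, for example:
$ bosh -d service-instance_GUID stop rabbitmq-server/0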

Note

Do not use monit stop rabbitmq-server as this does not call the drain scripts.

What happens when I run bosh stop rabbitmq-server?

BOSH starts the shutdown sequence from the bootstrap instance.

We start by telling the RabbitMQ app to shut down and then shut down the Erlang VM within which it is running. If this succeeds, we run the following checks to ensure that the RabbitMQ app and Erlang VM have stopped:

  1. If /var/vcap/sys/run/rabbitmq-server/pid exists, check that the PID inside this file does not point to a running Erlang VM process. Notice that we are tracking the Erlang PID and not the RabbitMQ PID.
  2. Check that rabbitmqctl does not return an Erlang VM PID.
Once this completes on the bootstrap instance, BOSH continues the same sequence on the next instance. All remaining rabbitmq-server instances stop one by one.

What happens when bosh stop rabbitmq-server fails?

If the BOSH stop fails, you will likely get an error saying that the drain script failed with:

result: 1 of 1 drain scripts failed. Failed Jobs: rabbitmq-server.

What do I do when bosh stop rabbitmq-server fails?

The drain script logs to /var/vcap/sys/log/rabbitmq-server/drain.log. If you have a remote syslog configured, this appears as the rmq_server_drain program.

First, bosh ssh into the failing rabbitmq-server instance and start the rabbitmq-server job by running monit start rabbitmq-server. You cannot start the job with bosh start because bosh start always runs the drain script first, and the drain script is failing.

Once the rabbitmq-server job is running (confirm this with monit status), run DEBUG=1 /var/vcap/jobs/rabbitmq-server/bin/drain. This tells you exactly why it is failing.
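Putting those steps together, a debugging session might look like the following sketch (the deployment name and instance index are placeholders):

$ bosh -d service-instance_GUID ssh rabbitmq-server/0
# On the VM:
sudo -s
monit start rabbitmq-server
monit status     # wait until rabbitmq-server reports running
DEBUG=1 /var/vcap/jobs/rabbitmq-server/bin/drain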

How can I manually back up the state of the RabbitMQ cluster?

It is possible to back up the state of a RabbitMQ cluster for both the on-demand and pre-provisioned services using the RabbitMQ Management API. Backups include virtual hosts, exchanges, queues, and users.

Back up Manually

  1. Log in to the RabbitMQ Management UI as the admin user you created. For instructions about how to do so with or without OAuth enabled, see Using the RabbitMQ Management UI.
  2. Select export definitions from the main page.

Back up and Restore with a Script

Use the API to run scripts with code similar to the following:

  • For the backup:
    curl -u "$USERNAME:$PASSWORD" "http://$RABBIT-ADDRESS:15672/api/definitions"
    -o "$BACKUP-FOLDER/rabbit-backup.json"
            
  • For the restore:
    curl -u "$USERNAME:$PASSWORD" "http://$RABBIT-ADDRESS:15672/api/definitions"
    -X POST -H "Content-Type: application/json" -d
    "@$BACKUP-FOLDER/rabbit-backup.json"
            

What pre-upgrade checks should I do?

Before doing any upgrade of RabbitMQ, VMware recommends checking the following:

  1. In Ops Manager, check that the status of all of the instances is healthy.
  2. Log in to the RabbitMQ Management UI. For instructions about how to do so with or without OAuth enabled, see Using the RabbitMQ Management UI.
  3. Check that no alarms have been triggered and that all nodes display as green, showing they are healthy.
  4. Check that the system is not close to hitting either the memory or disk alarm. Do this by looking at what has been consumed by each node in the RabbitMQ Management UI.
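If you prefer the command line over the Management UI for these checks, you can run an equivalent spot check from a rabbitmq-server VM. This is a sketch; the exact fields reported vary by RabbitMQ version, and on recent versions cluster_status also lists any active alarms:

# On a rabbitmq-server VM (after bosh ssh and sudo -s)
rabbitmqctl cluster_status
rabbitmqctl status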

Knowledge Base (Community)

Find the answer to your question and navigate product discussions and solutions by searching Broadcom Support.

File a Support Ticket

You can file a ticket with Support. Include the error message from cf service YOUR-SERVICE-INSTANCE.

To expedite troubleshooting, provide your service broker logs and your service instance logs. If your cf service YOUR-SERVICE-INSTANCE output includes a task-id, provide the BOSH task output.
