
This topic describes how to use BOSH Backup and Restore (BBR) to back up Kubernetes clusters provisioned by VMware Tanzu Kubernetes Grid Integrated Edition.

Overview

You can use BOSH Backup and Restore (BBR) to back up Kubernetes clusters provisioned by TKGI, including the control plane nodes, etcd database, and worker node VMs.

Kubernetes clusters provisioned by TKGI include custom backup and restore scripts which encapsulate the correct procedure for backing up and restoring the cluster nodes and etcd database.

BBR orchestrates running the backup and restore scripts and transferring the generated backup artifacts to and from a backup directory. If configured correctly, BBR can use TLS to communicate securely with backup targets.

To view the BBR release notes, see the Cloud Foundry documentation, BOSH Backup and Restore Release Notes.

Recommendations

VMware recommends:

  • Follow the full procedure documented in this topic when creating a backup. This ensures that you always have a consistent backup of Ops Manager and Tanzu Kubernetes Grid Integrated Edition to restore from.

  • Back up your Kubernetes clusters frequently, especially before upgrading your Tanzu Kubernetes Grid Integrated Edition deployment.

  • For BOSH v270.0 and later (currently in Ops Manager v2.7), prune the BOSH blobstore by running bosh clean-up --all prior to running a backup of the BOSH Director. This removes all unused resources, including packages compiled against older stemcell versions, which can result in a smaller, faster backup of the BOSH Director. For more information, see the clean-up command.

Note: The command bosh clean-up --all is a destructive operation and can remove resources that are unused but still needed. For example, if an On-Demand Service Broker such as Tanzu Kubernetes Grid Integrated Edition is deployed and no service instances have been created, the releases needed to create a service instance are categorized as unused and removed.

Prepare to Back Up

Before you use BBR to either back up TKGI or restore TKGI from backup, follow these steps to retrieve deployment information and credentials:

Verify Your BBR Version

Before running BBR, verify that the installed version of BBR is compatible with your deployment’s current Tanzu Kubernetes Grid Integrated Edition release.

  1. For your current Tanzu Kubernetes Grid Integrated Edition release’s minimum version information, see the Tanzu Kubernetes Grid Integrated Edition Release Notes.

  2. To verify the currently installed BBR version, run the following command:

    bbr version  
    

If you do not have BBR installed, or your installed version does not meet the minimum version requirement, see Installing BOSH Backup and Restore.
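The version check above can be scripted with sort -V. A minimal sketch, using a placeholder minimum version (take the real minimum from the TKGI release notes) and a hard-coded installed version standing in for the output of bbr version:

```shell
# Placeholder minimum; use the value from the TKGI release notes.
MIN_BBR_VERSION="1.9.0"
# Illustrative installed version; in practice substitute the version
# reported by `bbr version`.
INSTALLED="1.9.24"
# sort -V orders version strings numerically; if the minimum sorts first
# (or ties), the installed version meets the requirement.
if [ "$(printf '%s\n' "$MIN_BBR_VERSION" "$INSTALLED" | sort -V | head -n 1)" = "$MIN_BBR_VERSION" ]; then
  echo "BBR version OK"
else
  echo "BBR version too old; see Installing BOSH Backup and Restore"
fi
```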

Retrieve the BBR SSH Credentials

There are two ways to retrieve the BBR SSH credentials:

Ops Manager Installation Dashboard

To retrieve your Bbr Ssh Credentials using the Ops Manager Installation Dashboard, perform the following steps:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the BOSH Director tile.
  3. Click the Credentials tab.

  4. Locate Bbr Ssh Credentials.

  5. Click Link to Credentials next to it.
  6. Copy the private_key_pem field value.

Ops Manager API

To retrieve your Bbr Ssh Credentials using the Ops Manager API, perform the following steps:

  1. Obtain your UAA access token. For more information, see Access the Ops Manager API.
  2. Retrieve the Bbr Ssh Credentials by running the following command:

    curl "https://OPS-MAN-FQDN/api/v0/deployed/director/credentials/bbr_ssh_credentials" \
    -X GET \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN"
    

    Where:

    • OPS-MAN-FQDN is the fully-qualified domain name (FQDN) for your Ops Manager deployment.
    • UAA-ACCESS-TOKEN is your UAA access token.
  3. Copy the value of the private_key_pem field.

Save the BBR SSH Credentials to File

  1. To reformat the copied private_key_pem value and save it to a file in the current directory, run the following command:

    printf -- "YOUR-PRIVATE-KEY" > PRIVATE-KEY-FILE
    

    Where:

    • YOUR-PRIVATE-KEY is the text of your private key.
    • PRIVATE-KEY-FILE is the path to the private key file you are creating.

    For example:

     $ printf -- "-----BEGIN RSA PRIVATE KEY----- fake key contents -----END RSA PRIVATE KEY-----" > bbr_key.pem
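After writing the key file, it is also worth restricting its permissions, since SSH tooling typically rejects world-readable private keys. A minimal sketch with fake key contents standing in for the real private_key_pem value:

```shell
# Fake key contents for illustration; use the real private_key_pem value.
printf -- "-----BEGIN RSA PRIVATE KEY-----\nfake key contents\n-----END RSA PRIVATE KEY-----\n" > bbr_key.pem
chmod 600 bbr_key.pem   # restrict permissions so SSH clients accept the key
head -n 1 bbr_key.pem   # sanity check: prints the PEM header line
```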

Retrieve the BOSH Director Credentials

There are two ways to retrieve BOSH Director credentials:

Ops Manager Installation Dashboard

To retrieve your BOSH Director credentials using the Ops Manager Installation Dashboard, perform the following steps:

  1. Navigate to the Ops Manager Installation Dashboard.
  2. Click the BOSH Director tile.
  3. Click the Credentials tab.

  4. Locate Director Credentials.

  5. Click Link to Credentials next to it.
  6. Copy and record the value of the password field.

Ops Manager API

To retrieve your BOSH Director credentials using the Ops Manager API, perform the following steps:

  1. Obtain your UAA access token. For more information, see Access the Ops Manager API.
  2. Retrieve the Director Credentials by running the following command:

    curl "https://OPS-MAN-FQDN/api/v0/deployed/director/credentials/director_credentials" \
    -X GET \
    -H "Authorization: Bearer UAA-ACCESS-TOKEN"
    

    Where:

    • OPS-MAN-FQDN is the fully-qualified domain name (FQDN) for your Ops Manager deployment.
    • UAA-ACCESS-TOKEN is your UAA access token.

  3. Copy and record the value of the password field.

Retrieve the UAA Client Credentials

To obtain BOSH credentials for your BBR operations, perform the following steps:

  1. From the Ops Manager Installation Dashboard, click the Tanzu Kubernetes Grid Integrated Edition tile.
  2. Select the Credentials tab.
  3. Navigate to Credentials > UAA Client Credentials.
  4. Record the value for uaa_client_secret.
  5. Record the value for uaa_client_name.

Note: You must use BOSH credentials that limit the scope of BBR activity to your cluster deployments.

Retrieve the BOSH Director Address

You access the BOSH Director using an IP address.

To obtain your BOSH Director’s IP address:

  1. Open the Ops Manager Installation Dashboard.
  2. Select BOSH Director > Status.
  3. Record the listed Director IP Address.

Log In To BOSH Director

  1. If you are not using the Ops Manager VM as your jumpbox, install the latest BOSH CLI on your jumpbox.
  2. To log in to BOSH Director, using the IP address that you recorded above, run the following command line:

    bosh -e BOSH-DIRECTOR-IP \
    --ca-cert PATH-TO-BOSH-SERVER-CERTIFICATE log-in
    

    Where:

    • BOSH-DIRECTOR-IP is the BOSH Director IP address recorded above.
    • PATH-TO-BOSH-SERVER-CERTIFICATE is the path to the root Certificate Authority (CA) certificate as outlined in Download the Root CA Certificate.
  3. At the Email prompt, enter director.

  4. At the Password prompt, enter the Director Credentials password that you obtained in Retrieve the BOSH Director Credentials.

    For example:

     $ bosh -e 10.0.0.3 \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate log-in
    Email (): director
    Password (): *******************
    Successfully authenticated with UAA
    Succeeded

Download the Root CA Certificate

To download the root CA certificate for your Tanzu Kubernetes Grid Integrated Edition deployment, perform the following steps:

  1. Open the Ops Manager Installation Dashboard.
  2. In the top right corner, click your username.
  3. Navigate to Settings > Advanced.
  4. Click Download Root CA Cert.

Retrieve the BOSH Command Line Credentials

  1. Open the Ops Manager Installation Dashboard.
  2. Click the BOSH Director tile.
  3. In the BOSH Director tile, click the Credentials tab.
  4. Navigate to Bosh Commandline Credentials.
  5. Click Link to Credential.
  6. Copy the credential value.

Retrieve Your Cluster Deployment Names

To locate and record a cluster deployment name, follow the steps below for each cluster:

  1. On the command line, run the following command to log in:

     tkgi login -a TKGI-API -u USERNAME -k
    

    Where:

    • TKGI-API is the domain name for the TKGI API that you entered in Ops Manager > Tanzu Kubernetes Grid Integrated Edition > TKGI API > API Hostname (FQDN). For example, api.tkgi.example.com.
    • USERNAME is your user name.

      See Logging in to Tanzu Kubernetes Grid Integrated Edition for more information about the tkgi login command.

      Note: If your operator has configured Tanzu Kubernetes Grid Integrated Edition to use a SAML identity provider, you must include an additional SSO flag to use the above command. For information about the SSO flags, see the section for the above command in TKGI CLI. For information about configuring SAML, see Connecting Tanzu Kubernetes Grid Integrated Edition to a SAML Identity Provider.

  2. Identify the cluster ID:

    tkgi cluster CLUSTER-NAME
    

    Where CLUSTER-NAME is the name of your cluster.

  3. From the output of this command, record the UUID value.

  4. Open the Ops Manager Installation Dashboard.

  5. Click the BOSH Director tile.

  6. Select the Credentials tab.

  7. Navigate to Bosh Commandline Credentials and click Link to Credential.

  8. Copy the credential value.

  9. SSH into your jumpbox. For more information about the jumpbox, see Installing BOSH Backup and Restore.

  10. To retrieve your cluster deployment name, run the following command:

    BOSH-CLI-CREDENTIALS deployments | grep UUID
    

    Where:

    • BOSH-CLI-CREDENTIALS is the Bosh Commandline Credentials value that you copied in the previous steps. This value ends with the bosh command, so appending deployments runs bosh deployments with the correct environment variables set.
    • UUID is the cluster UUID that you recorded above.

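The grep step above can be tried against sample lines standing in for real bosh deployments output (the deployment names below are illustrative):

```shell
UUID="abcdeg-1234-5678-hijk-90101112131415"
# Sample lines standing in for real `bosh deployments` output.
printf '%s\n' \
  "service-instance_${UUID}  kubo-0.1.0" \
  "pivotal-container-service-12345abcdefghijklmn  pks-1.0" \
  | grep "$UUID"
```

Only the service-instance line containing the cluster UUID is printed; that line's first column is the cluster deployment name.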
Back Up Tanzu Kubernetes Grid Integrated Edition

To back up your Tanzu Kubernetes Grid Integrated Edition environment, first connect to your jumpbox, then run the bbr backup commands.

Connect to Your Jumpbox

You can establish a connection to your jumpbox with SSH or by using the BOSH_ALL_PROXY environment variable, as described below.

For general information about the jumpbox, see Installing BOSH Backup and Restore.

Connect with SSH

To connect to your jumpbox with SSH, do one of the following:

  • If you are using the Ops Manager VM as your jumpbox, log in to the Ops Manager VM. See Log in to the Ops Manager VM with SSH in Advanced Troubleshooting with the BOSH CLI.

  • If you want to connect to your jumpbox using the command line, run the following command:

    ssh -i PATH-TO-KEY JUMPBOX-USERNAME@JUMPBOX-ADDRESS
    

    Where:

    • PATH-TO-KEY is the local path to your private key for the jumpbox host.
    • JUMPBOX-USERNAME is your jumpbox username.
    • JUMPBOX-ADDRESS is the address of the jumpbox.

Note: If you connect to your jumpbox with SSH, you must run the BBR commands in the following sections from within your jumpbox.

Connect with BOSH_ALL_PROXY

You can use the BOSH_ALL_PROXY environment variable to open an SSH tunnel with SOCKS5 to your jumpbox. This tunnel enables you to forward requests from your local machine to the BOSH Director through the jumpbox. When BOSH_ALL_PROXY is set, BBR always uses its value to forward requests to the BOSH Director.

Note: For the following procedures to work, ensure the SOCKS port is not already in use by a different tunnel or process.

To connect with BOSH_ALL_PROXY, do one of the following:

  • If you want to establish the tunnel separate from the BOSH CLI, do the following:

    1. Establish the tunnel and make it available on a local port by running the following command:

      ssh -4 -D SOCKS-PORT -fNC JUMPBOX-USERNAME@JUMPBOX-ADDRESS -i JUMPBOX-KEY-FILE -o ServerAliveInterval=60
      

      Where:

      • SOCKS-PORT is the local SOCKS port.
      • JUMPBOX-USERNAME is your jumpbox username.
      • JUMPBOX-ADDRESS is the address of the jumpbox.
      • JUMPBOX-KEY-FILE is the local SSH private key for accessing the jumpbox.

      For example:

      $ ssh -4 -D 12345 -fNC jumpbox@203.0.113.0 -i jumpbox.key -o ServerAliveInterval=60

    2. Provide the BOSH CLI with access to the tunnel through BOSH_ALL_PROXY by running the following command:

      export BOSH_ALL_PROXY=socks5://localhost:SOCKS-PORT
      

      Where SOCKS-PORT is your local SOCKS port.

  • If you want to establish the tunnel using the BOSH CLI, do the following:

    1. Provide the BOSH CLI with the necessary SSH credentials to create the tunnel by running the following command:

      export BOSH_ALL_PROXY=ssh+socks5://JUMPBOX-USERNAME@JUMPBOX-ADDRESS:SOCKS-PORT?private_key=JUMPBOX-KEY-FILE
      

      Where:

      • JUMPBOX-USERNAME is your jumpbox username.
      • JUMPBOX-ADDRESS is the address of the jumpbox.
      • SOCKS-PORT is your local SOCKS port.
      • JUMPBOX-KEY-FILE is the local SSH private key for accessing the jumpbox.

      For example:

      $ export BOSH_ALL_PROXY=ssh+socks5://jumpbox@203.0.113.0:12345?private_key=jumpbox.key

Note: Using BOSH_ALL_PROXY can result in longer backup and restore times because of network performance degradation. All operations must pass through the proxy which means moving backup artifacts can be significantly slower.

Warning: In BBR v1.5.0 and earlier, the tunnel created by the BOSH CLI does not include the ServerAliveInterval flag. This may result in your SSH connection timing out when transferring large artifacts. In BBR v1.5.1, the ServerAliveInterval flag is included. For more information, see bosh-backup-and-restore v1.5.1 on GitHub.
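Both forms reduce to setting a single environment variable. A minimal sketch of assembling the ssh+socks5 form from its parts, using placeholder values:

```shell
JUMPBOX_USERNAME="jumpbox"
JUMPBOX_ADDRESS="203.0.113.0"
SOCKS_PORT="12345"
JUMPBOX_KEY_FILE="jumpbox.key"
# Assemble the ssh+socks5 URL that the BOSH CLI reads from BOSH_ALL_PROXY.
export BOSH_ALL_PROXY="ssh+socks5://${JUMPBOX_USERNAME}@${JUMPBOX_ADDRESS}:${SOCKS_PORT}?private_key=${JUMPBOX_KEY_FILE}"
echo "$BOSH_ALL_PROXY"
```

Building the value from named variables makes it easier to spot a missing piece (for example, an unset key file path) before running bbr.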

Back Up Kubernetes Clusters Provisioned by TKGI

Before backing up your TKGI cluster deployments, verify that they can be backed up.

Verify Your Provisioned Clusters

To verify that you can reach your TKGI cluster deployments and that the deployments can be backed up, follow the steps below.

  1. SSH into your jumpbox. For more information about the jumpbox, see Configure Your Jumpbox in Installing BOSH Backup and Restore.
  2. To perform the BBR pre-backup check, run the following command from your jumpbox:

    BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET  bbr deployment \
    --all-deployments  --target BOSH-TARGET  --username TKGI-UAA-CLIENT-NAME \
    --ca-cert PATH-TO-BOSH-SERVER-CERT \
    pre-backup-check
    

    Where:

    • TKGI-UAA-CLIENT-SECRET is the value you recorded for uaa_client_secret in Retrieve the UAA Client Credentials above.
    • BOSH-TARGET is the BOSH Director address that you recorded in Retrieve the BOSH Director Address above. You must be able to reach the target address from the machine where you run bbr commands.
    • TKGI-UAA-CLIENT-NAME is the value you recorded for uaa_client_name in Retrieve the UAA Client Credentials above.
    • PATH-TO-BOSH-SERVER-CERT is the path to the root CA certificate that you downloaded in Download the Root CA Certificate above.

    For example:

     $ BOSH_CLIENT_SECRET=p455w0rd  bbr deployment \
    --all-deployments --target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate \
    pre-backup-check

  3. If the pre-backup-check command is successful, the command returns a list of cluster deployments that can be backed up.


    For example:

     [21:51:23] Pending: service-instance_abcdeg-1234-5678-hijk-90101112131415
    [21:51:23] -------------------------
    [21:51:31] Deployment 'service-instance_abcdeg-1234-5678-hijk-90101112131415' can be backed up.
    [21:51:31] -------------------------
    [21:51:31] Successfully can be backed up: service-instance_abcdeg-1234-5678-hijk-90101112131415

    In the output above, service-instance_abcdeg-1234-5678-hijk-90101112131415 is the BOSH deployment name of a TKGI cluster.

  4. If the pre-backup-check command fails, do one or more of the following:

    • Make sure you are using the correct Tanzu Kubernetes Grid Integrated Edition credentials.
    • Run the command again, adding the --debug flag to enable debug logs. For more information, see BBR Logging.
    • Make the changes suggested in the output and run the pre-backup check again. For example, the deployments might not have the correct backup scripts, or the connection to the BOSH Director failed.

Back Up Kubernetes Clusters Provisioned by TKGI

When backing up your TKGI clusters, you can choose to back up a single cluster or to back up all cluster deployments in scope, using the following procedures:

Back Up All Kubernetes Clusters

The following procedure backs up all cluster deployments.

Make sure you use the TKGI UAA credentials that you recorded in Retrieve the UAA Client Credentials. These credentials limit the scope of the backup to cluster deployments only.

Note: The BBR backup command can take a long time to complete. You can run it independently of the SSH session so that the process can continue running even if your connection to the jumpbox fails. The command below uses nohup, but you could also run the command in a screen or tmux session.
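The detached-run pattern this note describes can be sketched with a short placeholder command standing in for the long-running bbr backup:

```shell
# A short sleep stands in for the long-running bbr backup command.
nohup sh -c 'echo backup started; sleep 1; echo backup done' > bbr-backup.log 2>&1 &
BBR_PID=$!
# The background process survives a dropped SSH session;
# monitor progress by tailing the log file.
wait "$BBR_PID"
cat bbr-backup.log
```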

  1. To back up all cluster deployments, run the following command from your jumpbox:

    BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET nohup bbr deployment \
    --all-deployments --target BOSH-TARGET --username TKGI-UAA-CLIENT-NAME \
    --ca-cert PATH-TO-BOSH-SERVER-CERT \
    backup [--with-manifest] [--artifact-path]
    

    Where:

    • TKGI-UAA-CLIENT-SECRET is the value you recorded for uaa_client_secret in Retrieve the UAA Client Credentials above.
    • BOSH-TARGET is the value you recorded for the BOSH Director’s address in Retrieve the BOSH Director Address above. You must be able to reach the target address from the machine where you run bbr commands.
    • TKGI-UAA-CLIENT-NAME is the value you recorded for uaa_client_name in Retrieve the UAA Client Credentials above.
    • PATH-TO-BOSH-SERVER-CERT is the path to the root CA certificate that you downloaded in Download the Root CA Certificate above.
    • --with-manifest is an optional backup parameter to include the manifest in the backup artifact. If you use this flag, the backup artifact then contains credentials that you should keep secret.
    • --artifact-path is an optional backup parameter to specify the output path for the backup artifact.

    For example:

     $ BOSH_CLIENT_SECRET=p455w0rd \
    nohup bbr deployment \
    --all-deployments \
    --target bosh.example.com \
    --username pivotal-container-service-12345abcdefghijklmn \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate \
    backup

    Note: The optional --with-manifest flag directs BBR to create a backup containing credentials. You should manage the generated backup artifact knowing it contains secrets for administering your environment.

  2. If the backup command completes successfully, follow the steps in Manage Your Backup Artifact below.

  3. If the backup command fails, the backup operation exits. BBR does not attempt to continue backing up any remaining clusters. To troubleshoot a failing backup, follow the steps in Recover from a Failing Command below.

Back Up a Single Kubernetes Cluster

  1. To back up a single, specific cluster deployment, run the following command from your jumpbox:

    BOSH_CLIENT_SECRET=TKGI-UAA-CLIENT-SECRET \
    nohup bbr deployment \
    --deployment CLUSTER-DEPLOYMENT-NAME \
    --target BOSH-TARGET \
    --username TKGI-UAA-CLIENT-NAME \
    --ca-cert PATH-TO-BOSH-SERVER-CERT \
    backup [--with-manifest] [--artifact-path]
    

    Where:

    • TKGI-UAA-CLIENT-SECRET is the value you recorded for uaa_client_secret in Retrieve the UAA Client Credentials above.
    • CLUSTER-DEPLOYMENT-NAME is the value you recorded in Retrieve Your Cluster Deployment Names above.
    • BOSH-TARGET is the value you recorded for the BOSH Director’s address in Retrieve the BOSH Director Address above. You must be able to reach the target address from the machine where you run bbr commands.
    • TKGI-UAA-CLIENT-NAME is the value you recorded for uaa_client_name in Retrieve the UAA Client Credentials above.
    • PATH-TO-BOSH-SERVER-CERT is the path to the root CA certificate that you downloaded in Download the Root CA Certificate above.
    • --with-manifest is an optional backup parameter to include the manifest in the backup artifact. If you use this flag, the backup artifact then contains credentials that you should keep secret.
    • --artifact-path is an optional backup parameter to specify the output path for the backup artifact.

    For example:

     $ BOSH_CLIENT_SECRET=p455w0rd  nohup bbr deployment \
    --deployment service-instance_abcdeg-1234-5678-hijk-90101112131415 \
    --target bosh.example.com --username pivotal-container-service-12345abcdefghijklmn \
    --ca-cert /var/tempest/workspaces/default/root_ca_certificate \
    backup

    Note: The optional --with-manifest flag directs BBR to create a backup containing credentials. You should manage the generated backup artifact knowing it contains secrets for administering your environment.

  2. If the backup command completes successfully, follow the steps in Manage Your Backup Artifact below.

  3. If the backup command fails, follow the steps in Recover from a Failing Command below.

Cancel a Cluster Backup

Backups can take a long time. If you realize that a backup is going to fail, or you need to free the deployment for other operations immediately, you might need to cancel the backup.

To cancel a backup, perform the following steps:

  1. Terminate the BBR process by pressing Ctrl-C and typing yes to confirm.
  2. Because stopping a backup can leave the system in an unusable state and prevent additional backups, follow the procedures in Clean up After a Failed Backup below.

After Backing Up Tanzu Kubernetes Grid Integrated Edition

After the backup has completed, review and manage the generated backup artifacts.

Manage Your Backup Artifact

The BBR-created backup consists of a directory containing the backup artifacts and metadata files. BBR stores each completed backup directory within the current working directory.

Note: The optional --with-manifest flag directs BBR to create a backup containing credentials. You should manage the generated backup artifact knowing it contains secrets for administering your environment.

BBR backup artifact directories are named using the following formats:

  • DIRECTOR-IP-TIMESTAMP for the BOSH Director backups.
  • DEPLOYMENT-TIMESTAMP for the Control Plane backup.
  • DEPLOYMENT-TIMESTAMP for the cluster deployment backups.
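As an illustration of the naming scheme (the exact timestamp format shown is an assumption, not the documented BBR format), a cluster backup directory name has this shape:

```shell
DEPLOYMENT="service-instance_abcdeg-1234-5678-hijk-90101112131415"
TIMESTAMP="$(date -u +%Y%m%dT%H%M%SZ)"   # timestamp format assumed for illustration
ARTIFACT_DIR="${DEPLOYMENT}_${TIMESTAMP}"
echo "$ARTIFACT_DIR"
```

Scripts that rotate or prune old backups can rely on the deployment-name prefix to group artifacts per cluster.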

Keep your backup artifacts safe by following these steps:

  1. Move the backup artifacts off the jumpbox to your storage space.

  2. Compress and encrypt the backup artifacts when storing them.

  3. Make redundant copies of your backup and store them in multiple locations. This minimizes the risk of losing your backups in the event of a disaster.

  4. Each time you redeploy Tanzu Kubernetes Grid Integrated Edition, test your backup artifact by performing the restore procedures in a non-production environment.
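Steps 1 and 2 above can be sketched as follows; encryption tooling (for example gpg) varies by organization, so only the compression step is shown, with a placeholder directory standing in for a real BBR artifact:

```shell
# Placeholder artifact directory standing in for a real BBR backup.
ARTIFACT="service-instance_example_20240102T030405Z"
mkdir -p "$ARTIFACT"
echo "metadata" > "$ARTIFACT/metadata"
# Compress before moving the artifact off the jumpbox; encrypt the
# resulting file with your organization's tooling before storage.
tar -czf "$ARTIFACT.tgz" "$ARTIFACT"
ls -l "$ARTIFACT.tgz"
```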

Recover from a Failing Command

If the backup fails, follow these steps:

  1. Ensure that you set all the parameters in the backup command.
  2. Ensure the credentials previously obtained are valid.
  3. Ensure the deployment that you specify in the BBR command exists.
  4. Ensure that the jumpbox can reach the BOSH Director.
  5. Consult BBR Logging.
  6. If you see the error message: Directory /var/vcap/store/bbr-backup already exists on instance, run the appropriate cleanup command. See Clean Up After a Failed Backup below for more information.
  7. If the backup artifact is corrupted, discard the failing artifacts and run the backup again.

Clean Up After a Failed Backup

If your backup process fails, use the BBR cleanup script to clean up the failed run.

Warning: It is important to run the BBR cleanup script after a failed BBR backup run. A failed backup run might leave the BBR backup directory on the instance, causing any subsequent attempts to backup to fail. In addition, BBR might not have run the post-backup scripts, leaving the instance in a locked state.

  • If the TKGI BOSH Director backup failed, run the following BBR cleanup script command to clean up:

    bbr director --host BOSH-DIRECTOR-IP \
    --username bbr  --private-key-path PRIVATE-KEY-FILE \
    backup-cleanup
    

    Where:

    • BOSH-DIRECTOR-IP is the address of the BOSH Director. If the BOSH Director is public, BOSH-DIRECTOR-IP is a URL, such as https://my-bosh.xxx.cf-app.com. Otherwise, this is the internal IP address, which you can retrieve as shown in Retrieve the BOSH Director Address above.
    • PRIVATE-KEY-FILE is the path to the private key file that you created from Bbr Ssh Credentials in Save the BBR SSH Credentials to File above.

    For example:

     $ bbr director --host 10.0.0.5 --username bbr \
    --private-key-path private-key.pem \
    backup-cleanup

  • If the TKGI control plane or TKGI clusters backups fail, run the following BBR cleanup script command to clean up:

    BOSH_CLIENT_SECRET=BOSH-CLIENT-SECRET \
    bbr deployment \
    --target BOSH-TARGET \
    --username BOSH-CLIENT \
    --deployment DEPLOYMENT-NAME \
    --ca-cert PATH-TO-BOSH-CA-CERT \
    backup-cleanup
    

    Where:

    • BOSH-CLIENT-SECRET is your BOSH client secret. If you do not know your BOSH Client Secret, open your BOSH Director tile, navigate to Credentials > Bosh Commandline Credentials and record the value for BOSH_CLIENT_SECRET.
    • BOSH-TARGET is your BOSH Environment setting. If you do not know your BOSH Environment setting, open your BOSH Director tile, navigate to Credentials > Bosh Commandline Credentials and record the value for BOSH_ENVIRONMENT. You must be able to reach the target address from the workstation where you run bbr commands.
    • BOSH-CLIENT is your BOSH Client Name. If you do not know your BOSH Client Name, open your BOSH Director tile, navigate to Credentials > Bosh Commandline Credentials and record the value for BOSH_CLIENT.
    • DEPLOYMENT-NAME is the Tanzu Kubernetes Grid Integrated Edition BOSH deployment name that you located in Retrieve Your Cluster Deployment Names above.
    • PATH-TO-BOSH-CA-CERT is the path to the root CA certificate that you downloaded in Download the Root CA Certificate above.

    For example:

     $ BOSH_CLIENT_SECRET=p455w0rd bbr deployment \
    --target bosh.example.com --username admin --deployment cf-acceptance-0 \
    --ca-cert bosh.ca.crt \
    backup-cleanup

If the cleanup script fails, consult the following table to match the exit code to an error message:

    Value  Error
    0      Success.
    1      General failure.
    8      The post-backup unlock failed. One of your deployments might be in a bad state and require attention.
    16     The cleanup failed. This is a non-fatal error indicating that the utility has been unable to clean up open BOSH SSH connections to a deployment's VMs. Manual cleanup might be required to clear any hanging BOSH users and connections.
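A wrapper script around backup-cleanup might translate these exit codes into messages, as in this sketch:

```shell
# Map a bbr backup-cleanup exit status to the message from the table above.
describe_bbr_exit() {
  case "$1" in
    0)  echo "Success" ;;
    1)  echo "General failure" ;;
    8)  echo "Post-backup unlock failed; a deployment may need attention" ;;
    16) echo "Cleanup failed; hanging BOSH SSH connections may need manual cleanup" ;;
    *)  echo "Unknown exit code: $1" ;;
  esac
}
describe_bbr_exit 8
```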
