Here you will learn how to diagnose issues that you encounter when installing products such as VMware Tanzu Application Service for VMs (TAS for VMs) with VMware Tanzu Operations Manager.

An important consideration when diagnosing issues is communication between VMs deployed by Operations Manager. Communication takes the form of messaging, routing, or both. If either of these goes wrong, an installation can fail. For example, after installing TAS for VMs, one of the VMs must deploy a test app to the cloud during post-installation testing. The installation fails if the resulting traffic cannot be routed to the HAProxy load balancer.

Viewing the debug endpoint

The debug endpoint is a web page that provides diagnostics information. If you have superuser privileges and can view the Tanzu Operations Manager Installation Dashboard, you can access the debug endpoint.

To access the debug endpoint, open the following URL in a web browser:

https://OPS-MANAGER-FQDN/debug

Where OPS-MANAGER-FQDN is the fully-qualified domain name (FQDN) of your Operations Manager installation.
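
For example, if the hypothetical FQDN of your Operations Manager installation were ops-manager.example.com, you would open:

https://ops-manager.example.com/debug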

The debug endpoint offers three links:

  • Files allows you to view the YAML files that Operations Manager uses to configure products that you install. The most important YAML file, installation.yml, provides networking settings and describes microbosh. In this case, microbosh is the VM whose BOSH Director component is used by Operations Manager to perform installations and updates of TAS for VMs and other products.

  • Components describes the components in detail.

  • Rails log shows errors thrown by the VM where the web app, such as a Rails app, is running, as recorded in the production.log file. To explore other logs, see Logging Tips.

Logging tips

Identifying where to start

This section contains general tips for locating where a particular problem is called out in the log files. For guidance about specific logs, such as those for TAS for VMs components, see the sections that follow. To narrow down where a problem is recorded in the job logs (see the example commands after this list):

  • Start with the largest and most recently updated files in the job log.
  • Identify logs that contain err in the name.
  • Scan the file contents for a “failed” or “error” string.
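
For example, here is a minimal shell sketch that applies these tips, assuming you have already unzipped a downloaded job log into your current directory (file and directory names vary by component):

    # Largest files under the extracted log directory, biggest first
    du -ah . | sort -rh | head

    # Most recently updated log files first
    find . -name '*.log' | xargs ls -lt | head

    # Logs with "err" in the name
    find . -name '*err*'

    # Files that mention "failed" or "error", case-insensitive
    grep -ril -e 'failed' -e 'error' .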

Viewing logs for TAS for VMs components

To troubleshoot specific TAS for VMs components by viewing their log files:

  1. Go to your Operations Manager Installation Dashboard and click on the TAS for VMs tile.

  2. Select the Status tab.

  3. In the Job column, locate the component that you want to troubleshoot.

  4. In the Logs column for the component, click the download icon.

  5. Select the Logs tab.

    The Logs tab lists the available ZIP files, with each file path shown next to a timestamp indicating when the file was last updated.

  6. Once the ZIP file corresponding to the component moves to the Downloaded list, click the linked file path to download the ZIP file.

  7. Once the download completes, unzip the file.

The contents of the log directory vary depending on which component you view. For example, the Diego Cell log directory contains subdirectories for the metron_agent, rep, monit, and garden processes. To view the standard error stream for garden, download the Diego Cell logs and open diego.0.job > garden > garden.stderr.log.
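
For example, here is a hypothetical shell session (the name of the downloaded ZIP file varies by component and deployment) that extracts the Diego Cell logs and opens the garden standard error stream:

    # Extract the downloaded archive; "diego_cell-logs.zip" is an illustrative name
    unzip diego_cell-logs.zip -d diego_cell-logs

    # View garden's standard error stream
    less diego_cell-logs/diego.0.job/garden/garden.stderr.log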

Viewing web app and BOSH failure logs in a terminal window

You can obtain diagnostic information by logging in to the VM where the BOSH Director job is running. To log in to the BOSH Director VM, you need:

  • The IP address of the VM shown in the Settings tab of the BOSH Director tile.

  • Your import credentials. Import credentials are the username and password used to import the .ova or .ovf file into your virtualization system.

To log in to the VM:

  1. Open a terminal window.

  2. To connect to the BOSH Director VM, run:

    ssh IMPORT-USERNAME@VM-IP-ADDRESS
    

    Where:

    • IMPORT-USERNAME is the username you used to import the .ova or .ovf file into your virtualization system.
    • VM-IP-ADDRESS is the IP address of the BOSH Director installation VM.
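
    For example, with a hypothetical import username of ubuntu and a hypothetical VM IP address of 10.0.0.3, you would run:

    ssh ubuntu@10.0.0.3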
  3. Enter your import password when prompted.

  4. Go to the home directory of the web app by running:

    cd /home/tempest-web/tempest/web/
    
  5. You are now in a position to explore whether things are as they should be within the web app. You can also verify that the MicroBOSH component is successfully installed. A successful MicroBOSH installation is required before you can install TAS for VMs and other products, such as databases and messaging services.

  6. Navigate to the BOSH installation log home directory by running:

    cd /var/tempest/workspaces/default/deployments/micro
    
  7. You may want to begin by running a tail command on the current log. Run:

    tail -f LOG-FILE
    

    Where LOG-FILE is the name of the most recently updated log file in this directory.
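
    As a sketch, to see which log in this directory was updated most recently before choosing one to tail:

    ls -lt | head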

    If you cannot resolve an issue by viewing configurations, exploring logs, or reviewing common problems, you can troubleshoot further by running BOSH diagnostic commands with the BOSH Command Line Interface (CLI).
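
    As a sketch, assuming the BOSH CLI v2 or later is installed and you have configured an environment alias (hypothetically named my-env) with your BOSH Director's address and credentials, common diagnostic commands include:

    bosh -e my-env vms                                # list VMs and their process state across deployments
    bosh -e my-env tasks --recent                     # show recent Director tasks, including failures
    bosh -e my-env -d DEPLOYMENT-NAME cloud-check     # scan a deployment for unresponsive VMs or disk problems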

Caution Do not manually modify the deployment manifest. Operations Manager overwrites manual changes to this manifest. In addition, manually changing the manifest may cause future deployments to fail.

Viewing the VMs in your deployment

To view the VMs in your deployment, follow the procedure specific to your IaaS.

Amazon Web Services (AWS)

To view the VMs in your AWS deployment:

  1. Log in to the AWS Console.

  2. Go to the EC2 Dashboard.

  3. Click Running Instances.

  4. Click the gear icon in the upper-right corner.

  5. Select the job, deployment, director, and index checkboxes.

  6. Click Close.
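
Alternatively, as a sketch, assuming the AWS CLI is installed and configured for the account that hosts your deployment, you can list the same BOSH-assigned tags from the command line:

    # Show the instance ID plus the job, deployment, and index tags of each running instance
    aws ec2 describe-instances \
      --filters "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].{ID:InstanceId,Job:Tags[?Key==`job`]|[0].Value,Deployment:Tags[?Key==`deployment`]|[0].Value,Index:Tags[?Key==`index`]|[0].Value}' \
      --output table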

OpenStack

To view the VMs in your OpenStack deployment:

  1. Install the novaclient from the python-novaclient repository on GitHub.

  2. Point novaclient to your OpenStack installation and tenant by exporting the following environment variables:

    export OS_AUTH_URL=YOUR_KEYSTONE_AUTH_ENDPOINT
    export OS_TENANT_NAME=TENANT_NAME
    export OS_USERNAME=USERNAME
    export OS_PASSWORD=PASSWORD
    
  3. List your VMs by running:

    nova list --fields metadata
    
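
    If your OpenStack installation provides the unified openstack client rather than novaclient, a roughly equivalent command (an assumption; check your client version) is:

    openstack server list --long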

vSphere

To view the VMs in your vSphere deployment:

  1. Log in to vCenter.

  2. Select Hosts and Clusters.

  3. Select the top level object that contains your deployment. For example, select Cluster, Datastore, or Resource Pool.

  4. In the top tab, click Related Objects.

  5. Select Virtual Machines.

  6. Right-click the Table heading and select Show/Hide Columns.

  7. Select the job, deployment, director, and index boxes.

Viewing Apps Manager logs in a terminal window

Apps Manager provides a graphical user interface to help manage organizations, users, apps, and spaces. For more information about Apps Manager, see Getting Started with Apps Manager.

When troubleshooting Apps Manager performance, you can view the Apps Manager app logs. To view the Apps Manager app logs:

  1. From a command line, log in to your TAS for VMs deployment with the Cloud Foundry Command Line Interface (cf CLI) by running:

    cf login -a api.YOUR-SYSTEM-DOMAIN -u admin
    

    Where YOUR-SYSTEM-DOMAIN is your system domain.

    When prompted, enter your UAA Administrator credentials. To obtain these credentials, see the Credentials tab in the TAS for VMs tile.

  2. Target the system org and the apps-manager space by running:

    cf target -o system -s apps-manager
    
  3. Tail the Apps Manager logs by running:

    cf logs apps-manager
    
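
    If you prefer a snapshot of recent log lines instead of a live stream, the cf CLI can also dump its buffered recent logs for the app:

    cf logs apps-manager --recent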

Changing logging levels for Apps Manager

Apps Manager recognizes the LOG_LEVEL environment variable. The LOG_LEVEL environment variable allows you to filter the messages reported in Apps Manager log files by severity level. Apps Manager defines severity levels using the Ruby standard library Logger class.

By default, the Apps Manager LOG_LEVEL environment variable is set to info. The logs show more verbose messaging when you set the LOG_LEVEL to debug.

To change the Apps Manager LOG_LEVEL environment variable, run:

cf set-env apps-manager LOG_LEVEL LEVEL

Where LEVEL is the desired severity level.
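
For example, to capture the most verbose output while debugging (as with any environment variable set with cf set-env, the app typically must be restarted or restaged before the change takes effect):

cf set-env apps-manager LOG_LEVEL debug
cf restage apps-manager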

You can set LOG_LEVEL to one of the six severity levels defined by the Ruby Logger class:

  • Level 5 unknown: An unknown message that should always be logged.
  • Level 4 fatal: An unhandleable error that results in a program crash.
  • Level 3 error: A handleable error condition.
  • Level 2 warn: A warning.
  • Level 1 info: General information about system operation.
  • Level 0 debug: Low-level information for developers.

Once set, Apps Manager log files only include messages at the set severity level and above. For example, if you set LOG_LEVEL to fatal, the log only includes fatal and unknown level messages.

Analyzing disk usage on containers and Diego Cell VMs

To obtain disk usage statistics by Diego Cell VMs and containers, see Examining GrootFS Disk Usage.
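
As a quick first check before examining GrootFS statistics (a sketch, assuming BOSH CLI access to your TAS for VMs deployment; the environment alias, deployment name, and instance index below are illustrative), you can inspect overall ephemeral disk usage on a Diego Cell directly:

    bosh -e my-env -d cf-DEPLOYMENT-NAME ssh diego_cell/0 -c 'df -h /var/vcap/data'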
