The SoS tool writes the component log files into an output directory structure within the filesystem of the SDDC Manager instance in which the command is initiated, for example:

root@sddc-manager-controller [ /tmp/sos ]# ./sos
Welcome to Supportability and Serviceability(SoS) utility!
Logs : /var/tmp/sos-2017-09-13-17-29-51-8575
Log file : /var/tmp/sos-2017-09-13-17-29-51-8575/sos.log
Log Collection completed successfully for : [AUDIT, VIA, SDDC-MANAGER, SDDC-CASSANDRA, NSX_MANAGER, PSC, AUDIT LOG, ZOOKEEPER, API-LOGS, ESX, VDI, SWITCH, HMS, VMS_SCREENSHOT, VCENTER-SERVER, LOGINSIGHT, HEALTH-CHECK]
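Because each output directory name embeds the collection timestamp, a plain lexicographic sort is also chronological. The following is a minimal sketch of a helper to find the most recent SoS output directory; the function name is illustrative and is not part of the SoS tool.

```shell
# Hypothetical helper: return the newest SoS output directory under a
# given base path. Names follow the pattern sos-YYYY-MM-DD-HH-MM-SS-PID,
# so sorting the names lexicographically orders them by collection time.
newest_sos_dir() {
    ls -d "$1"/sos-* 2>/dev/null | sort | tail -n 1
}

# Example: newest_sos_dir /var/tmp
```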

esx Directory Contents

In each rack-specific directory, the esx directory contains the following diagnostic files collected for each ESXi host in the rack:

File

Description

esx-IP-address.tgz

Diagnostic information from running the vm-support command on the ESXi host.

An example file is esx-192.168.100.101.tgz.

SmartInfo-IP-address.txt

S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) status of the ESXi host's hard drive.

An example file is SmartInfo-192.168.100.101.txt.

vsan-health-IP-address.txt

vSAN cluster health information from running the standard command python /usr/lib/vmware/vsan/bin/vsan-health-status.pyc on the ESXi host.

An example file is vsan-health-192.168.100.101.txt.
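The per-host naming convention above can be sketched as a small helper; the function name and host address are illustrative only and are not part of the SoS tool.

```shell
# Sketch of the esx directory naming convention for one ESXi host.
esx_file_names() {
    ip="$1"
    printf 'esx-%s.tgz\n'         "$ip"   # vm-support bundle
    printf 'SmartInfo-%s.txt\n'   "$ip"   # S.M.A.R.T. drive status
    printf 'vsan-health-%s.txt\n' "$ip"   # vSAN cluster health output
}

esx_file_names 192.168.100.101
```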

hms Directory Contents

In each rack-specific directory, the hms directory contains subdirectories named N0_hms_logs_timestamp.zip, N1_hms_logs_timestamp.zip, N2_hms_logs_timestamp.zip, and so on, one subdirectory for each ESXi host in the rack.

An example of the files and subdirectories in the hms directory is:

hms
  hms_log_archiver.sh
  N0_hms_logs_2016-11-01_09-25-22.zip
  N1_hms_logs_2016-11-01_09-25-35.zip
  N2_hms_logs_2016-11-01_09-25-38.zip
  N3_hms_logs_2016-11-01_09-25-29.zip
  ...
        

The hms_log_archiver.sh file that appears in the hms directory is the script that obtains the HMS diagnostic files for each subdirectory. Each subdirectory contains the following files, where Nn refers to the file for the nth ESXi host.

File

Description

Nn_hms_ib_timestamp.log

HMS in-band (IB) log

Nn_hms_oob_timestamp.zip

HMS out-of-band (OOB) log files hms.log and hms.log.1

Nn_hms_events_log_timestamp.log

HMS events log file

Nn_ServerInfo_timestamp.log

HMS server info log file

loginsight Directory Contents

In each rack-specific directory, the loginsight directory contains the diagnostic information files collected from the vRealize Log Insight instance deployed on that rack, if any. Not every rack in the installation will have a vRealize Log Insight instance deployed on it.

File

Description

li.tgz

Compressed TAR file consisting of the vRealize Log Insight instance's /var/log directory.

loginsight-support-timestamp.tar.gz

Standard vRealize Log Insight compressed support bundle, created by the loginsight-support command.

repo.tar.gz

Compressed TAR file containing a mass export of the instance's repository buckets, created by running the /opt/vmware/bin/loginsight-dump-repo.sh script in the vRealize Log Insight instance.

loginsight-agent-vcenterFQDN-timestamp.zip Directory Contents

Even though these directories' names end in .zip, each one is a directory of files. In each rack-specific directory, these directories contain the diagnostic information files for the vRealize Log Insight Linux agent configured on each vCenter Server instance in the rack. When a vRealize Log Insight instance is deployed in the Cloud Foundation environment, each vCenter Server instance is configured with the vRealize Log Insight Linux agent, which collects events from that vCenter Server instance and forwards them to the vRealize Log Insight instance. Because a vCenter Server instance is deployed for the rack's management domain and for each of the rack's VI or VDI workload domains, at least one loginsight-agent-vcenterFQDN-timestamp.zip directory appears in each rack-specific directory of the log output.

The vRealize Log Insight Linux agent writes its own operation log files. The files in each loginsight-agent-vcenterFQDN-timestamp.zip directory result from the SoS tool running the /usr/lib/loginsight-agent/bin/loginsight-agent-support command to generate the standard vRealize Log Insight Linux agent support bundle.

File

Description

config/liagent.ini

Configuration file containing the preconfigured default settings for the agent.

config/liagent-effective.ini

The agent's effective configuration, formed by dynamically combining liagent.ini with the server-side settings from the vRealize Log Insight instance.

log/liagent_timestamp_*.log

Detailed log files.

var/log/messages

If the agent is configured to collect messages from the vCenter Server instance's /var/log directory, this file is the collected messages log.

nsx Directory Contents

In each rack-specific directory, the nsx directory contains the diagnostic information files collected for the NSX Manager instances and NSX Controller instances deployed in that rack.

The number of files in this directory depends on the number of NSX Manager and NSX Controller instances that are deployed in the rack. In a given rack, each management domain has one NSX Manager instance and a minimum of three NSX Controller instances, and any VI or VDI workload domains in the rack each have one NSX Manager instance and at least three NSX Controller instances.
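As illustrative arithmetic only: with one NSX Manager instance and at least three NSX Controller instances per domain, a rack with one management domain and N workload domains yields at least 4 × (1 + N) support bundles in this directory. A sketch:

```shell
# Minimum bundle count in the nsx directory for a rack with one
# management domain plus a given number of workload domains
# (1 NSX Manager + 3 NSX Controllers per domain).
min_nsx_bundles() {
    echo $(( 4 * (1 + $1) ))
}

min_nsx_bundles 2   # management domain plus two workload domains
```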

File

Description

VMware-NSX-Manager-tech-support-nsxmanagerIPaddr.tar.gz

Standard NSX Manager compressed support bundle, generated using the NSX for vSphere API POST https://nsxmanagerIPaddr/api/1.0/appliance-management/techsupportlogs/NSX, where nsxmanagerIPaddr is the IP address of the NSX Manager instance.

An example is VMware-NSX-Manager-tech-support-10.0.0.8.tar.gz.

VMware-NSX-Controller-tech-support-nsxmanagerIPaddr-controller-controllerId.tgz

Standard NSX Controller compressed support bundle, generated using the NSX for vSphere API to query the NSX Controller technical support logs: GET https://nsxmanagerIPaddr/api/2.0/vdn/controller/controllerId/techsupportlogs, where nsxmanagerIPaddr is the IP address of the NSX Manager instance and controllerId identifies the NSX Controller instance.

Examples are VMware-NSX-Controller-tech-support-10.0.0.8-controller-1.tgz, VMware-NSX-Controller-tech-support-10.0.0.8-controller-2.tgz, and VMware-NSX-Controller-tech-support-10.0.0.8-controller-3.tgz.
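The two API endpoints quoted above can be assembled as follows, shown here for a hypothetical NSX Manager at 10.0.0.8 and controller ID 1. The curl invocations are a sketch only; they require NSX administrator credentials, so they are shown commented out.

```shell
# NSX Manager address and controller ID are placeholders.
nsx_mgr="10.0.0.8"
ctrl_id="1"

mgr_url="https://${nsx_mgr}/api/1.0/appliance-management/techsupportlogs/NSX"
ctrl_url="https://${nsx_mgr}/api/2.0/vdn/controller/${ctrl_id}/techsupportlogs"

# One might download the bundles manually (admin credentials required):
#   curl -k -u admin -X POST "$mgr_url"  -o "VMware-NSX-Manager-tech-support-${nsx_mgr}.tar.gz"
#   curl -k -u admin "$ctrl_url" -o "VMware-NSX-Controller-tech-support-${nsx_mgr}-controller-${ctrl_id}.tgz"

echo "$mgr_url"
echo "$ctrl_url"
```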

psc Directory Contents

In the rack-1 directory, the psc directory contains the diagnostic information files collected for the Platform Services Controller instances deployed in that rack.

Note:

In a Cloud Foundation environment, the two Platform Services Controller instances are deployed in the primary rack only. As a result, this psc directory only appears in the primary rack's log output. For the description of the primary rack, see VMware Software Components Deployed in a Typical Cloud Foundation System.

File

Description

vm-support-pscIPaddr.tar.gz

Standard Platform Services Controller support bundle downloaded from the Platform Services Controller instance with IP address pscIPaddr.

switch Directory Contents

In the rack-specific directory, the switch directory contains the diagnostic information files collected for that rack's switches.

Each physical rack in the installation has a management switch and two ToR switches. A multirack system additionally has two inter-rack switches. The SoS tool writes the logs for the inter-rack switches into the rack-1/switch subdirectory.

Only certain switch makers and models are supported for use in a Cloud Foundation installation. See the VMware Cloud Foundation section of the VMware Compatibility Guide for details on which switch makers and models are supported for this release.

File

Description

cl_support_Management1_timestamp.tar.xz

Standard support bundle collected from a management switch. In this release, the management switches run the Cumulus Linux operating system, and the SoS tool collects the switch's support bundle using the standard Cumulus /usr/cumulus/bin/cl-support command.

IPaddr-switchmaker-techsupport.gz

Standard support bundle collected from a ToR or inter-rack switch at IP address IPaddr and for switch maker switchmaker. The SoS tool collects the switch's support bundle using the appropriate command for the particular switch, such as show tech-support.

The ToR switches typically have IP addresses 192.168.0.20 and 192.168.0.21. The inter-rack switches typically have IP addresses 192.168.0.30 and 192.168.0.31.

vc Directory Contents

In each rack-specific directory, the vc directory contains the diagnostic information files collected for the vCenter Server instances deployed in that rack.

The number of files in this directory depends on the number of vCenter Server instances that are deployed in the rack. In a given rack, each management domain has one vCenter Server instance, and any VI or VDI workload domains in the rack each have one vCenter Server instance.

File

Description

vc-vcsaFQDN-timestamp.tgz

Standard vCenter Server support bundle downloaded from the vCenter Server Appliance instance with the fully qualified domain name vcsaFQDN. The support bundle is obtained from the instance using the standard vc-support.sh command.

vdi Directory Contents

If the rack has a deployed VDI workload domain, the SoS tool creates a vdi directory in the log directory for that rack. The vdi directory contains the diagnostic information files collected for the VDI environment's VMware server components deployed in that rack.

The SoS tool collects the standard VMware support bundles from the VMware Horizon and App Volumes server components that are deployed as VMs for use by the VDI environment:

  • View Connection Server instances, including when View Connection Server is deployed as a security server for the VDI environment. A security server is a special instance of View Connection Server as described in the VMware Horizon product documentation. A security server is deployed for the VDI environment if the Connect from anywhere option was specified when the VDI workload domain was created.

  • App Volumes Manager. The App Volumes Manager instance is deployed for the VDI environment if the Implement App Volumes option was specified when the VDI workload domain was created.

File

Description

connHostname.vdm-sdct-timestamp-server.zip

View Connection Server support bundle downloaded from the View Connection Server instance having hostname connHostname, such as con-1-1, con-1-2, and so on. The support bundle is obtained from the instance using the standard C:\Program Files\VMware View\Server\DCT\support.bat command for View Connection Server.

appvolsHostname-logs.zip

App Volumes log files obtained from the App Volumes Manager instance having hostname appvolsHostname, such as appvolumes-1-1, appvolumes-1-2, and so on.

vrm.properties Directory Contents

In each rack-specific directory, the vrm.properties directory contains the following configuration files from the SDDC Manager instance deployed in the rack:

File

Description

hms_ib_inventory.json

SDDC Manager rack hardware inventory file, created during imaging of the rack. The SoS tool obtains this file from the SDDC Manager instance's /home/vrack/VMware/vRack directory.

vrm-security.keystore

SDDC Manager keystore file, from the SDDC Manager system's /home/vrack/VMware/vRack directory.

vrm.properties

Properties file from the SDDC Manager Dashboard.

vrm.properties.vRack

Copy of the SDDC Manager vrm.properties file in the SDDC Manager system's /home/vrack/VMware/vRack directory.

zk Directory Contents

In the rack-1 directory, the zk directory contains three subdirectories, one for each of the SDDC Manager ISVM instances deployed in that rack, each containing the diagnostic information files collected for that instance.

Note:

In a Cloud Foundation environment, the three ISVM instances are deployed in the primary rack only. As a result, this zk directory only appears in the primary rack's log output. For the description of the primary rack, see VMware Software Components Deployed in a Typical Cloud Foundation System.

The subdirectories in the zk directory are named according to the three ISVM instances' IP addresses, such as:

  • 192.168.100.43

  • 192.168.100.44

  • 192.168.100.45

Each subdirectory contains two files.

File

Description

cassandra-bundle.tgz

Compressed TAR file containing the Cassandra database's logs and diagnostic information.

zk-bundle.tgz

Compressed TAR file containing the Zookeeper logs and diagnostic information.
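Both bundles are ordinary gzip-compressed TAR files and can be unpacked with tar for inspection. A minimal sketch, demonstrated on a stand-in archive so the commands are self-contained:

```shell
# Build a stand-in archive and unpack it; the same tar -xzf command
# works on the collected cassandra-bundle.tgz and zk-bundle.tgz files.
workdir=$(mktemp -d)
cd "$workdir"

mkdir zk-logs && echo "sample" > zk-logs/zookeeper.log
tar -czf zk-bundle.tgz zk-logs        # stand-in for the collected file

mkdir extracted
tar -xzf zk-bundle.tgz -C extracted   # unpack for inspection
ls extracted/zk-logs
```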

hms.tar.gz Contents

Each rack-specific directory has an hms.tar.gz file.

File

Description

hms.tar.gz

Compressed file containing hms.tar, which contains the HMS software component's diagnostic information.

vrm-timestamp.tgz Contents

Each rack-specific directory has a vrm-timestamp.tgz file.

File

Description

vrm-timestamp.tgz

Compressed file containing vrm-timestamp.tar, which contains diagnostic information for SDDC Manager.

via-timestamp.tgz Contents

If the VIA virtual machine is reachable from the SDDC Manager instance where the SoS tool is invoked, the logs directory for that rack contains a via-timestamp.tgz file.

Under standard operating conditions, the VIA virtual machine is not reachable from the SDDC Manager instances in a Cloud Foundation installation. The VIA virtual machine is used to image a rack for use in a Cloud Foundation installation, and is reachable from that newly imaged rack's SDDC Manager instance at the end of the imaging process. You can use the SoS tool in the newly imaged rack's SDDC Manager instance to collect the VIA VM's logs at the end of the imaging process.

File

Description

via-timestamp.tgz

Compressed file containing via-timestamp.tar, which contains the VIA VM's diagnostic information.