After you upgrade to Workspace ONE Access 20.10.0.0, you might need to configure certain settings.

Apply Security Patch HW-137959

After you upgrade, apply the security patch HW-137959 for this specific upgrade version. See VMware KB article 85254, HW-137959: VMSA-2021-0016 for Workspace ONE Access, VMware Identity Manager (CVE-2021-22002, CVE-2021-22003).

Configuring Workspace ONE Access Connector Instances

You can upgrade your existing Workspace ONE Access connector installation to version 20.10 to get the latest features, such as the new Virtual App service, as well as security updates and resolved issues. The Workspace ONE Access connector is a component of Workspace ONE Access. See the Installing VMware Workspace ONE Access Connector 20.10 guide.

Log4j Configuration Files

If any log4j configuration files in a Workspace ONE Access instance were edited, the upgrade does not replace them. Instead, the new versions are installed alongside them with a .rpmnew suffix, and after the upgrade the logs controlled by the edited files do not work.

To resolve this issue:

  1. Log in to the virtual appliance.
  2. Search for log4j files with the .rpmnew suffix.

    find / -name "*log4j.properties.rpmnew"

  3. For each file found, copy the new .rpmnew file over the corresponding old log4j file, that is, the file at the same path without the .rpmnew suffix.
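
    For example, a shell loop along the following lines can automate the copy. This is only a sketch; the paths it overwrites depend on what the find command returns on your appliance, so review the list before copying.

      # Copy each new log4j file over its edited counterpart (sketch only).
      for newfile in $(find / -name "*log4j.properties.rpmnew" 2>/dev/null); do
        cp "$newfile" "${newfile%.rpmnew}"
      done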

Save the Workspace ONE UEM Configuration

Saving the Workspace ONE UEM configuration populates the Device Services URL for the catalog. Perform this task to allow new end users to enroll and manage their devices.

  1. Log in to the Workspace ONE Access console.
  2. Select Identity & Access Management > Setup > VMware Workspace ONE UEM.
  3. In the Workspace ONE UEM Configuration section, click Save.

Cluster ID in Secondary Data Center

Cluster IDs are used to identify the nodes in a cluster.

If your Workspace ONE Access 20.10 deployment includes a secondary data center, you might need to change the cluster ID of the secondary data center after upgrade.

Workspace ONE Access detects and assigns a cluster ID automatically when a new service appliance is powered up. For a multiple data center deployment, each cluster must be identified with a unique ID.

All appliances that belong to a cluster have the same cluster ID, and a cluster typically consists of three appliances.

When you set up the secondary data center, verify that its cluster ID is unique to that data center. If it is not, verify that each node has the Elasticsearch discovery-idm plugin installed, then edit the cluster ID manually as described in the steps that follow. You need to perform these actions only once, and only on the secondary data center.

  1. Verify that each node has the Elasticsearch discovery-idm plugin.
    1. Log in to the virtual appliance.
    2. Use the following command to check if the plugin is installed.

      /opt/vmware/elasticsearch/bin/plugin list

    3. If the plugin does not exist, use the following command to add it.

      /opt/vmware/elasticsearch/bin/plugin install file:///opt/vmware/elasticsearch/jars/discovery-idm-1.0.jar

  2. Log in to the Workspace ONE Access console.
  3. Select the Dashboard > System Diagnostics Dashboard tab.
  4. In the top panel, locate the cluster information for the secondary data center cluster.
  5. Update the cluster ID of all the nodes in the secondary data center to a number that is different from the one used in the first data center.

    For example, if the first data center is not using cluster ID 2, set all the nodes in the secondary data center to 2.


  6. Verify that the clusters in both the primary and secondary data centers are formed correctly.

    Follow these steps for each node in the primary and secondary data centers.

    1. Log in to the virtual appliance.
    2. Run the following command:

      curl 'http://localhost:9200/_cluster/health?pretty'

      If the cluster is configured correctly, the command returns a result similar to the following example:

      {
        "cluster_name" : "horizon",
        "status" : "green",
        "timed_out" : false,
        "number_of_nodes" : 3,
        "number_of_data_nodes" : 3,
        "active_primary_shards" : 20,
        "active_shards" : 40,
        "relocating_shards" : 0,
        "initializing_shards" : 0,
        "unassigned_shards" : 0,
        "delayed_unassigned_shards" : 0,
        "number_of_pending_tasks" : 0,
        "number_of_in_flight_fetch" : 0
      }
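
      To spot-check only the key fields instead of reading the full JSON, you can filter the same health output. This is a convenience sketch, not a required step.

      # Show only the cluster status and node count from the health output.
      curl -s 'http://localhost:9200/_cluster/health?pretty' | grep -E '"(status|number_of_nodes)"'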

Cache Service Setting in Secondary Data Center Appliances

If you set up a secondary data center, Workspace ONE Access instances in the secondary data center are configured for read-only access with the "read.only.service=true" entry in the /usr/local/horizon/conf/runtime-config.properties file. After you upgrade such an appliance, the service fails to start.
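
Before you edit anything, you can confirm that an appliance is one of the read-only secondary nodes and see which cache-related entries it already has. The following check is only a convenience sketch, assuming the default file location shown above; the cache entries it lists are the ones modified in the steps that follow.

  # List the read-only flag and the cache-related entries in the runtime configuration.
  grep -E 'read\.only\.service|cache\.service\.type|ehcache\.replication\.rmi\.servers' /usr/local/horizon/conf/runtime-config.properties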

To resolve this issue, perform the steps that follow. The steps include an example scenario of a secondary data center containing the following three nodes:

  sva1.example.com
  sva2.example.com
  sva3.example.com
  1. Log in to a virtual appliance in the secondary data center as the root user.

    For this example, log in to sva1.example.com.

  2. Edit the /usr/local/horizon/conf/runtime-config.properties file as indicated in the substeps that follow.

    You might be able to edit an existing entry, or you can add a new entry. If applicable, uncomment entries that are commented out.

    1. Set the value of the cache.service.type entry to ehcache.
      cache.service.type=ehcache
    2. Set the value of the ehcache.replication.rmi.servers entry to the fully qualified domain names (FQDNs) of the other nodes in the secondary data center, separated by colons (:).

      For this example, configure the entry as follows.

      ehcache.replication.rmi.servers=sva2.example.com:sva3.example.com
  3. Restart the service.

    service horizon-workspace restart

  4. Repeat the preceding steps on the remaining nodes in the secondary data center.

    For this example, the remaining nodes to configure are sva2.example.com and sva3.example.com.
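
    For reference, with this example topology the edited entries on each node would look like the following. This assumes each node lists the other two nodes, following the pattern in step 2; substitute your own FQDNs.

      # sva1.example.com
      cache.service.type=ehcache
      ehcache.replication.rmi.servers=sva2.example.com:sva3.example.com

      # sva2.example.com
      cache.service.type=ehcache
      ehcache.replication.rmi.servers=sva1.example.com:sva3.example.com

      # sva3.example.com
      cache.service.type=ehcache
      ehcache.replication.rmi.servers=sva1.example.com:sva2.example.com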

Citrix Integration

For Citrix integration in VMware Identity Manager 3.3, all external connectors must be version 2018.8.1.0 for Linux (the connector version in the 3.3 release) or later.

You must also use Integration Broker 3.3. An in-place upgrade is not available for the Integration Broker; uninstall the old version, then install the new version.