After you upgrade to Workspace ONE Access 22.09, review the post-upgrade configuration procedures and determine which ones you must perform to complete the upgrade.

Configuring Workspace ONE Access Connector Instances

You can upgrade your existing Workspace ONE Access connector installation to version 22.09 to get the latest features such as the new Virtual App service, security updates, and resolved issues. Workspace ONE Access connector is a component of Workspace ONE Access. See the Installing VMware Workspace ONE Access Connector guide.

Reinstall the Provisioning Adapters

After you upgrade to 22.09, the provisioning adapters must be reinstalled.

  1. Log in to the virtual appliance.
  2. Run the following commands to reinstall the provisioning adapters.
    rm -rf /opt/vmware/horizon/bundleCache/provisioningBundle*.jar
    /usr/sbin/hznAdminTool installAdapter -force -fileName /opt/vmware/horizon/workspace/webapps/SAAS/WEB-INF/adaptors/horizon-provisioning-adaptors-2.0.0.jar -bundleName Default
  3. Restart the Workspace ONE Access service. Run service horizon-workspace restart.
  4. Repeat these steps for all nodes in the cluster.
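
    One way to perform these steps on every node is to script them over SSH. The following is a minimal sketch, for illustration only; the host names node1.example.com, node2.example.com, and node3.example.com are placeholders for the FQDNs of your own service appliances.

    # Illustration only: run the reinstall on each node over SSH as root.
    # Replace the host names with the FQDNs of your own service appliances.
    ADAPTER_JAR=/opt/vmware/horizon/workspace/webapps/SAAS/WEB-INF/adaptors/horizon-provisioning-adaptors-2.0.0.jar
    for node in node1.example.com node2.example.com node3.example.com; do
      ssh root@"$node" "rm -rf /opt/vmware/horizon/bundleCache/provisioningBundle*.jar"
      ssh root@"$node" "/usr/sbin/hznAdminTool installAdapter -force -fileName $ADAPTER_JAR -bundleName Default"
      ssh root@"$node" "service horizon-workspace restart"
    done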

Refresh People Search Configuration to Display Mapped Attributes

If People Search was enabled in 22.08, you must refresh the People Search configuration in the Workspace ONE Access console so that the Mapped User Attributes section in the Summary card displays the user attributes that are selected. This is a display issue in the console only; for end users, the People Search feature continues to work as expected.

(Screenshot: Summary card in the People Search configuration with mapped user attributes not displayed)
  1. Log in to the Workspace ONE Access console and navigate to the Integrations > People Search page.
  2. In the Summary card, click EDIT and on the Select user attributes page, click NEXT.
  3. In the Select users and sync to directory page, click SAVE.

The Summary card is updated to display the mapped user attributes.

Log4j Configuration Files

If any log4j configuration files in a Workspace ONE Access instance were edited, the new versions of those files are not automatically installed during the upgrade. After the upgrade, logging controlled by those files does not work.

To resolve this issue:

  1. Log in to the virtual appliance.
  2. Search for log4j files with the .rpmnew suffix.

    find / -name "*log4j.properties.rpmnew"

  3. For each file found, copy the new file to the corresponding old log4j file without the .rpmnew suffix.
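
    For illustration only, the copy in this step can be scripted in one pass. This is a minimal sketch that assumes each .rpmnew file sits next to the log4j.properties file it replaces, as in the output of the find command in step 2.

    # Illustration only: overwrite each edited log4j.properties file with its .rpmnew version.
    find / -name "*log4j.properties.rpmnew" | while read -r newfile; do
      cp "$newfile" "${newfile%.rpmnew}"
    done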

Save the Workspace ONE UEM Configuration

After you upgrade the appliance, you must go to the Workspace ONE Access console and save the Workspace ONE UEM configuration settings. Saving the Workspace ONE UEM configuration populates the Device Services URL for the catalog. Perform this task to allow new end users to enroll and manage their devices.

  1. Log in to the Workspace ONE Access console.
  2. Open the Integrations > UEM Integration page.
  3. In the Workspace ONE UEM Configuration section, click Save.

Cluster ID in Secondary Data Center

Cluster IDs are used to identify the nodes in a cluster.

Workspace ONE Access detects and assigns a cluster ID automatically when a new service appliance is powered up. For a multiple data center deployment, each cluster must be identified with a unique ID.

All appliances that belong to a cluster have the same cluster ID. A cluster typically consists of three appliances.

When you set up the secondary data center, verify that the cluster ID is unique to the data center. If a cluster ID is not unique to the data center, edit the cluster ID manually as described in the instructions that follow. You only need to perform these actions once and only on the secondary data center.

  1. Log in to the Workspace ONE Access console.
  2. Select the Monitor > Resiliency tab.
  3. In the top panel, locate the cluster information for the secondary data center cluster.
  4. Update the cluster ID of all the nodes in the secondary data center to a different number than the one used in the first data center.

    For example, set all the nodes in the secondary data center to 2, if the first data center is not using 2.

  5. Verify that the clusters in both the primary and secondary data centers are formed correctly.

    Follow these steps for each node in the primary and secondary data centers.

    1. Log in to the virtual appliance.
    2. Run the following command:

      curl 'http://localhost:9200/_cluster/health?pretty'

      If the cluster is configured correctly, the command returns a result similar to the following example:

      {
        "cluster_name" : "horizon",
        "status" : "green",
        "timed_out" : false,
        "number_of_nodes" : 3,
        "number_of_data_nodes" : 3,
        "active_primary_shards" : 20,
        "active_shards" : 40,
        "relocating_shards" : 0,
        "initializing_shards" : 0,
        "unassigned_shards" : 0,
        "delayed_unassigned_shards" : 0,
        "number_of_pending_tasks" : 0,
        "number_of_in_flight_fetch" : 0
      }
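
      As a quick spot check, you can print just the status field instead of reading the full output. This is a sketch only, using curl and grep on the appliance; a healthy cluster reports green.

      # Illustration only: print just the cluster status line.
      curl -s 'http://localhost:9200/_cluster/health?pretty' | grep '"status"'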

Cache Service Setting in Secondary Data Center Appliances

If you set up a secondary data center, Workspace ONE Access instances in the secondary data center are configured for read-only access with the "read.only.service=true" entry in the /usr/local/horizon/conf/runtime-config.properties file. After you upgrade such an appliance, the service fails to start.
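
One way to confirm whether an appliance is configured for read-only access is to look for the entry in the runtime configuration file before you change anything. The following command is a sketch that uses standard grep on the appliance.

    grep -n 'read.only.service' /usr/local/horizon/conf/runtime-config.properties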

To resolve this issue, perform the steps that follow. The steps include an example scenario of a secondary data center containing the following three nodes.

    sva1.example.com
    sva2.example.com
    sva3.example.com

  1. Log in to a virtual appliance in the secondary data center as the root user.

    For this example, log in to sva1.example.com.

  2. Edit the /usr/local/horizon/conf/runtime-config.properties file as indicated in the substeps that follow.

    You might be able to edit an existing entry, or you can add a new entry. If applicable, uncomment entries that are commented out.

    1. Set the value of the cache.service.type entry to ehcache.
      cache.service.type=ehcache
    2. Set the value of the ehcache.replication.rmi.servers entry to the fully qualified domain names (FQDNs) of the other nodes in the secondary data center. Use a colon (:) as the separator.

      For this example, configure the entry as follows.

      ehcache.replication.rmi.servers=sva2.example.com:sva3.example.com
  3. Restart the service.

    service horizon-workspace restart

  4. Repeat the preceding steps on the remaining nodes in the secondary data center.

    For this example, the remaining nodes to configure are sva2.example.com and sva3.example.com.
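
After these changes, the relevant entries in /usr/local/horizon/conf/runtime-config.properties on sva1.example.com would look similar to the following. The values shown are for this example scenario only; the rest of the file varies by deployment.

    # Cache service settings on sva1.example.com (example values for this scenario)
    cache.service.type=ehcache
    ehcache.replication.rmi.servers=sva2.example.com:sva3.example.com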