For secure access to the UI and API of Workspace ONE Access, you deploy the nodes on an overlay-backed or VLAN-backed NSX network segment.

Network Segment

This network design has the following features:

  • All Workspace ONE Access components have routed access to the management VLAN through the Tier-0 gateway in the NSX-T Data Center instance for the management domain.

  • Routing to the management network and other external networks is dynamic and is based on the Border Gateway Protocol (BGP).

Figure 1. Network Design for the Clustered Workspace ONE Access Deployment

The Workspace ONE Access cluster nodes are connected to the cross-instance NSX network segment, which is connected to the management network through the NSX Tier-0 and Tier-1 gateways.
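For orientation, the following Python sketch shows how an overlay segment of this kind can be defined through the NSX-T Policy API. It is illustrative only: the NSX Manager address, credentials, Tier-1 gateway, transport zone, and subnet are placeholders, and the payload should be checked against the NSX-T API reference for your version rather than taken as the exact configuration used by this design.

```python
# Illustrative sketch only. All names, paths, and addresses below are placeholders,
# not values mandated by this design.
import requests  # third-party HTTP client

NSX_MANAGER = "nsx-mgr.example.local"   # placeholder NSX Manager FQDN
SEGMENT_ID = "xreg-seg01"               # placeholder segment ID

payload = {
    "display_name": SEGMENT_ID,
    # Attach the segment to the Tier-1 gateway that connects to the Tier-0 gateway.
    "connectivity_path": "/infra/tier-1s/xreg-t1",                      # placeholder
    "transport_zone_path": "/infra/sites/default/enforcement-points/"
                           "default/transport-zones/overlay-tz",        # placeholder
    "subnets": [{"gateway_address": "192.168.11.1/24"}],                # placeholder
}

response = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/{SEGMENT_ID}",
    json=payload,
    auth=("admin", "<password>"),   # placeholder credentials
    verify=False,                   # simplification for the sketch; use trusted certificates
)
response.raise_for_status()
```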
Table 1. Design Decisions on the NSX Segment for Workspace ONE Access

Decision ID: VCF-VRS-WSA-NET-001
Design Decision: Place the Workspace ONE Access cluster nodes on an overlay-backed or VLAN-backed NSX network segment.
Design Justification: Provides a consistent deployment model for management applications in an environment with a single or multiple VMware Cloud Foundation instances.
Design Implication: You must use an implementation in NSX-T Data Center to support this network configuration.

IP Addressing Scheme

Allocate a statically assigned IP address from the cross-instance NSX segment, and a host name, to the NSX load balancer virtual server, the embedded PostgreSQL database, and each Workspace ONE Access cluster node.

Table 2. Design Decisions on the IP Addressing Scheme for Workspace ONE Access

Decision ID: VCF-VRS-WSA-NET-002
Design Decision: Allocate statically assigned IP addresses for the following:
  • Workspace ONE Access cluster nodes
  • Embedded PostgreSQL database
  • NSX load balancer virtual server
Design Justification: Statically assigned IP addresses ensure stability across the SDDC and are simpler to maintain and track.
Design Implication: Requires precise IP address management.
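Because precise IP address management is the main implication of this decision, a short consistency check such as the following Python sketch can help catch duplicate addresses or addresses outside the segment. The subnet, host names, and addresses are placeholders.

```python
# Illustrative only: the subnet, host names, and addresses are placeholders.
import ipaddress

XREG_SEGMENT = ipaddress.ip_network("192.168.11.0/24")   # placeholder segment subnet

allocations = {
    "wsa01a": "192.168.11.50",     # Workspace ONE Access cluster node
    "wsa01b": "192.168.11.51",     # Workspace ONE Access cluster node
    "wsa01c": "192.168.11.52",     # Workspace ONE Access cluster node
    "wsa01-db": "192.168.11.53",   # embedded PostgreSQL database
    "wsa01": "192.168.11.54",      # NSX load balancer virtual server
}

addresses = [ipaddress.ip_address(a) for a in allocations.values()]
assert len(set(addresses)) == len(addresses), "duplicate IP address in the plan"
for name, addr in zip(allocations, addresses):
    assert addr in XREG_SEGMENT, f"{name} ({addr}) is outside {XREG_SEGMENT}"
print("IP addressing plan is consistent with the segment subnet")
```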

Name Resolution

The IP address of each Workspace ONE Access appliance, and of the associated NSX load balancer virtual server, maps to a fully qualified domain name whose suffix matches your domain name and must have valid DNS forward (A) and reverse (PTR) records.

Table 3. Design Decisions on Name Resolution for Workspace ONE Access

Decision ID: VCF-VRS-WSA-NET-003
Design Decision: Configure forward and reverse DNS records for the following components:
  • Workspace ONE Access cluster nodes
  • NSX load balancer virtual server
Design Justification: Workspace ONE Access is accessible by using a set of fully qualified domain names instead of by IP address only.
Design Implication:
  • You must provide DNS records for each Workspace ONE Access node and for the load balancer virtual server IP address.
  • All firewalls between the Workspace ONE Access nodes and the DNS servers must allow DNS traffic.

Decision ID: VCF-VRS-WSA-NET-004
Design Decision: Configure the DNS settings for the Workspace ONE Access cluster nodes to use DNS servers in the first VMware Cloud Foundation instance.
Design Justification: Workspace ONE Access requires DNS resolution to connect to SDDC components.
Design Implication: None.
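A quick way to confirm that the forward (A) and reverse (PTR) records agree is to resolve each name in both directions, as in the sketch below. The FQDNs are placeholders for your actual node and load balancer virtual server names.

```python
# Illustrative only: the FQDNs below are placeholders for your environment.
import socket

FQDNS = [
    "wsa01a.rainpole.io",   # cluster node (placeholder)
    "wsa01b.rainpole.io",   # cluster node (placeholder)
    "wsa01c.rainpole.io",   # cluster node (placeholder)
    "wsa01.rainpole.io",    # NSX load balancer virtual server (placeholder)
]

for fqdn in FQDNS:
    ip = socket.gethostbyname(fqdn)                  # forward (A) lookup
    reverse_name, _, _ = socket.gethostbyaddr(ip)    # reverse (PTR) lookup
    status = "OK" if reverse_name.rstrip(".").lower() == fqdn.lower() else "MISMATCH"
    print(f"{fqdn} -> {ip} -> {reverse_name}: {status}")
```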

Name Resolution for Workspace ONE Access for Multiple VMware Cloud Foundation Instances

In an environment with multiple VMware Cloud Foundation instances, multiple DNS servers are available across the instances, providing higher DNS availability and resilience.

Table 4. Design Decisions on Name Resolution for Workspace ONE Access for Multiple VMware Cloud Foundation Instances

Decision ID: VCF-VRS-WSA-NET-005
Design Decision: Configure the DNS settings for the Workspace ONE Access cluster nodes to use DNS servers in each VMware Cloud Foundation instance.
Design Justification: Improves resiliency if an outage of a DNS server occurs.
Design Implication: None.

Time Synchronization

Workspace ONE Access depends on time synchronization for all cluster nodes.

Table 5. Design Decisions on Time Synchronization for Workspace ONE Access

Decision ID: VCF-VRS-WSA-NET-006
Design Decision: Configure the NTP settings on the Workspace ONE Access cluster nodes to use NTP servers in the first VMware Cloud Foundation instance.
Design Justification: Workspace ONE Access depends on time synchronization for all cluster nodes.
Design Implication: All firewalls located between the Workspace ONE Access cluster nodes and the NTP servers must allow NTP traffic.
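To confirm that the firewalls allow NTP traffic and that a node's clock tracks the configured source, a minimal SNTP query can be run from a cluster node, as in the following sketch. The NTP server name is a placeholder, and the result is only a rough offset estimate.

```python
# Illustrative only: rough SNTP (RFC 4330) clock-offset check.
# The NTP server name is a placeholder for a server in your VMware Cloud Foundation instance.
import socket
import struct
import time

NTP_TO_UNIX_EPOCH = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_offset(server: str, timeout: float = 5.0) -> float:
    """Return the approximate offset in seconds between the local clock and the server."""
    packet = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        t_sent = time.time()
        sock.sendto(packet, (server, 123))    # NTP uses UDP port 123
        data, _ = sock.recvfrom(512)
        t_received = time.time()
    # Server transmit timestamp: 32-bit seconds field at bytes 40-43 of the reply.
    server_seconds = struct.unpack("!I", data[40:44])[0] - NTP_TO_UNIX_EPOCH
    return server_seconds - (t_sent + t_received) / 2

print(f"offset: {sntp_offset('ntp.lax01.rainpole.io'):+.1f} s")  # placeholder server
```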

Time Synchronization for Workspace ONE Access for Multiple VMware Cloud Foundation Instances

In an environment with multiple VMware Cloud Foundation instances, multiple NTP servers are available across the instances, providing higher NTP availability and resilience.

Table 6. Design Decisions on Time Synchronization for Workspace ONE Access for Multiple VMware Cloud Foundation Instances

Decision ID: VCF-VRS-WSA-NET-007
Design Decision: Configure the NTP settings on the Workspace ONE Access cluster nodes to use NTP servers in each VMware Cloud Foundation instance.
Design Justification: Improves resiliency in the event of an outage of an NTP server.
Design Implication: If you scale from a deployment with a single VMware Cloud Foundation instance to one with multiple VMware Cloud Foundation instances, the NTP settings on each Workspace ONE Access cluster node must be updated.

Load Balancing

A Workspace ONE Access cluster deployment requires a load balancer to manage connections to the Workspace ONE Access services.

The design uses load-balancing services provided by NSX-T Data Center. During the deployment of the Workspace ONE Access cluster, vRealize Suite Lifecycle Manager and SDDC Manager coordinate to automate the configuration of the NSX load balancer. The load balancer is configured with the settings outlined in the table below.

Table 7. Clustered Workspace ONE Access Load Balancer Configuration

Service Monitor:
  • Use the default intervals and timeouts:
    • Monitoring interval: 3 seconds
    • Idle timeout period: 10 seconds
    • Rise/Fall count: 3
  • HTTP request:
    • HTTP method: GET
    • HTTP request version: 1.1
    • Request URL: /SAAS/API/1.0/REST/system/health/heartbeat
  • HTTP response:
    • HTTP response code: 200
    • HTTP response body: OK
  • SSL configuration:
    • Server SSL: Enabled
    • Client certificate: Cross-instance Workspace ONE Access cluster certificate
    • SSL profile: default-balanced-server-ssl-profile

Server Pool:
  • Algorithm: LEAST_CONNECTION
  • SNAT translation mode: Auto Map
  • Static members (one per cluster node):
    • Name: node host name
    • IP: node IP address
    • Port: 443
    • Weight: 1
    • State: Enabled
  • Active monitor: the service monitor above

HTTP Application Profile:
  • Timeout: 3600 seconds (60 minutes)
  • X-Forwarded-For: Insert

Cookie Persistence Profile:
  • Cookie name: JSESSIONID
  • Cookie mode: Rewrite

Virtual Server:
  • Type: L7 HTTP
  • Port: 443
  • IP address: Workspace ONE Access cluster virtual server IP address
  • Persistence: the cookie persistence profile above
  • Application profile: the HTTP application profile above
  • Server pool: the server pool above
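The service monitor in Table 7 amounts to an HTTPS probe of /SAAS/API/1.0/REST/system/health/heartbeat on port 443 that expects a 200 response with the body OK. The following sketch reproduces that probe against a single node, which can be useful for checking node health independently of the load balancer. The node FQDN is a placeholder and certificate verification is skipped for brevity.

```python
# Illustrative only: reproduces the service monitor probe from Table 7 against one node.
# The FQDN is a placeholder; certificate checks are skipped to keep the sketch short.
import ssl
import urllib.request

NODE = "wsa01a.rainpole.io"   # placeholder cluster node FQDN
URL = f"https://{NODE}:443/SAAS/API/1.0/REST/system/health/heartbeat"

context = ssl.create_default_context()
context.check_hostname = False            # simplification for the sketch
context.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen(URL, timeout=10, context=context) as response:
    body = response.read().decode().strip()
    # The monitor expects HTTP 200 and the response body "OK".
    healthy = response.status == 200 and body == "OK"
    print(f"{NODE}: HTTP {response.status}, body {body!r} -> "
          f"{'healthy' if healthy else 'unhealthy'}")
```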

Table 8. Design Decisions on Load Balancing for Workspace ONE Access

Decision ID: VCF-VRS-WSA-NET-008
Design Decision: Use the NSX load balancer that is configured by SDDC Manager on a dedicated Tier-1 gateway to load balance connections across the Workspace ONE Access cluster nodes.
Design Justification:
  • Required to deploy Workspace ONE Access as a cluster, which can handle a greater load and provides a higher level of service availability. Cross-instance vRealize Suite solutions also share this load balancer.
  • During the deployment of Workspace ONE Access by using vRealize Suite Lifecycle Manager, SDDC Manager automates the configuration of the NSX load balancer for the Workspace ONE Access cluster.
Design Implication: You must use the load balancer that is configured by SDDC Manager and the integration with vRealize Suite Lifecycle Manager.