The VMware Tunnel supports deploying a single-tier model and a multi-tier model. Both SaaS and on-premises Workspace ONE environments support the single-tier and multi-tier models. You can use the deployment model that best fits your needs.

Single-Tier Deployment Model

Single-tier deployments have a single instance of VMware Tunnel configured with a public DNS. In the Workspace ONE UEM console and the installer, this deployment model is called the basic-endpoint model.

Multi-Tier Deployment Model

Multi-tier networks separate servers into tiers with firewalls between them. A typical Workspace ONE multi-tier deployment has a DMZ that separates the Internet from the internal network. VMware Tunnel supports deploying a front-end server in the DMZ that communicates with a back-end server in the internal network. The multi-tier deployment model includes two instances of the VMware Tunnel with separate roles. The VMware Tunnel front-end server resides in the DMZ and can be accessed from public DNS over the configured ports. The servers in this deployment model communicate with your API and AWCM servers. For SaaS deployments, Workspace ONE hosts the API and AWCM components in the cloud. For an on-premises environment, the AWCM component is typically installed in the DMZ with the API.

The multi-tier model supports two modes, each using two instances of VMware Tunnel with separate roles. Cascade mode is used with the Per-App Tunnel component: the front-end server resides in the DMZ and communicates with the back-end server in your internal network. Relay-endpoint mode is used with the Proxy component: the relay server resides in the DMZ and can be accessed from public DNS over the configured ports. Both modes are described in detail in the sections that follow.

Deploying VMware Tunnel using Single-Tier Deployment

The basic endpoint deployment model of VMware Tunnel is a single instance of the product installed on a server with a publicly available DNS. If you are using the single-tier deployment model, select the basic-endpoint mode in the Workspace ONE UEM console and the installer. Basic VMware Tunnel is typically installed in the internal network behind a load balancer in the DMZ that forwards traffic on the configured ports to the VMware Tunnel, which then connects directly to your internal Web applications. All deployment configurations support load balancing and reverse proxy.

The basic endpoint Tunnel server communicates with API and AWCM to receive a whitelist of clients allowed to access VMware Tunnel. Both the Proxy and Per-App Tunnel components support using an outbound proxy to communicate with API/AWCM in this deployment model. When a device connects to VMware Tunnel, it is authenticated based on unique X.509 certificates issued by Workspace ONE UEM. After a device is authenticated, the VMware Tunnel (basic endpoint) forwards the request to the internal network.
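
Device authentication in this model relies on the client certificate presented during the TLS handshake. The following minimal Python sketch, with placeholder file paths and port, illustrates the general mechanism of certificate-based client authentication: a server context that only completes the handshake when the client presents a certificate signed by a trusted issuing CA. It demonstrates the concept only and does not reproduce how VMware Tunnel validates certificates issued by Workspace ONE UEM.

    # Conceptual illustration of X.509 client-certificate authentication over TLS.
    # File paths and the port are placeholders; this is not the VMware Tunnel implementation.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.load_verify_locations(cafile="device_issuing_ca.pem")  # CA that issued device certificates
    context.verify_mode = ssl.CERT_REQUIRED  # reject clients that do not present a valid certificate

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            connection, address = tls_listener.accept()  # handshake fails for unauthenticated clients
            print("Authenticated client certificate subject:", connection.getpeercert().get("subject"))
            connection.close()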

If the basic endpoint is installed in the DMZ, the proper network changes must be made to allow the VMware Tunnel to access various internal resources over the necessary ports. Installing this component behind a load balancer in the DMZ minimizes the number of network changes to implement the VMware Tunnel and provides a layer of security because the public DNS is not pointed directly to the server that hosts the VMware Tunnel.
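
Before installation, you can confirm that the server's public DNS record resolves and that the required ports accept connections. The following Python sketch is an illustrative pre-check only, not a VMware-provided tool; the host name is a placeholder, and the ports shown are the defaults referenced elsewhere in this document (8443 for Per-App Tunnel, 2020 for the Proxy component).

    # Illustrative pre-installation reachability check; not a VMware tool.
    # The host name and ports are assumptions; substitute your own values.
    import socket

    TUNNEL_PUBLIC_DNS = "tunnel.example.com"   # placeholder public DNS name
    PORTS_TO_CHECK = [8443, 2020]              # default Per-App Tunnel and Proxy ports

    def port_is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    try:
        print(TUNNEL_PUBLIC_DNS, "resolves to", socket.gethostbyname(TUNNEL_PUBLIC_DNS))
    except socket.gaierror:
        print(TUNNEL_PUBLIC_DNS, "does not resolve; check the public DNS record")

    for port in PORTS_TO_CHECK:
        state = "reachable" if port_is_reachable(TUNNEL_PUBLIC_DNS, port) else "NOT reachable"
        print("Port", port, ":", state)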

Deploying VMware Tunnel using Cascade Mode Deployment

The cascade deployment model architecture includes two instances of the VMware Tunnel with separate roles. In cascade mode, the front-end server resides in the DMZ and communicates to the back-end server in your internal network.

Only the Per-App Tunnel component supports the cascade deployment model. If you use only the Proxy component, you must use the Relay-Endpoint model.

Devices access the front-end server for cascade mode using a configured hostname over configured ports. The default port for accessing the front-end server is port 8443. The back-end server for cascade mode is installed in the internal network hosting your intranet sites and web applications. This deployment model separates the publicly available front-end server from the back-end server that connects directly to internal resources, providing an extra layer of security.

The front-end server facilitates authentication of devices by connecting to AWCM when requests are made to the VMware Tunnel. When a device makes a request to the VMware Tunnel, the front-end server determines if the device is authorized to access the service. Once authenticated, the request is forwarded securely using TLS over a single port to the back-end server.
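
The sequence of checking authorization and then forwarding over a single TLS connection can be pictured with the short Python sketch below. The host name, port, and allow-list source are placeholders, and the code is a conceptual outline of the flow described above, not the front-end server's actual logic.

    # Conceptual flow only: authorize a device, then open one TLS connection to the
    # back-end server. Host name, port, and allow list are placeholders.
    import socket
    import ssl

    BACKEND_HOST = "tunnel-backend.internal.example.com"  # placeholder internal host
    BACKEND_PORT = 8443                                    # placeholder forwarding port

    def device_is_authorized(device_id: str, allow_list: set) -> bool:
        """Stand-in for the AWCM-backed device access control check."""
        return device_id in allow_list

    def forward_if_authorized(device_id: str, allow_list: set) -> None:
        if not device_is_authorized(device_id, allow_list):
            raise PermissionError("device " + device_id + " is not in the access control list")
        context = ssl.create_default_context()
        with socket.create_connection((BACKEND_HOST, BACKEND_PORT)) as raw_socket:
            with context.wrap_socket(raw_socket, server_hostname=BACKEND_HOST) as tls_socket:
                print("TLS connection to back-end established:", tls_socket.version())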

The back-end server connects to the internal DNS or IP requested by the device.

Cascade mode communicates using a TLS connection (or optionally a DTLS connection). You can host as many front-end and back-end servers as you need. Each front-end server acts independently when searching for an active back-end server to connect devices to the internal network. You can set up multiple DNS entries in a DNS lookup table to allow load balancing.
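
To confirm that a DNS entry in the lookup table returns every back-end server you expect, you can list the addresses it resolves to. The host name in the Python sketch below is a placeholder; the snippet only inspects DNS and does not interact with VMware Tunnel.

    # List every address returned for a back-end DNS name (placeholder name shown).
    # Useful to confirm a DNS lookup table exposes all back-end servers.
    import socket

    BACKEND_DNS = "tunnel-backend.internal.example.com"  # placeholder internal DNS entry

    addresses = sorted({result[4][0] for result in socket.getaddrinfo(BACKEND_DNS, None)})
    print(BACKEND_DNS, "resolves to", len(addresses), "address(es):")
    for address in addresses:
        print(" -", address)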

Both the front-end and back-end servers communicate with the Workspace ONE UEM API server and AWCM. The API server delivers the VMware Tunnel configuration, and AWCM delivers device authentication, the device access control list, and traffic rules. The front-end and back-end servers communicate with API/AWCM through direct TLS connections unless you enable outbound proxy calls. Use this option if the front-end server cannot reach the API/AWCM servers directly. If enabled, front-end servers connect through the back-end server to the API/AWCM servers. This traffic, and the back-end traffic, is routed using server-side traffic rules. For more information, see Network Traffic Rules for Per-App Tunnel.
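
The routing choice for front-end API/AWCM traffic can be summarized with the small Python sketch below. The host names and the flag are placeholders that only illustrate the two paths described above; in the product, this routing is controlled by the outbound proxy setting and server-side traffic rules, not by custom code.

    # Illustration of the two routing options for front-end API/AWCM traffic.
    # Host names and the flag are placeholders; the product handles this routing itself.
    API_AWCM_HOST = "awcm.example.com"                     # placeholder API/AWCM host
    BACKEND_HOST = "tunnel-backend.internal.example.com"   # placeholder back-end host

    def first_hop(outbound_calls_via_backend: bool) -> str:
        """Return the host a front-end server contacts first for API/AWCM traffic."""
        return BACKEND_HOST if outbound_calls_via_backend else API_AWCM_HOST

    print(first_hop(outbound_calls_via_backend=False))  # direct TLS connection to API/AWCM
    print(first_hop(outbound_calls_via_backend=True))   # routed through the back-end server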

The following diagram illustrates the Multi-Tier deployment for the Per-App Tunnel component in cascade mode:

Deploying VMware Tunnel using Relay-Endpoint

If you are using a multi-tier deployment model and the Proxy component of the VMware Tunnel, use the relay-endpoint deployment mode. The relay-endpoint deployment mode architecture includes two instances of the VMware Tunnel with separate roles. The VMware Tunnel relay server resides in the DMZ and can be accessed from public DNS over the configured ports.

If you are only using the Per-App Tunnel component, consider using a cascade mode deployment.

By default, the ports for accessing the public DNS are 8443 for the Per-App Tunnel component and 2020 for the Proxy component. The VMware Tunnel endpoint server is installed in the internal network hosting intranet sites and Web applications. This server must have an internal DNS record that can be resolved by the relay server. This deployment model separates the publicly available server from the server that connects directly to internal resources, providing an added layer of security.

The relay server role includes communicating with the API and AWCM components and authenticating devices when requests are made to VMware Tunnel. In this deployment model, communication to API and AWCM from the relay server can be routed to the Outbound Proxy via the endpoint server. The Per-App Tunnel service must communicate with API and AWCM directly. When a device makes a request to the VMware Tunnel, the relay server determines if the device is authorized to access the service. Once authenticated, the request is forwarded securely using HTTPS over a single port (the default port is 2010) to the VMware Tunnel endpoint server.

The role of the endpoint server is to connect to the internal DNS or IP requested by the device. The endpoint server does not communicate with the API or AWCM unless Enable API and AWCM outbound calls via proxy is set to Enabled in the VMware Tunnel settings in the Workspace ONE UEM console. The relay server performs health checks at a regular interval to ensure that the endpoint is active and available.
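
The product performs these health checks itself, but a similar check is useful when troubleshooting connectivity between the relay and endpoint servers. The Python sketch below polls a placeholder endpoint host on the default relay-to-endpoint port (2010) at an illustrative interval and reports whether a TCP connection succeeds; it is not the product's health check mechanism.

    # Manual troubleshooting aid: periodically test whether the endpoint server accepts
    # TCP connections from the relay server. Not the product's health check.
    import socket
    import time

    ENDPOINT_HOST = "tunnel-endpoint.internal.example.com"  # placeholder internal DNS record
    ENDPOINT_PORT = 2010       # default relay-to-endpoint port
    INTERVAL_SECONDS = 30      # illustrative polling interval

    while True:
        try:
            with socket.create_connection((ENDPOINT_HOST, ENDPOINT_PORT), timeout=5):
                print(time.strftime("%H:%M:%S"), "endpoint reachable")
        except OSError as error:
            print(time.strftime("%H:%M:%S"), "endpoint NOT reachable:", error)
        time.sleep(INTERVAL_SECONDS)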

These components can be installed on shared or dedicated servers. Install VMware Tunnel on dedicated Linux servers to ensure that performance is not impacted by other applications running on the same server. For a relay-endpoint deployment, the proxy and Per-App Tunnel components are installed on the same relay server.

Figure 1. On-premises configuration for Relay-Endpoint deployments
The Relay-Endpoint deployment for VMware Tunnel in on-premises environments is graphically represented.
Figure 2. SaaS configuration for Relay-Endpoint deployments
The Relay-Endpoint deployment for VMware Tunnel in SaaS environments is graphically represented.