Under normal conditions, a service provider can share compute, storage, and networking resources among multiple tenant organizations. The system enforces isolation through abstraction and through secure engineering practices in the hypervisor and the vCloud Director software stack.

Tenant organizations share the underlying resource pools, datastores, and external networks exposed through a single Provider VDC without affecting (or even being aware of) resources that they do not own. Proper management of vApp storage and runtime leases, vApp quotas, limits on resource-intensive operations, and organization VDC allocation models can ensure that one tenant cannot deny service to another, whether by accident or on purpose. For example, a very conservative configuration would place all organization VDCs under the reservation pool allocation model and never overcommit resources. The full range of options is not covered in this document, but some key points are made in the following subsections.

Security Domains and Provider VDCs

Even with proper isolation in the software and proper organization configuration, there may be times when tenant organizations do not want certain workloads to be run or stored on particular compute, network, or storage resources. This does not elevate the system overall to a "high-security environment" (discussion of which is beyond the scope of this document), but it does require that the cloud be segmented into multiple security domains. Specific examples of workloads requiring such treatment include:
  • Data subject to privacy laws that require it to be stored and processed within prescribed geographies.
  • Data and resources owned by countries or organizations that, despite trusting the isolation of the cloud, require as a matter of prudence and defense in depth that their VDCs cannot share resources with specific other tenants--for example, a competing company.
In these and other scenarios, resource pools, networks, and datastores should be segmented into different "security domains" by using different Provider VDCs, so that vApps with similar concerns can be grouped (or isolated). For example, you might clearly identify certain Provider VDCs as storing and processing data in certain countries.

Resource Pools

Within a single Provider VDC, you can have multiple resource pools that aggregate CPU and memory resources provided by the underlying vSphere infrastructure. Segmenting different organizations across different resource pools is not necessary from a confidentiality and integrity perspective, but from an availability perspective there may be reasons to do so. This resource-management problem depends on the organization VDC allocation models, the expected workloads, the quotas and limits applied to those organizations, and the speed with which the provider can bring additional computing resources online. This guide does not define the different resource allocation models or how they affect each organization's usage of a resource pool, other than to say that whenever you allow overcommitment of resources in a pool used by more than one organization, you run the risk of degrading service quality for one or more organizations. Proper monitoring of service levels is imperative to prevent one organization from causing a denial of service to others, but security does not dictate a specific separation of organizations to meet this goal.
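
To make the overcommitment risk concrete, the following minimal sketch (plain Python with illustrative numbers, not data pulled from the vCloud Director or vSphere APIs) checks whether the resources guaranteed to organization VDCs sharing a resource pool exceed the pool's capacity:

    # Hypothetical overcommitment check for a shared resource pool.
    # The figures below are illustrative; real values would come from
    # the provider's vSphere/vCloud Director inventory, not this script.

    pool_capacity = {"cpu_mhz": 80_000, "memory_mb": 262_144}

    # Guaranteed (reserved) resources per organization VDC drawing on the pool.
    org_vdc_reservations = [
        {"org": "tenant-a", "cpu_mhz": 30_000, "memory_mb": 98_304},
        {"org": "tenant-b", "cpu_mhz": 30_000, "memory_mb": 98_304},
        {"org": "tenant-c", "cpu_mhz": 30_000, "memory_mb": 98_304},
    ]

    for resource in ("cpu_mhz", "memory_mb"):
        reserved = sum(r[resource] for r in org_vdc_reservations)
        ratio = reserved / pool_capacity[resource]
        status = "overcommitted" if ratio > 1.0 else "within capacity"
        print(f"{resource}: {reserved} of {pool_capacity[resource]} reserved "
              f"({ratio:.0%}) -> {status}")

A result above 100% means the pool cannot honor all guarantees at once, so a burst by one tenant can degrade service for the others sharing the pool.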

Limiting Consumption of Shared Resources

In the default configuration, many vCloud Director compute and storage resources can be consumed in unlimited quantities by all tenants. The system provides several ways for a system administrator to manage and monitor the consumption of these resources. Careful examination of the following areas is an important part of limiting the opportunity for a "noisy neighbor" to affect the level of service vCloud Director provides.
Limit resource-intensive operations
See Configure System Limits in the vCloud Director Administrator's Guide.
Impose sensible quotas
See Configure Organization Lease, Quota, and Limit Settings and (to limit the number of VDCs a tenant can create and limit the number of simultaneous connections per VM) Configure System Limits, both in the vCloud Director Administrator's Guide.
Manage storage and runtime leases
Leases provide a level of control over tenant consumption of storage and compute resources. Limiting the length of time that a vApp can remain powered on or that a powered-off vApp can consume storage is an essential step in managing shared resources. See Understanding Leases in the vCloud Director Administrator's Guide.
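
As a minimal illustration of how a storage lease bounds consumption, the sketch below (generic Python with example values, not a vCloud Director API call) computes when a powered-off vApp's storage lease runs out and whether the vApp is due for cleanup:

    from datetime import datetime, timedelta, timezone

    # Hypothetical lease check: a powered-off vApp consumes storage until its
    # storage lease expires, at which point it can be cleaned up or flagged.
    storage_lease = timedelta(days=30)                            # illustrative lease length
    powered_off_at = datetime(2024, 1, 10, tzinfo=timezone.utc)   # example timestamp

    lease_expires = powered_off_at + storage_lease
    now = datetime.now(timezone.utc)

    if now >= lease_expires:
        print(f"Storage lease expired on {lease_expires:%Y-%m-%d}; vApp is eligible for cleanup.")
    else:
        print(f"Storage lease runs until {lease_expires:%Y-%m-%d}; "
              f"{(lease_expires - now).days} day(s) remaining.")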

External Networks

A service provider creates External Networks and makes them accessible to tenants. An External Network that connects to a public network can be safely shared among multiple tenants, since by definition that network is public. Tenants should be reminded that traffic on External Networks is subject to interception, and they should employ application-level or transport-level security on these networks when confidentiality and integrity are needed.

Private routed networks can share those External Networks under the same circumstances: when they are used for connecting to a public network. Sometimes, however, an External Network is used by an organization VDC Network to connect two different vApps and their networks, or to connect a vApp Network back to the enterprise datacenter. In these cases, the External Network should not be shared between organizations.

Certainly, one cannot expect to have a separate physical network for each organization. Instead, it is recommended that a shared physical network be connected to a single External Network that is clearly identified as a DMZ network, so that organizations know it does not provide confidentiality protections. For communications that traverse an External Network but require confidentiality protections, such as a vApp-to-enterprise-datacenter connection or a vApp-to-vApp bridge over a public network, a VPN can be deployed. The reason is that for a vApp on a private routed network to be reachable, it must rely on IP address forwarding using an IP address routable on that External Network. Any other vApp that connects to that physical network can send packets to that vApp, even if it belongs to another organization connected to a different External Network. To prevent this, a service provider can use NSX Distributed Firewall and Distributed Logical Routing to enforce separation of traffic from multiple tenants on a single External Network. See NSX Distributed Firewall and Logical Routing in the VMware vCloud® Architecture Toolkit™ for Service Providers (vCAT-SP).

Organization VDC networks owned by different tenants can share the same External Network (as an uplink from an Edge Gateway) as long as they don't allow access to the inside with NAT and IP masquerading.
Important: vCloud Director Advanced Networking allows tenants and service providers to employ dynamic routing protocols such as OSPF. The OSPF autodiscovery mechanism, when used without authentication, could establish peering relationships between Edge Gateways belonging to different tenants and start exchanging routes between them. To prevent this, do not enable OSPF on shared public interfaces unless you also enable OSPF authentication, which blocks peering with unauthenticated Edge Gateways.
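
The sketch below illustrates the kind of audit this rule implies; the gateway records are hypothetical placeholders rather than output from the vCloud Director or NSX APIs:

    # Hypothetical audit: flag Edge Gateways that run OSPF on a shared external
    # (uplink) interface without authentication. The records below are made up;
    # in practice they would be gathered from the provider's own inventory.

    edge_gateways = [
        {"name": "tenant-a-edge", "uplink_shared": True,
         "ospf_enabled": True, "ospf_auth": "md5"},
        {"name": "tenant-b-edge", "uplink_shared": True,
         "ospf_enabled": True, "ospf_auth": "none"},
        {"name": "tenant-c-edge", "uplink_shared": False,
         "ospf_enabled": True, "ospf_auth": "none"},
    ]

    for gw in edge_gateways:
        if gw["uplink_shared"] and gw["ospf_enabled"] and gw["ospf_auth"] == "none":
            print(f"WARNING: {gw['name']} runs unauthenticated OSPF on a shared "
                  "external network and could peer with another tenant's gateway.")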

Network Pools

A single network pool can be used by multiple tenants as long as all networks in the pool are suitably isolated. VXLAN-backed Network Pools (the default) rely on the physical and virtual switches being configured to allow connectivity within a VXLAN and isolation between different VXLANs. Portgroup-backed Network Pools must be configured with portgroups that are isolated from each other; these portgroups could be isolated physically or through VLANs.
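
As a small illustration of the isolation requirement for portgroup-backed pools, the following sketch (hypothetical inventory data, assuming the portgroups are isolated by VLAN tags) flags portgroups in the same pool that share a VLAN ID and are therefore not isolated from each other:

    from collections import defaultdict

    # Hypothetical portgroup inventory for one portgroup-backed Network Pool.
    portgroups = [
        {"name": "pg-pool-01", "vlan_id": 101},
        {"name": "pg-pool-02", "vlan_id": 102},
        {"name": "pg-pool-03", "vlan_id": 101},   # collides with pg-pool-01
    ]

    by_vlan = defaultdict(list)
    for pg in portgroups:
        by_vlan[pg["vlan_id"]].append(pg["name"])

    for vlan_id, names in by_vlan.items():
        if len(names) > 1:
            print(f"WARNING: VLAN {vlan_id} is shared by {', '.join(names)}; "
                  "these portgroups are not isolated from each other.")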

Of the three types of Network Pools (portgroup, VLAN, and VXLAN), a vCloud Director VXLAN Network Pool is the easiest to share. VXLAN pools support many more networks than VLAN- or portgroup-backed Network Pools, and isolation is enforced at the vSphere kernel layer. Although the physical switches do not themselves isolate the traffic (the VXLAN overlay does), VXLAN is also not susceptible to misconfiguration at the hardware layer. Recall from above that none of the networks in any Network Pool provide confidentiality protection for intercepted packets (for example, at the physical layer).

Storage Profiles

vCloud Director storage profiles aggregate datastores in a way that enables the service provider to offer storage capabilities tiered by capacity, performance, and other attributes. Individual datastores are not accessible by tenant organizations; instead, a tenant chooses from a set of storage profiles offered by the service provider. If the underlying datastores are configured to be accessible only from the vSphere management network, then the risk in sharing datastores is limited, as with compute resources, to availability. One organization may end up using more storage than expected, limiting the amount of storage available to other organizations. This is especially true for organizations using the Pay-As-You-Go allocation model and the default "unlimited storage" setting. For this reason, if you share datastores, you should set a storage limit, enable thin provisioning if possible, and monitor storage usage carefully. You should also carefully manage your storage leases, as noted in Limiting Consumption of Shared Resources. Alternatively, if you do not share datastores, you must dedicate storage to the storage profiles you make available to each organization, potentially wasting storage by allocating it to organizations that do not need it.
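
To make this availability risk concrete, the sketch below (illustrative numbers only, not values read from vCloud Director) compares the sum of per-organization storage limits against the capacity of a shared, thin-provisioned datastore:

    # Hypothetical check of storage overcommitment on a shared datastore.
    # With thin provisioning, the sum of tenant limits can exceed capacity;
    # this is acceptable only if actual usage is monitored closely.

    datastore_capacity_gb = 10_000

    org_storage_limits_gb = {
        "tenant-a": 4_000,
        "tenant-b": 4_000,
        "tenant-c": 4_000,
    }

    total_limit = sum(org_storage_limits_gb.values())
    ratio = total_limit / datastore_capacity_gb

    print(f"Total tenant limits: {total_limit} GB on a {datastore_capacity_gb} GB datastore "
          f"({ratio:.0%} of capacity)")
    if ratio > 1.0:
        print("Datastore is overcommitted; monitor usage so one tenant's growth "
              "does not exhaust storage for the others.")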

vSphere datastore objects are the logical volumes where VMDKs are stored. While vSphere administrators can see the physical storage systems from which these datastores are created, that requires rights not available to a vCloud Director administrator or tenant. Tenant users who create and upload vApps simply store the vApps' VMDKs on one of the storage profiles available in the organization VDC they're using.

For this reason, virtual machines never see any storage beyond that consumed by their VMDKs unless they have network connectivity to those storage systems. This guide recommends that they do not; a provider can offer vApps access to external storage as a network service, but that storage must be separate from the LUNs assigned to the vSphere hosts backing the cloud.

Likewise, tenant organizations see only the storage profiles available in their organization VDCs, and even that view is limited to the vCloud Director abstraction. They cannot browse the system's datastores. They see only what's published in catalogs or used by the vApps they manage. If organization VDC storage profiles do not share datastores, the organizations cannot impact each other's storage (except perhaps by using too much network bandwidth for storage I/O). Even if they do, the above restrictions and abstractions ensure proper isolation between the organizations. vCloud Director administrators can enable vSphere storage I/O control on specific datastores to restrict the ability of a tenant to consume an inordinate amount of storage I/O bandwidth. See Configure Storage I/O Control Support in a Provider VDC in the vCloud Director Administrator's Guide.