Certain virtual and physical networking components are required to accommodate a typical workload.

For display traffic, many elements can affect network bandwidth, such as the display protocol used, monitor resolution and configuration, and the amount of multimedia content in the workload. Concurrent launches of streamed applications can also cause usage spikes.

Because the effects of these factors can vary widely, many companies monitor bandwidth consumption as part of a pilot project. As a starting point for a pilot, plan for 150 to 200 Kbps of capacity for a typical knowledge worker.
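
To translate this guidance into pool-level capacity, the following minimal Python sketch multiplies the per-user figure by a user count and adds headroom for usage spikes. The user count and headroom factor are illustrative assumptions, not figures from this guide.

    # Back-of-the-envelope display-traffic sizing. The 150-200 Kbps
    # per-user range comes from the planning guidance above; the user
    # count and 20% headroom are illustrative assumptions.
    def required_capacity_mbps(users, kbps_per_user=200, headroom=1.2):
        """Return an estimated network capacity in Mbit/s."""
        return users * kbps_per_user * headroom / 1000

    # Example: a pilot of 500 knowledge workers.
    print(required_capacity_mbps(500))       # 120.0 Mbit/s at 200 Kbps each
    print(required_capacity_mbps(500, 150))  # 90.0 Mbit/s at 150 Kbps each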

With the PCoIP or Blast Extreme display protocol, if you have an enterprise LAN with a 100 Mb or 1 Gb switched network, your end users can expect excellent performance under the following conditions:

  • Two monitors (1920 x 1080)
  • Heavy use of Microsoft Office applications
  • Heavy use of Flash-embedded Web browsing
  • Frequent use of multimedia with limited use of full screen mode
  • Frequent use of USB-based peripherals
  • Network-based printing

For more information, see the information guide called PCoIP Display Protocol: Information and Scenario-Based Network Sizing Guide.

Optimization Controls Available with PCoIP and Blast Extreme

If you use the PCoIP or the Blast Extreme display protocol from VMware, you can adjust several elements that affect bandwidth usage.

  • You can configure the image quality level and frame rate used during periods of network congestion. The quality level setting limits the initial quality of the changed regions of the display image, and the frame rate setting limits how often those regions are updated.

    This control works well for static screen content that does not need to be updated frequently, or in situations where only a portion of the display needs to be refreshed.

  • With regard to session bandwidth, you can configure the maximum bandwidth, in kilobits per second, to correspond to the type of network connection, such as a 4 Mbit/s Internet connection. The bandwidth includes all imaging, audio, virtual channel, USB, and PCoIP or Blast control traffic.

    You can also configure a lower limit, in kilobits per second, for the bandwidth that is reserved for the session, so that a user does not have to wait for bandwidth to become available. You can specify the Maximum Transmission Unit (MTU) size for UDP packets for a session, from 500 to 1500 bytes. The sketch after this list shows how these three settings relate.
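
The following Python sketch models how these three session-level controls interact: a bandwidth ceiling, a reserved floor, and a UDP MTU that must fall between 500 and 1500 bytes. It is an illustration of the constraints described above, not the actual policy mechanism, and the setting names are hypothetical; the real values are applied through the policy settings referenced below.

    # Illustrative sanity check for the session-level controls described
    # above. Names are hypothetical placeholders, not real policy names.
    MTU_MIN, MTU_MAX = 500, 1500  # valid UDP MTU range per the text

    def validate_session_settings(max_kbps, floor_kbps, mtu_bytes):
        """Check that a session bandwidth/MTU configuration is consistent."""
        if not MTU_MIN <= mtu_bytes <= MTU_MAX:
            raise ValueError(f"MTU must be {MTU_MIN}-{MTU_MAX} bytes")
        if floor_kbps > max_kbps:
            raise ValueError("reserved floor cannot exceed the session maximum")
        return {"max_kbps": max_kbps, "floor_kbps": floor_kbps, "mtu": mtu_bytes}

    # Example: a 4 Mbit/s Internet connection with 1 Mbit/s reserved.
    print(validate_session_settings(max_kbps=4000, floor_kbps=1000, mtu_bytes=1200))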

For more information, see the "PCoIP General Settings" and the "VMware Blast Policy Settings" sections in Configuring Remote Desktop Features in Horizon 7.

Network Configuration Example

In a View 5.2 test pod, one vCenter Server 5.1 instance managed 5 pools of 2,000 virtual machines each. Each ESXi host used the following hardware and software components for networking.

Note: This example was used in a View 5.2 setup, which was carried out prior to the release of VMware vSAN. For guidance on sizing and designing the key components of View virtual desktop infrastructures for VMware vSAN, see the white paper at http://www.vmware.com/files/pdf/products/vsan/VMW-TMD-Virt-SAN-Dsn-Szing-Guid-Horizon-View.pdf. Also, the example uses View Composer linked clones, rather than instant clones, because the test was performed with View 5.2. The instant clone feature was introduced with Horizon 7.
Physical components for each host
  • Brocade 1860 Fabric Adapter using 10 Gb Ethernet for network traffic and FCoE for storage traffic.
  • Connection to a Brocade VCS Ethernet fabric consisting of 6 VDX 6720-60 switches. The switches uplinked to the rest of the network with two 1 Gb connections to a Juniper J6350 router.
VLAN summary
  • One 10 Gb VLAN per desktop pool (5 pools)
  • One 1 Gb VLAN for the management network
  • One 1 Gb VLAN for the vMotion network
  • One 10 Gb VLAN for the infrastructure network
vMotion-dvswitch (1 uplink per host)
This switch was used for vMotion traffic by the ESXi hosts that ran the infrastructure, parent, and desktop virtual machines.
  • Jumbo frames (9000 MTU)
  • 1 ephemeral distributed port group
  • Private VLAN and 192.168.x.x addressing
Infra-dvswitch (2 uplinks per host)
This switch was used by the ESXi hosts that ran the infrastructure virtual machines.
  • Jumbo frames (9000 MTU)
  • 1 ephemeral distributed port group
  • Infrastructure VLAN /24 (256 addresses)
Desktop-dvswitch (2 uplinks per host)
This switch was used by the ESXi hosts that ran the parent and desktop virtual machines.
  • Jumbo frames (9000 MTU)
  • 6 ephemeral distributed port groups
  • 5 desktop port groups (1 per pool)
  • Each network was a /21 (2,048 addresses)
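
As a quick check of the address-space arithmetic in this example, the following sketch uses Python's standard ipaddress module: a /21 yields 2,048 addresses, enough for a 2,000-desktop pool, and a /24 yields 256 for the infrastructure VLAN. The 10.x prefixes are placeholders; the test pod's actual prefixes are not given here.

    # Verify the subnet math from the example above. The prefixes below
    # are placeholder values, not the pod's actual addressing.
    import ipaddress

    desktop_net = ipaddress.ip_network("10.1.0.0/21")  # one such network per pool
    infra_net = ipaddress.ip_network("10.0.0.0/24")    # infrastructure VLAN

    print(desktop_net.num_addresses)  # 2048 -> covers 2,000 desktops per pool
    print(infra_net.num_addresses)    # 256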