Certain virtual and physical networking components are required to accommodate a typical workload.

For display traffic, many elements can affect network bandwidth, such as the display protocol used, monitor resolution and configuration, and the amount of multimedia content in the workload. Concurrent launches of streamed applications can also cause usage spikes.

Because the effects of these issues can vary widely, many companies monitor bandwidth consumption as part of a pilot project. As a starting point for a pilot, plan for 150 to 200Kbps of capacity for a typical knowledge worker.
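To turn that per-user figure into an aggregate estimate for a pilot, multiply it across the expected concurrent population and leave some headroom. The following is a minimal Python sketch; the concurrency and 20 percent headroom factors are illustrative assumptions, not VMware guidance:

    # Rough pilot sizing: aggregate display bandwidth for a knowledge-worker
    # population. The per-user Kbps range comes from the guidance above;
    # concurrency and headroom factors are illustrative assumptions only.
    def pilot_bandwidth_mbps(users, kbps_per_user=200, concurrency=1.0, headroom=1.2):
        """Estimated aggregate display-traffic bandwidth in Mbps."""
        active = users * concurrency           # users connected at peak
        raw_kbps = active * kbps_per_user      # aggregate display traffic
        return raw_kbps * headroom / 1000      # Kbps -> Mbps, with headroom

    # Example: 500 knowledge workers at 200Kbps each, all concurrent.
    print(f"{pilot_bandwidth_mbps(500):.0f} Mbps")   # -> 120 Mbps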

With the PCoIP display protocol, on an enterprise LAN with a 100Mb or 1Gb switched network, your end users can expect excellent performance under the following conditions:

  • Two monitors (1920 x 1080)

  • Heavy use of Microsoft Office applications

  • Heavy use of Flash-embedded Web browsing

  • Frequent use of multimedia with limited use of full screen mode

  • Frequent use of USB-based peripherals

  • Network-based printing

For more information, see the information guide called PCoIP Display Protocol: Information and Scenario-Based Network Sizing Guide.

Optimization Controls Available with PCoIP

If you use the PCoIP display protocol from VMware, you can adjust several elements that affect bandwidth usage, as illustrated in the sketch after this list.

  • You can configure the image quality level and frame rate used during periods of network congestion. The quality level setting allows you to limit the initial quality of the changed regions of the display image. Unchanged regions of the image progressively build to a lossless (perfect) quality. You can adjust the frame rate from 1 to 120 frames per second.

    This control works well for static screen content that does not need to be updated or in situations where only a portion needs to be refreshed.

  • You can also turn off the build-to-lossless feature altogether so that, instead of progressively building to perfect (lossless) quality, the image builds only to perceptually lossless quality.

  • You can control which encryption algorithms are advertised by the PCoIP endpoint during session negotiation. By default, both Salsa20-256round12 and AES-128-GCM algorithms are available.

  • For session bandwidth, you can configure the maximum bandwidth, in kilobits per second, to correspond to the type of network connection, such as a 4Mbit/s Internet connection. This limit includes all imaging, audio, virtual channel, USB, and control PCoIP traffic.

    You can also configure a lower limit, in kilobits per second, for the bandwidth that is reserved for the session, so that a user does not have to wait for bandwidth to become available. You can specify the Maximum Transmission Unit (MTU) size for UDP packets for a PCoIP session, from 500 to 1500 bytes.

  • You can specify the maximum bandwidth that can be used for audio (sound playback) in a PCoIP session.
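In View, these controls are normally delivered as PCoIP session variables through the pcoip.adm GPO template. The following minimal Python sketch (Windows-only, run on the View desktop) writes a few of them directly to the registry for lab experimentation. The key path and value names reflect the pcoip.adm template as commonly documented, but treat them as assumptions and verify them against the template that ships with your View release:

    # Minimal lab sketch: write a few PCoIP session variables directly to the
    # registry of a View desktop. Key path and value names are assumptions
    # based on the pcoip.adm GPO template -- verify against your View release.
    import winreg

    PCOIP_KEY = r"SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin_defaults"

    settings = {
        "pcoip.maximum_frame_rate": 24,            # cap frame rate (1-120 fps)
        "pcoip.minimum_image_quality": 40,         # quality floor under congestion
        "pcoip.maximum_initial_image_quality": 80, # initial quality of changed regions
        "pcoip.enable_build_to_lossless": 0,       # stop at perceptually lossless
        "pcoip.max_link_rate": 4000,               # session cap in Kbps (4Mbit/s link)
        "pcoip.device_bandwidth_floor": 256,       # Kbps reserved for the session
        "pcoip.mtu_size": 1300,                    # UDP MTU, 500-1500 bytes
        "pcoip.audio_bandwidth_limit": 100,        # Kbps for sound playback
    }

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, PCOIP_KEY) as key:
        for name, value in settings.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)

In production, set these values once through Active Directory group policy instead; as we understand the precedence, values under pcoip_admin_defaults act only as defaults that GPO-managed settings still override.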

In addition, on most client systems, PCoIP image caching stores image content on the client to avoid retransmission. By default, the cache is 90MB if the client version is 2.0 or later.
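If you need to confirm what a given Windows client is configured to use, a small sketch along these lines can read the policy value. The pcoip.image_cache_size_mb value name and its registry location are assumptions based on the PCoIP client GPO template, so verify them for your client version:

    # Read the configured PCoIP client image cache size, falling back to the
    # 90MB default noted above. Value name and key path are assumptions based
    # on the PCoIP client GPO template -- verify for your client version.
    import winreg

    KEY = r"SOFTWARE\Policies\Teradici\PCoIP\pcoip_admin"

    def client_cache_mb(default=90):
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
                value, _ = winreg.QueryValueEx(key, "pcoip.image_cache_size_mb")
                return int(value)
        except OSError:          # key or value absent: the default applies
            return default

    print(f"PCoIP client image cache: {client_cache_mb()}MB")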

Network Configuration Example

In a View 5.2 test pod in which one vCenter Server 5.1 instance managed 5 pools of 2,000 virtual machines each, every ESXi host used the following networking hardware and software.

Note:

This example was used in a View 5.2 setup, which was carried out prior to the release of VMware Virtual SAN. For guidance on sizing and designing the key components of View virtual desktop infrastructures for VMware Virtual SAN, see the white paper at http://www.vmware.com/files/pdf/products/vsan/VMW-TMD-Virt-SAN-Dsn-Szing-Guid-Horizon-View.pdf.

Physical components for each host

  • Brocade 1860 Fabric Adapter using 10Gb Ethernet for network traffic and FCoE for storage traffic.

  • Connection to a Brocade VCS Ethernet fabric consisting of 6 VDX6720-60 switches. The switches uplinked to the rest of the network through two 1Gb connections to a Juniper J6350 router.

VLAN summary

  • One 10Gb VLAN per desktop pool (5 pools)

  • One 1Gb VLAN for the management network

  • One 1Gb VLAN for the VMotion network

  • One 10Gb VLAN for the infrastructure network

Virtual components for each host

VMotion-dvswitch (1 uplink per host)

This switch was used by the ESXi hosts that ran infrastructure, parent, and desktop virtual machines.

  • Jumbo frame (9000 MTU)

  • 1 Ephemeral distributed port group

  • Private VLAN and 192.168.x.x addressing

Infra-dvswitch (2 uplinks per host)

This switch was used by the ESXi hosts that ran infrastructure virtual machines.

  • Jumbo frame (9000 MTU)

  • 1 Ephemeral distributed port group

  • Infrastructure VLAN /24 (256 addresses)

Desktop-dvswitch (2 uplinks per host)

This switch was used by the ESXi hosts that ran parent and desktop virtual machines.

  • Jumbo frame (9000 MTU)

  • 6 Ephemeral distributed port groups

  • 5 Desktop port groups (1 per pool)

  • Each desktop network was a /21 (2,048 addresses; see the sizing check below)
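As a sanity check on that sizing: a /21 provides 2,048 addresses, or 2,046 usable hosts after subtracting the network and broadcast addresses, which comfortably fits a 2,000-desktop pool. A minimal Python sketch (the 10.1.0.0/21 prefix is a placeholder, not the pod's actual addressing):

    # Sanity-check the /21-per-pool sizing: does each desktop network hold a
    # 2,000-VM pool? The prefix below is a placeholder, not the pod's plan.
    import ipaddress

    pool_size = 2000
    net = ipaddress.ip_network("10.1.0.0/21")

    total = net.num_addresses                # 2048 addresses in a /21
    usable = total - 2                       # minus network + broadcast
    print(f"{net}: {total} addresses, {usable} usable, "
          f"fits {pool_size}-VM pool: {usable >= pool_size}")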