The SD-WAN Gateway runs on a standard hypervisor (KVM or VMware ESXi).

Minimum Server Requirements

To run the hypervisor:

  • CPU: Intel Xeon (10 cores minimum to run a single 8-core gateway VM) with a minimum clock speed of 2.0 GHz is required to achieve maximum performance. The CPU must support, and have enabled, the following instruction sets: AES-NI, SSSE3, SSE4, RDTSC, RDSEED, RDRAND, AVX/AVX2/AVX512 (a verification sketch follows the BIOS note below).
  • Minimum of 36 GB RAM (one gateway VM requires 32 GB RAM)
  • Minimum of 150 GB of magnetic or SSD-based persistent disk (one gateway VM requires 96 GB)
  • Minimum of one 10 GbE network interface port; two are preferred when enabling the gateway partner hand-off interface (1 GbE NICs are supported but will bottleneck performance). The physical NICs that support SR-IOV use the Intel 82599/82599ES and Intel X710/XL710 chipsets.
    Note: Configure the host BIOS settings as follows:

    - Hyperthreading - Disabled
    - Power Savings - Turned off
    - CPU Turbo - Enabled
    - AES-NI - Enabled
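
Before installing, you can confirm from a Linux shell on the host that the CPU exposes the required instruction sets and that hyperthreading is disabled. A minimal sketch (flag names as reported in /proc/cpuinfo; AVX-512 support appears as avx512f and related flags):

    # Check that the required instruction-set flags are present
    for flag in aes ssse3 sse4_2 rdseed rdrand avx avx2; do
        grep -qm1 "\b$flag\b" /proc/cpuinfo && echo "$flag: present" || echo "$flag: MISSING"
    done

    # With hyperthreading disabled in the BIOS, this reports 1
    lscpu | grep 'Thread(s) per core'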

Examples of Server Specifications

  • Intel 82599/82599ES - HP DL380G9
    Specification: http://www.hp.com/hpinfo/newsroom/press_kits/2014/ComputeEra/HP_ProLiantDL380_DataSheet.pdf
  • Intel X710/XL710 - Dell PowerEdge R640
    Specification: https://www.dell.com/en-us/work/shop/povw/poweredge-r640
    - CPU Model and Cores - Dual Socket Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz with 16 cores each
    - Memory - 384 GB RAM
  • Intel X710/XL710 - Supermicro SYS-6018U-TRTP+
    Specification: https://www.supermicro.com/en/products/system/1U/6018/SYS-6018U-TRTP_.cfm
    - CPU Model and Cores - Dual Socket Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz with 10 cores each
    - Memory - 256 GB RAM

Required NIC Specifications for SR-IOV support

  • Dual Port Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+
    - Firmware Version: 7.0
    - Host Driver for Ubuntu 18.04: 2.10.19.30
    - Host Driver for ESXi 6.7: 1.8.6 and 1.10.9.0
  • Dual Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+
    - Firmware Version: 7.0
    - Host Driver for Ubuntu 18.04: 2.10.19.30
    - Host Driver for ESXi 6.7: 1.8.6 and 1.10.9.0
  • Quad Port Intel Corporation Ethernet Controller X710 for 10GbE SFP+
    - Firmware Version: 7.0
    - Host Driver for Ubuntu 18.04: 2.10.19.30
    - Host Driver for ESXi 6.7: 1.8.6 and 1.10.9.0
  • Dell rNDC X710/350 card
    - Firmware Version: NVM 7.10 and FW 19.0.12
    - Host Driver for Ubuntu 18.04: 2.10.19.30
    - Host Driver for ESXi 6.7: 1.8.6 and 1.10.9.0
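
On a KVM host, you can verify the installed driver and NIC firmware against this list with ethtool, and create SR-IOV virtual functions through sysfs. A sketch, assuming the physical port is named eth1 (interface names vary per system; run as root):

    # Report the driver, driver version, and NIC firmware version of a port
    ethtool -i eth1

    # Create 4 SR-IOV virtual functions on the port's PCI device
    echo 4 > /sys/class/net/eth1/device/sriov_numvfs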

Supported Hypervisor Versions

VMware
  • Intel 82599/82599ES - ESXi 6.5 U3 up to ESXi 7.0. To use SR-IOV, vCenter and a vSphere Enterprise Plus license are required.
  • Intel X710/XL710 - ESXi 6.7 with VMware vSphere Web Client 6.7.0 up to ESXi 7.0 with VMware vSphere Web Client 7.0
KVM
  • Intel 82599/82599ES - Ubuntu 16.04 LTS and Ubuntu 18.04 LTS
  • Intel X710/XL710 - Ubuntu 16.04 LTS and Ubuntu 18.04 LTS
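
To confirm which hypervisor version is in place, you can query it directly; for example:

    # On an ESXi host shell
    vmware -v

    # On a KVM host running Ubuntu
    lsb_release -ds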

SD-WAN Gateway Virtual Machine (VM) Specification

For VMware, the OVA already specifies the minimum virtual hardware specification. For KVM, an example XML file is provided. The minimum virtual hardware specifications are listed below; illustrative configuration excerpts for both hypervisors follow the list.
  • If using VMware ESXi:
    • Latency Sensitivity must be set to 'High'.
    • vNIC must be of 'vmxnet3' type (or SR-IOV; see the SR-IOV section for support details).
    • 8 vCPUs (4 vCPUs are supported but expect lower performance).

      Important: All vCPU cores must be mapped to a single socket, with the Cores per Socket parameter set to 8 when using 8 vCPUs, or to 4 when using 4 vCPUs.

      Note: Hyper-threading must be deactivated to achieve maximum performance.
    • 32 GB of memory
    • At least one of the following vNICs (the second is optional):
      • The first vNIC is the public (outside) interface, which must be an untagged interface.
      • The second vNIC acts as the private (inside) interface and can support VLAN tagging (802.1Q) and Q-in-Q. This interface typically faces the PE router or L3 switch.
    • Optional vNIC (if a separate management/OAM interface is required).
    • 96 GB of virtual disk.
  • If using KVM:
    • vNIC must be of 'Linux Bridge' type (SR-IOV is required for high performance; see the SR-IOV section for support details).
    • 8 vCPUs (4 vCPUs are supported but expect lower performance).

      Important: All vCPU cores must be mapped to a single socket, with the Cores per Socket parameter set to 8 when using 8 vCPUs, or to 4 when using 4 vCPUs.

      Note: Hyper-threading must be deactivated to achieve maximum performance.
    • 32 GB of memory
    • At least one of the following vNICs (the second is optional):
      • The first vNIC is the public (outside) interface, which must be an untagged interface.
      • The second vNIC acts as the private (inside) interface and can support VLAN tagging (802.1Q) and Q-in-Q. This interface typically faces the PE router or L3 switch.
    • Optional vNIC (if a separate management/OAM interface is required).
    • 96 GB of virtual disk.
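
For reference, the key VMware settings above would appear in the VM's .vmx file roughly as follows (an illustrative excerpt, not a complete configuration; the OVA applies these settings for you):

    numvcpus = "8"
    cpuid.coresPerSocket = "8"
    memSize = "32768"
    sched.cpu.latencySensitivity = "high"
    ethernet0.virtualDev = "vmxnet3"

For KVM, the corresponding pieces of a libvirt domain XML would look roughly like the following (a sketch assuming a host bridge named br0; the sample XML in the Git repository below is the authoritative template):

    <domain type='kvm'>
      <vcpu placement='static'>8</vcpu>
      <cpu mode='host-passthrough'>
        <!-- Map all vCPUs to a single socket -->
        <topology sockets='1' cores='8' threads='1'/>
      </cpu>
      <memory unit='GiB'>32</memory>
      <devices>
        <!-- Public (outside) interface on a Linux bridge -->
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>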

Firewall/NAT Requirements

Note: These requirements apply if the SD-WAN Gateway is deployed behind a Firewall and/or NAT device.
  • The firewall must allow outbound traffic from the SD-WAN Gateway to TCP/443 (for communication with the SD-WAN Orchestrator).
  • The firewall must allow inbound traffic from the Internet to UDP/2426 (VCMP), UDP/4500, and UDP/500. If NAT is not used, the firewall must also allow IP protocol 50 (ESP).
  • If NAT is used, the above ports must be translated to an externally reachable IP address. Both 1:1 NAT and port translation are supported.
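
As an illustration, the equivalent rules on a Linux-based firewall in front of the Gateway might look like the following (a sketch using iptables and a hypothetical gateway address of 192.0.2.10; adapt to your firewall platform):

    # Outbound: Gateway to SD-WAN Orchestrator over HTTPS
    iptables -A FORWARD -s 192.0.2.10 -p tcp --dport 443 -j ACCEPT

    # Inbound: VCMP and IPsec NAT-T/IKE from the Internet to the Gateway
    iptables -A FORWARD -d 192.0.2.10 -p udp -m multiport --dports 2426,4500,500 -j ACCEPT

    # Inbound: ESP (IP protocol 50), needed only when NAT is not used
    iptables -A FORWARD -d 192.0.2.10 -p esp -j ACCEPT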

Git Repository with Templates and Samples

The following Git repository contains templates and samples.

git clone https://bitbucket.org/velocloud/deployment.git

Use of DPDK on VMware SD-WAN Gateways

To improve packet throughput performance, VMware SD-WAN Gateways take advantage of Data Plane Development Kit (DPDK) technology. DPDK is a set of data plane libraries and network interface drivers, originally developed by Intel, that moves packet processing from the operating system kernel to processes running in user space, resulting in higher packet throughput. For more details, see https://www.dpdk.org/.

On VMware-hosted Gateways and Partner Gateways, DPDK is used on interfaces that carry data plane traffic and not on interfaces reserved for management plane traffic. For example, on a typical VMware-hosted Gateway, eth0 is used for management plane traffic and does not use DPDK, while eth1, eth2, and eth3 carry data plane traffic and use DPDK.
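
You can see which interfaces are under DPDK control (versus the kernel driver used for management traffic) with the dpdk-devbind utility that ships with DPDK; for example:

    # Lists devices bound to DPDK-compatible drivers vs. kernel drivers
    dpdk-devbind.py --status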