This section explains how to deploy and operate an on-premises VMware VeloCloud SD-WAN solution, including the Orchestrator.

Overview

This section explains how to deploy and operate an on-premises VMware VeloCloud SD-WAN solution, which includes the VeloCloud Orchestrator, the VeloCloud Controller, and a co-located Edge Hub Cluster. It includes a reference architecture and the requirements and caveats for an on-premises SD-WAN deployment. It also covers the optional use of an External Certificate Authority and FIPS mode.

Reference Architecture

The reference architecture shows the logical grouping of the different VeloCloud SD-WAN and network functions. It also shows how the different nodes connect and communicate with each other. We expect that most real-world deployments would adapt the reference architecture in some way to accommodate an existing customer network.

Figure 1. The Reference Architecture is a conceptual diagram representing the solution’s different functional blocks. These functions may be combined, split from one another, separated by firewalls and/or network segmentation, and so on, as long as they maintain the fundamental ability to communicate with one another.
Note: In the above diagram, VECO stands for the VeloCloud Edge Cloud Orchestrator, and VECC stands for the VeloCloud Edge Cloud Controller.

Core Network

The Core Network has the applications and resources that users need to access to achieve their business goals. It may also host management functions, such as network monitoring and operations. If you use an External Certificate Authority (ECA), it may reside here. Data Center Interconnect (DCI), if present, also terminates here from a routing perspective. You may separate these functions into different logical network segments.

SD-WAN Network

The SD-WAN Network sits between the Transport Network and the Core Network. It contains the Orchestrator (management plane), the Controller (control and routing plane), and, if present, the on-premises Hub Cluster (data plane). Although all of these reside in the SD-WAN Network, you may logically separate the Hub Cluster from the Orchestrator and Controller. This keeps the SD-WAN management and control-plane traffic away from the branch data-plane traffic that uses the Hub Cluster to reach the core network resources.

Transport Network

The Transport Network has the WAN transport functions in the network, such as Public WAN (internet) and Private WAN (MPLS).

The Transport Network also has the Wide Area Network routing functions. This is where you would find Public WAN (internet) routers and Private WAN (MPLS CE) routers in an on-premises customer network. You may perform NAT between public and private IP addresses here, or on an Edge firewall, depending on your network setup. Wireless WAN (5G, for example) can also serve as the Public WAN.

Firewalls

While not explicitly shown in the reference architecture, firewalls may be present. They may be deployed between the different functional blocks to provide security and traffic inspection or may be used to create different segments or zones within a functional block.

Packet Flows

This section illustrates the packet flow path in the reference architecture for various operations.

Edge Activation

An internet-only branch location activates via a public WAN through the Internet Routing Function in the Transport Network to the public IP address of the Orchestrator (solid red line).

An MPLS-only branch location activates to the Orchestrator through the Private WAN Routing Function in the Transport Network, then through the Core Routing function in the Core Network, to the Orchestrator’s private IP address (dashed red line).

A hybrid Edge location may use either.

Figure 2. Packet flow for Edge Activation

Management Plane: Edge to Orchestrator (VECO)

After activation, Edges use their Loopback IP addresses to connect to the Orchestrator. The Edge places this connection inside the tunnel to the Controller, using any available transport tunnel. The connection then egresses the Controller through its eth1 interface to the core network and reaches the Orchestrator. The Controller keeps the Loopback IP address as the source (unlike a 1-arm Controller, no SNAT is applied). The Orchestrator then replies to the Edge's loopback address, which is dynamically routable via the Controller eth1 interface, ensuring symmetric routing.

Activated Edges may also communicate directly with the Orchestrator via the underlay, in which case they would use the same packet flow paths as in the Activation section.

If Edges use the SD-WAN overlay via the Controller (also called the VECC), the path depends on the Edge type. Internet-only Edges use the SD-WAN overlay via Public WAN to the public IP of the Controller (solid red line in the diagram), where they leave the tunnel, and go from the Controller's public IP address to the public IP address of the Orchestrator (solid blue line in the diagram).

MPLS-only Edges take the SD-WAN overlay tunnel to the private IP of the Controller (dashed red line in the diagram). From there, they leave the overlay tunnel, and go to the private IP of the Orchestrator (dashed blue line in the diagram).
Note: You do not need NAT in this case because you have end-to-end private IP routing.
Figure 3. Packet Flow for Management Plane – Edge to Orchestrator (VECO)

Control Plane: Edge to Controller (VECC)

All Edges connect to the Controller (also referred to as the VeloCloud Edge Cloud Controller, or VECC) with SD-WAN overlay tunnels. Usually, Edges have a Primary and Secondary Controller for backup. But for simplicity, we only show one Controller in the diagram.

Internet-only Edges connect to the Controller's public IP address with an SD-WAN overlay tunnel over the public internet. They use the Internet routing function in the Transport Network (solid red line in the diagram). MPLS-only Edges also connect to the Controller with an SD-WAN overlay tunnel, but they use the Controller's private IP address. They use the Private WAN routing function in the Transport Network and the Core Routing function in the Core Network (dashed red line in the diagram).

Figure 4. Packet Flow for Control Plane: Edge to Controller (VECC)

Hub Cluster to Controller (VECC)/Orchestrator (VECO): Special Case

The on-premises Hub Cluster provides a data path from Edge locations to the Core network, where the applications and resources are hosted. Usually, the Hub connects to a data center and accesses the Orchestrator and Controller remotely. So, the Hub Cluster needs to use public IP addresses to communicate with the Orchestrator/Controller; it cannot use the LAN-side / private IP network. To make the Hub Cluster look like it is at a remote location and access the Orchestrator/Controller over the public internet, you may need to put the Hub Cluster in a separate network segment (for example, VRF). You may also need hairpin NAT functionality on a network device on the WAN side of the Hub Cluster, Controller, and Orchestrator.

Figure 5. Packet Flow - Hub Cluster to Controller (VECC)/Orchestrator
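
If a Linux-based router or firewall provides the hairpin NAT described above, the translation can be sketched with iptables. This is a minimal sketch only, and the addresses are hypothetical placeholders (Hub Cluster WAN segment 192.0.2.0/24, Orchestrator public IP 203.0.113.10, Orchestrator private IP 10.0.0.10); substitute your own addressing:

  # Redirect Hub Cluster traffic sent to the Orchestrator's public IP to its private IP
  sudo iptables -t nat -A PREROUTING -s 192.0.2.0/24 -d 203.0.113.10 -j DNAT --to-destination 10.0.0.10
  # Masquerade so return traffic flows back through the same NAT device (the hairpin)
  sudo iptables -t nat -A POSTROUTING -s 192.0.2.0/24 -d 10.0.0.10 -j MASQUERADE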

Edge to Core Network Functions – Data Path

Branch Edges access applications and resources in the datacenter via the on-premises Edge Hub Cluster. Internet-only Edges establish an SD-WAN overlay tunnel to the public WAN IP address of the Hub Cluster via the Internet routing function in the Transport Network (solid red line). Similarly, MPLS-only Edges establish an SD-WAN overlay tunnel to the private WAN IP address of the Hub Cluster via the private routing function in the Transport Network (dashed red line).

Once the data path reaches the Hub Cluster, it is removed from the SD-WAN tunnel and natively routed to applications in the core network via the Core Routing Function (solid blue lines).

This path is shared by all Edge types. Hybrid locations may use a combination of public and private SD-WAN overlays to the Edge Hub Cluster.

Figure 6. Packet Flow for Edge to Core Network Functions - Data Path

Orchestrator Disaster Recovery and Synchronization

When two VMware Edge Cloud Orchestrators (VECOs, or Orchestrators) are deployed as an Active-Standby Disaster Recovery (DR) pair in geographically diverse data center locations, there is both a real-time synchronization function (for configurations, alarms, metrics, etc.) between the active and standby, as well as a keep-alive function (to detect the failure of the active Orchestrator). This data path uses the private IP address of the Orchestrator, and uses a Data Center Interconnect to reach the remote DC via Core Routing function (red line).

More details on Orchestrator Disaster Recovery may be found in the Configure Orchestrator Disaster Recovery section.

Figure 7. Disaster Recovery Topology and Packet Flow

Controller to Orchestrator

The Orchestrator (management plane node) must communicate with the Controller (control plane node). In an on-premises deployment, this is done using the private IP addresses associated with the core network facing interfaces.

Depending upon the DR configuration and the location of the active Orchestrator (in the local or remote DC), this connectivity may take different paths.

For a co-located Orchestrator and Controller, the nodes may communicate directly within the SD-WAN Network block, or their traffic may pass through the Core Routing Function (solid blue lines).

For a remote Orchestrator and Controller, the Controller must use the Core Routing function and DCI to reach the DR-active Orchestrator (dashed red lines).

Figure 8. Orchestrator to Controller Packet Flow

Controller to External Certificate Authority – Optional

If an External CA (eCA) is used in the deployment, it would most likely reside in the Management Functions of the Core Network. The Orchestrator would use the private IP address of the core network facing interface to reach the eCA via the Core Routing Function.

If the Orchestrator is deployed as an active-standby DR pair, the Orchestrator fail-over may require the Orchestrator to reach the eCA via DCI.

More details on the use of an External CA may be found later in this guide.

Figure 9. Controller to External CA Packet Flow

Design Requirements & Assumptions for a Federal Deployment

This section includes requirements, caveats, unique aspects, and best practices specific to an on-premises Federal deployment.

  • The solution spans two or more Data Centers.
  • For redundancy, the VMware Edge Cloud Orchestrator (VECO, or just Orchestrator) is deployed in Disaster Recovery mode. This means that two Orchestrators are deployed in an active-standby pair, one in each Data Center.
  • The paired Orchestrators must have L3 connectivity to one another (for example, via DCI) to maintain data synchronization in the event of a failure of the active Orchestrator.
  • Controllers are also present in both data centers. Since either one of the VECOs in the DR pair may be active, there must be L3 reachability between the Controllers and VECOs at both data centers.
  • Internet (public) transport only Sites: Branch Edge locations that have access to only Public WAN (internet) transport networks.
  • Private transport only Sites: Branch Edge locations that have access to only Private WAN (MPLS) transport networks.
  • Hybrid (public/private) transport mix Sites: Branch Edge locations that have access to both Public and Private WAN transport networks.
  • In some networks, no reachability to public prefixes from private transport is allowed. For example, if the Orchestrator and Controller are at 1.1.1.1 and 2.2.2.2, those addresses are not reachable from the private transport: the private transport has no route to them, the routes cannot be advertised, and there is no default route.

Derived Requirements

The following requirements follow from the above design requirements and assumptions:
  • The VMware Edge Cloud Orchestrator (VECO, or just Orchestrator) must be directly reachable on the public internet.
    • This is achieved through the use of network address translation (NAT).
  • The Controller(s) must be directly reachable on the public internet.
  • The Controller(s) are deployed in 2-Arm Mode (Partner Controller) to accommodate private transport reachability.
    • The Controller(s) attract Orchestrator communication from the SD-WAN Edge by advertising both the public (NAT) and private IP address of the Orchestrator to the overlay.
    • A Hub Edge uses BGP filters to block these Orchestrator advertisements from being advertised back to the Data Center.
  • Edges use loopback interfaces and IP addresses for Orchestrator communication, and place the Orchestrator connection within the Partner Controller tunnel post-activation.
  • A Controller in Partner Controller mode establishes BGP peering with the data center to advertise Edge loopback IP addresses being used for Edge-to-Orchestrator traffic to ensure Orchestrator return traffic remains symmetrical within the Partner Controller tunnel to the Edge.
    • BGP filters are used to ensure only the Edge loopback IP addresses are advertised through this peering, blocking site user prefixes.
  • The Orchestrator has 2 interfaces:
    • Eth0 for Edge connections via Controller – routed via the Controller in Partner Controller mode.
    • Eth1 for HTTPS access – routed via the Hub Edge.
  • Two Operator Profiles are used:
    • A Public Operator Profile assigns Public IP addresses of the Orchestrator to the Edge.
      • Used for public transport-only Edges.
    • A Private Operator Profile assigns the Private IP address of the Orchestrator to the Edge.
      • Used for private transport-only Edges and Hybrid Edges.

A Sample Minimal Topology

Figure 10. The Sample Minimal Topology adds detail to the Reference Architecture, providing an example of how the solution may be integrated into an existing network.
Note: In the above diagram, VCO stands for the Orchestrator, VCC stands for the Controller, and VCE stands for the Edge.
While most real networks would have a more complex design, the expectation is that this design can be extrapolated to the real network with which it is being integrated.
Note: The interfaces and IP addressing in the diagram are for example purposes only and will be referenced throughout the remainder of this guide.

Network Address Translation (NAT)

Network Address Translation (NAT) is used at the Hub Edge. It can be avoided if all components (Orchestrator, Controller, and Edges) are given public IP addresses directly on their interfaces, but this is uncommon due to security requirements and limited IPv4 address space in some networks.

In the minimal topology, the Orchestrator and the Edge(s) are placed behind a NAT boundary, while the Controller is placed in front of the NAT boundary. In the example, the dc1-inet underlay router eth0 (internet connection) and eth1 (Controller) interfaces are in front of the NAT boundary; eth2 (Orchestrator), eth3 (Edge), and eth4 (data center) are behind the NAT boundary. This placement of components requires the minimum amount of NAT configuration. The NATs needed are:

  1. Orchestrator 1:1 NAT: public IP to private IP translation on all eth0 inbound traffic. The VCO's publicly reachable IP address is 1.1.10.10, but its real IP address behind the NAT boundary is 10.10.10.10. All traffic initiated from the internet and ingressing with a destination IP address of 1.1.10.10 needs to have the destination address translated to 10.10.10.10. This translation must be bidirectional, in that the return traffic with a source IP address of 10.10.10.10 needs to be translated to a source IP address of 1.1.10.10.

    The same requirement applies to inbound traffic on eth4, as public (internet only) Edges attempting to communicate with the Orchestrator using the VCO's public IP address (1.1.10.10) will egress the VCC on eth1 and route through the data center core to the internet edge, arriving inbound on eth4.

  2. Edge 1:1 NAT: public IP to private IP translation on all eth0 inbound traffic. The Edge's publicly reachable IP address is 1.1.10.20, but its real IP address behind the NAT boundary is 10.10.10.20. All traffic initiated from the internet and ingressing with a destination IP address of 1.1.10.20 needs to have the destination address translated to 10.10.10.20. This translation must be bidirectional, in that the return traffic with a source IP address of 10.10.10.20 needs to be translated to a source IP address of 1.1.10.20.
  3. Edge SNAT: Source Network Address Translation on all eth1 outbound traffic. All traffic initiated from the Edge destined to the Controller (egress eth1) with a source IP address of 10.10.10.20 needs to have the source IP address translated to 1.1.10.20. This translation must be bidirectional, in that the return traffic with a destination IP address of 1.1.10.20 needs to be translated to a destination IP address of 10.10.10.20.

An example configuration of these NAT rules in VyOS:

nat {
    destination {
        rule 10 {
            destination {
                address 1.1.10.10
            }
            inbound-interface eth0
            translation {
                address 10.10.10.10
            }
        }
        rule 15 {
            destination {
                address 1.1.10.10
            }
            inbound-interface eth4
            translation {
                address 10.10.10.10
            }
        }
        rule 20 {
            destination {
                address 1.1.10.20
            }
            inbound-interface eth0
            translation {
                address 10.10.10.20
            }
        }
    }
    source {
        rule 20 {
            outbound-interface eth1
            source {
                address 10.10.10.20
            }
            translation {
                address 1.1.10.20
            }
        }
        rule 999 {
            outbound-interface eth0
            translation {
                address masquerade
            }
        }
    }
}

Routing

You need to designate 4 routing summary blocks:
  1. All Hub Edge WAN IP addresses – the sample minimal topology uses 10.10.0.0/16.
  2. All Spoke Edge Private WAN IP addresses – the sample minimal topology uses 172.20.0.0/16.
  3. All Edge Loopbacks – the sample minimal topology uses 100.100.0.0/16.
  4. All Spoke Edge client user subnets – the sample minimal topology uses 10.20.0.0/16.

Orchestrator Routing

The VMware Edge Cloud Orchestrator (VECO, or just Orchestrator) operating system needs to have the following routes:

  1. Hub Edge WAN IP summary – via eth0 next hop with metric 0.
  2. Spoke Edge Private WAN IP summary – via eth0 next hop with metric 0.
  3. Spoke Edge Public WAN IP summary (default route) – via eth0 next hop with metric 0.
  4. All Edge Loopbacks – via eth0 next hop with metric 0.
  5. All Spoke Edge client user subnets – via eth1 next hop with metric 0.
  6. Controller eth1 subnet – via eth1 next hop with metric 0.
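
Using the sample minimal topology addressing (eth0 next hop 10.10.10.1, eth1 next hop 172.16.1.17, Controller eth1 subnet 172.16.1.0/28), these routes translate roughly into the following ip route commands. This is only a sketch for illustration; in practice the routes are applied through the Orchestrator's cloud-init network-config file shown later in this guide, which may use broader aggregates:

  sudo ip route add 10.10.0.0/16 via 10.10.10.1 dev eth0     # 1. Hub Edge WAN summary
  sudo ip route add 172.20.0.0/16 via 10.10.10.1 dev eth0    # 2. Spoke Edge Private WAN summary
  sudo ip route add default via 10.10.10.1 dev eth0          # 3. Spoke Edge Public WAN (default route)
  sudo ip route add 100.100.0.0/16 via 10.10.10.1 dev eth0   # 4. Edge Loopbacks
  sudo ip route add 10.20.0.0/16 via 172.16.1.17 dev eth1    # 5. Spoke Edge client user subnets
  sudo ip route add 172.16.1.0/28 via 172.16.1.17 dev eth1   # 6. Controller eth1 subnet

Metric 0 is the Linux default and is omitted in the commands above.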

Controller Routing

The Controller operating system needs to have the following routes:

  1. Default route – via eth0 next hop with metric 0
  2. Default route – via eth1 next hop with metric 5
  3. Orchestrator eth1 subnet - via eth1 next hop with metric 0
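
With the sample topology next hops (1.1.1.1 on eth0, 172.16.1.1 on eth1), a sketch of the equivalent ip route commands follows; these correspond to the Controller cloud-init network-config shown later in this guide:

  sudo ip route add default via 1.1.1.1 dev eth0 metric 0       # 1. primary default route
  sudo ip route add default via 172.16.1.1 dev eth1 metric 5    # 2. backup default route
  sudo ip route add 172.16.1.16/28 via 172.16.1.1 dev eth1      # 3. Orchestrator eth1 subnet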

Controller SD-WAN Control Plane Routing

  • The Controller control plane injects Orchestrator routes into the overlay for both Orchestrator IPs: 1.1.10.10/32 and 10.10.10.10/32. This draws Edge-to-Orchestrator connections into the Controller VCMP tunnels.
  • The Controller control plane peers BGP on eth1 with its next-hop to dynamically advertise Loopback IP addresses of Edges with VCMP tunnels established. This ensures return traffic from the Orchestrator to the Edges will come back to the Controller to be placed back into the VCMP tunnel.

Connections

Orchestrator Connections

  • The Orchestrator uses eth0 for all HTTPS management connections to Edges via the underlay and via the overlay (from Controller tunnels).
  • The Orchestrator uses eth1 for all HTTPS user/administrator GUI access.
  • The Orchestrator-to-Controller communication uses Controller eth1-to-Orchestrator eth1 path.

Controller Connections

  • The Controller uses eth0 for all public transport VCMP tunnels, and eth1 for all private transport VCMP tunnels.

Edge Connections

  • Edges use their WAN IP address for all activations to the Orchestrator. Private transport Edges use 10.10.10.10 to activate, public transport Edges use 1.1.10.10 to activate. Hybrid Edges can use whichever path to the Orchestrator is available.
  • Edges then build VCMP tunnels (UDP 2426) using their WAN IP addresses to the Controller. Public transports build to Controller eth0, and private transports build to Controller eth1.
  • Post activation, Edges use their Loopback IP addresses to source all connections to the Orchestrator. The Edge places this connection inside the tunnel to the Controller, using any available transport tunnel. This connection then egresses the Controller through its eth1 connection to the core network, destined for the Orchestrator. The Controller leaves the Loopback IP address intact as the source (unlike a 1-arm Controller, no SNAT is applied). The Orchestrator then replies to the Edge's loopback address, which is dynamically routable via the Controller eth1 interface to ensure symmetric routing.
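
If firewalls sit between the Edges and the Controller (see the Firewalls subsection above), VCMP tunnel traffic must be permitted. The following is a minimal sketch assuming a Linux-based firewall in the forwarding path, using the sample topology Controller WAN addresses; adjust to your design:

  # Permit VCMP tunnel traffic (UDP 2426) toward the Controller
  sudo iptables -A FORWARD -p udp --dport 2426 -d 1.1.1.10 -j ACCEPT     # public transport to Controller eth0
  sudo iptables -A FORWARD -p udp --dport 2426 -d 172.16.1.10 -j ACCEPT  # private transport to Controller eth1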

Prerequisites

The following are the prerequisites needed to deploy an on-premises SD-WAN solution:

  1. ESXi 6.5.0 or higher.
  2. Intel Xeon CPU.
  3. Hyperthreading disabled (for Edge).
  4. SSD storage supporting 10k or higher IOPS.
  5. Intel NIC with DPDK and SR-IOV support.
  6. VDS, vswitches, and port groups pre-configured.
  7. Encrypted disks are optional.
  8. Plan for 5 GB of storage per Edge per year.
  9. Linux (such as an Ubuntu VM) with genisoimage and tree installed.
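
On an Ubuntu build VM, the two utilities from item 9 can be installed as follows:

  sudo apt-get update
  sudo apt-get install -y genisoimage tree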

Cloud-init

The Orchestrator, Controller (here referred to as the Gateway), and Edge all boot from a mounted ISO file for their initial bootstrap configuration. This ISO file uses the Linux cloud-init method: you create cloud-init YAML files, then use genisoimage to build an ISO for each VM.
Note: Throughout the Cloud-init section, the Controller is referred to as the Gateway, as this is the equivalent name for the component and what a user sees in the Orchestrator User Interface.
  1. Create YAML files

    On the Linux machine, create 3 YAML files each for the Orchestrator and the Gateway: meta-data, network-config, and user-data. Create 2 files for the Edge: user-data and meta-data. Arrange them in a directory structure matching the following:

    Note: The acronym VCG matches to the Gateway (Controller) component; VCE matches to the Edge; and VCO matches to the Orchestrator.
    isos/
    |
    |--VCG
    |  |--meta-data
    |  |--network-config
    |  |--user-data
    |  
    |--VCE
    |  |--meta-data
    |  |--user-data
    |
    |--VCO
    |  |--meta-data
    |  |--network-config
    |  |--user-data
    1. Orchestrator (VCO) Example:

      1. user-data
        #cloud-config
        hostname: vco-1
        password: Velocloud123
        chpasswd: {expire: False}
        ssh_pwauth: True
        velocloud:
          fips_mode: compliant
        vco:
          super_users:
            list: |
              [email protected]:Velocloud123
            remove_default_users: False
      2. network-config
        version: 2
        ethernets:
           eth0:
              addresses:
                 - 10.10.10.10/28
              routes:
                 - to: 0.0.0.0/0
                   via: 10.10.10.1
                   metric: 0
              nameservers:
                addresses: [10.10.10.1]
           eth1:
              addresses:
                 - 172.16.1.18/28
              routes:
                - to: 10.20.0.0/16
                  via: 172.16.1.17
                  metric: 0
                - to: 172.16.0.0/12
                  via: 172.16.1.17
                  metric: 0
                - to: 192.168.0.0/16
                  via: 172.16.1.17
                  metric: 0
      3. meta-data
        instance-id: vco-1
        local-hostname: vco-1
    2. Gateway (VCG) Example:

      1. user-data
        #cloud-config
        hostname: vcc-1
        password: Velocloud123
        chpasswd: {expire: False}
        ssh_pwauth: True
        velocloud:
          fips_mode: compliant
      2. network-config
        version: 2
        ethernets:
          eth0:
            addresses:
              - 1.1.1.10/28
            routes:
              - to: 0.0.0.0/0
                via: 1.1.1.1
                metric: 0
            nameservers:
              addresses: [1.1.1.1]
          eth1:
            addresses:
              - 172.16.1.10/28
            routes:
              - to: 0.0.0.0/0
                via: 172.16.1.1
                metric: 5
              - to: 172.16.1.16/28
                via: 172.16.1.1
                metric: 0
      3. meta-data
        instance-id: vcc-1
        local-hostname: vcc-1
    3. Edge (VCE) Example
      1. user-data
        #cloud-config
        hostname: vce
        password: Velocloud123
        chpasswd: {expire: False}
        ssh_pwauth: True
      2. meta-data
        instance-id: vce
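    Note: Optionally, sanity-check each YAML file's syntax before generating the ISOs. This is a quick sketch assuming python3 with PyYAML is available on the build VM; run it from within each component's directory:
      python3 -c 'import yaml; yaml.safe_load(open("user-data"))' && echo user-data OK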
  2. Generate ISO
    Example:
    cd isos
    cd VCO
    genisoimage -output cdrom.iso -volid cidata -joliet -rock user-data meta-data network-config
    cd ..
    cd VCG
    genisoimage -output cdrom.iso -volid cidata -joliet -rock user-data meta-data network-config
    cd ..
    cd VCE
    genisoimage -output cdrom.iso -volid cidata -joliet -rock user-data meta-data
    cd ..
    
    The directory structure should look as follows with cdrom.iso added to each directory:
    Note: The acronym VCG matches to the Gateway component; VCE matches to the Edge; and VCO matches to the Orchestrator.
    isos/
    |
    |--VCG
    |  |--cdrom.iso
    |  |--meta-data
    |  |--network-config
    |  |--user-data
    |  
    |--VCE
    |  |--cdrom.iso
    |  |--meta-data
    |  |--user-data
    |
    |--VCO
    |  |--cdrom.iso
    |  |--meta-data
    |  |--network-config
    |  |--user-data
  3. Upload the 3 ISO files to the ESXi datastore
    Figure 11. Upload location for ISO files
  4. Orchestrator Deployment - ESXi
    Note: Skip to the next section if using vCenter.
    1. Create/Register VM
    2. Deploy a virtual machine from an OVF or OVA file, Next
    3. Provide appropriate name, browse or drag OVA file, Next
    4. Select storage, Next
    5. Select appropriate port group for the Orchestrator eth0 interface (VCO also refers to the Orchestrator)
    6. Select thick provision
    7. Uncheck power on automatically
    8. Next
    9. Finish
    10. Wait for the OVF import to complete
    11. Select the newly deployed VM
    12. Edit
    13. Add Network Adapter
    14. Select appropriate port group for Orchestrator eth1 interface
    15. Expand CD/DVD Drive 1
    16. Change to Datastore ISO file
    17. Browse to location of uploaded ISO files, select Orchestrator (or VCO) ISO, Select
    18. Check Connect at Power On, and Connect
    19. Save
    20. Power On VM
    21. Wait for FIPS mode to be enabled; the VM will automatically reboot after 5-10 minutes
    22. Login with vcadmin/Velocloud123
    23. Configure NTP at /etc/ntp.conf (only if private NTP needed, default uses 0.ubuntu.pool.ntp.org)
      1. sudo vi /etc/ntp.conf
      2. pool 10.10.10.1 iburst
      3. escape
      4. :wq!
      5. sudo service ntp restart
      6. sudo ntpq -c peers
  5. Orchestrator Deployment – vCenter
    1. Actions, Deploy OVF Template.
    2. Choose VCO OVA file from local, Next
    3. Provide appropriate name, select appropriate location
    4. Select appropriate compute resource, Next
    5. Review
    6. Next
    7. Select storage, thin provision
    8. Select appropriate port group for the Orchestrator (or, VCO) eth0 interface, Next
    9. Template values, do not use, leave all empty, Next
    10. Finish
    11. Wait for OVF Deploy/Import tasks to complete
    12. Select VM
    13. Actions > Edit Settings
    14. ADD NEW DEVICE
    15. Network Adapter
    16. New Network: select appropriate port group for the Orchestrator (or, VCO) eth1 interface
    17. Expand CD/DVD Drive
    18. Change to Datastore ISO file
    19. Browse to ISO, select it, OK
    20. Check box for Connect at Power on
    21. OK
    22. Power On VM
    23. Wait for FIPS mode to be enabled; the VM will automatically reboot after 5-10 minutes
    24. Login with vcadmin/Velocloud123
    25. Configure NTP at /etc/ntp.conf (only if private NTP needed, default uses 0.ubuntu.pool.ntp.org)
      • sudo vi /etc/ntp.conf
      • pool 10.10.10.1 iburst
      • escape
      • :wq!
      • sudo service ntp restart
      • sudo ntpq -c peers
  6. Orchestrator Initial Configuration
    1. Login to the Orchestrator at https://1.1.10.10/ui/operator with credentials [email protected]/vcadm!n or [email protected]/Velocloud123; if ISO boot was used, the customized credentials from the ISO file are also available.
    2. From the top navigation Select Orchestrator.
    3. From the left navigation select System Properties.
    4. Select network.public.address.
    5. Replace localhost in value field with 1.1.10.10, Save Changes.
    6. Search the top search bar for websocket.
    7. Select network.portal.websocket.address.
    8. Value: 172.16.1.18, Save Changes.
    9. Search top search bar for source.
    10. Select gateway.activation.validate.source.
    11. Change value to False, Save Changes.
    Note: To configure the Orchestrator in a Disaster Recovery (DR) topology, see Configure Orchestrator Disaster Recovery.
  7. Gateway Staging
    1. From the top navigation select Gateway Management
    2. From the left navigation select Gateway Pools
    3. Select Default Pool
    4. Change Partner Gateway Handoff to: Allow
    5. SAVE CHANGES
    6. From the left navigation select Gateways
    7. New Gateway
      1. Name: vcc-1
      2. IPv4 address: 1.1.1.10
      3. Service State: In Service
      4. Gateway Pool: Default Pool
      5. Create
      6. Select the newly created Gateway vcc-1
      7. Gateway Roles: Partner Gateway
      8. Partner Gateway (Advanced Hand Off) Details, delete all default Static Routes
      9. Static Routes +ADD
      10. Subnet: 10.10.10.10/32, Cost: 1, Encrypt: Check the box, Handoff: VLAN
      11. Subnet: 1.1.10.10/32, Cost: 1, Encrypt: Check the box, Handoff: VLAN
      12. SAVE CHANGES
      13. Save the activation key from the yellow bar at the top
  8. Gateway Deployment - vCenter
    Note: For a Gateway ESXi install, follow a similar process as outlined in Step 4 for the ESXi Orchestrator.
    1. Deploy OVF Template.
    2. Choose VCG OVA file from local. (VCG is the equivalent for Gateway)
    3. Provide appropriate name, select appropriate location.
    4. Select appropriate compute resource.
    5. Review
    6. Select storage, thick provision.
    7. Select appropriate port group for Outside, Inside.
    8. Template values: do not use, leave all empty.
    9. Next
    10. Finish
    11. Wait for OVF Deploy/Import tasks to complete.
    12. Select VM.
    13. Actions, Edit Settings
    14. Expand CD/DVD Drive 1.
    15. Change to DataStore ISO file.
    16. Browse to ISO, select, OK.
    17. Check Connect at Power On, OK.
    18. Power On VM.
    19. Wait for FIPS mode to enable, the instance will reboot automatically.
    20. FIPS is complete when login says Linux 4.15.0-1113-fips x86_64.
      vcg-1 login: vcadmin
      Password:
      Welcome to Velocloud OS (GNU/Linux 4.15.0-1113-fips x86_64)
    21. Login with vcadmin/Velocloud123
    22. Configure NTP at /etc/ntp.conf (only if private NTP needed, default uses 0.ubuntu.pool.ntp.org).
      pool 10.10.10.1 iburst
      sudo service ntp restart
      sudo ntpq -c peers
      
    23. Ensure offset is less than or equal to 15ms using:
      sudo ntpq -p
    24. Configure vc_blocked_subnets
      1. cd /opt/vc/etc
      2. sudo vi vc_blocked_subnets.json
      3. delete all lines
      4. insert : {}
      5. press the escape key
      6. :wq!
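      Note: Equivalently, the vi edit above can be done in a single command that leaves the file containing only an empty JSON object:
        echo '{}' | sudo tee /opt/vc/etc/vc_blocked_subnets.json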
    25. Configure gatewayd
      1. cd /etc/config
      2. sudo vi gatewayd
      3. in the wan field (4th line) and geneve field (5th line) substitute eth1 in place of eth0
        "wan": ["eth1"],
        "geneve": ["eth1"],
      4. press the escape key
      5. :wq!
    26. sudo reboot
    27. Manually activate
      1. login with vcadmin/Velocloud123
      2. sudo su
      3. cd /opt/vc/bin
      4. activate.py -I -s 172.16.1.18 <activation key>
      5. a successful activation would output the following:
        Activation successful, VCO overridden back to 172.16.1.18
  9. Orchestrator Operator Profile
    Note: For an ESXi installation, follow a process similar to Step 4a for the Orchestrator.
    1. Login to the Orchestrator User Interface as an Operator.
      Note: Make sure the URL includes /operator at the end.
    2. Top navigation: Edge Image Management
    3. Left navigation: Software
    4. Upload Image
    5. Browse to or drop an appropriate Edge image, for example:

      edge-imageupdate-VC_VMDK-x86_64-5.2.0.2-83770177-R5202-20230725-GA-6969b39047

    6. Done
    7. Left navigation: Application Maps
    8. Upload
    9. Browse to or drop appropriate application map, for example: r5200_app_map.json
    10. Edit (do not save)
    11. Rename to version, for example: R5200
    12. Save Changes
    13. Top navigation: Administration
    14. Left navigation: Operator Profiles
    15. NEW
      1. Name: R5200-PUBLIC
      2. Create
      3. Select the newly created R5200-PUBLIC
      4. Ensure Orchestrator Address: IP address
      5. Ensure Orchestrator IPv4 Address: 1.1.10.10
      6. Application Map Assignment, JSON File: R5200
      7. Software Version: Toggled On
      8. Version: 5.2.0.2
      9. Save Changes
    16. Left Navigation: Operator Profiles
    17. Check the box next to R5200-PUBLIC
    18. DUPLICATE
    19. Name: R5200-PRIVATE, CREATE
    20. Select R5200-PRIVATE
    21. Change Orchestrator IPv4 address to: 10.10.10.10
    22. SAVE CHANGES
  10. Customer Configuration
    1. Login to the Orchestrator User Interface as an Operator.
      Note: Make sure the URL includes /operator at the end.
    2. Top navigation: Customers & Partners
    3. Left navigation: Manage Customers
    4. +NEW CUSTOMER
      1. Company Name: cust1
      2. Check the SASE Support Access box
      3. Check the SASE User Management Access box
      4. Next
      5. Administrative Account
      6. Username: [email protected]
      7. Password: Velocloud123
      8. Next
      9. Services
      10. Check the SD-WAN box
      11. Gateway Pool: Default Pool
      12. Check the Allow Customer to Manage Software box
      13. Software Image, +ADD
      14. Select R5200-PRIVATE and R5200-PUBLIC, click the right pointing arrow to move these to the Selected Images
      15. Select the radio button to the right of R5200-PRIVATE
      16. Done
      17. SD-WAN, Edge Licensing, +ADD
      18. Search for the term ‘POC’
      19. Select POC | 10Gbps |, click the right pointing arrow to move this license to Selected Edge Licenses
      20. Save
      21. Check the box for Feature Access, Stateful Firewall
      22. ADD CUSTOMER
    5. Select the newly created ‘cust1’ check box to the left
    6. At top, select EDIT CUSTOMER SYSTEM SETTINGS
    7. Left navigation, select Customer Configuration
    8. Scroll to the bottom and then expand SD-WAN Settings
    9. Check the box for Distributed Cost Calculation
    10. Check the box for Use NSD Policy
    11. Save Changes
  11. Partner Gateway Configuration
    1. Top navigation: customer selected, Global Settings
    2. Left navigation: Customer Configuration
    3. Expand Gateway Pool
    4. Toggle Partner Hand Off to ON
    5. Configure Hand Off radio button, change to Per Gateway
    6. Confirm with OK
    7. Under vcc-1 – Global Segment, click Configure BFD & BGP
    8. BGP toggle to ON
    9. Customer ASN: 65004
    10. Hand Off Interface, Local IP address: 172.16.1.10/28
    11. Use for Private Tunnels: Check the box
    12. Advertise Local IP address via BGP: Check the box
    13. BGP, Neighbor IP: 172.16.1.1
    14. Neighbor ASN: 65003
    15. Secure BGP Routes: Check the box
    16. BGP Inbound Filters, +ADD
    17. Match type: prefix for IPv4, match value: 0.0.0.0/0, Exact Match: No, Action Type: Deny
    18. BGP Outbound Filters, +ADD
    19. Match Type: prefix for IPv4, match value 100.100.0.0/16, Exact Match = No, Action Type: Permit, Action Set: Community: 777:777, Community Additive: Activated
    20. UPDATE
    21. SAVE CHANGES
  12. Quick Start Profile Configuration
    1. Dark blue navigation bar, select Global Settings Dropdown, select SD-WAN
    2. Top navigation bar, select Configure
    3. Left Navigation bar, select Profiles
    4. Select Quick Start Profile
    5. Select Firewall tab
    6. Expand Edge Access
    7. Console Access: Allow
    8. SAVE CHANGES
    9. Select Device Tab
    10. Expand Connectivity > Interfaces, X out all Edge models not in use
    11. Select GE1
    12. Change Capability to Routed
    13. Disable Underlay Accounting
    14. Disable Enable WAN Link
    15. IPv4 Settings > Addressing Type, change to Static
    16. Check the Advertise box
    17. Uncheck the NAT Direct Traffic box
    18. SAVE
    19. Repeat steps 11 through 18 for GE2
    20. Select GE2, uncheck the Interface Enabled box
    21. SAVE
    22. SAVE CHANGES
    23. Expand VLAN, select 1-Corporate, DELETE
    24. Confirm DELETE
    25. Under Connectivity > Interfaces, select GE3
    26. IPv4 Settings > Addressing Type: Static, then SAVE
    27. Select GE4
    28. IPv4 Settings > Addressing Type: Static
    29. WAN Link: User Defined
    30. SAVE
    31. Under VPN Services, Expand Gateway Handoff Assignment, +SELECT GATEWAYS
    32. Check vcc-1, UPDATE
    33. Under VPN Services, toggle Cloud VPN to ON
    34. SAVE CHANGES
  13. Hub Edge Profile
    1. Configure > Profiles
    2. Check the box next to Quick Start Profile
    3. DUPLICATE
    4. Name: Hub, CREATE
  14. Hub Edge Staging
    1. Top navigation: Configure
    2. Left navigation: Edges
    3. +ADD EDGE
    4. Name: dc1-vce-1
    5. Model: select as appropriate
    6. Profile: Hub
    7. Edge License: POC
    8. NEXT
    9. ADD EDGE
    10. Expand Loopback Interfaces, +ADD
    11. Interface ID: 1
    12. IPv4 Address: 100.100.1.1, ADD
    13. Expand Management Traffic
    14. Source Interface: LO1
    15. Expand Interfaces section
    16. Select GE1
    17. IPv4 settings, enter IP address: 10.10.11.1
    18. CIDR Prefix: 24
    19. SAVE
    20. Select GE3
    21. IPv4 settings, enter IP address: 10.10.10.20
    22. CIDR Prefix: 28
    23. Gateway: 10.10.10.17
    24. SAVE
    25. Select GE4
    26. IPv4 settings, enter IP address: 10.10.11.20
    27. CIDR Prefix: 28
    28. Gateway: 10.10.11.17
    29. SAVE
    30. WAN Link Configuration section, +ADD USER DEFINED WAN LINK
    31. Link Type: Private
    32. Name: Private
    33. SD-WAN Service Reachable: Checked
    34. SD-WAN Service Reachable Backup: Uncheck the box
    35. Interfaces: GE4
    36. ADD LINK
    37. SAVE CHANGES
    38. Copy the activation key from the yellow bar at the top
  15. Hub Edge BGP Configuration
    Caution: You must apply proper route maps to prevent loops to the Orchestrator.
    Important: The following steps are mandatory and must be done prior to activating the Hub Edge.
    1. Configure > Edges
    2. Select dc1-vce-1
    3. BGP: Check Override, Toggle ON, Expand
    4. Local ASN: 65004
    5. Filter List: +ADD
    6. Filter Name: outbound
    7. Filter Rules: 1 Rule, click on the link
    8. Match type: Prefix for IPv4
    9. Match value: 10.10.10.10/32
    10. Exact Match: check the box
    11. Action Type: Deny
    12. Check first rule, CLONE
    13. Match Type: Prefix for IPv4
    14. Match Value: 1.1.10.10/32
    15. Exact Match: check the box
    16. Action Type: Deny
    17. +ADD
    18. Match Type: Prefix for IPv4
    19. Match Value: 100.100.0.0/16
    20. Exact Match: do not check the box
    21. Action Type: Deny
    22. +ADD
    23. Match Type: Prefix for IPv4
    24. Match Value: 0.0.0.0/0
    25. Exact Match: do not check the box
    26. Action Type: Permit
    27. Action Set: Community 777:777
    28. Community Additive: Activated checked
    29. SUBMIT
    30. RESULT:
      Figure 12. Outbound Filter Rules
    31. Filter List, +ADD
    32. Filter Name: inbound
    33. Filter Rules: 1 Rule, click on the link
    34. Match type: Community
    35. Match Value: 777:777
    36. Exact Match: No
    37. Action Type: Deny
    38. +ADD
    39. Match Type: Prefix for IPv4
    40. Match Value: 0.0.0.0/0
    41. Exact Match: check the box
    42. Action Type: Permit
    43. +ADD
    44. Match Type: Prefix for IPv4
    45. Match Value: 0.0.0.0/0
    46. Exact Match: do not check the box
    47. Action Type: Deny
    48. SUBMIT
    49. RESULT:
      Figure 13. Inbound Filter Rules
    50. Neighbors, +ADD
    51. Neighbor IP: 10.10.11.2
    52. ASN: 65003
    53. Inbound Filter: inbound
    54. Outbound Filter: outbound
    55. SAVE CHANGES
  16. Hub Edge Deployment
    Note: For an ESXi installation, follow a process similar to Step 4a for the Orchestrator. Mount an ISO instead of using the OVA wizard template and use the set_wan_config.sh command to set WAN IP addresses.
    1. Login to vCenter
    2. Deploy OVF Template.
    3. Choose Edge (that is, VCE) OVA file from local
    4. Provide an appropriate name, select an appropriate location
    5. Select an appropriate compute resource
    6. Review
    7. Select storage, thick provision
    8. Select appropriate port groups:
      1. GE1: dc1-lan-pg
      2. GE2: dc1-lan-pg
      3. GE3: dc1-inet-pg
    9. Next
    10. Template values:
      1. Orchestrator address: 10.10.10.10
      2. Activation code: (paste the key copied during Hub Edge Staging)
      3. Check the box to ignore Orchestrator certificate validation errors
      4. Default Users Password: Velocloud123
      5. DNS1: 10.10.10.17
      6. DNS2: 10.10.11.17
      7. GE3 interface IPv4 allocation: STATIC
      8. GE3 interface IPv4: 10.10.10.20
      9. GE3 interface IPv4 subnet mask: 255.255.255.240
      10. GE3 interface default gateway: 10.10.10.17
      11. GE4 interface IPv4 allocation: STATIC
      12. GE4 interface IPv4: 10.10.11.20
      13. GE4 interface IPv4 subnet mask: 255.255.255.240
      14. GE4 interface default gateway: 10.10.11.17
      15. NEXT
    11. FINISH
    12. Wait for OVF Deploy/Import tasks to complete
    13. Power On VM
    14. Login with vcadmin/Velocloud123
    15. If needed (such as with an ESXi deployment), manually activate:
      1. activate.py -i -s 1.1.10.10 <activation key>
  17. Spoke Edge Profile
    1. Dark blue navigation bar, select Global Settings Dropdown, select SD-WAN
    2. Top navigation bar, select Configure
    3. Left Navigation bar, select Profiles
    4. Select the checkbox next to Hub Profile
    5. Duplicate
    6. Name: Spoke-Hybrid
    7. Create
    8. Select Device tab
    9. Expand Connectivity > Interfaces, select appropriate SD-WAN Edge models
    10. Under VPN Services, check the box next to Branch to Hub Site (permanent VPN): Enable Branch to Hubs
    11. On the right, select Edit Hubs
    12. Select the check box next to dc1-vce-1, click the arrow that moves it rightward on the Hubs list
    13. UPDATE HUBS
    14. SAVE CHANGES
    15. Configure > Profiles
    16. Check the box next to Spoke-Hybrid
    17. DUPLICATE
    18. Name: Spoke-Public
    19. CREATE
    20. On the Configure > Device tab
    21. Expand Interfaces
    22. Select GE4
    23. Change Addressing Type to DHCP
    24. Change WAN Link to Auto-Detect
    25. SAVE
    26. SAVE CHANGES
    27. Configure > Profiles
    28. Check the box next to Spoke-Hybrid
    29. DUPLICATE
    30. Name: Spoke-Private
    31. CREATE
    32. Back on the Configure > Device tab
    33. Expand the Interfaces section
    34. Select GE3
    35. Change Addressing Type: DHCP
    36. SAVE
    37. SAVE CHANGES
  18. Public Spoke Staging
    1. Configure > Edges
    2. +ADD EDGE
    3. Name: s1
    4. Model: select as appropriate
    5. Profile: Spoke-Public
    6. Edge License: POC
    7. NEXT
    8. ADD EDGE
    9. Expand Loopback Interfaces
    10. +ADD
    11. Interface ID: 1
    12. IPv4 address: 100.100.11.1
    13. ADD
    14. Expand the Management Traffic section
    15. Change Source Interface: Lo1
    16. Expand the Interfaces section
    17. Select GE1
    18. IPv4 Settings, IP Address: 10.20.11.1
    19. CIDR Prefix: 24
    20. SAVE
    21. Expand the Interfaces section
    22. Select GE3
    23. IPv4 settings, IP address: 1.2.2.2
    24. CIDR Prefix: 28
    25. Gateway: 1.2.2.1
    26. SAVE
    27. SAVE CHANGES
    28. Copy the activation key from the yellow bar at the top
    29. Configure > Edges
    30. Check the box next to s1
    31. MORE
    32. Click Assign Operator Profile
    33. Change to R5200-PUBLIC
    34. ASSIGN
  19. Spoke Deployment
    Note: For an ESXi installation, follow a process similar to Step 4a for the Orchestrator. Mount an ISO instead of using the OVA wizard template and use the set_wan_config.sh command to set WAN IP addresses.
    1. Login to vCenter
    2. Deploy OVF Template
    3. Choose the Edge (VCE) OVA file from local
    4. Provide an appropriate name, select an appropriate location
    5. Select an appropriate compute resource
    6. Review
    7. Select storage, thick provision
    8. Select appropriate port groups
      1. GE1: s1-lan-pg
      2. GE2: s1-lan-pg
      3. GE3: inet-s1-pg
    9. Next
    10. Template Values:
      1. Orchestrator address: 1.1.10.10
      2. Activation code: (paste the key copied during Public Spoke Staging)
      3. Check the box to ignore VCO (Orchestrator) certificate validation errors
      4. Default Users Password: Velocloud123
      5. DNS1: 1.2.2.1
      6. DNS2: 1.2.2.1
      7. GE3 interface IPv4 allocation: STATIC
      8. GE3 interface IPv4: 1.2.2.2
      9. GE3 interface IPv4 subnet mask: 255.255.255.240
      10. GE3 interface default gateway: 1.2.2.1
      11. NEXT
    11. FINISH
    12. Wait for the OVF Deploy/Import tasks to complete
    13. Actions > Edit Settings
    14. Uncheck the box for Connected for Network Adapter 4
    15. Power On VM
    16. Login with vcadmin/Velocloud123
    17. If needed, manually activate as follows:
      1. activate.py -i -s 1.1.10.10 <activation key>