You can create a VPN tunnel between the PCG and a remote endpoint by following this workflow. These instructions are specific to workload VMs managed in the Native Cloud Enforced Mode.

Prerequisites

  • In AWS: Verify that you have deployed a VPC in the Native Cloud Enforced Mode. This must be a Transit or Self-managed VPC. VPN is not supported for Compute VPCs in AWS.
  • In Microsoft Azure: Verify that you have deployed a VNet in the Native Cloud Enforced Mode. You can use both Transit and Compute VNets.
  • Verify that the remote endpoint is peered with the PCG and has route-based IPSec VPN and BGP capabilities.

Procedure

  1. In your public cloud, find the NSX-assigned local endpoint for the PCG and assign a public IP address to it if necessary:
    1. Go to your PCG instance in the public cloud and navigate to Tags.
    2. Note the IP address in the value field of the tag nsx.local_endpoint_ip.
    3. (Optional) If your VPN tunnel requires a public IP, for example, if you want to set up a VPN to another public cloud or to the on-prem NSX-T Data Center deployment:
      1. Navigate to the uplink interface of the PCG instance.
      2. Attach a public IP address to the nsx.local_endpoint_ip IP address that you noted in step b.
    4. (Optional) If you have an HA pair of PCG instances, repeat steps a and b and attach a public IP address if necessary, as described in step c.
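In AWS, the lookup and public IP association above can be scripted with the AWS CLI. This is a sketch only; the instance and ENI IDs are hypothetical placeholders that you must replace with the values from your deployment.

```shell
# Placeholders: replace with your PCG instance and uplink ENI IDs.
PCG_INSTANCE_ID="i-0123456789abcdef0"
UPLINK_ENI_ID="eni-0123456789abcdef0"

# Step 1b: read the NSX-assigned local endpoint IP from the instance tags.
LOCAL_ENDPOINT_IP=$(aws ec2 describe-tags \
  --filters "Name=resource-id,Values=$PCG_INSTANCE_ID" \
            "Name=key,Values=nsx.local_endpoint_ip" \
  --query 'Tags[0].Value' --output text)

# Step 1c: allocate an Elastic IP and associate it with that private IP
# on the uplink interface (only needed if the tunnel requires a public IP).
ALLOCATION_ID=$(aws ec2 allocate-address --query AllocationId --output text)
aws ec2 associate-address \
  --allocation-id "$ALLOCATION_ID" \
  --network-interface-id "$UPLINK_ENI_ID" \
  --private-ip-address "$LOCAL_ENDPOINT_IP"
```

If you have an HA pair of PCG instances, run the same commands against the second instance and its uplink ENI.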
  2. In NSX Manager, enable IPSec VPN for the PCG, which appears as a tier-0 gateway named like cloud-t0-vpc/vnet-<vpc/vnet-id>, and create route-based IPSec sessions between this tier-0 gateway's endpoint and the remote IP address of the desired VPN peer. See Add an IPSec VPN Service for other details.
    1. Go to Networking > VPN > VPN Services > Add Service > IPSec. Provide the following details:
      Option Description
      Name Enter a descriptive name for the VPN service, for example <VPC-ID>-AWS_VPN or <VNet-ID>-AZURE_VPN.
      Tier0/Tier1 Gateway Select the tier-0 gateway for the PCG in your public cloud.
    2. Go to Networking > VPN > Local Endpoints > Add Local Endpoint. Provide the following information and see Add Local Endpoints for other details:
      Note: If you have an HA pair of PCG instances, create a local endpoint for each instance using the corresponding local endpoint IP address attached to it in the public cloud.
      Option Description
      Name Enter a descriptive name for the local endpoint, for example <VPC-ID>-PCG-preferred-LE or <VNET-ID>-PCG-preferred-LE
      VPN Service Select the VPN service for the PCG's tier-0 gateway that you created in step 2a.
      IP Address Enter the value of the PCG's local endpoint IP address that you noted in step 1b.
    3. Go to Networking > VPN > IPSec Sessions > Add IPSec Session > Route Based. Provide the following information and see Add a Route-Based IPSec Session for other details:
      Note: If you are creating a VPN tunnel between PCGs deployed in a VPC and PCGs deployed in a VNet, you must create a tunnel for each PCG's local endpoint in the VPC and the remote IP address of the PCG in the VNet, and conversely from the PCGs in the VNet to the remote IP address of PCGs in the VPC. You must create a separate tunnel for the active and standby PCGs. This results in a full mesh of IPSec Sessions between the two public clouds.
      Option Description
      Name Enter a descriptive name for the IPSec session, for example, <VPC-ID>-PCG1-to-remote_edge.
      VPN Service Select the VPN service you created in step 2a.
      Local Endpoint Select the local endpoint you created in step 2b.
      Remote IP Enter the public IP address of the remote peer with which you are creating the VPN tunnel.
      Note: Remote IP can be a private IP address if you are able to reach the private IP address, for example, using DirectConnect or ExpressRoute.
      Tunnel Interface Enter the tunnel interface in a CIDR format. The same subnet must be used for the remote peer to establish the IPSec session.
  3. Set up BGP neighbors on the IPSec VPN tunnel interface that you established in step 2. See Configure BGP for more details.
    1. Navigate to Networking > Tier-0 Gateways
    2. Select the auto-created tier-0 gateway for which you created the IPSec session and click Edit.
    3. Click the number or icon next to BGP Neighbors under the BGP section and provide the following details:
      Option Description
      IP Address Use the IP address of the remote VTI configured on the tunnel interface in the IPSec session for the VPN peer.
      Remote AS Number This number must match the AS number of the remote peer.
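As an illustration of how the Tunnel Interface from step 2c and the BGP neighbor IP Address fit together: both ends of a route-based session share one small tunnel subnet, and each end peers with the other end's VTI address. The 169.254.x.x values below are illustrative picks, not required values.

```shell
# Illustrative VTI addressing for one route-based IPSec session (/30 subnet).
TUNNEL_SUBNET="169.254.77.0/30"
LOCAL_VTI="169.254.77.1/30"   # Tunnel Interface entered on the PCG side (step 2c)
REMOTE_VTI="169.254.77.2"     # remote peer's VTI; the BGP neighbor IP Address (step 3c)

echo "PCG tunnel interface: $LOCAL_VTI"
echo "BGP neighbor (remote VTI): $REMOTE_VTI"
```

The remote peer mirrors this: its tunnel interface is 169.254.77.2/30 and its BGP neighbor is 169.254.77.1.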

  4. Important: This step is only for NSX-T Data Center 3.0.0. Skip it if you are using NSX-T Data Center 3.0.1.
    If you are using Microsoft Azure, after you have configured VPN and BGP in NSX Manager, enable IP forwarding on the uplink interface of the PCG instance. If you have an active and a standby PCG instance for HA, enable IP forwarding on both PCG instances.
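With the Azure CLI, IP forwarding can be enabled on a NIC as in the following sketch; the resource group and NIC names are hypothetical placeholders for your PCG's uplink NIC.

```shell
# Enable IP forwarding on the PCG's uplink NIC (names are placeholders).
az network nic update \
  --resource-group my-nsx-rg \
  --name pcg-uplink-nic \
  --ip-forwarding true
# Repeat for the standby PCG's uplink NIC if you have an HA pair.
```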
  5. Advertise the prefixes you want to use for the VPN using the Redistribution Profile. Do the following:

    1. Important: This step is only for NSX-T Data Center 3.0.0. Skip it if you are using NSX-T Data Center 3.0.1.
      Add a static route for the CIDR of the VPC/VNet onboarded with the Native Cloud Enforced Mode to point to the uplink IP address of the tier-0 gateway, that is, the PCG. See Configure a Static Route for instructions. If you have a PCG pair for HA, set up next hops to each PCG's uplink IP address.
    2. Add a prefix list for the VPC/VNet CIDR onboarded in the Native Cloud Enforced Mode and add it as an Out Filter in BGP neighbor configuration. See Create an IP Prefix List for instructions.
    3. Set up a route redistribution profile, enabling static routes and selecting the prefix list you created in step b as the route filter for the VPC/VNet CIDRs.
  6. In your public cloud:
    1. Go to the routing table of the subnet where you have your workload VMs.
      Note: Do not use the routing table of the PCG's uplink or management subnets.
    2. Add the tag nsx.managed = true to the routing table.
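The tag can also be applied from the command line. The route table ID, resource group, and route table name below are placeholders; use the routing table of your workload subnet, not of the PCG's uplink or management subnets.

```shell
# AWS: tag the workload subnet's route table (ID is a placeholder).
aws ec2 create-tags \
  --resources rtb-0123456789abcdef0 \
  --tags Key=nsx.managed,Value=true

# Azure: tag the route table associated with the workload subnet.
az network route-table update \
  --resource-group my-nsx-rg \
  --name workload-route-table \
  --tags nsx.managed=true
```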

  7. Important: This step is only for NSX-T Data Center 3.0.0. Skip it if you are using NSX-T Data Center 3.0.1.
    NSX Cloud creates a default-snat rule for the tier-0 gateway (PCG) with source 0.0.0.0/0 and destination Any. Because of this rule, all traffic from your VMs in the Native Cloud Enforced Mode appears to originate from the PCG's uplink IP address. If you want to see the true source of your traffic, do the following:
    1. Go to Networking > NAT and disable the default-snat rule for the tier-0 gateway (PCG).
    2. Create a new SNAT rule with the following values if you have VMs in the NSX Enforced Mode to continue providing SNAT for such VMs:
      Option Description
      Source CIDR of the VPC/VNet in the NSX Enforced Mode.
      Destination Any
      Translated The same IP address in Translated that is in the default-snat rule.
      Apply To Select the PCG's uplink interface.
      Do not edit the default-snat rule. Any changes to it are reverted in case of a failover.

Results

Verify that routes are created in the managed routing table for all IP prefixes advertised by the remote endpoint with next hop set to the PCG's uplink IP address.