You can install multiple NSX Edge services gateway virtual appliances in a data center. Each NSX Edge virtual appliance can have a total of ten uplink and internal network interfaces. The internal interfaces connect to secured port groups and act as the gateway for all protected virtual machines in the port group. The subnet assigned to the internal interface can be a publicly routed IP address space or a NATed/routed RFC 1918 private space. Firewall rules and other NSX Edge services are enforced on traffic between interfaces.

Uplink interfaces of an ESG connect to uplink port groups that have access to a shared corporate network or a service that provides access layer networking.

The following list describes feature support by interface type (internal and uplink) on an ESG.

  • DHCP: Not supported on uplink interfaces.

  • DNS Forwarder: Not supported on uplink interfaces.

  • HA: Not supported on uplink interfaces; requires at least one internal interface.

  • SSL VPN: Listener IP must belong to an uplink interface.

  • IPsec VPN: Local site IP must belong to an uplink interface.

  • L2 VPN: Only internal networks can be stretched.

The following figure shows a sample topology. The Edge Services Gateway uplink interface is connected to the physical infrastructure through the vSphere distributed switch. The Edge Services Gateway internal interface is connected to a logical router through a logical transit switch.

Multiple external IP addresses can be configured for load balancing, site-to-site VPN, and NAT services.


If you enable high availability on an NSX Edge in a cross-vCenter NSX environment, both the active and standby NSX Edge appliances must reside within the same vCenter Server. If you migrate one of the members of an NSX Edge HA pair to a different vCenter Server system, the two HA appliances will no longer operate as an HA pair, and you might experience traffic disruption.


  • You must have been assigned the Enterprise Administrator or NSX Administrator role.

  • Verify that the resource pool has enough capacity for the Edge Services Gateway (ESG) virtual appliance to be deployed. See System Requirements for NSX Data Center for vSphere for the resources required for each size of appliance.

  • Verify that the host clusters on which the NSX Edge appliance will be installed are prepared for NSX. See "Prepare Host Clusters for NSX" in the NSX Installation Guide.

  • Determine if you want to enable DRS. If you create an Edge Services Gateway with HA, and DRS is enabled, DRS anti-affinity rules are created to prevent the appliances from being deployed on the same host. If DRS is not enabled at the time the appliances are created, the rules are not created and the appliances might be deployed on or moved to the same host.


  1. In vCenter, navigate to Home > Networking & Security > NSX Edges and click the Add icon.
  2. Select Edge Services Gateway and type a name for the device.

    This name appears in your vCenter inventory. The name must be unique across all ESGs within a single tenant.

    Optionally, you can also enter a hostname. This name appears in the CLI. If you do not enter a hostname, the Edge ID, which is created automatically, is displayed in the CLI.

    Optionally, you can enter a description and tenant and enable high availability.


  3. Enter and reenter a password for the ESG.

    The password must be at least 12 characters long and must satisfy at least three of the following four rules:

    • At least one uppercase letter

    • At least one lowercase letter

    • At least one number

    • At least one special character
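The complexity rule above (12 or more characters, any three of the four character classes) can be checked locally before the deployment wizard rejects a weak password. This is an illustrative sketch; `check_esg_password` is a hypothetical helper, not part of any NSX tooling.

```python
import re

def check_esg_password(password: str) -> bool:
    """Check a candidate ESG password against the documented policy:
    at least 12 characters, and at least 3 of the 4 character classes."""
    if len(password) < 12:
        return False
    classes = [
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[0-9]", password),         # at least one number
        re.search(r"[^A-Za-z0-9]", password),  # at least one special character
    ]
    return sum(1 for c in classes if c) >= 3

print(check_esg_password("ShortPw1!"))        # too short -> False
print(check_esg_password("longlowercasepw"))  # only one class -> False
print(check_esg_password("CorrectHorse42"))   # upper+lower+digit -> True
```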

  4. (Optional) Enable SSH, high availability, automatic rule generation, and FIPS mode, and set the log level.

    If you do not enable automatic rule generation, you must manually add firewall, NAT, and routing configurations to allow control traffic for certain NSX Edge services, including load balancing and VPN. Auto rule generation does not create rules for data-channel traffic.

    By default, SSH and high availability are disabled, and automatic rule generation is enabled.

    By default, FIPS mode is disabled.

    By default, the log level is emergency.


  5. Select the size of the NSX Edge appliance based on your requirements.

    The Large NSX Edge has more CPU, memory, and disk space than the Compact NSX Edge, and supports a larger number of concurrent SSL VPN-Plus users. The X-Large NSX Edge is suited for environments that have a load balancer with millions of concurrent sessions. The Quad Large NSX Edge is recommended for environments that require high throughput and a high connection rate.

    See System Requirements for NSX Data Center for vSphere for the resources required for each size of appliance.

  6. Create an Edge appliance.

    Enter the settings for the ESG virtual appliance you are adding to your vCenter inventory. If you do not add an appliance when you install NSX Edge, NSX Edge remains in offline mode until you add one.

    If you enabled HA, you can add two appliances. If you add a single appliance, NSX Edge replicates its configuration for the standby appliance. For HA to work correctly, you must deploy both appliances on a shared datastore.


  7. Select Deploy NSX Edge to add the Edge in a deployed mode. You must configure appliances and interfaces for the Edge before it can be deployed.
  8. Configure interfaces.

    On ESG, both IPv4 and IPv6 addresses are supported.

    You must add at least one internal interface for HA to work.

    An interface can have multiple non-overlapping subnets.

    If you enter more than one IP address for an interface, you can select the primary IP address. An interface can have one primary and multiple secondary IP addresses. NSX Edge considers the primary IP address as the source address for locally generated traffic, for example, remote syslog messages and operator-initiated pings.

    You must add an IP address to an interface before using it on any feature configuration.

    Optionally, you can enter the MAC address for the interface.

    If you change the MAC address later using an API call, you must redeploy the Edge for the change to take effect.

    If HA is enabled, you can optionally enter two management IP addresses in CIDR format. Heartbeats of the two NSX Edge HA virtual machines are communicated through these management IP addresses. The management IP addresses must be in the same L2 segment and subnet, and must be able to communicate with each other.

    Optionally, you can modify the MTU.

    Enable proxy ARP if you want to allow the ESG to answer ARP requests intended for other machines. This is useful, for example, when you have the same subnet on both sides of a WAN connection.

    Enable ICMP redirect to convey routing information to hosts.

    Enable reverse path filtering to verify the reachability of the source address in packets being forwarded. In enabled mode, the packet must be received on the interface that the router uses to forward the return packet. In loose mode, the source address must appear in the routing table.

    Configure fence parameters if you want to reuse IP and MAC addresses across different fenced environments. For example, in a cloud management platform (CMP), fencing allows you to run several cloud instances simultaneously with the same IP and MAC addresses isolated or "fenced".


    The following example shows two interfaces. One attaches the ESG to the outside world through an uplink port group on a vSphere distributed switch. The other attaches the ESG to a logical transit switch to which a distributed logical router is also attached.
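The subnet rules in this step (an interface can carry multiple subnets, but they must not overlap, and the first or selected address acts as primary) can be sketched with Python's standard `ipaddress` module. `validate_interface_subnets` is a hypothetical helper for pre-checking input, not an NSX API.

```python
import ipaddress

def validate_interface_subnets(cidrs):
    """An ESG interface can have multiple subnets, but they must not
    overlap. Returns the parsed networks, or raises ValueError."""
    nets = [ipaddress.ip_interface(c).network for c in cidrs]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            if a.overlaps(b):
                raise ValueError(f"overlapping subnets: {a} and {b}")
    return nets

# One primary and one secondary address on an internal interface:
addrs = ["192.168.10.1/24", "10.20.30.1/24"]
nets = validate_interface_subnets(addrs)
primary = addrs[0]  # primary address: source of locally generated traffic (syslog, pings)
print(primary, [str(n) for n in nets])
```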

  9. Configure a default gateway.

    You can edit the MTU value, but it cannot be more than the configured MTU on the interface.

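The MTU constraint above can be expressed as a one-line check (a hypothetical helper, not an NSX API):

```python
def validate_gateway_mtu(gateway_mtu: int, interface_mtu: int) -> int:
    """The default gateway's MTU can be lowered, but it must not exceed
    the MTU configured on the interface the gateway uses."""
    if gateway_mtu > interface_mtu:
        raise ValueError(
            f"gateway MTU {gateway_mtu} exceeds interface MTU {interface_mtu}")
    return gateway_mtu

# Default gateway MTU of 1500 on a jumbo-frame uplink is fine:
print(validate_gateway_mtu(1500, 1600))
```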

  10. Configure the firewall policy, logging, and HA parameters.

    If you do not configure the firewall policy, the default policy is set to deny all traffic.

    By default, logs are enabled on all new NSX Edge appliances. The default logging level is NOTICE. If logs are stored locally on the ESG, heavy logging can affect the performance of your NSX Edge. For this reason, it is recommended that you configure remote syslog servers and forward all logs to a centralized collector for analysis and monitoring.

    If you enabled high availability, complete the HA section. By default, HA automatically chooses an internal interface and automatically assigns link-local IP addresses.

    • Select the internal interface for which to configure HA parameters.


      If you select ANY for the interface but no internal interfaces are configured, the UI displays an error. Two Edge appliances are created, but because no internal interface is configured, the new NSX Edge remains in standby and HA is disabled. After an internal interface is configured, HA is enabled on the NSX Edge appliance.

    • Enter the period, in seconds, within which the backup appliance must receive a heartbeat signal from the primary appliance. If no heartbeat is received within this period, the primary appliance is considered inactive and the backup appliance takes over. The default interval is 15 seconds. Optionally, you can enter two management IP addresses in CIDR format to override the link-local IP addresses assigned to the HA virtual machines. Ensure that the management IP addresses do not overlap with the IP addresses used for any other interface and do not interfere with traffic routing. Do not use an IP address that exists elsewhere on your network, even if that network is not directly attached to the NSX Edge.

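The constraints on the optional HA management addresses (both appliances in one subnet, no overlap with any other interface subnet) can likewise be sketched with the standard `ipaddress` module. `validate_ha_management_ips` is an illustrative helper, not part of NSX.

```python
import ipaddress

def validate_ha_management_ips(ip1_cidr, ip2_cidr, other_interface_subnets=()):
    """Check the two optional HA management addresses: both must sit in
    the same subnet, and that subnet must not overlap any other
    interface subnet configured on the edge."""
    ip1 = ipaddress.ip_interface(ip1_cidr)
    ip2 = ipaddress.ip_interface(ip2_cidr)
    if ip1.network != ip2.network:
        raise ValueError("HA management IPs must be in the same subnet")
    for cidr in other_interface_subnets:
        if ip1.network.overlaps(ipaddress.ip_network(cidr)):
            raise ValueError(f"HA subnet overlaps interface subnet {cidr}")
    return ip1, ip2

# Link-local style pair, checked against one existing internal subnet:
ha = validate_ha_management_ips("169.254.1.1/30", "169.254.1.2/30",
                                ["192.168.10.0/24"])
print(ha[0].network)  # 169.254.1.0/30
```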


After the ESG is deployed, go to the Hosts and Clusters view and open the console of the NSX Edge virtual appliance. From the console, make sure you can ping the connected interfaces.

What to do next

When you install an NSX Edge appliance, NSX enables automatic VM startup/shutdown on the host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, VMware recommends that when you install NSX Edge appliances on clusters that have vSphere HA disabled, you should check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine Administration.

Now you can configure routing to allow connectivity from external devices to your VMs.
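The interactive procedure above can also be scripted against the NSX Data Center for vSphere REST API, which creates edges through the `/api/4.0/edges` endpoint. The sketch below only builds an illustrative XML body with Python's standard `xml.etree` module; element names are abbreviated and `build_esg_payload` is a hypothetical helper. Consult the NSX API Guide for the authoritative schema, including the datastore, resource pool, and interface elements a real request requires.

```python
import xml.etree.ElementTree as ET

def build_esg_payload(name, size="compact", tenant="default"):
    """Build a minimal, illustrative XML body for edge creation.
    A real POST to /api/4.0/edges needs additional placement and
    interface elements; see the NSX API Guide for the full schema."""
    edge = ET.Element("edge")
    ET.SubElement(edge, "name").text = name
    ET.SubElement(edge, "type").text = "gatewayServices"  # ESG, as opposed to a DLR
    ET.SubElement(edge, "tenant").text = tenant
    appliances = ET.SubElement(edge, "appliances")
    ET.SubElement(appliances, "applianceSize").text = size  # compact/large/quadlarge/xlarge
    return ET.tostring(edge, encoding="unicode")

payload = build_esg_payload("perimeter-esg-01", size="large")
print(payload)
```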