You can install multiple NSX Edge services gateway virtual appliances in a data center. Each NSX Edge virtual appliance can have a total of ten uplink and internal network interfaces. The internal interfaces connect to secured port groups and act as the gateway for all protected virtual machines in the port group. The subnet assigned to the internal interface can be a publicly routed IP address space or a NATed/routed RFC 1918 private space. Firewall rules and other NSX Edge services are enforced on traffic between interfaces.

About this task

Uplink interfaces of an ESG connect to uplink port groups that have access to a shared corporate network or a service that provides access layer networking.

The following list describes feature support by interface type (internal and uplink) on an ESG.

  • DHCP: Not supported on uplink interfaces.

  • DNS Forwarder: Not supported on uplink interfaces.

  • HA: Not supported on uplink interfaces; requires at least one internal interface.

  • SSL VPN: The listener IP must belong to an uplink interface.

  • IPsec VPN: The local site IP must belong to an uplink interface.

  • L2 VPN: Only internal networks can be stretched.
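
If you script ESG configuration, a small lookup table such as the following can help catch feature placement mistakes before a configuration is applied. This is a minimal sketch; the feature and interface-type labels are illustrative and are not NSX API identifiers.

    # Feature support by ESG interface type, mirroring the list above.
    # The keys and values are illustrative labels, not NSX API identifiers.
    FEATURE_INTERFACE_RULES = {
        "dhcp": {"internal"},            # not supported on uplink interfaces
        "dns_forwarder": {"internal"},   # not supported on uplink interfaces
        "ha": {"internal"},              # requires at least one internal interface
        "ssl_vpn_listener": {"uplink"},  # listener IP must be on an uplink interface
        "ipsec_local_site": {"uplink"},  # local site IP must be on an uplink interface
        "l2_vpn_stretch": {"internal"},  # only internal networks can be stretched
    }

    def interface_type_allowed(feature, interface_type):
        """Return True if the feature can be bound to the given interface type."""
        return interface_type in FEATURE_INTERFACE_RULES.get(feature, set())

    # For example, an SSL VPN listener IP on an internal interface is not allowed.
    assert not interface_type_allowed("ssl_vpn_listener", "internal")
    assert interface_type_allowed("dhcp", "internal")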

The following figure shows a sample topology with an ESG's uplink interface connected to physical infrastructure through the vSphere distributed switch and the ESG's internal interface connected to an NSX logical router through an NSX logical transit switch.

Multiple external IP addresses can be configured for load balancing, site-to-site VPN, and NAT services.

Important:

If you enable high availability on an NSX Edge in a cross-vCenter NSX environment, both the active and standby NSX Edge appliances must reside within the same vCenter Server. If you migrate one of the members of an NSX Edge HA pair to a different vCenter Server system, the two HA appliances will no longer operate as an HA pair, and you might experience traffic disruption.

Prerequisites

  • You must have been assigned the Enterprise Administrator or NSX Administrator role.

  • Verify that the resource pool has enough capacity for the edge services gateway (ESG) virtual appliance to be deployed. See System Requirements for NSX.

  • Verify that the host clusters on which the NSX Edge appliance will be installed are prepared for NSX. See Prepare Host Clusters for NSX in the NSX Installation Guide.

Procedure

  1. In vCenter, navigate to Home > Networking & Security > NSX Edges and click the Add icon.
  2. Select Edge Services Gateway and type a name for the device.

    This name appears in your vCenter inventory. The name should be unique across all ESGs within a single tenant.

    Optionally, you can also enter a hostname. This name appears in the CLI. If you do not specify a hostname, the automatically generated Edge ID is displayed in the CLI.

    Optionally, you can enter a description and tenant and enable high availability.


  3. Type and re-type a password for the ESG.

    The password must be at least 12 characters long and must satisfy at least 3 of the following 4 rules (a validation sketch follows this list):

    • At least one upper case letter

    • At least one lower case letter

    • At least one number

    • At least one special character
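
    The following Python sketch is one way to pre-check a candidate password against the rules above before entering it in the wizard. It is illustrative only and mirrors only the rules listed here; the appliance may enforce additional checks.

      import string

      def meets_esg_password_policy(password):
          """Check the ESG password rules: at least 12 characters and at
          least 3 of the 4 character classes listed above."""
          if len(password) < 12:
              return False
          classes = [
              any(c.isupper() for c in password),             # upper case letter
              any(c.islower() for c in password),             # lower case letter
              any(c.isdigit() for c in password),             # number
              any(c in string.punctuation for c in password)  # special character
          ]
          return sum(classes) >= 3

      print(meets_esg_password_policy("ExamplePassw0rd!"))  # True
      print(meets_esg_password_policy("short1!"))           # False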

  4. (Optional) Enable SSH, high availability, automatic rule generation, and FIPS mode, and set the log level.

    If you do not enable automatic rule generation, you must manually add firewall, NAT, and routing configuration to allow control traffic for certain NSX Edge services, such as load balancing and VPN. Automatic rule generation does not create rules for data-channel traffic.

    By default, SSH and high availability are disabled, and automatic rule generation is enabled.

    By default, FIPS mode is disabled.

    By default, the log level is emergency.


  5. Select the size of the NSX Edge instance based on your system resources.

    The Large NSX Edge has more CPU, memory, and disk space than the Compact NSX Edge, and supports a larger number of concurrent SSL VPN-Plus users. The X-Large NSX Edge is suited for environments that have a load balancer with millions of concurrent sessions. The Quad Large NSX Edge is recommended for environments that require high throughput and a high connection rate.

    See System Requirements for NSX.

  6. Create an edge appliance.

    Enter the settings for the ESG virtual appliance that will be added to your vCenter inventory. If you do not add an appliance when you install NSX Edge, NSX Edge remains in an offline mode until you add an appliance.

    If you enabled HA, you can add two appliances. If you add a single appliance, NSX Edge replicates its configuration for the standby appliance and ensures that the two HA NSX Edge virtual machines are not on the same ESX host even after you use DRS and vMotion (unless you manually vMotion them to the same host). For HA to work correctly, you must deploy both appliances on a shared datastore.
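
    If you prefer to automate this step, an edge can also be created through the NSX Manager REST API (POST /api/4.0/edges). The following Python sketch assumes the NSX-v API; the manager address, credentials, and the vCenter object IDs (datacenter, resource pool, datastore) are placeholders, and the minimal XML body should be verified against the NSX API Guide for your version.

      import requests

      NSX_MANAGER = "https://nsxmgr.example.com"  # placeholder NSX Manager address
      AUTH = ("admin", "changeme")                # placeholder credentials

      # Minimal edge definition. Element names follow the NSX-v API guide, but the
      # IDs below (datacenter, resource pool, datastore) are placeholders.
      edge_xml = """
      <edge>
        <datacenterMoid>datacenter-2</datacenterMoid>
        <name>esg-example-01</name>
        <type>gatewayServices</type>
        <appliances>
          <applianceSize>compact</applianceSize>
          <appliance>
            <resourcePoolId>resgroup-20</resourcePoolId>
            <datastoreId>datastore-23</datastoreId>
          </appliance>
        </appliances>
      </edge>
      """

      resp = requests.post(
          f"{NSX_MANAGER}/api/4.0/edges",
          data=edge_xml,
          headers={"Content-Type": "application/xml"},
          auth=AUTH,
          verify=False,  # lab only; use a trusted certificate in production
      )
      resp.raise_for_status()
      # The ID of the new edge (for example, edge-5) is returned in the Location header.
      print(resp.headers.get("Location"))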


  7. Select Deploy NSX Edge to add the Edge in a deployed mode. You must configure appliances and interfaces for the Edge before it can be deployed.
  8. Configure interfaces.

    On ESGs, both IPv4 and IPv6 addresses are supported.

    You must add at least one internal interface for HA to work.

    An interface can have multiple non-overlapping subnets.

    If you enter more than one IP address for an interface, you can select the primary IP address. An interface can have one primary and multiple secondary IP addresses. NSX Edge uses the primary IP address as the source address for locally generated traffic, for example, remote syslog and operator-initiated pings.

    You must add an IP address to an interface before using it on any feature configuration.

    Optionally, you can enter the MAC address for the interface.

    If you later change the MAC address using an API call, you must redeploy the edge.

    If HA is enabled, you can optionally enter two management IP addresses in CIDR format. Heartbeats between the two NSX Edge HA virtual machines are communicated through these management IP addresses. The management IP addresses must be in the same L2 segment and subnet and must be able to communicate with each other (a validation sketch follows this step).

    Optionally, you can modify the MTU.

    Enable proxy ARP if you want to allow the ESG to answer ARP requests intended for other machines. This is useful, for example, when you have the same subnet on both sides of a WAN connection.

    Enable ICMP redirect to convey routing information to hosts.

    Enable reverse path filtering to verify the reachability of the source address in packets being forwarded. In enabled mode, the packet must be received on the interface that the router would use to forward the return packet. In loose mode, the source address must appear in the routing table.

    Configure fence parameters if you want to reuse IP and MAC addresses across different fenced environments. For example, in a cloud management platform (CMP), fencing allows you to run several cloud instances simultaneously with the same IP and MAC addresses, completely isolated or “fenced.”


    The following example shows two interfaces, one attaching the ESG to the outside world through an uplink portgroup on a vSphere distributed switch and the other attaching the ESG to a logical transit switch to which a distributed logical router is also attached.
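
    Because an interface can carry multiple non-overlapping subnets and the HA management addresses must share a subnet, it can be useful to validate an addressing plan before entering it in the wizard. The following sketch uses Python's standard ipaddress module; the interface names and addresses are placeholders.

      import ipaddress
      from itertools import combinations

      # Placeholder addressing plan: interface name -> list of CIDR subnets.
      interface_subnets = {
          "uplink":   ["192.0.2.0/24"],
          "internal": ["10.10.10.0/29", "10.10.20.0/29"],
      }

      # Subnets configured on a single interface must not overlap.
      for name, cidrs in interface_subnets.items():
          nets = [ipaddress.ip_network(c) for c in cidrs]
          for a, b in combinations(nets, 2):
              if a.overlaps(b):
                  raise ValueError(f"Overlapping subnets {a} and {b} on interface {name}")

      # HA management IP addresses (CIDR format) must be in the same subnet.
      mgmt_ips = ["169.254.1.1/30", "169.254.1.2/30"]
      mgmt_networks = {ipaddress.ip_interface(ip).network for ip in mgmt_ips}
      if len(mgmt_networks) != 1:
          raise ValueError("HA management IP addresses must be in the same subnet")

      print("Addressing plan looks consistent.")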

  9. Configure a default gateway.

    You can edit the MTU value, but it cannot be more than the configured MTU on the interface.


  10. Configure the firewall policy, logging, and HA parameters.
    Caution:

    If you do not configure the firewall policy, the default policy is set to deny all traffic.

    By default, logs are enabled on all new NSX Edge appliances. The default logging level is NOTICE. If logs are stored locally on the ESG, excessive logging can affect the performance of your NSX Edge. For this reason, it is recommended that you configure remote syslog servers and forward all logs to a centralized collector for analysis and monitoring (a configuration sketch follows this step).

    If you enabled high availability, complete the HA section. By default, HA automatically chooses an internal interface and assigns link-local IP addresses. NSX Edge supports two virtual machines for high availability, both of which are kept up to date with user configurations. If a heartbeat failure occurs on the primary virtual machine, the secondary virtual machine state is changed to active, so one NSX Edge virtual machine is always active on the network. NSX Edge replicates the configuration of the primary appliance for the standby appliance and ensures that the two HA NSX Edge virtual machines are not on the same ESX host even after you use DRS and vMotion. The two virtual machines are deployed on vCenter in the same resource pool and datastore as the appliance you configured. Link-local IP addresses are assigned to the HA virtual machines so that they can communicate with each other.

    Select the internal interface for which to configure HA parameters. If you select ANY for the interface but there are no internal interfaces configured, the UI displays an error. Two Edge appliances are created, but because there is no internal interface configured, the new Edge remains in standby and HA is disabled. After an internal interface is configured, HA is enabled on the Edge appliance.

    Type the period, in seconds, after which the backup appliance declares the primary appliance inactive and takes over if it does not receive a heartbeat signal from the primary appliance. The default interval is 15 seconds.

    Optionally, you can enter two management IP addresses in CIDR format to override the link-local IP addresses assigned to the HA virtual machines. Ensure that the management IP addresses do not overlap with the IP addresses used for any other interface and do not interfere with traffic routing. Do not use an IP address that exists somewhere else on your network, even if that network is not directly attached to the NSX Edge.
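
    As one way to follow the remote syslog recommendation above, log forwarding can also be configured through the NSX Manager REST API after the edge is deployed. The sketch below assumes the NSX-v endpoint PUT /api/4.0/edges/{edgeId}/syslog/config; the manager address, credentials, edge ID, and syslog server address are placeholders, and the XML body should be verified against the NSX API Guide for your version.

      import requests

      NSX_MANAGER = "https://nsxmgr.example.com"  # placeholder NSX Manager address
      AUTH = ("admin", "changeme")                # placeholder credentials
      EDGE_ID = "edge-1"                          # placeholder edge ID

      # Forward ESG logs to a remote syslog collector (placeholder address).
      syslog_xml = """
      <syslog>
        <enabled>true</enabled>
        <protocol>udp</protocol>
        <serverAddresses>
          <ipAddress>192.0.2.50</ipAddress>
        </serverAddresses>
      </syslog>
      """

      resp = requests.put(
          f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/syslog/config",
          data=syslog_xml,
          headers={"Content-Type": "application/xml"},
          auth=AUTH,
          verify=False,  # lab only; use a trusted certificate in production
      )
      resp.raise_for_status()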


Results

After the ESG is deployed, go to the Hosts and Clusters view and open the console of the edge virtual appliance. From the console, make sure you can ping the connected interfaces.

What to do next

When you install an NSX Edge appliance, NSX enables automatic VM startup/shutdown on the host if vSphere HA is disabled on the cluster. If the appliance VMs are later migrated to other hosts in the cluster, the new hosts might not have automatic VM startup/shutdown enabled. For this reason, when you install NSX Edge appliances on clusters that have vSphere HA disabled, VMware recommends that you check all hosts in the cluster to make sure that automatic VM startup/shutdown is enabled; one way to script this check is shown below. See "Edit Virtual Machine Startup and Shutdown Settings" in vSphere Virtual Machine Administration.
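
One way to perform this check programmatically is with pyVmomi (the vSphere Python SDK). The sketch below is illustrative and assumes pyVmomi is installed and the placeholder connection details are replaced; it only reports each host's default automatic startup/shutdown setting.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; replace with your vCenter Server and credentials.
    ctx = ssl._create_unverified_context()  # lab only; use a trusted certificate in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            defaults = host.configManager.autoStartManager.config.defaults
            print(f"{host.name}: automatic VM startup/shutdown enabled = {defaults.enabled}")
    finally:
        Disconnect(si)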

Now you can configure routing to allow connectivity from external devices to your VMs.