VMware Cloud on AWS uses NSX to create and manage SDDC networks. NSX provides an agile software-defined infrastructure to build cloud-native application environments.

This guide explains how to manage your SDDC networks using NSX and the VMware Cloud Console Networking and Security Dashboard.

You can access NSX Manager in your SDDC in several ways. See Open NSX Manager for details.

SDDC Network Topology

When you create an SDDC, it includes a Management Network. Single-host trial SDDCs also include a small Compute Network. You specify the Management Network CIDR block when you create the SDDC. It cannot be changed after the SDDC has been created. See Deploy an SDDC from the VMC Console for details. The Management Network has two subnets:
Appliance Subnet
This subnet is used by the vCenter, NSX and HCX appliances in the SDDC. Other appliance-based services that you add to the SDDC also connect to this subnet.
Infrastructure Subnet
This subnet is used by the ESXi hosts in the SDDC.

The Compute Network includes an arbitrary number of logical segments for your workload VMs; see VMware Configuration Maximums for current limits on logical segments. In a Single-Host SDDC starter configuration, we create a compute network with a single routed segment. In SDDC configurations that have more hosts, you must create compute network segments to meet your needs.
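Beyond the VMware Cloud Console UI, you can create segments programmatically with the NSX Policy API exposed through the SDDC's NSX reverse proxy. The following Python sketch is a minimal, hypothetical example: the proxy URL, token, segment name, and CIDR are placeholders, and the policy paths should be verified against your own SDDC.

```python
import requests

# Placeholders: copy the real values from the VMware Cloud Console.
NSX_PROXY = "https://<nsx-reverse-proxy-url>"   # SDDC's NSX reverse proxy base URL
TOKEN = "<csp-access-token>"                    # CSP API token for your org

# A routed segment attached to the default Compute Gateway (CGW).
segment = {
    "display_name": "app-segment-1",
    "subnets": [{
        "gateway_address": "192.168.2.1/24",            # segment gateway, CIDR form
        "dhcp_ranges": ["192.168.2.50-192.168.2.200"],  # optional DHCP pool
    }],
}

# In a VMware Cloud on AWS SDDC, compute segments are children of the
# default "cgw" Tier-1 gateway in the NSX policy tree.
resp = requests.put(
    f"{NSX_PROXY}/policy/api/v1/infra/tier-1s/cgw/segments/app-segment-1",
    headers={"csp-auth-token": TOKEN},
    json=segment,
)
resp.raise_for_status()
```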

An SDDC network has two notional tiers:
  • Tier-0 handles north-south traffic (traffic leaving or entering the SDDC, or between the Management and Compute gateways). In the default configuration, each SDDC has a single Tier-0 router. If an SDDC is a member of an SDDC group, you can reconfigure the SDDC to add Tier-0 routers that handle SDDC group traffic. See Configure a Multi-Edge SDDC With Traffic Groups.
  • Tier-1 handles east-west traffic (traffic between routed network segments within the SDDC). In the default configuration, each SDDC has a single Tier-1 router. You can create and configure additional Tier-1 gateways if you need them (a minimal API sketch follows Figure 1). See Add a Custom Tier-1 Gateway to a VMware Cloud on AWS SDDC.
Figure 1. SDDC Network Topology
A diagram of an SDDC network connected to an on-premises network over a VPN and AWS Direct Connect.
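As noted above, you can add custom Tier-1 gateways when the default CGW is not enough, for example to isolate a group of segments behind their own gateway. Here is a hedged sketch, reusing the placeholder proxy URL and token from the segment example:

```python
import requests

NSX_PROXY = "https://<nsx-reverse-proxy-url>"   # placeholder
TOKEN = "<csp-access-token>"                    # placeholder

# Custom Tier-1 gateways in VMware Cloud on AWS are typically created as
# ROUTED, NATTED, or ISOLATED; ROUTED is sketched here.
tier1 = {"display_name": "custom-t1", "type": "ROUTED"}

resp = requests.put(
    f"{NSX_PROXY}/policy/api/v1/infra/tier-1s/custom-t1",
    headers={"csp-auth-token": TOKEN},
    json=tier1,
)
resp.raise_for_status()
```

Segments can then be attached to the new gateway by setting their connectivity_path to /infra/tier-1s/custom-t1.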
NSX Edge Appliance

The default NSX Edge Appliance is implemented as a pair of VMs that run in active/standby mode. This appliance provides the platform on which the default Tier-0 and Tier-1 routers run, along with IPsec VPN connections and their BGP routing machinery. All north-south traffic goes through the default Tier-0 router. To avoid sending east-west traffic through the appliance, a distributed component of each Tier-1 router runs on every ESXi host and handles routing for destinations within the SDDC.

If you need additional bandwidth for traffic routed to SDDC group members, to a Direct Connect Gateway attached to an SDDC group, to an HCX Service Mesh, or to the Connected VPC, you can reconfigure your SDDC to be Multi-Edge by creating traffic groups, each of which adds a Tier-0 router. See Configure a Multi-Edge SDDC With Traffic Groups for details.
Note:

VPN traffic, as well as DX traffic to a private VIF, must pass through the default Tier-0 router and cannot be routed to a non-default traffic group. In addition, because NAT rules always run on the default Tier-0 router, additional Tier-0 routers cannot handle traffic subject to NAT rules. This includes traffic to and from the SDDC's native Internet connection. It also includes traffic to the Amazon S3 service, which uses a NAT rule and must go through the default Tier-0 router.

Management Gateway (MGW)
The MGW is a Tier-1 router that handles routing and firewalling for vCenter and other management appliances running in the SDDC. Management gateway firewall rules run on the MGW and control access to management VMs. In a new SDDC, the Internet connection is labeled Not Connected in the Overview tab and remains blocked until you create a Management Gateway Firewall rule allowing access from a trusted source. See Add or Modify Management Gateway Firewall Rules.
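For example, the following hypothetical sketch creates a rule that allows HTTPS to vCenter from a trusted on-premises CIDR. The predefined VCENTER group, HTTPS service, and mgw scope label shown here follow the usual VMC policy tree, but verify them (and the placeholder URL and token) against your own SDDC.

```python
import requests

NSX_PROXY = "https://<nsx-reverse-proxy-url>"   # placeholder
TOKEN = "<csp-access-token>"                    # placeholder

rule = {
    "display_name": "allow-vcenter-https",
    "action": "ALLOW",
    "source_groups": ["203.0.113.0/24"],   # trusted source CIDR (example value)
    "destination_groups": ["/infra/domains/mgw/groups/VCENTER"],
    "services": ["/infra/services/HTTPS"],
    "scope": ["/infra/labels/mgw"],
    "sequence_number": 10,
}

resp = requests.put(
    f"{NSX_PROXY}/policy/api/v1/infra/domains/mgw/gateway-policies/default"
    "/rules/allow-vcenter-https",
    headers={"csp-auth-token": TOKEN},
    json=rule,
)
resp.raise_for_status()
```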
Compute Gateway (CGW)
The CGW is a Tier-1 router that handles network traffic for workload VMs connected to routed compute network segments. Compute gateway firewall rules, along with NAT rules, run on the Tier-0 router. In the default configuration, these rules block all traffic to and from compute network segments (see Configure Compute Gateway Networking and Security).
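Compute gateway rules follow the same pattern but live in the cgw domain and must be applied to one or more uplink labels. Here is a brief variant of the sketch above, with the same placeholder caveats (the cgw-all label is an assumption to verify in your SDDC):

```python
import requests

rule = {
    "display_name": "allow-outbound-https",
    "action": "ALLOW",
    "source_groups": ["ANY"],
    "destination_groups": ["ANY"],
    "services": ["/infra/services/HTTPS"],
    "scope": ["/infra/labels/cgw-all"],   # apply on all CGW uplinks (assumed label)
    "sequence_number": 20,
}

requests.put(
    "https://<nsx-reverse-proxy-url>/policy/api/v1/infra/domains/cgw"
    "/gateway-policies/default/rules/allow-outbound-https",
    headers={"csp-auth-token": "<csp-access-token>"},
    json=rule,
).raise_for_status()
```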

Routing Between Your SDDC and the Connected VPC

When you create an SDDC, we pre-allocate 17 AWS Elastic Network Interfaces (ENIs) in the selected VPC owned by the AWS account you specify at SDDC creation. We assign each of these ENIs an IP address from the subnet you specify at SDDC creation, then attach each of the hosts in the SDDC cluster Cluster-1 to one of these ENIs. An additional IP address is assigned to the ENI where the active NSX Edge Appliance is running.

This configuration, known as the Connected VPC, supports network traffic between VMs in the SDDC and native AWS instances and services with addresses in the Connected VPC's primary CIDR block. When you create or delete routed network segments connected to the default CGW, the main route table is automatically updated. When Managed Prefix List mode is enabled for the Connected VPC, the main route table and any custom route tables to which you have added the managed prefix list are also updated.

The Connected VPC (or SERVICES) interface is used for all traffic to destinations within the Connected VPC's primary CIDR. In the default configuration, AWS services or instances that communicate with the SDDC must be in subnets associated with the main route table of the Connected VPC. If AWS Managed Prefix List Mode is enabled (see Enable AWS Managed Prefix List Mode for the Connected Amazon VPC), you can manually add the managed prefix list to any custom route table in the Connected VPC when you want AWS services and instances using those route tables to communicate with SDDC workloads over the SERVICES interface.
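On the AWS side, adding the managed prefix list to a custom route table is an ordinary route whose destination is the prefix list and whose target is the ENI backing the active NSX Edge. Here is a hedged boto3 sketch; the region, route table ID, prefix list ID, and ENI ID are all placeholders you would read from the Connected VPC page of the VMware Cloud Console:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # example region

# Route all SDDC prefixes (maintained by VMware in the managed prefix
# list) to the ENI of the active NSX Edge appliance.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",            # custom route table (placeholder)
    DestinationPrefixListId="pl-0123456789abcdef0",  # VMC-managed prefix list (placeholder)
    NetworkInterfaceId="eni-0123456789abcdef0",      # active edge ENI (placeholder)
)
```

With Managed Prefix List Mode enabled, VMware keeps the prefix list entries and the routes that reference it current as segments change or the edge fails over.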

When the NSX Edge appliance in your SDDC is moved to another host, either to recover from a failure or during SDDC maintenance, the IP address allocated to the appliance is moved to the new ENI (on the new host), and the main route table, along with any custom route tables that use a Managed Prefix List, is updated to reflect the change. If you have replaced the main route table or are using a custom route table but have not enabled Managed Prefix List Mode, that update fails and network traffic can no longer be routed between SDDC networks and the Connected VPC. See View Connected VPC Information and Troubleshoot Problems With the Connected VPC for more about how to use the VMware Cloud Console to see the details of your Connected VPC.

VMware Cloud on AWS provides several facilities to help you aggregate routes to the Connected VPC, other VPCs, and your VMware Managed Transit Gateways. See Enable AWS Managed Prefix List Mode for the Connected Amazon VPC.

For an in-depth discussion of SDDC network architecture and the AWS network objects that support it, read the VMware Cloud Tech Zone article VMware Cloud on AWS: SDDC Network Architecture.

Reserved Network Addresses

Certain IPv4 address ranges are unavailable for use in SDDC compute networks. Several are used internally by SDDC network components. Most are reserved by convention on other networks as well.
Table 1. Reserved Address Ranges in SDDC Networks
  • 10.0.0.0/15
  • 172.31.0.0/16
These ranges are reserved within the SDDC management subnet, but can be used in your on-premises networks or SDDC compute network segments.
  • 169.254.0.0/19
  • 169.254.64.0/24
  • 169.254.101.0/30
  • 169.254.105.0/24
  • 169.254.106.0/24
Per RFC 3927, all of 169.254.0.0/16 is a link-local range that cannot be routed beyond a single subnet. However, with the exception of these CIDR blocks, you can use 169.254.0.0/16 addresses for your virtual tunnel interfaces. See Create a Route-Based VPN.
  • 192.168.1.0/24
This is the default compute segment CIDR for a single-host starter SDDC and is not reserved in other configurations. The first (192.168.1.0) and last (192.168.1.255) addresses in this CIDR block are reserved as the network address and broadcast address of the segment. The remaining 254 addresses are available for use.

Note: SDDC versions 1.20 and earlier also reserve 100.64.0.0/16 for carrier-grade NAT per RFC 6598. Avoid using addresses in this range in SDDC versions earlier than 1.22. See VMware Knowledge Base article 76022 for a detailed breakdown of how older SDDC networks use the 100.64.0.0/16 address range and VMware Knowledge Base article 92322 for more information about reserved address range changes in SDDC version 1.22.
SDDC networks also observe the conventions for the special-use IPv4 address ranges enumerated in RFC 3330.
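A quick, standard-library way to check a proposed compute segment CIDR against the reserved ranges in Table 1 (the function name is just for illustration):

```python
import ipaddress

RESERVED = [
    "10.0.0.0/15", "172.31.0.0/16",
    "169.254.0.0/19", "169.254.64.0/24", "169.254.101.0/30",
    "169.254.105.0/24", "169.254.106.0/24",
]

def conflicts(cidr: str) -> list[str]:
    """Return the reserved ranges that overlap the proposed CIDR."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [r for r in RESERVED if net.overlaps(ipaddress.ip_network(r))]

print(conflicts("10.1.0.0/24"))      # ['10.0.0.0/15'] -- overlaps a reserved range
print(conflicts("192.168.10.0/24"))  # [] -- safe to use
```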

Multicast Support in SDDC Networks

In SDDC networks, layer 2 multicast traffic is treated as broadcast traffic on the network segment where the traffic originates. It is not routed beyond that segment. Layer 2 multicast traffic optimization features such as IGMP snooping are not supported. Layer 3 multicast (such as Protocol Independent Multicast) is not supported in VMware Cloud on AWS.

Connecting Your On-Premises SDDC to Your Cloud SDDC

To connect your on-premises data center to your VMware Cloud on AWS SDDC, you can create a VPN that uses the public Internet, a VPN that uses AWS Direct Connect, or use AWS Direct Connect alone. You can also take advantage of SDDC groups to use VMware Transit Connect and an AWS Direct Connect Gateway to provide centralized connectivity between a group of VMware Cloud on AWS SDDCs and an on-premises SDDC. See Creating and Managing SDDC Deployment Groups.
Figure 2. SDDC Connections to your On-Premises Data Center
A diagram showing how an SDDC network can connect to an on-premises network over a VPN, HCX, and AWS Direct Connect.
Layer 3 (L3) VPN
A layer 3 VPN provides a secure connection between your on-premises data center and your VMware Cloud on AWS SDDC over the public Internet or AWS Direct Connect. These IPsec VPNs can be either route-based or policy-based. For the on-premises endpoint, you can use any device that supports the settings listed in the IPsec VPN Settings Reference.
Layer 2 (L2) VPN
A layer 2 VPN provides an extended, or stretched, network with a single IP address space that spans your on-premises data center and your SDDC and enables hot or cold migration of on-premises workloads to the SDDC. You can create only a single L2VPN tunnel in any SDDC. The on-premises end of the tunnel requires NSX. If you are not already using NSX in your on-premises data center, you can download a standalone NSX Edge appliance to provide the required functionality. An L2 VPN can connect your on-premises data center to the SDDC over the public Internet or AWS Direct Connect.
AWS Direct Connect (DX)
AWS Direct Connect is a service provided by AWS that creates a high-speed, low-latency connection between your on-premises data center and AWS services. When you configure AWS Direct Connect, VPNs can route traffic over DX instead of the public Internet. Because DX implements Border Gateway Protocol (BGP) routing, use of an L3VPN for the management network is optional when you configure DX. DX traffic is not encrypted. If you want to encrypt that traffic, configure an IPsec VPN that uses DX and a private IP address.
VMware HCX
VMware HCX, a multi-cloud app mobility solution, is provided free to all SDDCs and facilitates migration of workload VMs between your on-premises data center and your SDDC. For more information about installing, configuring, and using HCX, see the Hybrid Migration with HCX Checklist.

MTU Considerations for Internal and External Traffic

Network traffic internal to the SDDC (including traffic to and from the Connected VPC) supports an MTU of up to 8900 bytes. Traffic to the MGW is generally limited to 1500 bytes because management appliance interfaces use an MTU of 1500. Other MTU defaults are listed in VMware Configuration Maximums. The following guidelines apply to MTU values throughout the SDDC network:
  • SDDC group traffic and DX traffic share the same interface, so both must use the lower MTU value (8500 bytes) when both connections are in use.
  • All VM NICs and interfaces on the same segment need to have the same MTU.
  • MTU can differ between segments as long as the endpoints support PMTUD and any firewalls in the path permit ICMP traffic.
  • The layer 3 (IP) MTU must be less than or equal to the underlying layer 2 connection's maximum supported packet size (MTU) minus any protocol overhead. In VMware Cloud on AWS, the underlying layer 2 connection is the NSX segment, which supports layer 3 packets with an MTU of up to 8900 bytes. A short worked example follows this list.
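Here is a short worked example of the rule in the last bullet, in Python; the IPsec overhead figure is illustrative rather than a VMC-specific value:

```python
# Layer 3 MTU = layer 2 MTU minus per-packet protocol overhead, capped by
# the maximum layer 3 packet size an NSX segment supports.
NSX_SEGMENT_MAX_L3_MTU = 8900   # bytes, per the guideline above

def effective_l3_mtu(l2_mtu: int, protocol_overhead: int = 0) -> int:
    """Largest IP packet that fits on the link after protocol overhead."""
    return min(l2_mtu - protocol_overhead, NSX_SEGMENT_MAX_L3_MTU)

print(effective_l3_mtu(8900))      # 8900 -- plain intra-SDDC traffic
print(effective_l3_mtu(1500, 73))  # 1427 -- e.g. an IPsec tunnel over a 1500-byte path
```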

Understanding SDDC Network Performance

For a detailed discussion of SDDC network performance, please read the VMware Cloud Tech Zone Designlet Understanding VMware Cloud on AWS Network Performance.