Infrastructure (On-Premises)

For the on-premises environment, we used an existing lab testbed in our datacenter, which consisted of four HPE ProLiant DL380 Gen9 servers.

The physical hosts within this four-node cluster each contained two Intel Xeon E5-2683 v4 processors running at 2.10 GHz with 512 GB of RAM. Each processor had 16 cores and 32 logical threads with hyperthreading enabled (Figure 1).


Figure 1. Screenshot of an on-premises host server

Much like VMware Cloud on AWS, these four hosts contained local NVMe storage and were configured as a vSAN cluster with DRS enabled. While the two environments are not a 100% "apples-to-apples" comparison, the on-premises testbed gives us a reasonably close approximation.

Infrastructure (VMware Cloud on AWS)

For our database benchmarks in the cloud, we deployed a four-node software-defined data center (SDDC) from the VMware Cloud on AWS portal. We used the latest SDDC available at the time of testing, which was version 1.4.

The physical hosts within the SDDC each contained two Intel Xeon E5-2686 v4 processors running at 2.30 GHz with 512 GB of RAM. Each processor had 18 cores and 36 logical threads with hyperthreading enabled (Figure 2).


Figure 2. Screenshot of the VMware Cloud on AWS SDDC

In VMware Cloud on AWS, vSAN provides the storage, using the eight NVMe devices in each of the four servers. Each NVMe device had a capacity of 1.7 TB, so the entire cluster had 32 NVMe devices and a total raw capacity of approximately 40 TB (Figure 3). As part of the automatic deployment, management and workload datastores are created. The management datastore holds components such as the vCenter Server and NSX-related VMs. The workload datastore holds all of the SQL Server VMs that are created in, or migrated into, the environment.


Figure 3. Screenshot of workload datastore configuration

Figure 4 shows a four-node SDDC on AWS and its major components. We only had to deploy and configure the SQL workload load drivers and database VMs; the rest of the components were deployed and configured automatically as part of the provisioning from the VMware Cloud on AWS portal.


Figure 4. Cloud testbed configuration

Although beyond the scope of this paper, it should be noted that VMware Cloud on AWS includes tools to ease the transition to the cloud, notably Hybrid Linked Mode, which allowed us to easily vMotion VMs from our existing on-premises datacenter to VMware Cloud on AWS. For more information, see the VMware Cloud on AWS Getting Started documentation.

Database Benchmarks

We used two online transaction processing (OLTP) benchmark workloads to verify SQL Server performance: HammerDB (for small VMs) and CDB (for large VMs).

HammerDB is an open-source database load testing and benchmarking tool. It supports SQL Server, Oracle, and many other databases. It implements a workload derived from TPC-C and reports throughput in transactions per minute (TPM). The TPC-C specification has been an industry standard since 1992 and is widely considered the "gold standard" of OLTP workloads.

CDB (Cloud Database Benchmark) is a database schema and workload mix designed by Microsoft. The benchmark was designed with cloud computing in mind, but its database schema, data population, and transactions are intended to be broadly representative of the basic elements most commonly used in OLTP workloads. The benchmark driver reports throughput in transactions per second (TPS). Because the benchmark is relatively new, the workload was designed so that each simulated user/thread places a heavier load on the database. We were first introduced to this workload during our project to measure the performance of SQL Server 2017 on Linux using vSphere 6.5.

It is important to note that since these two benchmarks have different workloads and database schemas, the results are not directly comparable.
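For readers trying to align the two metrics, the unit relationship itself is trivial, but, as noted above, converting units does not make the results comparable, because the schemas and transaction mixes differ. The small sketch below illustrates only the unit conversion; the input figure is arbitrary and is not a measurement from this study.

```python
def tps_to_tpm(tps: float) -> float:
    """Convert transactions per second (CDB's metric) to
    transactions per minute (HammerDB's metric). Units only --
    this does NOT make results from the two benchmarks comparable."""
    return tps * 60.0

# Arbitrary illustrative value, not a result from this paper:
print(tps_to_tpm(250.0))  # 15000.0
```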

Virtual Machine Configurations

We installed Windows Server 2016 as the guest operating system (OS) for our workload VMs (both load-driving clients and database servers). SQL Server 2017 Enterprise Edition was the database engine used within all database server VMs. The SQL Server databases were built to 100 GB in size for both HammerDB and CDB. We adhered to the Best Practices Guide for Microsoft SQL Server and found no additional caveats specific to deploying SQL Server within VMware Cloud on AWS.

We configured all load driver VMs with 4 virtual CPUs (vCPUs) and 4 GB of virtual RAM; the HammerDB database servers with 8 vCPUs and 32 GB of RAM; and the CDB database servers with vCPUs equal to the number of cores of the physical host.

The VMs used the VMXNET3 virtual network adapter and paravirtual SCSI (PVSCSI) adapters. We assigned data and log disks to separate PVSCSI adapters.

Both benchmarks were run with an increasing number of simulated users until all of the database server VMs’ vCPUs were fully saturated. This represented the maximum throughput that the test environment could achieve.
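The scaling methodology above can be sketched as a simple driver loop: increase the simulated user count in steps, record throughput, and stop once the database VM's vCPUs are saturated. The `run_benchmark` and `avg_cpu_utilization` helpers below are hypothetical stand-ins (with synthetic curves), not part of the HammerDB or CDB kits; a real harness would invoke the benchmark driver and sample guest CPU counters.

```python
# Sketch of the user-scaling methodology. run_benchmark() and
# avg_cpu_utilization() are hypothetical placeholders returning
# synthetic values, not real HammerDB/CDB measurements.

def run_benchmark(users: int) -> float:
    """Placeholder: throughput (TPM or TPS) for a given user count.
    Synthetic curve that flattens as the system saturates."""
    return min(users * 1000.0, 50_000.0)

def avg_cpu_utilization(users: int) -> float:
    """Placeholder: average database-VM vCPU utilization (percent)."""
    return min(users * 2.0, 100.0)

def find_peak_throughput(start: int = 5, step: int = 5,
                         saturation_pct: float = 95.0) -> tuple[int, float]:
    """Scale users upward until vCPUs are saturated; return the user
    count and throughput at the peak observed along the way."""
    users, peak_users, peak_tput = start, start, 0.0
    while True:
        tput = run_benchmark(users)
        if tput > peak_tput:
            peak_users, peak_tput = users, tput
        if avg_cpu_utilization(users) >= saturation_pct:
            return peak_users, peak_tput
        users += step

if __name__ == "__main__":
    u, t = find_peak_throughput()
    print(f"peak throughput {t:.0f} at {u} users")
```

With the synthetic curves above, the loop stops at the first user count where utilization reaches the saturation threshold; a real harness would typically also re-run each point several times and average the results.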
