We used HammerDB version 2.23 to benchmark Oracle Database 12c.

Highlights

  • 35% application performance improvement (HammerDB transactions per minute) with vPMEM
  • Up to 4.5x increase in Oracle IOPS
  • 1.4x increase in DB reads, 3x in DB writes, and up to approximately 17x in DB Log writes
  • More than 57x decrease in Oracle DB operation (read/write) latency

Configuration

Table 6 shows the Oracle VM configuration, and Table 7 shows the HammerDB parameters used to test Oracle DB. Additional configuration details and parameters are given in Appendix A.

OS         CentOS 7.4
CPU        48 vCPUs
vRAM       128 GB (SGA size = 35 GB)
NVMe SSD   400 GB DB, 100 GB Logs
vPMEM      400 GB DB, 100 GB Logs

Table 6: Oracle VM configuration

 

Virtual Users   70
Warehouses      3500
Warm Up Time    5 minutes
Run Time        25 minutes
Test Profile    TPC-C

Table 7: HammerDB parameters for Oracle

Results

All the IOPS and latency numbers reported in Figure 10 and Table 8 were obtained with the iostat tool in Linux [13] [14]. More detailed iostat output is in Appendix A.

Figure 10 shows the breakdown of IOPS achieved by Oracle Database. The NVMe SSD bars show a 75:25 read-to-write ratio, typical of an OLTP workload. The most striking data point in Figure 10 is the 16.7x increase in DB Log writes per second, to almost 90K writes per second with vPMEM. Because of the low device latency, the Oracle log writer issues smaller log writes at a higher frequency, which in turn allows more read/write DB operations and translates into higher overall application throughput. The average log write size is 40 KB with NVMe SSD and 3.4 KB with vPMEM (from iostat).

Figure 10: HammerDB with Oracle IOPS breakdown
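The average write size iostat reports is simply throughput divided by operation rate (wkB/s ÷ w/s). A small sketch of that arithmetic, using hypothetical iostat rates chosen to be consistent with the write sizes quoted above:

```python
def avg_write_kb(wkb_per_s: float, writes_per_s: float) -> float:
    """Average write size in KB, derived from iostat's wkB/s and w/s columns."""
    return wkb_per_s / writes_per_s

# Hypothetical log-device rates (illustrative only, not measured values):
# vPMEM:    ~90K log writes/s at ~306 MB/s -> 3.4 KB per write
# NVMe SSD: ~5.4K log writes/s at ~216 MB/s -> 40 KB per write
vpmem_kb = avg_write_kb(306_000, 90_000)   # 3.4
nvme_kb  = avg_write_kb(216_000, 5_400)    # 40.0
```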

 

Table 8 shows the dramatic decrease in latency for DB read/write operations and Log writes. The minimum latency reported by the iostat tool is 10 microseconds (0.01 milliseconds).

Config     DB Reads (usecs)   DB Writes (usecs)   Log Writes (usecs)
NVMe SSD   388                571                 90
vPMEM      < 10               < 10                < 10

Table 8: HammerDB with Oracle latency breakdown (using iostat)
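iostat's extended output (`iostat -x`) reports r_await/w_await in milliseconds, which is why the sub-10 µs vPMEM latencies bottom out at the tool's 0.01 ms resolution. A minimal sketch of pulling those columns out by header name (the device name and numbers below are hypothetical, shaped like the NVMe SSD row of Table 8; exact columns vary across sysstat versions):

```python
def parse_iostat_row(header: str, row: str) -> dict:
    """Map one `iostat -x` data row onto its header column names."""
    return dict(zip(header.split(), row.split()))

# Hypothetical `iostat -x` lines (illustrative only).
header = "Device r/s w/s rkB/s wkB/s r_await w_await %util"
row    = "nvme0n1 19000.0 6400.0 152000.0 215000.0 0.39 0.57 98.0"

stats = parse_iostat_row(header, row)
r_await_us = float(stats["r_await"]) * 1000  # iostat reports ms; convert to usecs
w_await_us = float(stats["w_await"]) * 1000
# 0.39 ms -> 390 usecs, the same range as the 388 usecs DB read latency above.
```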

 

Figure 11 shows HammerDB throughput increasing by 35%, with the server fully utilized (98% guest CPU utilization, obtained from mpstat/iostat).

Figure 11: HammerDB throughput gain with vPMEM

Note: We also increased the number of virtual users to the point (80 users) where NVMe SSD achieved its maximum performance. Even against NVMe SSD at 80 virtual users, vPMEM at 70 users was 29% better.
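The cross-configuration comparison in the note is simple ratio arithmetic; it can be sketched with throughput normalized to the NVMe SSD baseline at 70 users (the absolute TPM figures are not reproduced here, only the reported ratios):

```python
def pct_gain(new: float, base: float) -> float:
    """Percentage improvement of `new` over `base`."""
    return (new / base - 1.0) * 100.0

nvme_70  = 1.00             # NVMe SSD, 70 virtual users (baseline)
vpmem_70 = 1.35             # 35% gain reported in Figure 11
nvme_80  = vpmem_70 / 1.29  # NVMe SSD best case at 80 users, implied by the 29% figure

gain_at_70 = pct_gain(vpmem_70, nvme_70)   # 35.0
gain_vs_80 = pct_gain(vpmem_70, nvme_80)   # 29.0
```

Notably, the implied nvme_80 value (~1.05) means the extra 10 virtual users bought NVMe SSD only about 5% more throughput.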
