HammerDB with Oracle Database

We reserved large pages at system boot, and Transparent Huge Pages (THP) in Linux were disabled. We also raised the scheduling priority of the Oracle log writer (LGWR) and database writer (DBWR) processes via this command:

chrt --rr -p 83 $P

Here, $P is the process ID of the LGWR or DBWR process. We determined that this step was needed because, when the CPU is near saturation, the log writer is not scheduled promptly, which degrades transaction performance.
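As an illustration, the following sketch (ours, not taken from the study's scripts) finds the LGWR and DBWR background processes by their Linux process names (ora_lgwr_<SID>, ora_dbw<n>_<SID>) and applies the real-time priority to each:

# Sketch: locate the Oracle log writer and database writer processes
# and give each SCHED_RR priority 83. The name patterns assume a
# standard Oracle-on-Linux installation.
for P in $(pgrep -f 'ora_lgwr|ora_dbw'); do
    chrt --rr -p 83 $P
done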

The HammerDB client and the Oracle server run in the same VM for the results in the section "Relational Database Performance Using Oracle."

Per-device iostat statistics for pmem devices can be enabled via the following command:

$ echo 1 > /sys/block/pmem<n>/queue/iostats
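To turn this on for every pmem namespace at once, a simple loop suffices (a sketch; the device names match those used in this study):

# Enable per-device I/O accounting for all pmem block devices so
# that iostat reports statistics for them.
for d in /sys/block/pmem*; do
    echo 1 > $d/queue/iostats
done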

I/Os that go through the DAX path (file systems mounted with the -o dax option) are not counted in iostat output. In this experiment, the -o dax option made no difference to performance, so we mounted the pmem devices without it in order to collect detailed iostat data for more insight.
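For reference, the two configurations differ only in the mount option; the device name and mount point below are hypothetical:

# DAX mount: I/O bypasses the block layer, so iostat does not see it.
mount -o dax /dev/pmem1 /u01/oradata1
# Non-DAX mount, as used in this experiment; iostat counts the I/O.
mount /dev/pmem1 /u01/oradata1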

Devices nvme0n1/pmem1 through nvme0n10/pmem10 hold the Oracle database tables; nvme0n11/pmem11 holds the Oracle redo logs.
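Output of the shape shown below can be gathered with iostat in extended, kilobytes-per-second mode; the 30-second interval is an illustrative choice, not necessarily the one used in the study:

# Extended per-device statistics in kB/s, sampled every 30 seconds.
iostat -xk 30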

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await 
nvme0n1           0.00     0.20 1959.90  439.17 15679.20  8526.13    20.18     0.89    0.37    0.35    0.49  
nvme0n2           0.00     0.20 1993.73  475.53 15949.87  9099.73    20.29     0.92    0.37    0.34    0.50  
nvme0n3           0.00     0.20 1888.13  474.13 15105.07  9939.47    21.20     0.93    0.39    0.35    0.57  
nvme0n4           0.00     0.20 1864.33  451.40 14914.67  8845.87    20.52     0.90    0.39    0.34    0.59  
nvme0n5           0.00     0.20 1861.33  449.87 14890.67  9002.13    20.68     0.89    0.39    0.35    0.55  
nvme0n6           0.00     0.20 1862.23  457.00 14897.87  8779.20    20.42     0.91    0.39    0.35    0.57  
nvme0n7           0.00     0.20 1841.60  443.93 14732.80  8446.93    20.28     0.91    0.40    0.35    0.61  
nvme0n8           0.00     0.20 1857.67  435.93 14861.33  7655.73    19.63     0.91    0.40    0.34    0.62  
nvme0n9           0.00     0.20 1925.17  444.00 15401.33  7873.33    19.65     0.93    0.39    0.35    0.60  
nvme0n10          0.00     0.20 1989.57  448.10 15916.53  7865.87    19.51     0.96    0.39    0.34    0.61  
nvme0n11          0.00    56.80    0.07 5367.10     0.03 206904.32   77.10     0.48    0.09    0.00    0.09  

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await 
pmem1             0.00     0.00 2776.40 1325.80 22211.20 13635.73    17.48     0.01    0.00    0.00    0.00  
pmem2             0.00     0.00 2782.53 1338.67 22260.27 12730.40    16.98     0.01    0.00    0.00    0.00  
pmem3             0.00     0.00 2733.27 1496.60 21866.13 15594.93    17.71     0.01    0.00    0.00    0.00  
pmem4             0.00     0.00 2583.43 1426.67 20667.47 15173.87    17.88     0.01    0.00    0.00    0.00  
pmem5             0.00     0.00 2590.70 1444.27 20725.60 14511.47    17.47     0.01    0.00    0.00    0.00  
pmem6             0.00     0.00 2574.03 1390.80 20592.27 13338.13    17.12     0.01    0.00    0.00    0.00  
pmem7             0.00     0.00 2667.43 1356.27 21339.47 12721.87    16.93     0.01    0.00    0.00    0.00  
pmem8             0.00     0.00 2602.90 1318.87 20823.20 11549.87    16.51     0.01    0.00    0.00    0.00  
pmem9             0.00     0.00 2628.10 1344.07 21024.80 12040.27    16.65     0.01    0.00    0.00    0.00  
pmem10            0.00     0.00 2672.37 1305.87 21378.93 11498.13    16.53     0.01    0.00    0.00    0.00  
pmem11            0.00     0.00    0.00 89777.27    0.00 300545.35    6.70     0.09    0.00    0.00    0.00  

Sysbench with MySQL

The iostat output below shows the I/O statistics when a single VM is running on the NVMe SSD. Note that just one instance of Sysbench can drive close to 550 MB/s from the storage device, so scaling to 4 VMs requires about 2,200 MB/s of bandwidth.

read_write (79% read / 21% write)
Device:         rrqm/s   wrqm/s      r/s     w/s   rMB/s   wMB/s avgrq-sz avgqu-sz   await r_await w_await 
nvme0n1           0.00   934.50 26331.50 6645.50  411.43  128.95    33.56     9.93    0.30    0.36    0.07  
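For context, a Sysbench OLTP run of this kind can be launched as follows; every parameter shown is illustrative rather than the study's exact configuration:

# Hypothetical oltp_read_write invocation against a prepared sbtest
# schema; thread count, table count, and sizes are placeholders.
sysbench oltp_read_write --db-driver=mysql --mysql-user=sbtest \
    --mysql-password=sbtest --mysql-db=sbtest --tables=8 \
    --table-size=10000000 --threads=32 --time=300 run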

HammerDB with Microsoft SQL Server

Table 19 describes the SQL Server 2016 log flush counters.

Counter                    Description
Log Flush Wait Time        Total wait time (in milliseconds) to flush the log. Indicates the wait time for log records to be hardened to disk.
Log Flush Waits/sec        Number of commits per second waiting for the log flush.
Log Flush Write Time (ms)  Time, in milliseconds, spent performing writes of log flushes that were completed in the last second.
Log Flushes/sec            Number of log flushes per second.

Table 19: Description of log flush events
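These counters can be sampled while the benchmark runs, for example through the sys.dm_os_performance_counters DMV; the sqlcmd connection details below are assumptions:

# Read the log flush counters from SQL Server's performance-counter DMV.
sqlcmd -S localhost -Q "SELECT counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE counter_name LIKE 'Log Flush%'"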
