Cyclictest is one of the most frequently used tools for evaluating the relative performance of real-time systems.

Cyclictest accurately and repeatedly measures the difference between a thread's intended wake-up time and the time at which it actually wakes up, providing statistics about system latencies.

  1. To install the Cyclictest software, perform the following steps:

    1. Install the following packages by running these commands:

      1. tdnf install -y gcc make patch

      2. tdnf install -y git glibc-devel binutils

      3. tdnf install -y linux-api-headers libnuma-devel wget tar

    2. Create a new directory: mkdir rt-tests

    3. Enter the new directory: cd rt-tests

    4. Run this command: wget https://mirrors.edge.kernel.org/pub/linux/utils/rt-tests/rt-tests-2.7.tar.gz

      Note:

      Check the URL first to ensure it still contains the latest Cyclictest version. If an older version of Cyclictest is required, check this directory: https://mirrors.edge.kernel.org/pub/linux/utils/rt-tests/older/

    5. Run this command: tar xvzf rt-tests-2.7.tar.gz

    6. Change directory: cd rt-tests-2.7

    7. Run these commands:

      1. make all

      2. make install

      3. make cyclictest

  2. The Cyclictest software is now installed. Run this command:

        taskset -c 0-1 ./cyclictest -m -p 99 -i 100 -t 1 -a 1 -h 120 -D 1m --mainaffinity=0 -q

Note:

Run this command in the background, either over SSH or from the VM console. If you use the console, close it and do not reopen it until the test is finished.

The recommended method is to SSH into the VM and run the command in the background.
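The installation steps and the background-run recommendation above can be combined into one script. This is a minimal sketch, assuming a Photon OS VM with tdnf available and the rt-tests 2.7 tarball URL shown earlier; the guard clause simply skips the procedure on non-Photon hosts.

```shell
#!/bin/sh
# Sketch: install rt-tests 2.7 and start cyclictest detached, so the
# SSH session (or console) can be closed while the test runs.
if command -v tdnf >/dev/null 2>&1; then
    tdnf install -y gcc make patch git glibc-devel binutils \
        linux-api-headers libnuma-devel wget tar
    mkdir -p rt-tests && cd rt-tests
    RT_VERSION=2.7     # check the mirror first for the latest release
    wget "https://mirrors.edge.kernel.org/pub/linux/utils/rt-tests/rt-tests-${RT_VERSION}.tar.gz"
    tar xvzf "rt-tests-${RT_VERSION}.tar.gz"
    cd "rt-tests-${RT_VERSION}"
    make all && make install && make cyclictest
    # nohup + & detaches the measurement from the login session;
    # output goes to cyclictest.log for later inspection.
    nohup taskset -c 0-1 ./cyclictest -m -p 99 -i 100 -t 1 -a 1 -h 120 \
        -D 1m --mainaffinity=0 -q > cyclictest.log 2>&1 &
    echo "cyclictest running as PID $!"
else
    echo "tdnf not found: run this on the Photon OS VM"
fi
```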

Cyclictest Variables and Query Commands

Table 1. Variables Used in the Cyclictest Query

| Variable | Variable Name | Variable Description |
| taskset | taskset [mask] command [arguments] | Sets or retrieves the CPU affinity of a running process. |
| -c <cpu-list> | cpu-list | Interpret the mask as a numerical list of processors instead of a bitmask; numbers are separated by commas and may include ranges, for example: 0,1,5,8-11. |
| -m | mlockall | Lock current and future memory allocations to prevent them from being paged out. |
| -p PRIO | prio=PRIO | Priority of the highest-priority thread. |
| -i INTV | interval=INTV | Base interval of the thread in microseconds (µs); default = 1000 µs. |
| -t NUM | threads=NUM | Number of threads. |
| -a NUM | affinity | Run thread #N on processor #N, if possible; with NUM, pin all threads to processor NUM. |
| -h US | histogram=US | Dump a latency histogram to stdout after the run; US is the maximum latency to be tracked, in microseconds (µs). |
| -D TIME | duration=TIME | Run the test for the specified time, which defaults to seconds; append 'm', 'h', or 'd' to specify minutes, hours, or days. |
| --mainaffinity=CPUSET | mainaffinity=CPUSET | Run the main thread on the given CPU; this affects only the main thread, not the measurement threads. |
| -q | quiet | Print only a summary on exit (no live updates). |

The information in the preceding table is from the cyclictest --help output. Query: taskset -c 0-1 ./cyclictest -m -p 99 -i 100 -t 1 -a 1 -h 120 -D 1m --mainaffinity=0 -q

Since only one measurement thread is running (on vCPU 1), the non-RT main thread is moved to a different vCPU (vCPU 0):

  • taskset -c 0-1: run cyclictest on these vCPUs (the measurement thread runs on vCPU 1)

  • --mainaffinity=0: run the main thread on vCPU 0
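While the test runs, this affinity split can be checked from another shell. A sketch, assuming a single running cyclictest instance and the util-linux taskset and pgrep tools; `pgrep -o` picks the oldest matching PID, which is the main thread:

```shell
#!/bin/sh
# Report the CPU affinity of the running cyclictest main thread.
PID=$(pgrep -o cyclictest || true)
if [ -n "$PID" ]; then
    # Expected to report vCPU 0, per --mainaffinity=0 above.
    taskset -cp "$PID"
else
    echo "cyclictest is not running"
fi
```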

For the long-duration tests below, the 1-minute duration (-D 1m) was replaced with 60 hours (-D 60h) or 7 days (-D 7d).
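The sample counts in the result tables follow from duration divided by the 100 µs base interval (-i 100). A quick shell-arithmetic sanity check for the two durations used:

```shell
#!/bin/sh
# samples = duration_in_microseconds / interval_in_microseconds
INTERVAL_US=100
SAMPLES_60H=$(( 60 * 3600 * 1000000 / INTERVAL_US ))   # 60-hour run
SAMPLES_7D=$((  7 * 86400 * 1000000 / INTERVAL_US ))   # 7-day run
echo "60h: $SAMPLES_60H samples"    # 2160000000
echo "7d:  $SAMPLES_7D samples"     # 6048000000
```

The 60-hour figure matches the 2,160,000,000 samples reported in Table 2.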

Test Results

The following tests were run:

Test A: ESX 7.0.3 environment with Advantech ECU-579 servers running ESX 7.0.3c

Two VMs:

  • 4 vCPUs

  • 16 GB RAM

  • 40 GB disk

  • Photon 3.0 RT OS

  • OS profile - real-time

  • vCPU 1 isolated to run Cyclictest 2.3 for 60 hours.

Note:

Two VMs running on vSAN storage.

Photon 3.0 (x86_64) - Kernel Linux 4.19.225-rt101-3.ph3-rt

Table 2. Test A Results

| VM Description | Number of Latency Samples | Minimum Latency (µs) | Average Latency (µs) | Maximum Latency (µs) | Histogram Overflows* (over 120 µs) |
| VM 1: ESX 7.0.3, vSAN storage | 2,160,000,000 | 3 | 4 | 29 | 0 |
| VM 2: ESX 7.0.3, vSAN storage | 2,160,000,000 | 2 | 3 | 34 | 0 |

Figure 1. VM 1 histogram, number of samples per latency value
Figure 2. VM 2 histogram, number of samples per latency value
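Histograms like those in the figures can be rebuilt from a saved run. This is a sketch, assuming the -h output was redirected to a cyclictest.log file and a single measurement thread, so each histogram line is a "latency count" pair:

```shell
#!/bin/sh
# Summarize a saved cyclictest histogram: total samples and the highest
# latency bucket that received any samples.
LOG=cyclictest.log
grep -E '^[0-9]+ ' "$LOG" 2>/dev/null | \
    awk '{ total += $2; if ($2 > 0) max = $1 + 0 }
         END { printf "samples=%d max_latency_us=%d\n", total, max }'
```

The per-bucket counts themselves ($1 = latency in µs, $2 = samples) are what the figures plot.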

Test A: ESX 8.0U2 environment with Dell PowerEdge XR12 servers running ESX 8.0U2

Two VMs:

  • 4 vCPUs

  • 16 GB RAM

  • 40 GB disk

  • Photon 5.0 RT OS

  • OS profile - real-time

  • vCPU 1 isolated to run Cyclictest 2.7 for 60 hours.

Note: 1 VM running on a local datastore.

Photon 5.0 (x86_64) - Kernel Linux 6.1.83-2.ph5-rt

Table 3. Test A Results

| VM Description | Number of Latency Samples | Minimum Latency (µs) | Average Latency (µs) | Maximum Latency (µs) | Histogram Overflows* (over 120 µs) |
| Single VM, ESXi 8.0U2, local datastore | 863,942,400 | 2 | 2 | 9 | 0 |

Figure 3. VM 1 histogram, number of samples per latency value

Test B: ESX 8.0U2 environment with Dell PowerEdge XR12 servers running ESX 8.0U2

Two VMs:

  • 4 vCPUs

  • 16 GB RAM

  • 40 GB disk

  • Photon 5.0 RT OS

  • OS profile - real-time

  • vCPU 1 isolated to run Cyclictest 2.7 for 7 days.

Note: 1 VM running on a local datastore.

Photon 5.0 (x86_64) - Kernel Linux 6.1.83-2.ph5-rt

Table 4. Test B Results

| VM Description | Number of Latency Samples | Minimum Latency (µs) | Average Latency (µs) | Maximum Latency (µs) | Histogram Overflows* (over 120 µs) |
| Single VM, ESXi 8.0U2, local datastore | 6,048,000,000 | 1 | 1 | 8 | 0 |

Figure 4. VM 1 histogram, number of samples per latency value

Test C: ESX 8.0U2 environment with Crystal Group ES373S17 servers running ESX 8.0U2

Two VMs:

  • 4 vCPUs

  • 16 GB RAM

  • 40 GB disk

  • Photon 5 RT OS

  • OS profile - real-time

  • vCPU 1 isolated to run Cyclictest 2.7 for 7 days

Note:

VMs running on local datastore.

Table 5. Test C Results

| VM Description | Number of Latency Samples | Minimum Latency (µs) | Average Latency (µs) | Maximum Latency (µs) | Histogram Overflows* (over 120 µs) |
| Single VM, ESXi 8.0U2 | 2,419,038,720 | 1 | 1 | 12 | 0 |

Figure 5. VM (no vSAN) histogram, number of samples per latency value

Test D: ESX 8.0U2 environment with Crystal Group ES373S17 servers running ESX 8.0U2

Two VMs:

  • 4 vCPUs

  • 16 GB RAM

  • 40 GB disk

  • Photon 5 RT OS

  • OS profile - real-time

  • vCPU 1 isolated to run Cyclictest 2.7 for 60 hours

Note:

VMs running on local datastore.

Table 6. Test D Results

| VM Description | Number of Latency Samples | Minimum Latency (µs) | Average Latency (µs) | Maximum Latency (µs) | Histogram Overflows* (over 120 µs) |
| Single VM, ESXi 8.0U2 | 863,942,400 | 1 | 1 | 12 | 0 |

Figure 6. VM (no vSAN) histogram, number of samples per latency value