# echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
/sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger
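The histogram needs events to populate it, and for wakeup_latency
those come from running cyclictest. A typical invocation might look
like the following (a minimal sketch; the exact flags are up to you:
-p sets the thread priority, -i the base interval in microseconds,
-n selects clock_nanosleep, and -t runs one thread per available
CPU)::
# cyclictest -p 80 -i 250 -n -t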
Finally, once we've run cyclictest to actually generate some
events, we can see the output by looking at the wakeup_latency
synthetic event's hist file::
# cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
- onmax(var).save(field,...)
The 'onmax(var).save(field,...)' hist trigger action is invoked
whenever the value of 'var' associated with a histogram entry
exceeds the current maximum contained in that variable.
The end result is that the trace event fields specified as the
onmax.save() params will be saved if 'var' exceeds the current
maximum for that hist trigger entry. This allows context from the
event that exhibited the new maximum to be saved for later
reference. When the histogram is displayed, additional fields
displaying the saved values will be printed.
As an example the below defines a couple of hist triggers, one for
sched_waking and another for sched_switch, keyed on pid. Whenever
a sched_waking occurs, the timestamp is saved in the entry
corresponding to the current pid, and when the scheduler switches
back to that pid, the timestamp difference is calculated. If the
resulting latency, stored in wakeup_lat, exceeds the current
maximum latency, the values specified in the save() fields are
recorded::
# echo 'hist:keys=pid:ts0=common_timestamp.usecs \
if comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_waking/trigger
# echo 'hist:keys=next_pid:\
wakeup_lat=common_timestamp.usecs-$ts0:\
onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
if next_comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_switch/trigger
When the histogram is displayed, the max value and the saved
values corresponding to the max are displayed following the rest
of the fields::
# cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
{ next_pid: 2255 } hitcount: 239
common_timestamp-ts0: 0
max: 27
next_comm: cyclictest
prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
{ next_pid: 2256 } hitcount: 2355
common_timestamp-ts0: 0
max: 49 next_comm: cyclictest
prev_pid: 0 prev_prio: 120 prev_comm: swapper/0
Totals:
Hits: 12970
Entries: 2
Dropped: 0
- onmax(var).snapshot()
The 'onmax(var).snapshot()' hist trigger action is invoked
whenever the value of 'var' associated with a histogram entry
exceeds the current maximum contained in that variable.
The end result is that a global snapshot of the trace buffer will
be saved in the tracing/snapshot file if 'var' exceeds the current
maximum for any hist trigger entry.
Note that in this case the maximum is a global maximum for the
current trace instance, which is the maximum across all buckets of
the histogram. The key of the specific trace event that caused
the global maximum and the global maximum itself are displayed,
along with a message stating that a snapshot has been taken and
where to find it. The user can use the key information displayed
to locate the corresponding bucket in the histogram for even more
detail.
As an example the below defines a couple of hist triggers, one for
sched_waking and another for sched_switch, keyed on pid. Whenever
a sched_waking event occurs, the timestamp is saved in the entry
corresponding to the current pid, and when the scheduler switches
back to that pid, the timestamp difference is calculated. If the
resulting latency, stored in wakeup_lat, exceeds the current
maximum latency, a snapshot is taken. As part of the setup, all
the scheduler events are also enabled, which are the events that
will show up in the snapshot when it is taken at some point::
# echo 1 > /sys/kernel/debug/tracing/events/sched/enable
# echo 'hist:keys=pid:ts0=common_timestamp.usecs \
if comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_waking/trigger
# echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0: \
onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio, \
prev_comm):onmax($wakeup_lat).snapshot() \
if next_comm=="cyclictest"' >> \
/sys/kernel/debug/tracing/events/sched/sched_switch/trigger
When the histogram is displayed, for each bucket the max value
and the saved values corresponding to the max are displayed
following the rest of the fields.
If a snapshot was taken, there is also a message indicating that,
along with the value and event that triggered the global maximum::
# cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
{ next_pid: 2101 } hitcount: 200
max: 52 next_prio: 120 next_comm: cyclictest \
prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
{ next_pid: 2103 } hitcount: 1326
max: 572 next_prio: 19 next_comm: cyclictest \
prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
{ next_pid: 2102 } hitcount: 1982 \
max: 74 next_prio: 19 next_comm: cyclictest \
prev_pid: 0 prev_prio: 120 prev_comm: swapper/5
Snapshot taken (see tracing/snapshot). Details:
triggering value { onmax($wakeup_lat) }: 572 \
triggered by event with key: { next_pid: 2103 }
Totals:
Hits: 3508
Entries: 3
Dropped: 0
In the above case, the event that triggered the global maximum has
the key with next_pid == 2103. If you look at the bucket that has
2103 as the key, you'll find the additional values save()'d along
with the local maximum for that bucket, which should be the same
as the global maximum (since that was the same value that
triggered the global snapshot).
And finally, looking at the snapshot data should show at or near
the end the event that triggered the snapshot (in this case you
can verify the timestamps between the sched_waking and
sched_switch events, which should match the time displayed in the
global maximum)::
# cat /sys/kernel/debug/tracing/snapshot
<...>-2103 [005] d..3 309.873125: sched_switch: prev_comm=cyclictest prev_pid=2103 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
<idle>-0 [005] d.h3 309.873611: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
<idle>-0 [005] dNh4 309.873613: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
<idle>-0 [005] d..3 309.873616: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
<...>-2102 [005] d..3 309.873625: sched_switch: prev_comm=cyclictest prev_pid=2102 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
<idle>-0 [005] d.h3 309.874624: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
<idle>-0 [005] dNh4 309.874626: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
<idle>-0 [005] dNh3 309.874628: sched_waking: comm=cyclictest pid=2103 prio=19 target_cpu=005
<idle>-0 [005] dNh4 309.874630: sched_wakeup: comm=cyclictest pid=2103 prio=19 target_cpu=005
<idle>-0 [005] d..3 309.874633: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
<idle>-0 [004] d.h3 309.874757: sched_waking: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
<idle>-0 [004] dNh4 309.874762: sched_wakeup: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
<idle>-0 [004] d..3 309.874766: sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=gnome-terminal- next_pid=1699 next_prio=120
gnome-terminal--1699 [004] d.h2 309.874941: sched_stat_runtime: comm=gnome-terminal- pid=1699 runtime=180706 [ns] vruntime=1126870572 [ns]
<idle>-0 [003] d.s4 309.874956: sched_waking: comm=rcu_sched pid=9 prio=120 target_cpu=007
<idle>-0 [003] d.s5 309.874960: sched_wake_idle_without_ipi: cpu=7
<idle>-0 [003] d.s5 309.874961: sched_wakeup: comm=rcu_sched pid=9 prio=120 target_cpu=007
<idle>-0 [007] d..3 309.874963: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=rcu_sched next_pid=9 next_prio=120
rcu_sched-9 [007] d..3 309.874973: sched_stat_runtime: comm=rcu_sched pid=9 runtime=13646 [ns] vruntime=22531430286 [ns]
rcu_sched-9 [007] d..3 309.874978: sched_switch: prev_comm=rcu_sched prev_pid=9 prev_prio=120 prev_state=R+ ==> next_comm=swapper/7 next_pid=0 next_prio=120
<...>-2102 [005] d..4 309.874994: sched_migrate_task: comm=cyclictest pid=2103 prio=19 orig_cpu=5 dest_cpu=1
<...>-2102 [005] d..4 309.875185: sched_wake_idle_without_ipi: cpu=1
<idle>-0 [001] d..3 309.875200: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2103 next_prio=19
- onchange(var).save(field,...)
The 'onchange(var).save(field,...)' hist trigger action is invoked
whenever the value of 'var' associated with a histogram entry
changes.
The end result is that the trace event fields specified as the
onchange.save() params will be saved if 'var' changes for that
hist trigger entry. This allows context from the event that
changed the value to be saved for later reference. When the
histogram is displayed, additional fields displaying the saved
values will be printed.
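For example, the following (a minimal sketch reusing the tcp_probe
fields from the snapshot example further below) saves window and rtt
context from the event each time the tracked congestion window
changes::
# echo 'hist:keys=dport:cwnd=snd_cwnd: \
onchange($cwnd).save(snd_wnd,srtt,rcv_wnd)' >> \
/sys/kernel/debug/tracing/events/tcp/tcp_probe/trigger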
- onchange(var).snapshot()
The 'onchange(var).snapshot()' hist trigger action is invoked
whenever the value of 'var' associated with a histogram entry
changes.
The end result is that a global snapshot of the trace buffer will
be saved in the tracing/snapshot file if 'var' changes for any
hist trigger entry.
Note that in this case the changed value is a global variable
associated with the current trace instance. The key of the specific
trace event that caused the value to change and the global value
itself are displayed, along with a message stating that a snapshot
has been taken and where to find it. The user can use the key
information displayed to locate the corresponding bucket in the
histogram for even more detail.
As an example the below defines a hist trigger on the tcp_probe
event, keyed on dport. Whenever a tcp_probe event occurs, the
cwnd field is checked against the current value stored in the
$cwnd variable. If the value has changed, a snapshot is taken.
As part of the setup, all the scheduler and tcp events are also
enabled, which are the events that will show up in the snapshot
when it is taken at some point::
# echo 1 > /sys/kernel/debug/tracing/events/sched/enable
# echo 1 > /sys/kernel/debug/tracing/events/tcp/enable
# echo 'hist:keys=dport:cwnd=snd_cwnd: \
onchange($cwnd).save(snd_wnd,srtt,rcv_wnd): \
onchange($cwnd).snapshot()' >> \
/sys/kernel/debug/tracing/events/tcp/tcp_probe/trigger
When the histogram is displayed, for each bucket the tracked value
and the saved values corresponding to that value are displayed
following the rest of the fields.
If a snapshot was taken, there is also a message indicating that,
along with the value and event that triggered the snapshot::
# cat /sys/kernel/debug/tracing/events/tcp/tcp_probe/hist
{ dport: 1521 } hitcount: 8
changed: 10 snd_wnd: 35456 srtt: 154262 rcv_wnd: 42112
{ dport: 80 } hitcount: 23
changed: 10 snd_wnd: 28960 srtt: 19604 rcv_wnd: 29312
{ dport: 9001 } hitcount: 172
changed: 10 snd_wnd: 48384 srtt: 260444 rcv_wnd: 55168
{ dport: 443 } hitcount: 211
changed: 10 snd_wnd: 26960 srtt: 17379 rcv_wnd: 28800
Snapshot taken (see tracing/snapshot). Details:
triggering value { onchange($cwnd) }: 10
triggered by event with key: { dport: 80 }
Totals:
Hits: 414
Entries: 4
Dropped: 0
In the above case, the event that triggered the snapshot has the
key with dport == 80. If you look at the bucket that has 80 as
the key, you'll find the additional values save()'d along with the
changed value for that bucket, which should be the same as the
global changed value (since that was the same value that triggered
the global snapshot).
And finally, looking at the snapshot data should show at or near
the end the event that triggered the snapshot::
# cat /sys/kernel/debug/tracing/snapshot
gnome-shell-1261 [006] dN.3 49.823113: sched_stat_runtime: comm=gnome-shell pid=1261 runtime=49347 [ns] vruntime=1835730389 [ns]
kworker/u16:4-773 [003] d..3 49.823114: sched_switch: prev_comm=kworker/u16:4 prev_pid=773 prev_prio=120 prev_state=R+ ==> next_comm=kworker/3:2 next_pid=135 next_prio=120
gnome-shell-1261 [006] d..3 49.823114: sched_switch: prev_comm=gnome-shell prev_pid=1261 prev_prio=120 prev_state=R+ ==> next_comm=kworker/6:2 next_pid=387 next_prio=120
kworker/3:2-135 [003] d..3 49.823118: sched_stat_runtime: comm=kworker/3:2 pid=135 runtime=5339 [ns] vruntime=17815800388 [ns]
kworker/6:2-387 [006] d..3 49.823120: sched_stat_runtime: comm=kworker/6:2 pid=387 runtime=9594 [ns] vruntime=14589605367 [ns]
kworker/6:2-387 [006] d..3 49.823122: sched_switch: prev_comm=kworker/6:2 prev_pid=387 prev_prio=120 prev_state=R+ ==> next_comm=gnome-shell next_pid=1261 next_prio=120
kworker/3:2-135 [003] d..3 49.823123: sched_switch: prev_comm=kworker/3:2 prev_pid=135 prev_prio=120 prev_state=T ==> next_comm=swapper/3 next_pid=0 next_prio=120
<idle>-0 [004] ..s7 49.823798: tcp_probe: src=10.0.0.10:54326 dest=23.215.104.193:80 mark=0x0 length=32 snd_nxt=0xe3ae2ff5 snd_una=0xe3ae2ecd snd_cwnd=10 ssthresh=2147483647 snd_wnd=28960 srtt=19604 rcv_wnd=29312
3. User space creating a trigger
--------------------------------
Writing into /sys/kernel/tracing/trace_marker writes into the ftrace
ring buffer. This can also act like an event, by writing into the trigger
file located in /sys/kernel/tracing/events/ftrace/print/
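For example, a string written from the shell shows up as an
ftrace/print event in the ring buffer, and any triggers attached to
that event will fire on the write (the string itself is arbitrary)::
# echo hello > /sys/kernel/tracing/trace_marker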
Modifying cyclictest to write into the trace_marker file before it sleeps
and after it wakes up, something like this::
static void traceputs(char *str)
{
/* tracemark_fd is the trace_marker file descriptor */
if (tracemark_fd < 0)
return;
/* write the tracemark message */
write(tracemark_fd, str, strlen(str));
}
And later add something like::
traceputs("start");
clock_nanosleep(...);
traceputs("end");
We can make a histogram from this::
# cd /sys/kernel/tracing
# echo 'latency u64 lat' > synthetic_events
# echo 'hist:keys=common_pid:ts0=common_timestamp.usecs if buf == "start"' > events/ftrace/print/trigger
# echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(ftrace.print).latency($lat) if buf == "end"' >> events/ftrace/print/trigger
# echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger
The above created a synthetic event called "latency" and two histograms
against the trace_marker: one is triggered when "start" is written into
the trace_marker file and the other when "end" is written. If the pids
match, the "end" trigger fires the "latency" synthetic event with the
calculated latency as its parameter. Finally, a histogram is added to
the latency synthetic event to record the calculated latency along with
the pid.
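As a quick sanity check (optional; these files simply report what is
currently registered), the synthetic event definition and the attached
triggers can be read back::
# cat synthetic_events
# cat events/ftrace/print/trigger
# cat events/synthetic/latency/trigger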
Now running cyclictest with::
# ./cyclictest -p80 -d0 -i250 -n -a -t --tracemark -b 1000
-p80 : run threads at priority 80
-d0 : have all threads run at the same interval
-i250 : start the interval at 250 microseconds (all threads will do this)
-n : sleep with nanosleep
-a : affine all threads to a separate CPU
-t : one thread per available CPU
--tracemark : enable trace mark writing
-b 1000 : stop if any latency is greater than 1000 microseconds
Note, the -b 1000 is used just to make --tracemark available.
Then we can see the histogram created by this with::
# cat events/synthetic/latency/hist
# event histogram
#
# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
#
{ lat: 107, common_pid: 2039 } hitcount: 1
{ lat: 122, common_pid: 2041 } hitcount: 1
{ lat: 166, common_pid: 2039 } hitcount: 1
{ lat: 174, common_pid: 2039 } hitcount: 1
{ lat: 194, common_pid: 2041 } hitcount: 1
{ lat: 196, common_pid: 2036 } hitcount: 1
{ lat: 197, common_pid: 2038 } hitcount: 1
{ lat: 198, common_pid: 2039 } hitcount: 1
{ lat: 199, common_pid: 2039 } hitcount: 1
{ lat: 200, common_pid: 2041 } hitcount: 1
{ lat: 201, common_pid: 2039 } hitcount: 2
{ lat: 202, common_pid: 2038 } hitcount: 1
{ lat: 202, common_pid: 2043 } hitcount: 1
{ lat: 203, common_pid: 2039 } hitcount: 1
{ lat: 203, common_pid: 2036 } hitcount: 1
{ lat: 203, common_pid: 2041 } hitcount: 1
{ lat: 206, common_pid: 2038 } hitcount: 2
{ lat: 207, common_pid: 2039 } hitcount: 1
{ lat: 207, common_pid: 2036 } hitcount: 1
{ lat: 208, common_pid: 2040 } hitcount: 1
{ lat: 209, common_pid: 2043 } hitcount: 1
{ lat: 210, common_pid: 2039 } hitcount: 1
{ lat: 211, common_pid: 2039 } hitcount: 4
{ lat: 212, common_pid: 2043 } hitcount: 1
{ lat: 212, common_pid: 2039 } hitcount: 2
{ lat: 213, common_pid: 2039 } hitcount: 1
{ lat: 214, common_pid: 2038 } hitcount: 1
{ lat: 214, common_pid: 2039 } hitcount: 2
{ lat: 214, common_pid: 2042 } hitcount: 1
{ lat: 215, common_pid: 2039 } hitcount: 1
{ lat: 217, common_pid: 2036 } hitcount: 1
{ lat: 217, common_pid: 2040 } hitcount: 1
{ lat: 217, common_pid: 2039 } hitcount: 1
{ lat: 218, common_pid: 2039 } hitcount: 6
{ lat: 219, common_pid: 2039 } hitcount: 9
{ lat: 220, common_pid: 2039 } hitcount: 11
{ lat: 221, common_pid: 2039 } hitcount: 5
{ lat: 221, common_pid: 2042 } hitcount: 1
{ lat: 222, common_pid: 2039 } hitcount: 7
{ lat: 223, common_pid: 2036 } hitcount: 1
{ lat: 223, common_pid: 2039 } hitcount: 3
{ lat: 224, common_pid: 2039 } hitcount: 4
{ lat: 224, common_pid: 2037 } hitcount: 1
{ lat: 224, common_pid: 2036 } hitcount: 2
{ lat: 225, common_pid: 2039 } hitcount: 5
{ lat: 225, common_pid: 2042 } hitcount: 1
{ lat: 226, common_pid: 2039 } hitcount: 7
{ lat: 226, common_pid: 2036 } hitcount: 4
{ lat: 227, common_pid: 2039 } hitcount: 6
{ lat: 227, common_pid: 2036 } hitcount: 12
{ lat: 227, common_pid: 2043 } hitcount: 1
{ lat: 228, common_pid: 2039 } hitcount: 7
{ lat: 228, common_pid: 2036 } hitcount: 14
{ lat: 229, common_pid: 2039 } hitcount: 9
{ lat: 229, common_pid: 2036 } hitcount: 8
{ lat: 229, common_pid: 2038 } hitcount: 1
{ lat: 230, common_pid: 2039 } hitcount: 11
{ lat: 230, common_pid: 2036 } hitcount: 6
{ lat: 230, common_pid: 2043 } hitcount: 1
{ lat: 230, common_pid: 2042 } hitcount: 2
{ lat: 231, common_pid: 2041 } hitcount: 1
{ lat: 231, common_pid: 2036 } hitcount: 6
{ lat: 231, common_pid: 2043 } hitcount: 1
{ lat: 231, common_pid: 2039 } hitcount: 8
{ lat: 232, common_pid: 2037 } hitcount: 1
{ lat: 232, common_pid: 2039 } hitcount: 6
{ lat: 232, common_pid: 2040 } hitcount: 2
{ lat: 232, common_pid: 2036 } hitcount: 5
{ lat: 232, common_pid: 2043 } hitcount: 1
{ lat: 233, common_pid: 2036 } hitcount: 5
{ lat: 233, common_pid: 2039 } hitcount: 11
{ lat: 234, common_pid: 2039 } hitcount: 4
{ lat: 234, common_pid: 2038 } hitcount: 2
{ lat: 234, common_pid: 2043 } hitcount: 2
{ lat: 234, common_pid: 2036 } hitcount: 11
{ lat: 234, common_pid: 2040 } hitcount: 1
{ lat: 235, common_pid: 2037 } hitcount: 2
{ lat: 235, common_pid: 2036 } hitcount: 8
{ lat: 235, common_pid: 2043 } hitcount: 2
{ lat: 235, common_pid: 2039 } hitcount: 5
{ lat: 235, common_pid: 2042 } hitcount: 2
{ lat: 235, common_pid: 2040 } hitcount: 4
{ lat: 235, common_pid: 2041 } hitcount: 1
{ lat: 236, common_pid: 2036 } hitcount: 7
{ lat: 236, common_pid: 2037 } hitcount: 1
{ lat: 236, common_pid: 2041 } hitcount: 5
{ lat: 236, common_pid: 2039 } hitcount: 3
{ lat: 236, common_pid: 2043 } hitcount: 9
{ lat: 236, common_pid: 2040 } hitcount: 7
{ lat: 237, common_pid: 2037 } hitcount: 1
{ lat: 237, common_pid: 2040 } hitcount: 1
{ lat: 237, common_pid: 2036 } hitcount: 9
{ lat: 237, common_pid: 2039 } hitcount: 3
{ lat: 237, common_pid: 2043 } hitcount: 8
{ lat: 237, common_pid: 2042 } hitcount: 2
{ lat: 237, common_pid: 2041 } hitcount: 2
{ lat: 238, common_pid: 2043 } hitcount: 10
{ lat: 238, common_pid: 2040 } hitcount: 1
{ lat: 238, common_pid: 2037 } hitcount: 9
{ lat: 238, common_pid: 2038 } hitcount: 1
{ lat: 238, common_pid: 2039 } hitcount: 1
{ lat: 238, common_pid: 2042 } hitcount: 3
{ lat: 238, common_pid: 2036 } hitcount: 7
{ lat: 239, common_pid: 2041 } hitcount: 1
{ lat: 239, common_pid: 2043 } hitcount: 11
{ lat: 239, common_pid: 2037 } hitcount: 11
{ lat: 239, common_pid: 2038 } hitcount: 6
{ lat: 239, common_pid: 2036 } hitcount: 7
{ lat: 239, common_pid: 2040 } hitcount: 1
{ lat: 239, common_pid: 2042 } hitcount: 9
{ lat: 240, common_pid: 2037 } hitcount: 29
{ lat: 240, common_pid: 2043 } hitcount: 15
{ lat: 240, common_pid: 2040 } hitcount: 44
{ lat: 240, common_pid: 2039 } hitcount: 1
{ lat: 240, common_pid: 2041 } hitcount: 2
{ lat: 240, common_pid: 2038 } hitcount: 1
{ lat: 240, common_pid: 2036 } hitcount: 10
{ lat: 240, common_pid: 2042 } hitcount: 13
{ lat: 241, common_pid: 2036 } hitcount: 21
{ lat: 241, common_pid: 2041 } hitcount: 36
{ lat: 241, common_pid: 2037 } hitcount: 34
{ lat: 241, common_pid: 2042 } hitcount: 14
{ lat: 241, common_pid: 2040 } hitcount: 94
{ lat: 241, common_pid: 2039 } hitcount: 12
{ lat: 241, common_pid: 2038 } hitcount: 2
{ lat: 241, common_pid: 2043 } hitcount: 28
{ lat: 242, common_pid: 2040 } hitcount: 109
{ lat: 242, common_pid: 2041 } hitcount: 506
{ lat: 242, common_pid: 2039 } hitcount: 155
{ lat: 242, common_pid: 2042 } hitcount: 21
{ lat: 242, common_pid: 2037 } hitcount: 52
{ lat: 242, common_pid: 2043 } hitcount: 21
{ lat: 242, common_pid: 2036 } hitcount: 16
{ lat: 242, common_pid: 2038 } hitcount: 156
{ lat: 243, common_pid: 2037 } hitcount: 46
{ lat: 243, common_pid: 2039 } hitcount: 40
{ lat: 243, common_pid: 2042 } hitcount: 119
{ lat: 243, common_pid: 2041 } hitcount: 611
{ lat: 243, common_pid: 2036 } hitcount: 69
{ lat: 243, common_pid: 2038 } hitcount: 784
{ lat: 243, common_pid: 2040 } hitcount: 323
{ lat: 243, common_pid: 2043 } hitcount: 14
{ lat: 244, common_pid: 2043 } hitcount: 35
{ lat: 244, common_pid: 2042 } hitcount: 305
{ lat: 244, common_pid: 2039 } hitcount: 8
{ lat: 244, common_pid: 2040 } hitcount: 4515
{ lat: 244, common_pid: 2038 } hitcount: 371
{ lat: 244, common_pid: 2037 } hitcount: 31
{ lat: 244, common_pid: 2036 } hitcount: 114
{ lat: 244, common_pid: 2041 } hitcount: 3396
{ lat: 245, common_pid: 2036 } hitcount: 700
{ lat: 245, common_pid: 2041 } hitcount: 2772
{ lat: 245, common_pid: 2037 } hitcount: 268
{ lat: 245, common_pid: 2039 } hitcount: 472
{ lat: 245, common_pid: 2038 } hitcount: 2758
{ lat: 245, common_pid: 2042 } hitcount: 3833
{ lat: 245, common_pid: 2040 } hitcount: 3105
{ lat: 245, common_pid: 2043 } hitcount: 645
{ lat: 246, common_pid: 2038 } hitcount: 3451
{ lat: 246, common_pid: 2041 } hitcount: 142
{ lat: 246, common_pid: 2037 } hitcount: 5101
{ lat: 246, common_pid: 2040 } hitcount: 68
{ lat: 246, common_pid: 2043 } hitcount: 5099
{ lat: 246, common_pid: 2039 } hitcount: 5608
{ lat: 246, common_pid: 2042 } hitcount: 3723
{ lat: 246, common_pid: 2036 } hitcount: 4738
{ lat: 247, common_pid: 2042 } hitcount: 312
{ lat: 247, common_pid: 2043 } hitcount: 2385
{ lat: 247, common_pid: 2041 } hitcount: 452
{ lat: 247, common_pid: 2038 } hitcount: 792
{ lat: 247, common_pid: 2040 } hitcount: 78
{ lat: 247, common_pid: 2036 } hitcount: 2375
{ lat: 247, common_pid: 2039 } hitcount: 1834
{ lat: 247, common_pid: 2037 } hitcount: 2655
{ lat: 248, common_pid: 2037 } hitcount: 36
{ lat: 248, common_pid: 2042 } hitcount: 11
{ lat: 248, common_pid: 2038 } hitcount: 122
{ lat: 248, common_pid: 2036 } hitcount: 135
{ lat: 248, common_pid: 2039 } hitcount: 26
{ lat: 248, common_pid: 2041 } hitcount: 503
{ lat: 248, common_pid: 2043 } hitcount: 66
{ lat: 248, common_pid: 2040 } hitcount: 46
{ lat: 249, common_pid: 2037 } hitcount: 29
{ lat: 249, common_pid: 2038 } hitcount: 1
{ lat: 249, common_pid: 2043 } hitcount: 29
{ lat: 249, common_pid: 2039 } hitcount: 8
{ lat: 249, common_pid: 2042 } hitcount: 56
{ lat: 249, common_pid: 2040 } hitcount: 27
{ lat: 249, common_pid: 2041 } hitcount: 11
{ lat: 249, common_pid: 2036 } hitcount: 27
{ lat: 250, common_pid: 2038 } hitcount: 1
{ lat: 250, common_pid: 2036 } hitcount: 30
{ lat: 250, common_pid: 2040 } hitcount: 19
{ lat: 250, common_pid: 2043 } hitcount: 22
{ lat: 250, common_pid: 2042 } hitcount: 20
{ lat: 250, common_pid: 2041 } hitcount: 1
{ lat: 250, common_pid: 2039 } hitcount: 6
{ lat: 250, common_pid: 2037 } hitcount: 48
{ lat: 251, common_pid: 2037 } hitcount: 43
{ lat: 251, common_pid: 2039 } hitcount: 1
{ lat: 251, common_pid: 2036 } hitcount: 12
{ lat: 251, common_pid: 2042 } hitcount: 2
{ lat: 251, common_pid: 2041 } hitcount: 1
{ lat: 251, common_pid: 2043 } hitcount: 15
{ lat: 251, common_pid: 2040 } hitcount: 3
{ lat: 252, common_pid: 2040 } hitcount: 1
{ lat: 252, common_pid: 2036 } hitcount: 12
{ lat: 252, common_pid: 2037 } hitcount: 21
{ lat: 252, common_pid: 2043 } hitcount: 14
{ lat: 253, common_pid: 2037 } hitcount: 21
{ lat: 253, common_pid: 2039 } hitcount: 2
{ lat: 253, common_pid: 2036 } hitcount: 9
{ lat: 253, common_pid: 2043 } hitcount: 6
{ lat: 253, common_pid: 2040 } hitcount: 1
{ lat: 254, common_pid: 2036 } hitcount: 8
{ lat: 254, common_pid: 2043 } hitcount: 3
{ lat: 254, common_pid: 2041 } hitcount: 1
{ lat: 254, common_pid: 2042 } hitcount: 1
{ lat: 254, common_pid: 2039 } hitcount: 1
{ lat: 254, common_pid: 2037 } hitcount: 12
{ lat: 255, common_pid: 2043 } hitcount: 1
{ lat: 255, common_pid: 2037 } hitcount: 2
{ lat: 255, common_pid: 2036 } hitcount: 2
{ lat: 255, common_pid: 2039 } hitcount: 8
{ lat: 256, common_pid: 2043 } hitcount: 1
{ lat: 256, common_pid: 2036 } hitcount: 4
{ lat: 256, common_pid: 2039 } hitcount: 6
{ lat: 257, common_pid: 2039 } hitcount: 5
{ lat: 257, common_pid: 2036 } hitcount: 4
{ lat: 258, common_pid: 2039 } hitcount: 5
{ lat: 258, common_pid: 2036 } hitcount: 2
{ lat: 259, common_pid: 2036 } hitcount: 7
{ lat: 259, common_pid: 2039 } hitcount: 7
{ lat: 260, common_pid: 2036 } hitcount: 8
{ lat: 260, common_pid: 2039 } hitcount: 6
{ lat: 261, common_pid: 2036 } hitcount: 5
{ lat: 261, common_pid: 2039 } hitcount: 7
{ lat: 262, common_pid: 2039 } hitcount: 5
{ lat: 262, common_pid: 2036 } hitcount: 5
{ lat: 263, common_pid: 2039 } hitcount: 7
{ lat: 263, common_pid: 2036 } hitcount: 7
{ lat: 264, common_pid: 2039 } hitcount: 9
{ lat: 264, common_pid: 2036 } hitcount: 9
{ lat: 265, common_pid: 2036 } hitcount: 5
{ lat: 265, common_pid: 2039 } hitcount: 1
{ lat: 266, common_pid: 2036 } hitcount: 1
{ lat: 266, common_pid: 2039 } hitcount: 3
{ lat: 267, common_pid: 2036 } hitcount: 1
{ lat: 267, common_pid: 2039 } hitcount: 3
{ lat: 268, common_pid: 2036 } hitcount: 1
{ lat: 268, common_pid: 2039 } hitcount: 6
{ lat: 269, common_pid: 2036 } hitcount: 1
{ lat: 269, common_pid: 2043 } hitcount: 1
{ lat: 269, common_pid: 2039 } hitcount: 2
{ lat: 270, common_pid: 2040 } hitcount: 1
{ lat: 270, common_pid: 2039 } hitcount: 6
{ lat: 271, common_pid: 2041 } hitcount: 1
{ lat: 271, common_pid: 2039 } hitcount: 5
{ lat: 272, common_pid: 2039 } hitcount: 10
{ lat: 273, common_pid: 2039 } hitcount: 8
{ lat: 274, common_pid: 2039 } hitcount: 2
{ lat: 275, common_pid: 2039 } hitcount: 1
{ lat: 276, common_pid: 2039 } hitcount: 2
{ lat: 276, common_pid: 2037 } hitcount: 1
{ lat: 276, common_pid: 2038 } hitcount: 1
{ lat: 277, common_pid: 2039 } hitcount: 1
{ lat: 277, common_pid: 2042 } hitcount: 1
{ lat: 278, common_pid: 2039 } hitcount: 1
{ lat: 279, common_pid: 2039 } hitcount: 4
{ lat: 279, common_pid: 2043 } hitcount: 1
{ lat: 280, common_pid: 2039 } hitcount: 3
{ lat: 283, common_pid: 2036 } hitcount: 2
{ lat: 284, common_pid: 2039 } hitcount: 1
{ lat: 284, common_pid: 2043 } hitcount: 1
{ lat: 288, common_pid: 2039 } hitcount: 1
{ lat: 289, common_pid: 2039 } hitcount: 1
{ lat: 300, common_pid: 2039 } hitcount: 1
{ lat: 384, common_pid: 2039 } hitcount: 1
Totals:
Hits: 67625
Entries: 278
Dropped: 0
Note, the writes bracket the sleep, so ideally they will all be about
250 microseconds apart. If you are wondering how several can be under
250 microseconds, that is because of the way cyclictest works: if one
iteration comes in late, the next one sets its timer to wake up less
than 250 microseconds later. That is, if an iteration came in 50
microseconds late, the next wakeup will be 200 microseconds after it.
But this could easily be done in userspace. To make this even more
interesting, we can mix kernel events into the histogram along with
the trace_marker::
# cd /sys/kernel/tracing
# echo 'latency u64 lat' > synthetic_events
# echo 'hist:keys=pid:ts0=common_timestamp.usecs' > events/sched/sched_waking/trigger
# echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).latency($lat) if buf == "end"' > events/ftrace/print/trigger
# echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger
The difference this time is that instead of using the trace_marker to start
the latency, the sched_waking event is used, matching the common_pid for the
trace_marker write with the pid that is being woken by sched_waking.
After running cyclictest again with the same parameters, we now have::
# cat events/synthetic/latency/hist
# event histogram
#
# trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
#
{ lat: 7, common_pid: 2302 } hitcount: 640
{ lat: 7, common_pid: 2299 } hitcount: 42
{ lat: 7, common_pid: 2303 } hitcount: 18
{ lat: 7, common_pid: 2305 } hitcount: 166
{ lat: 7, common_pid: 2306 } hitcount: 1
{ lat: 7, common_pid: 2301 } hitcount: 91
{ lat: 7, common_pid: 2300 } hitcount: 17
{ lat: 8, common_pid: 2303 } hitcount: 8296
{ lat: 8, common_pid: 2304 } hitcount: 6864
{ lat: 8, common_pid: 2305 } hitcount: 9464
{ lat: 8, common_pid: 2301 } hitcount: 9213
{ lat: 8, common_pid: 2306 } hitcount: 6246
{ lat: 8, common_pid: 2302 } hitcount: 8797
{ lat: 8, common_pid: 2299 } hitcount: 8771
{ lat: 8, common_pid: 2300 } hitcount: 8119
{ lat: 9, common_pid: 2305 } hitcount: 1519
{ lat: 9, common_pid: 2299 } hitcount: 2346
{ lat: 9, common_pid: 2303 } hitcount: 2841
{ lat: 9, common_pid: 2301 } hitcount: 1846
{ lat: 9, common_pid: 2304 } hitcount: 3861
{ lat: 9, common_pid: 2302 } hitcount: 1210
{ lat: 9, common_pid: 2300 } hitcount: 2762
{ lat: 9, common_pid: 2306 } hitcount: 4247
{ lat: 10, common_pid: 2299 } hitcount: 16
{ lat: 10, common_pid: 2306 } hitcount: 333
{ lat: 10, common_pid: 2303 } hitcount: 16
{ lat: 10, common_pid: 2304 } hitcount: 168
{ lat: 10, common_pid: 2302 } hitcount: 240
{ lat: 10, common_pid: 2301 } hitcount: 28
{ lat: 10, common_pid: 2300 } hitcount: 95
{ lat: 10, common_pid: 2305 } hitcount: 18
{ lat: 11, common_pid: 2303 } hitcount: 5
{ lat: 11, common_pid: 2305 } hitcount: 8
{ lat: 11, common_pid: 2306 } hitcount: 221
{ lat: 11, common_pid: 2302 } hitcount: 76
{ lat: 11, common_pid: 2304 } hitcount: 26
{ lat: 11, common_pid: 2300 } hitcount: 125
{ lat: 11, common_pid: 2299 } hitcount: 2
{ lat: 12, common_pid: 2305 } hitcount: 3
{ lat: 12, common_pid: 2300 } hitcount: 6
{ lat: 12, common_pid: 2306 } hitcount: 90
{ lat: 12, common_pid: 2302 } hitcount: 4
{ lat: 12, common_pid: 2303 } hitcount: 1
{ lat: 12, common_pid: 2304 } hitcount: 122
{ lat: 13, common_pid: 2300 } hitcount: 12
{ lat: 13, common_pid: 2301 } hitcount: 1
{ lat: 13, common_pid: 2306 } hitcount: 32
{ lat: 13, common_pid: 2302 } hitcount: 5
{ lat: 13, common_pid: 2305 } hitcount: 1
{ lat: 13, common_pid: 2303 } hitcount: 1
{ lat: 13, common_pid: 2304 } hitcount: 61
{ lat: 14, common_pid: 2303 } hitcount: 4
{ lat: 14, common_pid: 2306 } hitcount: 5
{ lat: 14, common_pid: 2305 } hitcount: 4
{ lat: 14, common_pid: 2304 } hitcount: 62
{ lat: 14, common_pid: 2302 } hitcount: 19
{ lat: 14, common_pid: 2300 } hitcount: 33
{ lat: 14, common_pid: 2299 } hitcount: 1
{ lat: 14, common_pid: 2301 } hitcount: 4
{ lat: 15, common_pid: 2305 } hitcount: 1
{ lat: 15, common_pid: 2302 } hitcount: 25
{ lat: 15, common_pid: 2300 } hitcount: 11
{ lat: 15, common_pid: 2299 } hitcount: 5
{ lat: 15, common_pid: 2301 } hitcount: 1
{ lat: 15, common_pid: 2304 } hitcount: 8
{ lat: 15, common_pid: 2303 } hitcount: 1
{ lat: 15, common_pid: 2306 } hitcount: 6
{ lat: 16, common_pid: 2302 } hitcount: 31
{ lat: 16, common_pid: 2306 } hitcount: 3
{ lat: 16, common_pid: 2300 } hitcount: 5
{ lat: 17, common_pid: 2302 } hitcount: 6
{ lat: 17, common_pid: 2303 } hitcount: 1
{ lat: 18, common_pid: 2304 } hitcount: 1
{ lat: 18, common_pid: 2302 } hitcount: 8
{ lat: 18, common_pid: 2299 } hitcount: 1
{ lat: 18, common_pid: 2301 } hitcount: 1
{ lat: 19, common_pid: 2303 } hitcount: 4
{ lat: 19, common_pid: 2304 } hitcount: 5
{ lat: 19, common_pid: 2302 } hitcount: 4
{ lat: 19, common_pid: 2299 } hitcount: 3
{ lat: 19, common_pid: 2306 } hitcount: 1
{ lat: 19, common_pid: 2300 } hitcount: 4
{ lat: 19, common_pid: 2305 } hitcount: 5
{ lat: 20, common_pid: 2299 } hitcount: 2
{ lat: 20, common_pid: 2302 } hitcount: 3
{ lat: 20, common_pid: 2305 } hitcount: 1
{ lat: 20, common_pid: 2300 } hitcount: 2
{ lat: 20, common_pid: 2301 } hitcount: 2
{ lat: 20, common_pid: 2303 } hitcount: 3
{ lat: 21, common_pid: 2305 } hitcount: 1
{ lat: 21, common_pid: 2299 } hitcount: 5
{ lat: 21, common_pid: 2303 } hitcount: 4
{ lat: 21, common_pid: 2302 } hitcount: 7
{ lat: 21, common_pid: 2300 } hitcount: 1
{ lat: 21, common_pid: 2301 } hitcount: 5
{ lat: 21, common_pid: 2304 } hitcount: 2
{ lat: 22, common_pid: 2302 } hitcount: 5
{ lat: 22, common_pid: 2303 } hitcount: 1
{ lat: 22, common_pid: 2306 } hitcount: 3
{ lat: 22, common_pid: 2301 } hitcount: 2
{ lat: 22, common_pid: 2300 } hitcount: 1
{ lat: 22, common_pid: 2299 } hitcount: 1
{ lat: 22, common_pid: 2305 } hitcount: 1
{ lat: 22, common_pid: 2304 } hitcount: 1
{ lat: 23, common_pid: 2299 } hitcount: 1
{ lat: 23, common_pid: 2306 } hitcount: 2
{ lat: 23, common_pid: 2302 } hitcount: 6
{ lat: 24, common_pid: 2302 } hitcount: 3
{ lat: 24, common_pid: 2300 } hitcount: 1
{ lat: 24, common_pid: 2306 } hitcount: 2
{ lat: 24, common_pid: 2305 } hitcount: 1
{ lat: 24, common_pid: 2299 } hitcount: 1
{ lat: 25, common_pid: 2300 } hitcount: 1
{ lat: 25, common_pid: 2302 } hitcount: 4
{ lat: 26, common_pid: 2302 } hitcount: 2
{ lat: 27, common_pid: 2305 } hitcount: 1
{ lat: 27, common_pid: 2300 } hitcount: 1
{ lat: 27, common_pid: 2302 } hitcount: 3
{ lat: 28, common_pid: 2306 } hitcount: 1
{ lat: 28, common_pid: 2302 } hitcount: 4
{ lat: 29, common_pid: 2302 } hitcount: 1
{ lat: 29, common_pid: 2300 } hitcount: 2
{ lat: 29, common_pid: 2306 } hitcount: 1
{ lat: 29, common_pid: 2304 } hitcount: 1
{ lat: 30, common_pid: 2302 } hitcount: 4
{ lat: 31, common_pid: 2302 } hitcount: 6
{ lat: 32, common_pid: 2302 } hitcount: 1
{ lat: 33, common_pid: 2299 } hitcount: 1
{ lat: 33, common_pid: 2302 } hitcount: 3
{ lat: 34, common_pid: 2302 } hitcount: 2
{ lat: 35, common_pid: 2302 } hitcount: 1
{ lat: 35, common_pid: 2304 } hitcount: 1
{ lat: 36, common_pid: 2302 } hitcount: 4
{ lat: 37, common_pid: 2302 } hitcount: 6
{ lat: 38, common_pid: 2302 } hitcount: 2
{ lat: 39, common_pid: 2302 } hitcount: 2
{ lat: 39, common_pid: 2304 } hitcount: 1
{ lat: 40, common_pid: 2304 } hitcount: 2
{ lat: 40, common_pid: 2302 } hitcount: 5
{ lat: 41, common_pid: 2304 } hitcount: 1
{ lat: 41, common_pid: 2302 } hitcount: 8
{ lat: 42, common_pid: 2302 } hitcount: 6
{ lat: 42, common_pid: 2304 } hitcount: 1
{ lat: 43, common_pid: 2302 } hitcount: 3
{ lat: 43, common_pid: 2304 } hitcount: 4
{ lat: 44, common_pid: 2302 } hitcount: 6
{ lat: 45, common_pid: 2302 } hitcount: 5
{ lat: 46, common_pid: 2302 } hitcount: 5
{ lat: 47, common_pid: 2302 } hitcount: 7
{ lat: 48, common_pid: 2301 } hitcount: 1
{ lat: 48, common_pid: 2302 } hitcount: 9
{ lat: 49, common_pid: 2302 } hitcount: 3
{ lat: 50, common_pid: 2302 } hitcount: 1
{ lat: 50, common_pid: 2301 } hitcount: 1
{ lat: 51, common_pid: 2302 } hitcount: 2
{ lat: 51, common_pid: 2301 } hitcount: 1
{ lat: 61, common_pid: 2302 } hitcount: 1
{ lat: 110, common_pid: 2302 } hitcount: 1
Totals:
Hits: 89565
Entries: 158
Dropped: 0
This doesn't give us any information about how late cyclictest may have
woken up, but it does show us a nice histogram of how long it took from
the time that cyclictest was woken to the time it made it into user space.