    generated using onmatch(event).trace(wakeup_latency,arg1,arg2).

    There is also an equivalent alternative form available for
    generating synthetic events.  In this form, the synthetic event
    name is used as if it were a function name.  For example, using
    the 'wakeup_latency' synthetic event name again, the
    wakeup_latency event would be generated by invoking it as if it
    were a function call, with the event field values passed in as
    arguments: onmatch(event).wakeup_latency(arg1,arg2).  The syntax
    for this form is:

      onmatch(matching.event).<synthetic_event_name>(param list)

    In either case, the 'param list' consists of one or more
    parameters which may be either variables or fields defined on
    either the 'matching.event' or the target event.  The variables or
    fields specified in the param list may be either fully-qualified
    or unqualified.  If a variable is specified as unqualified, it
    must be unique between the two events.  A field name used as a
    param can be unqualified if it refers to the target event, but
    must be fully qualified if it refers to the matching event.  A
    fully-qualified name is of the form 'system.event_name.$var_name'
    or 'system.event_name.field'.

    The 'matching.event' specification is simply the fully qualified
    event name of the event that matches the target event for the
    onmatch() functionality, in the form 'system.event_name'. Histogram
    keys of both events are compared to find if events match. In case
    multiple histogram keys are used, they all must match in the specified
    order.

    Finally, the number and type of variables/fields in the 'param
    list' must match the number and types of the fields in the
    synthetic event being generated.
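
    Under the hood, onmatch() behaves like a keyed lookup between the
    two events' histograms.  The following Python fragment is a minimal
    sketch of that flow (a simplified model for illustration only, not
    kernel code; the names are hypothetical):

```python
# Simplified model of onmatch(): variables saved by the matching event's
# hist trigger are kept per key; when the target event arrives with the
# same key, the synthetic event is generated from the param list.
# Hypothetical sketch only -- field names are illustrative.

matching_hist = {}  # key -> variables saved on the matching event

def record_matching_event(key, variables):
    matching_hist[key] = variables

def on_target_event(key, target_fields, param_list):
    """Generate the synthetic event's field tuple, or None if no match."""
    if key not in matching_hist:
        return None
    saved = matching_hist[key]
    # Each param resolves to a variable from the matching event or a
    # field on the target event (unqualified names must be unambiguous).
    return tuple(saved[p] if p in saved else target_fields[p]
                 for p in param_list)

record_matching_event(key=2103, variables={"ts0": 1000})
event = on_target_event(key=2103,
                        target_fields={"next_prio": 19},
                        param_list=("ts0", "next_prio"))
```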

    As an example the below defines a simple synthetic event and uses
    a variable defined on the sched_wakeup_new event as a parameter
    when invoking the synthetic event.  Here we define the synthetic
    event::

      # echo 'wakeup_new_test pid_t pid' >> \
             /sys/kernel/debug/tracing/synthetic_events
      # cat /sys/kernel/debug/tracing/synthetic_events
            wakeup_new_test pid_t pid

    The following hist trigger both defines the missing testpid
    variable and specifies an onmatch() action that generates a
    wakeup_new_test synthetic event whenever a sched_wakeup_new event
    occurs, which because of the 'if comm == "cyclictest"' filter only
    happens when the executable is cyclictest::

      # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
              wakeup_new_test($testpid) if comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger
    Or, equivalently, using the 'trace' keyword syntax::

      # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
              trace(wakeup_new_test,$testpid) if comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_wakeup_new/trigger

    Creating and displaying a histogram based on those events is now
    just a matter of using the fields and new synthetic event in the
    tracing/events/synthetic directory, as usual::

      # echo 'hist:keys=pid:sort=pid' >> \
             /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/trigger

    Running 'cyclictest' should cause wakeup_new events to generate
    wakeup_new_test synthetic events which should result in histogram
    output in the wakeup_new_test event's hist file::

      # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_new_test/hist

    A more typical usage would be to use two events to calculate a
    latency.  The following example uses a set of hist triggers to
    produce a 'wakeup_latency' histogram.
    First, we define a 'wakeup_latency' synthetic event::

      # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
              /sys/kernel/debug/tracing/synthetic_events

    Next, we specify that whenever we see a sched_waking event for a
    cyclictest thread, save the timestamp in a 'ts0' variable::

      # echo 'hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs \
              if comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_waking/trigger

    Then, when the corresponding thread is actually scheduled onto the
    CPU by a sched_switch event (saved_pid matches next_pid), calculate
    the latency and use that along with another variable and an event field
    to generate a wakeup_latency synthetic event::

      # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:\
              onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,\
              $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_switch/trigger

    We also need to create a histogram on the wakeup_latency synthetic
    event in order to aggregate the generated synthetic event data::

      # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
              /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/trigger

    Finally, once we've run cyclictest to actually generate some
    events, we can see the output by looking at the wakeup_latency
    synthetic event's hist file::

      # cat /sys/kernel/debug/tracing/events/synthetic/wakeup_latency/hist
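
    The flow implemented by the two triggers above can be sketched in
    Python (a simplified model for illustration; the function and
    variable names are hypothetical, not kernel code):

```python
# Simplified model of the wakeup_latency pipeline: the sched_waking
# trigger saves a per-pid timestamp (ts0), and the sched_switch trigger
# computes the difference and generates the synthetic event.

ts0 = {}             # pid -> timestamp saved by the sched_waking trigger
wakeup_latency = []  # records generated by the synthetic event

def sched_waking(pid, timestamp_usecs):
    # hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs
    ts0[pid] = timestamp_usecs

def sched_switch(next_pid, next_prio, timestamp_usecs):
    # onmatch(sched.sched_waking) fires only when the keys match
    if next_pid in ts0:
        lat = timestamp_usecs - ts0.pop(next_pid)
        # wakeup_latency($wakeup_lat, $saved_pid, next_prio)
        wakeup_latency.append((lat, next_pid, next_prio))

sched_waking(pid=2103, timestamp_usecs=1000)
sched_switch(next_pid=2103, next_prio=19, timestamp_usecs=1057)
print(wakeup_latency)  # -> [(57, 2103, 19)]
```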

  - onmax(var).save(field,...)

    The 'onmax(var).save(field,...)' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    exceeds the current maximum contained in that variable.

    The end result is that the trace event fields specified as the
    onmax.save() params will be saved if 'var' exceeds the current
    maximum for that hist trigger entry.  This allows context from the
    event that exhibited the new maximum to be saved for later
    reference.  When the histogram is displayed, additional fields
    displaying the saved values will be printed.
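
    The per-bucket bookkeeping can be sketched as follows (a simplified
    Python model for illustration; names are hypothetical, not kernel
    code):

```python
# Simplified model of onmax(var).save(...): each histogram bucket keeps
# a running maximum of the tracked variable, and the save() fields are
# captured only when a new maximum is observed.

buckets = {}  # key -> {"max": tracked maximum, "saved": save() fields}

def onmax_save(key, var, fields):
    entry = buckets.setdefault(key, {"max": None, "saved": None})
    if entry["max"] is None or var > entry["max"]:
        entry["max"] = var
        entry["saved"] = dict(fields)  # context from the max-setting event

onmax_save((2255,), 12, {"prev_comm": "swapper/1"})
onmax_save((2255,), 27, {"prev_comm": "swapper/0"})
onmax_save((2255,), 5,  {"prev_comm": "swapper/2"})  # below max: ignored
print(buckets[(2255,)]["max"])  # -> 27
```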

    As an example the below defines a couple of hist triggers, one for
    sched_waking and another for sched_switch, keyed on pid.  Whenever
    a sched_waking occurs, the timestamp is saved in the entry
    corresponding to the current pid, and when the scheduler switches
    back to that pid, the timestamp difference is calculated.  If the
    resulting latency, stored in wakeup_lat, exceeds the current
    maximum latency, the values specified in the save() fields are
    saved::

      # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
              if comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
      # echo 'hist:keys=next_pid:\
              wakeup_lat=common_timestamp.usecs-$ts0:\
              onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
              if next_comm=="cyclictest"' >> \
              /sys/kernel/debug/tracing/events/sched/sched_switch/trigger

    When the histogram is displayed, the max value and the saved
    values corresponding to the max are displayed following the rest
    of the fields::

      # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
        { next_pid:       2255 } hitcount:        239
          common_timestamp-ts0:          0
          max:         27  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/1

        { next_pid:       2256 } hitcount:       2355
          common_timestamp-ts0:          0
          max:         49  next_comm: cyclictest
          prev_pid:          0  prev_prio:        120  prev_comm: swapper/0

  - onmax(var).snapshot()

    The 'onmax(var).snapshot()' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    exceeds the current maximum contained in that variable.

    The end result is that a global snapshot of the trace buffer will
    be saved in the tracing/snapshot file if 'var' exceeds the current
    maximum for any hist trigger entry.

    Note that in this case the maximum is a global maximum for the
    current trace instance, which is the maximum across all buckets of
    the histogram.  The key of the specific trace event that caused
    the global maximum and the global maximum itself are displayed,
    along with a message stating that a snapshot has been taken and
    where to find it.  The user can use the key information displayed
    to locate the corresponding bucket in the histogram for even more
    detail.
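
    The global-maximum behavior can be sketched like this (a simplified
    Python model for illustration; the snapshot is represented by a
    flag, and all names are hypothetical, not kernel code):

```python
# Simplified model of onmax(var).snapshot(): the maximum is global to
# the trace instance, and exceeding it records the triggering key and
# takes a snapshot (modeled here as a boolean flag).

state = {"max": None, "key": None, "snapshot_taken": False}

def onmax_snapshot(key, var):
    if state["max"] is None or var > state["max"]:
        state["max"] = var
        state["key"] = key  # key shown in "triggered by event with key"
        state["snapshot_taken"] = True

onmax_snapshot((2101,), 52)
onmax_snapshot((2103,), 572)
onmax_snapshot((2102,), 74)  # below the global maximum: no new snapshot
print(state["max"], state["key"])  # -> 572 (2103,)
```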

    As an example the below defines a couple of hist triggers, one for
    sched_waking and another for sched_switch, keyed on pid.  Whenever
    a sched_waking event occurs, the timestamp is saved in the entry
    corresponding to the current pid, and when the scheduler switches
    back to that pid, the timestamp difference is calculated.  If the
    resulting latency, stored in wakeup_lat, exceeds the current
    maximum latency, a snapshot is taken.  As part of the setup, all
    the scheduler events are also enabled, which are the events that
    will show up in the snapshot when it is taken at some point::

    # echo 1 > /sys/kernel/debug/tracing/events/sched/enable

    # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
            if comm=="cyclictest"' >> \
            /sys/kernel/debug/tracing/events/sched/sched_waking/trigger

    # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0: \
            onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio, \
            prev_comm):onmax($wakeup_lat).snapshot() \
            if next_comm=="cyclictest"' >> \
            /sys/kernel/debug/tracing/events/sched/sched_switch/trigger

    When the histogram is displayed, for each bucket the max value
    and the saved values corresponding to the max are displayed
    following the rest of the fields.

    If a snapshot was taken, there is also a message indicating that,
    along with the value and event that triggered the global maximum::

    # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
      { next_pid:       2101 } hitcount:        200
	max:         52  next_prio:        120  next_comm: cyclictest \
        prev_pid:          0  prev_prio:        120  prev_comm: swapper/6

      { next_pid:       2103 } hitcount:       1326
	max:        572  next_prio:         19  next_comm: cyclictest \
        prev_pid:          0  prev_prio:        120  prev_comm: swapper/1

      { next_pid:       2102 } hitcount:       1982
	max:         74  next_prio:         19  next_comm: cyclictest \
        prev_pid:          0  prev_prio:        120  prev_comm: swapper/5

    Snapshot taken (see tracing/snapshot).  Details::

        triggering value { onmax($wakeup_lat) }:        572
        triggered by event with key: { next_pid:       2103 }

    Totals:
        Hits: 3508
        Entries: 3
        Dropped: 0

    In the above case, the event that triggered the global maximum has
    the key with next_pid == 2103.  If you look at the bucket that has
    2103 as the key, you'll find the additional values save()'d along
    with the local maximum for that bucket, which should be the same
    as the global maximum (since that was the same value that
    triggered the global snapshot).

    And finally, looking at the snapshot data should show at or near
    the end the event that triggered the snapshot (in this case you
    can verify the timestamps between the sched_waking and
    sched_switch events, which should match the time displayed in the
    global maximum)::

     # cat /sys/kernel/debug/tracing/snapshot

         <...>-2103  [005] d..3   309.873125: sched_switch: prev_comm=cyclictest prev_pid=2103 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
         <idle>-0     [005] d.h3   309.873611: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
         <idle>-0     [005] dNh4   309.873613: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
         <idle>-0     [005] d..3   309.873616: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
         <...>-2102  [005] d..3   309.873625: sched_switch: prev_comm=cyclictest prev_pid=2102 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
         <idle>-0     [005] d.h3   309.874624: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
         <idle>-0     [005] dNh4   309.874626: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
         <idle>-0     [005] dNh3   309.874628: sched_waking: comm=cyclictest pid=2103 prio=19 target_cpu=005
         <idle>-0     [005] dNh4   309.874630: sched_wakeup: comm=cyclictest pid=2103 prio=19 target_cpu=005
         <idle>-0     [005] d..3   309.874633: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
         <idle>-0     [004] d.h3   309.874757: sched_waking: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
         <idle>-0     [004] dNh4   309.874762: sched_wakeup: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
         <idle>-0     [004] d..3   309.874766: sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=gnome-terminal- next_pid=1699 next_prio=120
     gnome-terminal--1699  [004] d.h2   309.874941: sched_stat_runtime: comm=gnome-terminal- pid=1699 runtime=180706 [ns] vruntime=1126870572 [ns]
         <idle>-0     [003] d.s4   309.874956: sched_waking: comm=rcu_sched pid=9 prio=120 target_cpu=007
         <idle>-0     [003] d.s5   309.874960: sched_wake_idle_without_ipi: cpu=7
         <idle>-0     [003] d.s5   309.874961: sched_wakeup: comm=rcu_sched pid=9 prio=120 target_cpu=007
         <idle>-0     [007] d..3   309.874963: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=rcu_sched next_pid=9 next_prio=120
      rcu_sched-9     [007] d..3   309.874973: sched_stat_runtime: comm=rcu_sched pid=9 runtime=13646 [ns] vruntime=22531430286 [ns]
      rcu_sched-9     [007] d..3   309.874978: sched_switch: prev_comm=rcu_sched prev_pid=9 prev_prio=120 prev_state=R+ ==> next_comm=swapper/7 next_pid=0 next_prio=120
          <...>-2102  [005] d..4   309.874994: sched_migrate_task: comm=cyclictest pid=2103 prio=19 orig_cpu=5 dest_cpu=1
          <...>-2102  [005] d..4   309.875185: sched_wake_idle_without_ipi: cpu=1
         <idle>-0     [001] d..3   309.875200: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2103 next_prio=19

  - onchange(var).save(field,...)

    The 'onchange(var).save(field,...)' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    changes.

    The end result is that the trace event fields specified as the
    onchange.save() params will be saved if 'var' changes for that
    hist trigger entry.  This allows context from the event that
    changed the value to be saved for later reference.  When the
    histogram is displayed, additional fields displaying the saved
    values will be printed.
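
    The change-detection can be sketched as follows (a simplified
    Python model for illustration; names are hypothetical, not kernel
    code):

```python
# Simplified model of onchange(var).save(...): each bucket tracks the
# variable's last value, and the save() fields are captured whenever
# that value changes.

buckets = {}  # key -> {"value": tracked value, "saved": save() fields}

def onchange_save(key, var, fields):
    entry = buckets.setdefault(key, {"value": None, "saved": None})
    if var != entry["value"]:
        entry["value"] = var
        entry["saved"] = dict(fields)  # context from the changing event

onchange_save((80,), 10, {"snd_wnd": 28960})
onchange_save((80,), 10, {"snd_wnd": 11111})  # unchanged: not saved
onchange_save((80,), 20, {"snd_wnd": 22222})  # changed: saved
print(buckets[(80,)]["value"])  # -> 20
```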

  - onchange(var).snapshot()

    The 'onchange(var).snapshot()' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    changes.

    The end result is that a global snapshot of the trace buffer will
    be saved in the tracing/snapshot file if 'var' changes for any
    hist trigger entry.

    Note that in this case the changed value is a global variable
    associated with current trace instance.  The key of the specific
    trace event that caused the value to change and the global value
    itself are displayed, along with a message stating that a snapshot
    has been taken and where to find it.  The user can use the key
    information displayed to locate the corresponding bucket in the
    histogram for even more detail.

    As an example the below defines a hist trigger on the tcp_probe
    event, keyed on dport.  Whenever a tcp_probe event occurs, the
    cwnd field is checked against the current value stored in the
    $cwnd variable.  If the value has changed, a snapshot is taken.
    As part of the setup, all the scheduler and tcp events are also
    enabled, which are the events that will show up in the snapshot
    when it is taken at some point::

    # echo 1 > /sys/kernel/debug/tracing/events/sched/enable
    # echo 1 > /sys/kernel/debug/tracing/events/tcp/enable

    # echo 'hist:keys=dport:cwnd=snd_cwnd: \
            onchange($cwnd).save(snd_wnd,srtt,rcv_wnd): \
            onchange($cwnd).snapshot()' >> \
            /sys/kernel/debug/tracing/events/tcp/tcp_probe/trigger

    When the histogram is displayed, for each bucket the tracked value
    and the saved values corresponding to that value are displayed
    following the rest of the fields.

    If a snapshot was taken, there is also a message indicating that,
    along with the value and event that triggered the snapshot::

      # cat /sys/kernel/debug/tracing/events/tcp/tcp_probe/hist

      { dport:       1521 } hitcount:          8
	changed:         10  snd_wnd:      35456  srtt:     154262  rcv_wnd:      42112

      { dport:         80 } hitcount:         23
	changed:         10  snd_wnd:      28960  srtt:      19604  rcv_wnd:      29312

      { dport:       9001 } hitcount:        172
	changed:         10  snd_wnd:      48384  srtt:     260444  rcv_wnd:      55168

      { dport:        443 } hitcount:        211
	changed:         10  snd_wnd:      26960  srtt:      17379  rcv_wnd:      28800

    Snapshot taken (see tracing/snapshot).  Details::

        triggering value { onchange($cwnd) }:         10
        triggered by event with key: { dport:         80 }

      Totals:
          Hits: 414
          Entries: 4
          Dropped: 0

    In the above case, the event that triggered the snapshot has the
    key with dport == 80.  If you look at the bucket that has 80 as
    the key, you'll find the additional values save()'d along with the
    changed value for that bucket, which should be the same as the
    global changed value (since that was the same value that triggered
    the global snapshot).

    And finally, looking at the snapshot data should show at or near
    the end the event that triggered the snapshot::

      # cat /sys/kernel/debug/tracing/snapshot

         gnome-shell-1261  [006] dN.3    49.823113: sched_stat_runtime: comm=gnome-shell pid=1261 runtime=49347 [ns] vruntime=1835730389 [ns]
       kworker/u16:4-773   [003] d..3    49.823114: sched_switch: prev_comm=kworker/u16:4 prev_pid=773 prev_prio=120 prev_state=R+ ==> next_comm=kworker/3:2 next_pid=135 next_prio=120
         gnome-shell-1261  [006] d..3    49.823114: sched_switch: prev_comm=gnome-shell prev_pid=1261 prev_prio=120 prev_state=R+ ==> next_comm=kworker/6:2 next_pid=387 next_prio=120
         kworker/3:2-135   [003] d..3    49.823118: sched_stat_runtime: comm=kworker/3:2 pid=135 runtime=5339 [ns] vruntime=17815800388 [ns]
         kworker/6:2-387   [006] d..3    49.823120: sched_stat_runtime: comm=kworker/6:2 pid=387 runtime=9594 [ns] vruntime=14589605367 [ns]
         kworker/6:2-387   [006] d..3    49.823122: sched_switch: prev_comm=kworker/6:2 prev_pid=387 prev_prio=120 prev_state=R+ ==> next_comm=gnome-shell next_pid=1261 next_prio=120
         kworker/3:2-135   [003] d..3    49.823123: sched_switch: prev_comm=kworker/3:2 prev_pid=135 prev_prio=120 prev_state=T ==> next_comm=swapper/3 next_pid=0 next_prio=120
              <idle>-0     [004] ..s7    49.823798: tcp_probe: src=10.0.0.10:54326 dest=23.215.104.193:80 mark=0x0 length=32 snd_nxt=0xe3ae2ff5 snd_una=0xe3ae2ecd snd_cwnd=10 ssthresh=2147483647 snd_wnd=28960 srtt=19604 rcv_wnd=29312
3. User space creating a trigger
--------------------------------

Writing into /sys/kernel/tracing/trace_marker writes into the ftrace
ring buffer. This can also act like an event, by writing into the trigger
file located in /sys/kernel/tracing/events/ftrace/print/.

Modifying cyclictest to write into the trace_marker file before it sleeps
and after it wakes up, something like this::

  static void traceputs(char *str)
  {
	/* tracemark_fd is the trace_marker file descriptor */
	if (tracemark_fd < 0)
		return;
	/* write the tracemark message */
	write(tracemark_fd, str, strlen(str));
  }
And later add something like::

	traceputs("start");
	clock_nanosleep(...);
	traceputs("end");

We can make a histogram from this::

 # cd /sys/kernel/tracing
 # echo 'latency u64 lat' > synthetic_events
 # echo 'hist:keys=common_pid:ts0=common_timestamp.usecs if buf == "start"' > events/ftrace/print/trigger
 # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(ftrace.print).latency($lat) if buf == "end"' >> events/ftrace/print/trigger
 # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger

The above created a synthetic event called "latency" and two histograms
against the trace_marker, one gets triggered when "start" is written into the
trace_marker file and the other when "end" is written. If the pids match, then
it will call the "latency" synthetic event with the calculated latency as its
parameter. Finally, a histogram is added to the latency synthetic event to
record the calculated latency along with the pid.
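
The pairing logic performed by the two ftrace/print triggers can be
sketched in Python (a simplified model for illustration; names are
hypothetical, not kernel code):

```python
# Simplified model of the trace_marker latency measurement: "start"
# saves a per-pid timestamp, "end" computes the latency and feeds the
# "latency" synthetic event's histogram, keyed on lat and pid.

ts0 = {}           # common_pid -> timestamp saved on "start"
latency_hist = {}  # (lat, common_pid) -> hitcount

def trace_marker_write(pid, buf, timestamp_usecs):
    if buf == "start":
        ts0[pid] = timestamp_usecs
    elif buf == "end" and pid in ts0:
        lat = timestamp_usecs - ts0.pop(pid)
        key = (lat, pid)
        latency_hist[key] = latency_hist.get(key, 0) + 1

trace_marker_write(2039, "start", 5000)
trace_marker_write(2039, "end", 5244)
print(latency_hist)  # -> {(244, 2039): 1}
```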

Now running cyclictest with::

 # ./cyclictest -p80 -d0 -i250 -n -a -t --tracemark -b 1000

 -p80  : run threads at priority 80
 -d0   : have all threads run at the same interval
 -i250 : start the interval at 250 microseconds (all threads will do this)
 -n    : sleep with nanosleep
 -a    : affine all threads to a separate CPU
 -t    : one thread per available CPU
 --tracemark : enable trace mark writing
 -b 1000 : stop if any latency is greater than 1000 microseconds

Note, the -b 1000 is used just to make --tracemark available.

Then we can see the histogram created by this with::

 # cat events/synthetic/latency/hist
 # event histogram
 #
 # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
 #

 { lat:        107, common_pid:       2039 } hitcount:          1
 { lat:        122, common_pid:       2041 } hitcount:          1
 { lat:        166, common_pid:       2039 } hitcount:          1
 { lat:        174, common_pid:       2039 } hitcount:          1
 { lat:        194, common_pid:       2041 } hitcount:          1
 { lat:        196, common_pid:       2036 } hitcount:          1
 { lat:        197, common_pid:       2038 } hitcount:          1
 { lat:        198, common_pid:       2039 } hitcount:          1
 { lat:        199, common_pid:       2039 } hitcount:          1
 { lat:        200, common_pid:       2041 } hitcount:          1
 { lat:        201, common_pid:       2039 } hitcount:          2
 { lat:        202, common_pid:       2038 } hitcount:          1
 { lat:        202, common_pid:       2043 } hitcount:          1
 { lat:        203, common_pid:       2039 } hitcount:          1
 { lat:        203, common_pid:       2036 } hitcount:          1
 { lat:        203, common_pid:       2041 } hitcount:          1
 { lat:        206, common_pid:       2038 } hitcount:          2
 { lat:        207, common_pid:       2039 } hitcount:          1
 { lat:        207, common_pid:       2036 } hitcount:          1
 { lat:        208, common_pid:       2040 } hitcount:          1
 { lat:        209, common_pid:       2043 } hitcount:          1
 { lat:        210, common_pid:       2039 } hitcount:          1
 { lat:        211, common_pid:       2039 } hitcount:          4
 { lat:        212, common_pid:       2043 } hitcount:          1
 { lat:        212, common_pid:       2039 } hitcount:          2
 { lat:        213, common_pid:       2039 } hitcount:          1
 { lat:        214, common_pid:       2038 } hitcount:          1
 { lat:        214, common_pid:       2039 } hitcount:          2
 { lat:        214, common_pid:       2042 } hitcount:          1
 { lat:        215, common_pid:       2039 } hitcount:          1
 { lat:        217, common_pid:       2036 } hitcount:          1
 { lat:        217, common_pid:       2040 } hitcount:          1
 { lat:        217, common_pid:       2039 } hitcount:          1
 { lat:        218, common_pid:       2039 } hitcount:          6
 { lat:        219, common_pid:       2039 } hitcount:          9
 { lat:        220, common_pid:       2039 } hitcount:         11
 { lat:        221, common_pid:       2039 } hitcount:          5
 { lat:        221, common_pid:       2042 } hitcount:          1
 { lat:        222, common_pid:       2039 } hitcount:          7
 { lat:        223, common_pid:       2036 } hitcount:          1
 { lat:        223, common_pid:       2039 } hitcount:          3
 { lat:        224, common_pid:       2039 } hitcount:          4
 { lat:        224, common_pid:       2037 } hitcount:          1
 { lat:        224, common_pid:       2036 } hitcount:          2
 { lat:        225, common_pid:       2039 } hitcount:          5
 { lat:        225, common_pid:       2042 } hitcount:          1
 { lat:        226, common_pid:       2039 } hitcount:          7
 { lat:        226, common_pid:       2036 } hitcount:          4
 { lat:        227, common_pid:       2039 } hitcount:          6
 { lat:        227, common_pid:       2036 } hitcount:         12
 { lat:        227, common_pid:       2043 } hitcount:          1
 { lat:        228, common_pid:       2039 } hitcount:          7
 { lat:        228, common_pid:       2036 } hitcount:         14
 { lat:        229, common_pid:       2039 } hitcount:          9
 { lat:        229, common_pid:       2036 } hitcount:          8
 { lat:        229, common_pid:       2038 } hitcount:          1
 { lat:        230, common_pid:       2039 } hitcount:         11
 { lat:        230, common_pid:       2036 } hitcount:          6
 { lat:        230, common_pid:       2043 } hitcount:          1
 { lat:        230, common_pid:       2042 } hitcount:          2
 { lat:        231, common_pid:       2041 } hitcount:          1
 { lat:        231, common_pid:       2036 } hitcount:          6
 { lat:        231, common_pid:       2043 } hitcount:          1
 { lat:        231, common_pid:       2039 } hitcount:          8
 { lat:        232, common_pid:       2037 } hitcount:          1
 { lat:        232, common_pid:       2039 } hitcount:          6
 { lat:        232, common_pid:       2040 } hitcount:          2
 { lat:        232, common_pid:       2036 } hitcount:          5
 { lat:        232, common_pid:       2043 } hitcount:          1
 { lat:        233, common_pid:       2036 } hitcount:          5
 { lat:        233, common_pid:       2039 } hitcount:         11
 { lat:        234, common_pid:       2039 } hitcount:          4
 { lat:        234, common_pid:       2038 } hitcount:          2
 { lat:        234, common_pid:       2043 } hitcount:          2
 { lat:        234, common_pid:       2036 } hitcount:         11
 { lat:        234, common_pid:       2040 } hitcount:          1
 { lat:        235, common_pid:       2037 } hitcount:          2
 { lat:        235, common_pid:       2036 } hitcount:          8
 { lat:        235, common_pid:       2043 } hitcount:          2
 { lat:        235, common_pid:       2039 } hitcount:          5
 { lat:        235, common_pid:       2042 } hitcount:          2
 { lat:        235, common_pid:       2040 } hitcount:          4
 { lat:        235, common_pid:       2041 } hitcount:          1
 { lat:        236, common_pid:       2036 } hitcount:          7
 { lat:        236, common_pid:       2037 } hitcount:          1
 { lat:        236, common_pid:       2041 } hitcount:          5
 { lat:        236, common_pid:       2039 } hitcount:          3
 { lat:        236, common_pid:       2043 } hitcount:          9
 { lat:        236, common_pid:       2040 } hitcount:          7
 { lat:        237, common_pid:       2037 } hitcount:          1
 { lat:        237, common_pid:       2040 } hitcount:          1
 { lat:        237, common_pid:       2036 } hitcount:          9
 { lat:        237, common_pid:       2039 } hitcount:          3
 { lat:        237, common_pid:       2043 } hitcount:          8
 { lat:        237, common_pid:       2042 } hitcount:          2
 { lat:        237, common_pid:       2041 } hitcount:          2
 { lat:        238, common_pid:       2043 } hitcount:         10
 { lat:        238, common_pid:       2040 } hitcount:          1
 { lat:        238, common_pid:       2037 } hitcount:          9
 { lat:        238, common_pid:       2038 } hitcount:          1
 { lat:        238, common_pid:       2039 } hitcount:          1
 { lat:        238, common_pid:       2042 } hitcount:          3
 { lat:        238, common_pid:       2036 } hitcount:          7
 { lat:        239, common_pid:       2041 } hitcount:          1
 { lat:        239, common_pid:       2043 } hitcount:         11
 { lat:        239, common_pid:       2037 } hitcount:         11
 { lat:        239, common_pid:       2038 } hitcount:          6
 { lat:        239, common_pid:       2036 } hitcount:          7
 { lat:        239, common_pid:       2040 } hitcount:          1
 { lat:        239, common_pid:       2042 } hitcount:          9
 { lat:        240, common_pid:       2037 } hitcount:         29
 { lat:        240, common_pid:       2043 } hitcount:         15
 { lat:        240, common_pid:       2040 } hitcount:         44
 { lat:        240, common_pid:       2039 } hitcount:          1
 { lat:        240, common_pid:       2041 } hitcount:          2
 { lat:        240, common_pid:       2038 } hitcount:          1
 { lat:        240, common_pid:       2036 } hitcount:         10
 { lat:        240, common_pid:       2042 } hitcount:         13
 { lat:        241, common_pid:       2036 } hitcount:         21
 { lat:        241, common_pid:       2041 } hitcount:         36
 { lat:        241, common_pid:       2037 } hitcount:         34
 { lat:        241, common_pid:       2042 } hitcount:         14
 { lat:        241, common_pid:       2040 } hitcount:         94
 { lat:        241, common_pid:       2039 } hitcount:         12
 { lat:        241, common_pid:       2038 } hitcount:          2
 { lat:        241, common_pid:       2043 } hitcount:         28
 { lat:        242, common_pid:       2040 } hitcount:        109
 { lat:        242, common_pid:       2041 } hitcount:        506
 { lat:        242, common_pid:       2039 } hitcount:        155
 { lat:        242, common_pid:       2042 } hitcount:         21
 { lat:        242, common_pid:       2037 } hitcount:         52
 { lat:        242, common_pid:       2043 } hitcount:         21
 { lat:        242, common_pid:       2036 } hitcount:         16
 { lat:        242, common_pid:       2038 } hitcount:        156
 { lat:        243, common_pid:       2037 } hitcount:         46
 { lat:        243, common_pid:       2039 } hitcount:         40
 { lat:        243, common_pid:       2042 } hitcount:        119
 { lat:        243, common_pid:       2041 } hitcount:        611
 { lat:        243, common_pid:       2036 } hitcount:         69
 { lat:        243, common_pid:       2038 } hitcount:        784
 { lat:        243, common_pid:       2040 } hitcount:        323
 { lat:        243, common_pid:       2043 } hitcount:         14
 { lat:        244, common_pid:       2043 } hitcount:         35
 { lat:        244, common_pid:       2042 } hitcount:        305
 { lat:        244, common_pid:       2039 } hitcount:          8
 { lat:        244, common_pid:       2040 } hitcount:       4515
 { lat:        244, common_pid:       2038 } hitcount:        371
 { lat:        244, common_pid:       2037 } hitcount:         31
 { lat:        244, common_pid:       2036 } hitcount:        114
 { lat:        244, common_pid:       2041 } hitcount:       3396
 { lat:        245, common_pid:       2036 } hitcount:        700
 { lat:        245, common_pid:       2041 } hitcount:       2772
 { lat:        245, common_pid:       2037 } hitcount:        268
 { lat:        245, common_pid:       2039 } hitcount:        472
 { lat:        245, common_pid:       2038 } hitcount:       2758
 { lat:        245, common_pid:       2042 } hitcount:       3833
 { lat:        245, common_pid:       2040 } hitcount:       3105
 { lat:        245, common_pid:       2043 } hitcount:        645
 { lat:        246, common_pid:       2038 } hitcount:       3451
 { lat:        246, common_pid:       2041 } hitcount:        142
 { lat:        246, common_pid:       2037 } hitcount:       5101
 { lat:        246, common_pid:       2040 } hitcount:         68
 { lat:        246, common_pid:       2043 } hitcount:       5099
 { lat:        246, common_pid:       2039 } hitcount:       5608
 { lat:        246, common_pid:       2042 } hitcount:       3723
 { lat:        246, common_pid:       2036 } hitcount:       4738
 { lat:        247, common_pid:       2042 } hitcount:        312
 { lat:        247, common_pid:       2043 } hitcount:       2385
 { lat:        247, common_pid:       2041 } hitcount:        452
 { lat:        247, common_pid:       2038 } hitcount:        792
 { lat:        247, common_pid:       2040 } hitcount:         78
 { lat:        247, common_pid:       2036 } hitcount:       2375
 { lat:        247, common_pid:       2039 } hitcount:       1834
 { lat:        247, common_pid:       2037 } hitcount:       2655
 { lat:        248, common_pid:       2037 } hitcount:         36
 { lat:        248, common_pid:       2042 } hitcount:         11
 { lat:        248, common_pid:       2038 } hitcount:        122
 { lat:        248, common_pid:       2036 } hitcount:        135
 { lat:        248, common_pid:       2039 } hitcount:         26
 { lat:        248, common_pid:       2041 } hitcount:        503
 { lat:        248, common_pid:       2043 } hitcount:         66
 { lat:        248, common_pid:       2040 } hitcount:         46
 { lat:        249, common_pid:       2037 } hitcount:         29
 { lat:        249, common_pid:       2038 } hitcount:          1
 { lat:        249, common_pid:       2043 } hitcount:         29
 { lat:        249, common_pid:       2039 } hitcount:          8
 { lat:        249, common_pid:       2042 } hitcount:         56
 { lat:        249, common_pid:       2040 } hitcount:         27
 { lat:        249, common_pid:       2041 } hitcount:         11
 { lat:        249, common_pid:       2036 } hitcount:         27
 { lat:        250, common_pid:       2038 } hitcount:          1
 { lat:        250, common_pid:       2036 } hitcount:         30
 { lat:        250, common_pid:       2040 } hitcount:         19
 { lat:        250, common_pid:       2043 } hitcount:         22
 { lat:        250, common_pid:       2042 } hitcount:         20
 { lat:        250, common_pid:       2041 } hitcount:          1
 { lat:        250, common_pid:       2039 } hitcount:          6
 { lat:        250, common_pid:       2037 } hitcount:         48
 { lat:        251, common_pid:       2037 } hitcount:         43
 { lat:        251, common_pid:       2039 } hitcount:          1
 { lat:        251, common_pid:       2036 } hitcount:         12
 { lat:        251, common_pid:       2042 } hitcount:          2
 { lat:        251, common_pid:       2041 } hitcount:          1
 { lat:        251, common_pid:       2043 } hitcount:         15
 { lat:        251, common_pid:       2040 } hitcount:          3
 { lat:        252, common_pid:       2040 } hitcount:          1
 { lat:        252, common_pid:       2036 } hitcount:         12
 { lat:        252, common_pid:       2037 } hitcount:         21
 { lat:        252, common_pid:       2043 } hitcount:         14
 { lat:        253, common_pid:       2037 } hitcount:         21
 { lat:        253, common_pid:       2039 } hitcount:          2
 { lat:        253, common_pid:       2036 } hitcount:          9
 { lat:        253, common_pid:       2043 } hitcount:          6
 { lat:        253, common_pid:       2040 } hitcount:          1
 { lat:        254, common_pid:       2036 } hitcount:          8
 { lat:        254, common_pid:       2043 } hitcount:          3
 { lat:        254, common_pid:       2041 } hitcount:          1
 { lat:        254, common_pid:       2042 } hitcount:          1
 { lat:        254, common_pid:       2039 } hitcount:          1
 { lat:        254, common_pid:       2037 } hitcount:         12
 { lat:        255, common_pid:       2043 } hitcount:          1
 { lat:        255, common_pid:       2037 } hitcount:          2
 { lat:        255, common_pid:       2036 } hitcount:          2
 { lat:        255, common_pid:       2039 } hitcount:          8
 { lat:        256, common_pid:       2043 } hitcount:          1
 { lat:        256, common_pid:       2036 } hitcount:          4
 { lat:        256, common_pid:       2039 } hitcount:          6
 { lat:        257, common_pid:       2039 } hitcount:          5
 { lat:        257, common_pid:       2036 } hitcount:          4
 { lat:        258, common_pid:       2039 } hitcount:          5
 { lat:        258, common_pid:       2036 } hitcount:          2
 { lat:        259, common_pid:       2036 } hitcount:          7
 { lat:        259, common_pid:       2039 } hitcount:          7
 { lat:        260, common_pid:       2036 } hitcount:          8
 { lat:        260, common_pid:       2039 } hitcount:          6
 { lat:        261, common_pid:       2036 } hitcount:          5
 { lat:        261, common_pid:       2039 } hitcount:          7
 { lat:        262, common_pid:       2039 } hitcount:          5
 { lat:        262, common_pid:       2036 } hitcount:          5
 { lat:        263, common_pid:       2039 } hitcount:          7
 { lat:        263, common_pid:       2036 } hitcount:          7
 { lat:        264, common_pid:       2039 } hitcount:          9
 { lat:        264, common_pid:       2036 } hitcount:          9
 { lat:        265, common_pid:       2036 } hitcount:          5
 { lat:        265, common_pid:       2039 } hitcount:          1
 { lat:        266, common_pid:       2036 } hitcount:          1
 { lat:        266, common_pid:       2039 } hitcount:          3
 { lat:        267, common_pid:       2036 } hitcount:          1
 { lat:        267, common_pid:       2039 } hitcount:          3
 { lat:        268, common_pid:       2036 } hitcount:          1
 { lat:        268, common_pid:       2039 } hitcount:          6
 { lat:        269, common_pid:       2036 } hitcount:          1
 { lat:        269, common_pid:       2043 } hitcount:          1
 { lat:        269, common_pid:       2039 } hitcount:          2
 { lat:        270, common_pid:       2040 } hitcount:          1
 { lat:        270, common_pid:       2039 } hitcount:          6
 { lat:        271, common_pid:       2041 } hitcount:          1
 { lat:        271, common_pid:       2039 } hitcount:          5
 { lat:        272, common_pid:       2039 } hitcount:         10
 { lat:        273, common_pid:       2039 } hitcount:          8
 { lat:        274, common_pid:       2039 } hitcount:          2
 { lat:        275, common_pid:       2039 } hitcount:          1
 { lat:        276, common_pid:       2039 } hitcount:          2
 { lat:        276, common_pid:       2037 } hitcount:          1
 { lat:        276, common_pid:       2038 } hitcount:          1
 { lat:        277, common_pid:       2039 } hitcount:          1
 { lat:        277, common_pid:       2042 } hitcount:          1
 { lat:        278, common_pid:       2039 } hitcount:          1
 { lat:        279, common_pid:       2039 } hitcount:          4
 { lat:        279, common_pid:       2043 } hitcount:          1
 { lat:        280, common_pid:       2039 } hitcount:          3
 { lat:        283, common_pid:       2036 } hitcount:          2
 { lat:        284, common_pid:       2039 } hitcount:          1
 { lat:        284, common_pid:       2043 } hitcount:          1
 { lat:        288, common_pid:       2039 } hitcount:          1
 { lat:        289, common_pid:       2039 } hitcount:          1
 { lat:        300, common_pid:       2039 } hitcount:          1
 { lat:        384, common_pid:       2039 } hitcount:          1

 Totals:
     Hits: 67625
     Entries: 278
     Dropped: 0

Note, the writes are timed around the sleep, so ideally they will all be at
250 microseconds. If you are wondering how several can come in under 250
microseconds, that is because of the way cyclictest works: if one iteration
comes in late, the next one will set the timer to wake up in less than 250
microseconds. That is, if an iteration came in 50 microseconds late, the next
wakeup will be at 200 microseconds.
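The compensation described above can be sketched as follows. This is a
minimal toy model of absolute-deadline timing, not cyclictest's actual
source, and the function name is purely illustrative:

```python
# Toy model (assumption, not cyclictest code): deadlines advance on a
# fixed grid, so a late iteration shortens the next sleep instead of
# shifting every later wakeup.
INTERVAL_US = 250

def next_sleep_us(now_us, target_us):
    """Advance to the next absolute deadline; return (sleep length, deadline)."""
    target_us += INTERVAL_US          # next deadline on the fixed grid
    return target_us - now_us, target_us

# An iteration that ran 50 us late (now == old deadline + 50) sleeps
# only 200 us until the next deadline, matching the text above.
sleep_us, target_us = next_sleep_us(now_us=50, target_us=0)
```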

But this could easily be done in userspace. To make this even more
interesting, we can mix the histogram between events that happened in the
kernel with trace_marker::

 # cd /sys/kernel/tracing
 # echo 'latency u64 lat' > synthetic_events
 # echo 'hist:keys=pid:ts0=common_timestamp.usecs' > events/sched/sched_waking/trigger
 # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).latency($lat) if buf == "end"' > events/ftrace/print/trigger
 # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger

The difference this time is that instead of using the trace_marker to start
the latency measurement, the sched_waking event is used, matching the
common_pid of the trace_marker write with the pid that is being woken by
sched_waking.
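Conceptually, the pair of hist triggers behaves like the sketch below. This
is a plain-Python toy model of the onmatch() bookkeeping, not the kernel
implementation, and every name in it is illustrative: the first trigger
saves a per-pid timestamp, and the matching trace_marker write looks it up
to compute the latency:

```python
# Toy model (assumption, not kernel code) of the two hist triggers:
# sched_waking saves ts0 keyed by pid; the matching trace_marker write
# computes the delta and "generates" the synthetic latency event.
ts0 = {}            # like keys=pid:ts0=common_timestamp.usecs
latency_hist = {}   # like keys=lat,common_pid on the synthetic event

def on_sched_waking(pid, ts_usecs):
    ts0[pid] = ts_usecs                 # save the wakeup timestamp

def on_trace_marker(common_pid, ts_usecs):
    if common_pid in ts0:               # the onmatch() key comparison
        lat = ts_usecs - ts0[common_pid]
        key = (lat, common_pid)
        latency_hist[key] = latency_hist.get(key, 0) + 1

on_sched_waking(pid=2302, ts_usecs=1000)
on_trace_marker(common_pid=2302, ts_usecs=1008)   # lat == 8
```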

After running cyclictest again with the same parameters, we now have::

 # cat events/synthetic/latency/hist
 # event histogram
 #
 # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
 #

 { lat:          7, common_pid:       2302 } hitcount:        640
 { lat:          7, common_pid:       2299 } hitcount:         42
 { lat:          7, common_pid:       2303 } hitcount:         18
 { lat:          7, common_pid:       2305 } hitcount:        166
 { lat:          7, common_pid:       2306 } hitcount:          1
 { lat:          7, common_pid:       2301 } hitcount:         91
 { lat:          7, common_pid:       2300 } hitcount:         17
 { lat:          8, common_pid:       2303 } hitcount:       8296
 { lat:          8, common_pid:       2304 } hitcount:       6864
 { lat:          8, common_pid:       2305 } hitcount:       9464
 { lat:          8, common_pid:       2301 } hitcount:       9213
 { lat:          8, common_pid:       2306 } hitcount:       6246
 { lat:          8, common_pid:       2302 } hitcount:       8797
 { lat:          8, common_pid:       2299 } hitcount:       8771
 { lat:          8, common_pid:       2300 } hitcount:       8119
 { lat:          9, common_pid:       2305 } hitcount:       1519
 { lat:          9, common_pid:       2299 } hitcount:       2346
 { lat:          9, common_pid:       2303 } hitcount:       2841
 { lat:          9, common_pid:       2301 } hitcount:       1846
 { lat:          9, common_pid:       2304 } hitcount:       3861
 { lat:          9, common_pid:       2302 } hitcount:       1210
 { lat:          9, common_pid:       2300 } hitcount:       2762
 { lat:          9, common_pid:       2306 } hitcount:       4247
 { lat:         10, common_pid:       2299 } hitcount:         16
 { lat:         10, common_pid:       2306 } hitcount:        333
 { lat:         10, common_pid:       2303 } hitcount:         16
 { lat:         10, common_pid:       2304 } hitcount:        168
 { lat:         10, common_pid:       2302 } hitcount:        240
 { lat:         10, common_pid:       2301 } hitcount:         28
 { lat:         10, common_pid:       2300 } hitcount:         95
 { lat:         10, common_pid:       2305 } hitcount:         18
 { lat:         11, common_pid:       2303 } hitcount:          5
 { lat:         11, common_pid:       2305 } hitcount:          8
 { lat:         11, common_pid:       2306 } hitcount:        221
 { lat:         11, common_pid:       2302 } hitcount:         76
 { lat:         11, common_pid:       2304 } hitcount:         26
 { lat:         11, common_pid:       2300 } hitcount:        125
 { lat:         11, common_pid:       2299 } hitcount:          2
 { lat:         12, common_pid:       2305 } hitcount:          3
 { lat:         12, common_pid:       2300 } hitcount:          6
 { lat:         12, common_pid:       2306 } hitcount:         90
 { lat:         12, common_pid:       2302 } hitcount:          4
 { lat:         12, common_pid:       2303 } hitcount:          1
 { lat:         12, common_pid:       2304 } hitcount:        122
 { lat:         13, common_pid:       2300 } hitcount:         12
 { lat:         13, common_pid:       2301 } hitcount:          1
 { lat:         13, common_pid:       2306 } hitcount:         32
 { lat:         13, common_pid:       2302 } hitcount:          5
 { lat:         13, common_pid:       2305 } hitcount:          1
 { lat:         13, common_pid:       2303 } hitcount:          1
 { lat:         13, common_pid:       2304 } hitcount:         61
 { lat:         14, common_pid:       2303 } hitcount:          4
 { lat:         14, common_pid:       2306 } hitcount:          5
 { lat:         14, common_pid:       2305 } hitcount:          4
 { lat:         14, common_pid:       2304 } hitcount:         62
 { lat:         14, common_pid:       2302 } hitcount:         19
 { lat:         14, common_pid:       2300 } hitcount:         33
 { lat:         14, common_pid:       2299 } hitcount:          1
 { lat:         14, common_pid:       2301 } hitcount:          4
 { lat:         15, common_pid:       2305 } hitcount:          1
 { lat:         15, common_pid:       2302 } hitcount:         25
 { lat:         15, common_pid:       2300 } hitcount:         11
 { lat:         15, common_pid:       2299 } hitcount:          5
 { lat:         15, common_pid:       2301 } hitcount:          1
 { lat:         15, common_pid:       2304 } hitcount:          8
 { lat:         15, common_pid:       2303 } hitcount:          1
 { lat:         15, common_pid:       2306 } hitcount:          6
 { lat:         16, common_pid:       2302 } hitcount:         31
 { lat:         16, common_pid:       2306 } hitcount:          3
 { lat:         16, common_pid:       2300 } hitcount:          5
 { lat:         17, common_pid:       2302 } hitcount:          6
 { lat:         17, common_pid:       2303 } hitcount:          1
 { lat:         18, common_pid:       2304 } hitcount:          1
 { lat:         18, common_pid:       2302 } hitcount:          8
 { lat:         18, common_pid:       2299 } hitcount:          1
 { lat:         18, common_pid:       2301 } hitcount:          1
 { lat:         19, common_pid:       2303 } hitcount:          4
 { lat:         19, common_pid:       2304 } hitcount:          5
 { lat:         19, common_pid:       2302 } hitcount:          4
 { lat:         19, common_pid:       2299 } hitcount:          3
 { lat:         19, common_pid:       2306 } hitcount:          1
 { lat:         19, common_pid:       2300 } hitcount:          4
 { lat:         19, common_pid:       2305 } hitcount:          5
 { lat:         20, common_pid:       2299 } hitcount:          2
 { lat:         20, common_pid:       2302 } hitcount:          3
 { lat:         20, common_pid:       2305 } hitcount:          1
 { lat:         20, common_pid:       2300 } hitcount:          2
 { lat:         20, common_pid:       2301 } hitcount:          2
 { lat:         20, common_pid:       2303 } hitcount:          3
 { lat:         21, common_pid:       2305 } hitcount:          1
 { lat:         21, common_pid:       2299 } hitcount:          5
 { lat:         21, common_pid:       2303 } hitcount:          4
 { lat:         21, common_pid:       2302 } hitcount:          7
 { lat:         21, common_pid:       2300 } hitcount:          1
 { lat:         21, common_pid:       2301 } hitcount:          5
 { lat:         21, common_pid:       2304 } hitcount:          2
 { lat:         22, common_pid:       2302 } hitcount:          5
 { lat:         22, common_pid:       2303 } hitcount:          1
 { lat:         22, common_pid:       2306 } hitcount:          3
 { lat:         22, common_pid:       2301 } hitcount:          2
 { lat:         22, common_pid:       2300 } hitcount:          1
 { lat:         22, common_pid:       2299 } hitcount:          1
 { lat:         22, common_pid:       2305 } hitcount:          1
 { lat:         22, common_pid:       2304 } hitcount:          1
 { lat:         23, common_pid:       2299 } hitcount:          1
 { lat:         23, common_pid:       2306 } hitcount:          2
 { lat:         23, common_pid:       2302 } hitcount:          6
 { lat:         24, common_pid:       2302 } hitcount:          3
 { lat:         24, common_pid:       2300 } hitcount:          1
 { lat:         24, common_pid:       2306 } hitcount:          2
 { lat:         24, common_pid:       2305 } hitcount:          1
 { lat:         24, common_pid:       2299 } hitcount:          1
 { lat:         25, common_pid:       2300 } hitcount:          1
 { lat:         25, common_pid:       2302 } hitcount:          4
 { lat:         26, common_pid:       2302 } hitcount:          2
 { lat:         27, common_pid:       2305 } hitcount:          1
 { lat:         27, common_pid:       2300 } hitcount:          1
 { lat:         27, common_pid:       2302 } hitcount:          3
 { lat:         28, common_pid:       2306 } hitcount:          1
 { lat:         28, common_pid:       2302 } hitcount:          4
 { lat:         29, common_pid:       2302 } hitcount:          1
 { lat:         29, common_pid:       2300 } hitcount:          2
 { lat:         29, common_pid:       2306 } hitcount:          1
 { lat:         29, common_pid:       2304 } hitcount:          1
 { lat:         30, common_pid:       2302 } hitcount:          4
 { lat:         31, common_pid:       2302 } hitcount:          6
 { lat:         32, common_pid:       2302 } hitcount:          1
 { lat:         33, common_pid:       2299 } hitcount:          1
 { lat:         33, common_pid:       2302 } hitcount:          3
 { lat:         34, common_pid:       2302 } hitcount:          2
 { lat:         35, common_pid:       2302 } hitcount:          1
 { lat:         35, common_pid:       2304 } hitcount:          1
 { lat:         36, common_pid:       2302 } hitcount:          4
 { lat:         37, common_pid:       2302 } hitcount:          6
 { lat:         38, common_pid:       2302 } hitcount:          2
 { lat:         39, common_pid:       2302 } hitcount:          2
 { lat:         39, common_pid:       2304 } hitcount:          1
 { lat:         40, common_pid:       2304 } hitcount:          2
 { lat:         40, common_pid:       2302 } hitcount:          5
 { lat:         41, common_pid:       2304 } hitcount:          1
 { lat:         41, common_pid:       2302 } hitcount:          8
 { lat:         42, common_pid:       2302 } hitcount:          6
 { lat:         42, common_pid:       2304 } hitcount:          1
 { lat:         43, common_pid:       2302 } hitcount:          3
 { lat:         43, common_pid:       2304 } hitcount:          4
 { lat:         44, common_pid:       2302 } hitcount:          6
 { lat:         45, common_pid:       2302 } hitcount:          5
 { lat:         46, common_pid:       2302 } hitcount:          5
 { lat:         47, common_pid:       2302 } hitcount:          7
 { lat:         48, common_pid:       2301 } hitcount:          1
 { lat:         48, common_pid:       2302 } hitcount:          9
 { lat:         49, common_pid:       2302 } hitcount:          3
 { lat:         50, common_pid:       2302 } hitcount:          1
 { lat:         50, common_pid:       2301 } hitcount:          1
 { lat:         51, common_pid:       2302 } hitcount:          2
 { lat:         51, common_pid:       2301 } hitcount:          1
 { lat:         61, common_pid:       2302 } hitcount:          1
 { lat:        110, common_pid:       2302 } hitcount:          1

 Totals:
     Hits: 89565
     Entries: 158
     Dropped: 0

This doesn't give us any information about how late cyclictest may have
woken up, but it does show a nice histogram of how long it took from the
time that cyclictest was woken to the time it made it into user space.