  1. Apr 25, 2016
    • cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback · 5cf1cacb
      Tejun Heo authored
      
      
      Since e93ad19d ("cpuset: make mm migration asynchronous"), cpuset
      kicks off asynchronous NUMA node migration if necessary during task
      migration and flushes it from cpuset_post_attach_flush(), which is
      called at the end of __cgroup_procs_write().  This is to avoid
      performing the migration with cgroup_threadgroup_rwsem write-locked,
      which can lead to deadlock through a dependency on kworker creation.
      
      memcg has a similar issue with charge moving, so let's convert it to
      an official callback rather than the current one-off cpuset-specific
      function.  This patch adds the cgroup_subsys->post_attach callback and
      makes cpuset register cpuset_post_attach_flush() as its ->post_attach.
      
      The conversion is mostly one-to-one except that the new callback is
      called under cgroup_mutex.  This is to guarantee that no other
      migration operations are started before ->post_attach callbacks are
      finished.  cgroup_mutex is one of the outermost mutexes in the system
      and has never been, and shouldn't be, a problem.  We could add
      specialized synchronization around __cgroup_procs_write(), but I don't
      think there's any noticeable benefit.
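
      The shape of the change, as a sketch (field placement and the cpuset
      callback name follow this description rather than the exact diff):

      	/* include/linux/cgroup-defs.h: new optional callback */
      	struct cgroup_subsys {
      		...
      		void (*attach)(struct cgroup_taskset *tset);
      		void (*post_attach)(void);
      		...
      	};

      	/* kernel/cpuset.c: register the flush as ->post_attach */
      	struct cgroup_subsys cpuset_cgrp_subsys = {
      		...
      		.attach		= cpuset_attach,
      		.post_attach	= cpuset_post_attach,
      		...
      	};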
      
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: <stable@vger.kernel.org> # 4.4+ prerequisite for the next patch
      5cf1cacb
  2. Apr 22, 2016
    • cpu/hotplug: Fix rollback during error-out in __cpu_disable() · 3b9d6da6
      Sebastian Andrzej Siewior authored
      
      
      The recent introduction of the hotplug thread, which invokes the callbacks on
      the plugged CPU, caused the following regression:
      
      If takedown_cpu() fails, then we run into several issues:
      
       1) The rollback of the target cpu states is not invoked. That leaves the smp
          threads and the hotplug thread in disabled state.
      
       2) notify_online() is executed due to a missing skip_onerr flag. That
          causes both the CPU_DOWN_FAILED and CPU_ONLINE notifications to be
          invoked, which confuses quite a few notifiers.
      
       3) The CPU_DOWN_FAILED notification is not invoked on the target CPU. That's
          not an issue per se, but it is inconsistent and in consequence blocks the
          patches which rely on these states being invoked on the target CPU and not
          on the controlling CPU. It also does not preserve the strict call order on
          rollback, which is problematic for the ongoing state machine conversion as
          well.
      
      To fix this, add a rollback flag to the remote callback machinery and
      invoke the rollback, including the CPU_DOWN_FAILED notification, on the
      remote CPU. Further, mark the notify-online state with 'skip_onerr' so
      we don't get a double invocation.

      This workaround will go away once the unplug invocation is moved to the
      target CPU itself.
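
      A sketch of the rollback path (simplified; 'rollback' is the new flag
      on the per-cpu hotplug state, and undo_cpu_down() is an illustrative
      name for the replay of the bringup callbacks):

      	ret = takedown_cpu(cpu);
      	if (ret) {
      		/* tell the remote callback machinery to walk the states
      		 * back up, which also issues CPU_DOWN_FAILED there */
      		st->rollback = true;
      		undo_cpu_down(cpu, st);
      	}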
      
      [ tglx: Massaged changelog and moved the CPU_DOWN_FAILED notification to the
        target CPU ]
      
      Fixes: 4cb28ced ("cpu/hotplug: Create hotplug threads")
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: linux-s390@vger.kernel.org
      Cc: rt@linutronix.de
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Link: http://lkml.kernel.org/r/20160408124015.GA21960@linutronix.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      3b9d6da6
  3. Apr 19, 2016
    • locking/pvqspinlock: Fix division by zero in qstat_read() · 66876595
      Davidlohr Bueso authored
      
      
      While playing with the qstat statistics (in <debugfs>/qlockstat/) I ran into
      the following splat on a VM when opening pv_hash_hops:
      
        divide error: 0000 [#1] SMP
        ...
        RIP: 0010:[<ffffffff810b61fe>]  [<ffffffff810b61fe>] qstat_read+0x12e/0x1e0
        ...
        Call Trace:
          [<ffffffff811cad7c>] ? mem_cgroup_commit_charge+0x6c/0xd0
          [<ffffffff8119750c>] ? page_add_new_anon_rmap+0x8c/0xd0
          [<ffffffff8118d3b9>] ? handle_mm_fault+0x1439/0x1b40
          [<ffffffff811937a9>] ? do_mmap+0x449/0x550
          [<ffffffff811d3de3>] ? __vfs_read+0x23/0xd0
          [<ffffffff811d4ab2>] ? rw_verify_area+0x52/0xd0
          [<ffffffff811d4bb1>] ? vfs_read+0x81/0x120
          [<ffffffff811d5f12>] ? SyS_read+0x42/0xa0
          [<ffffffff815720f6>] ? entry_SYSCALL_64_fastpath+0x1e/0xa8
      
      Fix this by verifying that qstat_pv_kick_unlock is in fact non-zero,
      as the qstat_pv_latency_wake case already does.  If nothing else, this
      can happen after resetting the statistics, so having 0 kicks is
      perfectly valid in this context.
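
      A sketch of the guard (hedged; the surrounding qstat_read() code is
      abbreviated):

      	/* average hops = hash hops / unlock kicks; resetting the
      	 * statistics legitimately leaves kicks at 0, so test it first */
      	if (counter == qstat_pv_hash_hops) {
      		u64 frac = 0;

      		if (kicks) {
      			frac = 100ULL * do_div(stat, kicks);
      			frac = DIV_ROUND_CLOSEST_ULL(frac, kicks);
      		}
      	}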
      
      Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
      Reviewed-by: Waiman Long <Waiman.Long@hpe.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dave@stgolabs.net
      Cc: waiman.long@hpe.com
      Link: http://lkml.kernel.org/r/1460961103-24953-1-git-send-email-dave@stgolabs.net
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      66876595
  4. Apr 04, 2016
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov authored
      
      
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long*
      time ago with the promise that one day it would be possible to
      implement the page cache with bigger chunks than PAGE_SIZE.

      This promise never materialized, and it is unlikely it ever will.

      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.

      Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too
      much breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header
      files.  I've called spatch for them manually.

      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.

      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
    • locking/lockdep: Fix print_collision() unused warning · 5c8a010c
      Borislav Petkov authored
      
      
      Fix this:
      
        kernel/locking/lockdep.c:2051:13: warning: ‘print_collision’ defined but not used [-Wunused-function]
        static void print_collision(struct task_struct *curr,
                    ^
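
      The usual cure, sketched (the function is only called from code built
      under the lockdep debug config, so its definition gets the same
      guard):

      	#ifdef CONFIG_DEBUG_LOCKDEP
      	static void print_collision(struct task_struct *curr,
      				    struct held_lock *hlock_next,
      				    struct lock_chain *chain)
      	{
      		...
      	}
      	#endif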
      
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1459759327-2880-1-git-send-email-bp@alien8.de
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5c8a010c
  5. Mar 31, 2016
    • locking/lockdep: Print chain_key collision information · 39e2e173
      Alfredo Alvarez Fernandez authored
      
      
      A sequence of pairs [class_idx -> corresponding chain_key iteration]
      is printed for both the current held_lock chain and the cached chain.
      
      That exposes the two different class_idx sequences that led to that
      particular hash value.
      
      This helps with debugging hash chain collision reports.
      
      Signed-off-by: Alfredo Alvarez Fernandez <alfredoalvarezfernandez@gmail.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: sedat.dilek@gmail.com
      Cc: tytso@mit.edu
      Link: http://lkml.kernel.org/r/1459357416-19190-1-git-send-email-alfredoalvarezernandez@gmail.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      39e2e173
    • perf/core: Don't leak event in the syscall error path · 201c2f85
      Alexander Shishkin authored
      
      
      In the error path, event_file not being NULL is used to determine
      whether the event itself still needs to be free'd, so fix it up to
      avoid leaking.
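
      The essence of the fix, sketched from the description (NULLing the
      pointer so the error path can tell the two ownership cases apart):

      	event_file = anon_inode_getfile("[perf_event]", &perf_fops,
      					event, f_flags);
      	if (IS_ERR(event_file)) {
      		err = PTR_ERR(event_file);
      		event_file = NULL;	/* the event was never handed over */
      		goto err_context;
      	}
      	...
      	/* in the error path */
      	if (!event_file)
      		free_event(event);	/* otherwise the fput() path frees it */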
      
      Reported-by: Leon Yu <chianglungyu@gmail.com>
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 13005627 ("perf: Do not double free")
      Link: http://lkml.kernel.org/r/87twk06yxp.fsf@ashishki-desk.ger.corp.intel.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      201c2f85
    • perf/core: Fix time tracking bug with multiplexing · 8fdc6539
      Peter Zijlstra authored
      
      
      Stephane reported that commit:
      
        3cbaa590 ("perf: Fix ctx time tracking by introducing EVENT_TIME")
      
      introduced a regression wrt. time tracking, as easily observed by:
      
      > This patch introduced a bug in the time tracking of events when
      > multiplexing is used.
      >
      > The issue is easily reproducible with the following perf run:
      >
      >  $ perf stat -a -C 0 -e branches,branches,branches,branches,branches,branches -I 1000
      >      1.000730239            652,394      branches   (66.41%)
      >      1.000730239            597,809      branches   (66.41%)
      >      1.000730239            593,870      branches   (66.63%)
      >      1.000730239            651,440      branches   (67.03%)
      >      1.000730239            656,725      branches   (66.96%)
      >      1.000730239      <not counted>      branches
      >
      > One branches event is shown as not having run. Yet, with
      > multiplexing, all events should run especially with a 1s (-I 1000)
      > interval. The delta for time_running comes out to 0. Yet, the event
      > has run because the kernel is actually multiplexing the events. The
      > problem is that the time tracking in the kernel, and especially in
      > ctx_sched_out(), is wrong now.
      >
      > The problem is that in case that the kernel enters ctx_sched_out() with the
      > following state:
      >    ctx->is_active=0x7 event_type=0x1
      >    Call Trace:
      >     [<ffffffff813ddd41>] dump_stack+0x63/0x82
      >     [<ffffffff81182bdc>] ctx_sched_out+0x2bc/0x2d0
      >     [<ffffffff81183896>] perf_mux_hrtimer_handler+0xf6/0x2c0
      >     [<ffffffff811837a0>] ? __perf_install_in_context+0x130/0x130
      >     [<ffffffff810f5818>] __hrtimer_run_queues+0xf8/0x2f0
      >     [<ffffffff810f6097>] hrtimer_interrupt+0xb7/0x1d0
      >     [<ffffffff810509a8>] local_apic_timer_interrupt+0x38/0x60
      >     [<ffffffff8175ca9d>] smp_apic_timer_interrupt+0x3d/0x50
      >     [<ffffffff8175ac7c>] apic_timer_interrupt+0x8c/0xa0
      >
      > In that case, the test:
      >       if (is_active & EVENT_TIME)
      >
      > will be false and the time will not be updated. Time must always be updated on
      > sched out.
      
      Fix this by always updating time if EVENT_TIME was set, as opposed to
      only updating time when EVENT_TIME changed.
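
      In pseudo-diff form (was_active and changed_bits are illustrative
      locals; ctx_sched_out() derives the changed bits by XOR-ing the old
      and new ctx->is_active):

      	- if (changed_bits & EVENT_TIME)	/* only when EVENT_TIME toggled */
      	+ if (was_active & EVENT_TIME)		/* whenever time was being tracked */
      		update_context_time(ctx);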
      
      Reported-by: Stephane Eranian <eranian@google.com>
      Tested-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: kan.liang@intel.com
      Cc: namhyung@kernel.org
      Fixes: 3cbaa590 ("perf: Fix ctx time tracking by introducing EVENT_TIME")
      Link: http://lkml.kernel.org/r/20160329072644.GB3408@twins.programming.kicks-ass.net
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8fdc6539
  6. Mar 23, 2016
    • PM / sleep: Clear pm_suspend_global_flags upon hibernate · 27614273
      Lukas Wunner authored
      
      
      When suspending to RAM, waking up and later suspending to disk,
      we gratuitously runtime resume devices after the thaw phase.
      This does not occur if we always suspend to RAM or always to disk.
      
      pm_complete_with_resume_check(), which gets called from
      pci_pm_complete() among others, schedules a runtime resume
      if PM_SUSPEND_FLAG_FW_RESUME is set. The flag is set during
      a suspend-to-RAM cycle. It is cleared at the beginning of
      the suspend-to-RAM cycle but not afterwards and it is not
      cleared during a suspend-to-disk cycle at all. Fix it.
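
      A sketch of the fix (the clearing helper already exists in the
      suspend API; the call site name here is illustrative):

      	/* kernel/power/hibernate.c */
      	static int enter_hibernation(void)	/* illustrative name */
      	{
      		pm_suspend_clear_flags();	/* drop stale FW_RESUME etc. */
      		...
      	}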
      
      Fixes: ef25ba04 ("PM / sleep: Add flags to indicate platform firmware involvement")
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Cc: 4.4+ <stable@vger.kernel.org> # 4.4+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      27614273
  7. Mar 22, 2016
    • kernel/...: convert pr_warning to pr_warn · a395d6a7
      Joe Perches authored
      
      
      Use the more common logging method with the eventual goal of removing
      pr_warning altogether.
      
      Miscellanea:
      
       - Realign arguments
       - Coalesce formats
       - Add missing space between a few coalesced formats
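
      An illustrative conversion (not a specific hunk from this patch):

      	- pr_warning("foo: unable to register handler (%d)\n", err);
      	+ pr_warn("foo: unable to register handler (%d)\n", err);

      	/* coalescing a split format while at it */
      	- pr_warning("foo: device %s: "
      	- 	    "unexpected state %d\n", name, state);
      	+ pr_warn("foo: device %s: unexpected state %d\n", name, state);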
      
      Signed-off-by: Joe Perches <joe@perches.com>
      Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>	[kernel/power/suspend.c]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a395d6a7
    • memremap: add MEMREMAP_WC flag · c907e0eb
      Brian Starkey authored
      
      
      Add a flag to memremap() for writecombine mappings.  Mappings
      satisfied by this flag will not be cached; however, writes may be
      delayed or combined into more efficient bursts.  This is most suitable
      for buffers written sequentially by the CPU for use by other DMA
      devices.
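
      Typical usage from a driver might look like this (a hedged sketch):

      	void *buf;

      	/* uncached, write-combined mapping for a CPU-filled DMA buffer */
      	buf = memremap(phys_addr, size, MEMREMAP_WC);
      	if (!buf)
      		return -ENOMEM;
      	...
      	memunmap(buf);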
      
      Signed-off-by: Brian Starkey <brian.starkey@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c907e0eb
    • memremap: don't modify flags · cf61e2a1
      Brian Starkey authored
      
      
      These patches implement a MEMREMAP_WC flag for memremap(), which can be
      used to obtain writecombine mappings.  This is then used for setting up
      dma_coherent_mem regions which use the DMA_MEMORY_MAP flag.
      
      The motivation is to fix an alignment fault on arm64, and the suggestion
      to implement MEMREMAP_WC for this case was made at [1].  That particular
      issue is handled in patch 4, which makes sure that the appropriate
      memset function is used when zeroing allocations mapped as IO memory.
      
      This patch (of 4):
      
      Don't modify the flags input argument to memremap(). MEMREMAP_WB is
      already a special case so we can check for it directly instead of
      clearing flag bits in each mapper.
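
      Roughly, for each mapper (a hedged sketch; the WT case shown):

      	- if (!addr && (flags & MEMREMAP_WT)) {
      	- 	flags &= ~MEMREMAP_WT;
      	- 	addr = ioremap_wt(offset, size);
      	- }
      	+ if (!addr && (flags & MEMREMAP_WT))
      	+ 	addr = ioremap_wt(offset, size);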
      
      Signed-off-by: Brian Starkey <brian.starkey@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf61e2a1
    • kernel/signal.c: add compile-time check for __ARCH_SI_PREAMBLE_SIZE · 41b27154
      Helge Deller authored
      
      
      The value of __ARCH_SI_PREAMBLE_SIZE defines the size (including
      padding) of the part of the struct siginfo that is before the union, and
      it is then used to calculate the needed padding (SI_PAD_SIZE) to make
      the size of struct siginfo equal to 128 (SI_MAX_SIZE) bytes.
      
      Depending on the target architecture and word width, it equals either
      3 or 4 times sizeof(int).

      Since the very beginning we have had __ARCH_SI_PREAMBLE_SIZE wrong on
      the parisc architecture for the 64-bit kernel build.  It's even more
      frustrating because it can easily be checked at compile time whether
      the value was defined correctly.
      
      This patch adds such a check for the correctness of
      __ARCH_SI_PREAMBLE_SIZE in the hope that it will prevent existing and
      future architectures from running into the same problem.
      
      I refrained from replacing __ARCH_SI_PREAMBLE_SIZE by offsetof() in
      copy_siginfo() in include/asm-generic/siginfo.h, because a) it doesn't
      make any difference and b) it's used in the Documentation/kmemcheck.txt
      example.
      
      I ran this patch through the 0-DAY kernel test infrastructure and only
      the parisc architecture triggered as expected.  That means that this
      patch should be OK for all major architectures.
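
      The check itself is a one-liner (sketch; the union member in struct
      siginfo is _sifields):

      	/* the preamble must end exactly where the union begins */
      	BUILD_BUG_ON(__ARCH_SI_PREAMBLE_SIZE !=
      		     offsetof(struct siginfo, _sifields));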
      
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      41b27154
    • kernel: add kcov code coverage · 5c9a8750
      Dmitry Vyukov authored
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall
      inputs.  To achieve this goal it does not collect coverage in soft/hard
      interrupts, and instrumentation of some inherently non-deterministic or
      non-interesting parts of the kernel (e.g. scheduler, locking) is
      disabled.
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The
      complementary compiler support was added in gcc revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  A
      typical coverage can be just a dozen basic blocks (e.g. an invalid
      input).  In such a context gcov becomes prohibitively expensive, as the
      reset/collect coverage steps depend on the total number of basic
      blocks/edges in the program (about 2M in the case of the kernel).  The
      cost of kcov depends only on the number of executed basic blocks/edges.
      On top of that, the kernel requires per-thread coverage because there
      are always background threads and unrelated processes that also produce
      coverage.  With inlined gcov instrumentation, per-thread coverage is
      not possible.
      
      kcov exposes kernel PCs and control flow to user space, which is
      insecure.  But debugfs should not be mapped as user accessible anyway.
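
      From user space, the flow is: open the debugfs file, size and mmap the
      coverage buffer, then enable/disable tracing around the code under
      test (a condensed sketch of the intended usage):

      	#define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
      	#define KCOV_ENABLE	_IO('c', 100)
      	#define KCOV_DISABLE	_IO('c', 101)
      	#define COVER_SIZE	(64 << 10)	/* in unsigned longs */

      	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
      	ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
      	unsigned long *cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
      				    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

      	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);	/* reset */
      	ioctl(fd, KCOV_ENABLE, 0);
      	/* ... run the syscall under test ... */
      	unsigned long n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
      	/* cover[1..n] now hold the kernel PCs this thread executed */
      	ioctl(fd, KCOV_DISABLE, 0);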
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c9a8750
    • profile: hide unused functions when !CONFIG_PROC_FS · ade356b9
      Arnd Bergmann authored
      
      
      A couple of functions and variables in the profile implementation are
      used only on SMP systems by the procfs code, but are unused if either
      procfs is disabled or in uniprocessor kernels.  gcc prints a harmless
      warning about the unused symbols:
      
        kernel/profile.c:243:13: error: 'profile_flip_buffers' defined but not used [-Werror=unused-function]
         static void profile_flip_buffers(void)
                     ^
        kernel/profile.c:266:13: error: 'profile_discard_flip_buffers' defined but not used [-Werror=unused-function]
         static void profile_discard_flip_buffers(void)
                     ^
        kernel/profile.c:330:12: error: 'profile_cpu_callback' defined but not used [-Werror=unused-function]
         static int profile_cpu_callback(struct notifier_block *info,
                    ^
      
      This adds further #ifdefs to the file, to annotate exactly in which
      cases they are used.  I have built several thousand ARM randconfig
      kernels with this patch applied and no longer get any warnings in this
      file.
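
      The guards pair each helper with its only users, along these lines
      (sketch):

      	#if defined(CONFIG_SMP) && defined(CONFIG_PROC_FS)
      	static void profile_flip_buffers(void)
      	{
      		...
      	}
      	#endif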
      
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Robin Holt <robinmholt@gmail.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ade356b9
    • panic: change nmi_panic from macro to function · ebc41f20
      Hidehiro Kawai authored
      
      
      Commit 1717f209 ("panic, x86: Fix re-entrance problem due to panic
      on NMI") and commit 58c5661f ("panic, x86: Allow CPUs to save
      registers even if looping in NMI context") introduced nmi_panic() which
      prevents concurrent/recursive execution of panic().  It also saves
      registers for the crash dump on x86.
      
      However, there are some cases where NMI handlers still use panic().
      This patch set partially replaces them with nmi_panic() in those cases.
      
      Even after this patch set is applied, some NMI or similar handlers
      (e.g. the MCE handler) continue to use panic().  This is because I
      can't test them well and actual problems are unlikely to happen.  For
      example, the possibility that a normal panic and a panic on MCE happen
      simultaneously is very low.
      
      This patch (of 3):
      
      Convert nmi_panic() to a proper function and export it instead of
      exporting internal implementation details to modules, for obvious
      reasons.
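
      The function form, sketched from the pre-existing macro logic
      (panic_cpu and PANIC_CPU_INVALID come from the commits cited above):

      	void nmi_panic(struct pt_regs *regs, const char *msg)
      	{
      		int old_cpu, cpu;

      		cpu = raw_smp_processor_id();
      		old_cpu = atomic_cmpxchg(&panic_cpu, PANIC_CPU_INVALID, cpu);

      		if (old_cpu == PANIC_CPU_INVALID)
      			panic("%s", msg);
      		else if (old_cpu != cpu)
      			nmi_panic_self_stop(regs);
      	}
      	EXPORT_SYMBOL(nmi_panic);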
      
      Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Acked-by: Michal Nazarewicz <mina86@mina86.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Cc: Javi Merino <javi.merino@arm.com>
      Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com>
      Cc: "Steven Rostedt (Red Hat)" <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebc41f20
    • fs/coredump: prevent fsuid=0 dumps into user-controlled directories · 378c6520
      Jann Horn authored
      
      
      This commit fixes the following security hole affecting systems where
      all of the following conditions are fulfilled:
      
       - The fs.suid_dumpable sysctl is set to 2.
       - The kernel.core_pattern sysctl's value starts with "/". (Systems
         where kernel.core_pattern starts with "|/" are not affected.)
       - Unprivileged user namespace creation is permitted. (This is
         true on Linux >=3.8, but some distributions disallow it by
         default using a distro patch.)
      
      Under these conditions, if a program executes under secure exec rules,
      causing it to run with the SUID_DUMP_ROOT flag, then unshares its user
      namespace, changes its root directory and crashes, the coredump will be
      written using fsuid=0 and a path derived from kernel.core_pattern - but
      this path is interpreted relative to the root directory of the process,
      allowing the attacker to control where a coredump will be written with
      root privileges.
      
      To fix the security issue, always interpret core_pattern for dumps that
      are written under SUID_DUMP_ROOT relative to the root directory of init.
      
      Signed-off-by: Jann Horn <jann@thejh.net>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      378c6520
    • ptrace: change __ptrace_unlink() to clear ->ptrace under ->siglock · 1333ab03
      Oleg Nesterov authored
      
      
      This test case (a simplified version of one generated by syzkaller)
      
      	#include <unistd.h>
      	#include <sys/ptrace.h>
      	#include <sys/wait.h>
      
      	void test(void)
      	{
      		for (;;) {
      			if (fork()) {
      				wait(NULL);
      				continue;
      			}
      
      			ptrace(PTRACE_SEIZE, getppid(), 0, 0);
      			ptrace(PTRACE_INTERRUPT, getppid(), 0, 0);
      			_exit(0);
      		}
      	}
      
      	int main(void)
      	{
      		int np;
      
      		for (np = 0; np < 8; ++np)
      			if (!fork())
      				test();
      
      		while (wait(NULL) > 0)
      			;
      		return 0;
      	}
      
      triggers the 2nd WARN_ON_ONCE(!signr) warning in do_jobctl_trap().  The
      problem is that __ptrace_unlink() clears task->jobctl under siglock but
      task->ptrace is cleared without this lock held; this fools the "else"
      branch which assumes that !PT_SEIZED means PT_PTRACED.
      
      Note also that most of other PTRACE_SEIZE checks can race with detach
      from the exiting tracer too.  Say, the callers of ptrace_trap_notify()
      assume that SEIZED can't go away after it was checked.
      
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1333ab03
    • auditsc: for seccomp events, log syscall compat state using in_compat_syscall · efbc0fbf
      Andy Lutomirski authored
      
      
      Except on SPARC, this is what the code always did.  SPARC compat seccomp
      was buggy, although the impact of the bug was limited because SPARC
      32-bit and 64-bit syscall numbers are the same.
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      efbc0fbf
    • ptrace: in PEEK_SIGINFO, check syscall bitness, not task bitness · 5c465217
      Andy Lutomirski authored
      
      
      Users of the 32-bit ptrace() ABI expect the full 32-bit ABI.  siginfo
      translation should check ptrace() ABI, not caller task ABI.
      
      This is an ABI change on SPARC.  Let's hope that no one relied on the
      old buggy ABI.
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c465217
    • seccomp: check in_compat_syscall, not is_compat_task, in strict mode · 5c38065e
      Andy Lutomirski authored
      
      
      Seccomp wants to know the syscall bitness, not the caller task bitness,
      when it selects the syscall whitelist.
      
      As far as I know, this makes no difference on any architecture, so it's
      not a security problem.  (It generates identical code everywhere except
      sparc, and, on sparc, the syscall numbering is the same for both ABIs.)
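
      The distinction in code, sketched against the strict-mode check:

      	static void __secure_computing_strict(int this_syscall)
      	{
      		const int *syscall_whitelist = mode1_syscalls;

      	#ifdef CONFIG_COMPAT
      		if (in_compat_syscall())	/* was: is_compat_task() */
      			syscall_whitelist = mode1_syscalls_32;
      	#endif
      		...
      	}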
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c38065e
    • kernel/hung_task.c: use timeout diff when timeout is updated · b4aa14a6
      Tetsuo Handa authored
      
      
      When a new timeout is written to /proc/sys/kernel/hung_task_timeout_secs,
      khungtaskd is interrupted and again sleeps for the full timeout duration.

      This means that hung tasks will never be checked if a new timeout is
      written periodically within the old timeout duration, and/or the check
      can be delayed for up to the previous timeout duration.  Fix this by
      remembering the last time khungtaskd checked for hung tasks.

      This change will also allow other watchdog tasks (if any) to share
      khungtaskd by sleeping for the minimal timeout diff of all watchdog
      tasks.  Running more watchdog tasks from khungtaskd would reduce the
      possibility of printk() collisions between multiple watchdog threads.
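
      A sketch of the loop change (hedged; names follow the description):

      	/* sleep only for the time remaining until the next check */
      	static long hung_timeout_jiffies(unsigned long last_checked,
      					 unsigned long timeout)
      	{
      		return timeout ? last_checked - jiffies + timeout * HZ :
      			MAX_SCHEDULE_TIMEOUT;
      	}

      	/* in the khungtaskd loop */
      	long t = hung_timeout_jiffies(hung_last_checked, timeout);
      	if (t <= 0) {
      		check_hung_uninterruptible_tasks(timeout);
      		hung_last_checked = jiffies;
      		continue;
      	}
      	schedule_timeout_interruptible(t);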
      
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Aaron Tomlin <atomlin@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4aa14a6
    • tracing: Record and show NMI state · 7e6867bf
      Peter Zijlstra authored
      The latency tracer format has a nice column to indicate IRQ state, but
      this is not able to tell us about NMI state.
      
      When tracing perf interrupt handlers (which often run in NMI context)
      it is very useful to see how the events nest.
      
      Link: http://lkml.kernel.org/r/20160318153022.105068893@infradead.org
      
      
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7e6867bf
    • tracing: Fix trace_printk() to print when not using bprintk() · 3debb0a9
      Steven Rostedt (Red Hat) authored
      
      
      The trace_printk() code will allocate extra buffers if the compile detects
      that a trace_printk() is used. To do this, the format of the trace_printk()
      is saved to the __trace_printk_fmt section, and if that section is bigger
      than zero, the buffers are allocated (along with a message that this has
      happened).
      
      If trace_printk() uses a format that is not a constant, and thus something
      not guaranteed to be around when the print happens, the compiler optimizes
      the fmt out, as it is not used, and the __trace_printk_fmt section is not
      filled. This means the kernel will not allocate the special buffers needed
      for the trace_printk() and the trace_printk() will not write anything to the
      tracing buffer.
      
      Adding "__used" to the variable in the __trace_printk_fmt section
      keeps it around, even though it is set to NULL.  Being NULL also keeps
      the string from being printed in the debugfs/tracing/printk_formats
      file, as it is not needed.
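
      The annotation in question, sketched from the trace_printk()
      internals:

      	static const char *trace_printk_fmt __used
      		__attribute__((section("__trace_printk_fmt"))) =
      		__builtin_constant_p(fmt) ? fmt : NULL;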
      
      Reported-by: Vlastimil Babka <vbabka@suse.cz>
      Fixes: 07d777fe ("tracing: Add percpu buffers for trace_printk()")
      Cc: stable@vger.kernel.org # v3.5+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      3debb0a9