  1. Jul 18, 2017
    • x86/mm, kexec: Allow kexec to be used with SME · bba4ed01
      Tom Lendacky authored
      
      
      Provide support so that kexec can be used to boot a kernel when SME is
      enabled.
      
      Support is needed to allocate pages for kexec without encryption.  This
      is needed in order to be able to reboot into the kernel in the same
      manner as it was originally booted.
      
      Additionally, when shutting down all of the CPUs we need to be sure to
      flush the caches and then halt. This is needed when booting from a state
      where SME was not active into a state where SME is active (or vice-versa).
      Without these steps, it is possible for cache lines to exist for the same
      physical location but tagged both with and without the encryption bit. This
      can cause random memory corruption when caches are flushed depending on
      which cacheline is written last.
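
      A rough sketch of the allocation side (simplified; it relies on the
      set_memory_{de,en}crypted() helpers the SME series introduces): kexec
      control pages are mapped decrypted right after allocation and
      re-encrypted before they are handed back:

        static int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages,
                                               gfp_t gfp)
        {
                /* Clear the encryption bit: the next kernel must read these
                 * pages in the clear, whether or not it enables SME itself. */
                return set_memory_decrypted((unsigned long)vaddr, pages);
        }

        static void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
        {
                /* Restore encryption before the pages return to the allocator. */
                set_memory_encrypted((unsigned long)vaddr, pages);
        }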
      
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: <kexec@lists.infradead.org>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Toshimitsu Kani <toshi.kani@hpe.com>
      Cc: kasan-dev@googlegroups.com
      Cc: kvm@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-doc@vger.kernel.org
      Cc: linux-efi@vger.kernel.org
      Cc: linux-mm@kvack.org
      Link: http://lkml.kernel.org/r/b95ff075db3e7cd545313f2fb609a49619a09625.1500319216.git.thomas.lendacky@amd.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. Mar 20, 2017
    • x86/arch_prctl: Add ARCH_[GET|SET]_CPUID · e9ea1e7f
      Kyle Huey authored
      Intel supports faulting on the CPUID instruction beginning with Ivy Bridge.
      When enabled, the processor will fault on attempts to execute the CPUID
      instruction with CPL>0. Exposing this feature to userspace will allow a
      ptracer to trap and emulate the CPUID instruction.
      
      When supported, this feature is controlled by toggling bit 0 of
      MSR_MISC_FEATURES_ENABLES. It is documented in detail in Section 2.3.2 of
      https://bugzilla.kernel.org/attachment.cgi?id=243991
      
      
      
      Implement a new pair of arch_prctls, available on both x86-32 and x86-64.
      
      ARCH_GET_CPUID: Returns the current CPUID state, either 0 if CPUID faulting
          is enabled (and thus the CPUID instruction is not available) or 1 if
          CPUID faulting is not enabled.
      
      ARCH_SET_CPUID: Set the CPUID state to the second argument. If
          cpuid_enabled is 0 CPUID faulting will be activated, otherwise it will
          be deactivated. Returns ENODEV if CPUID faulting is not supported on
          this system.
      
      The state of the CPUID faulting flag is propagated across forks, but reset
      upon exec.
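
      A minimal userspace sketch of the new interface (ARCH_GET_CPUID and
      ARCH_SET_CPUID come from asm/prctl.h; the fallback defines are only for
      older headers):

        #include <stdio.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        #ifndef ARCH_GET_CPUID
        # define ARCH_GET_CPUID 0x1011
        # define ARCH_SET_CPUID 0x1012
        #endif

        int main(void)
        {
                /* 0 = enable CPUID faulting; CPUID at CPL > 0 raises SIGSEGV. */
                if (syscall(SYS_arch_prctl, ARCH_SET_CPUID, 0))
                        perror("ARCH_SET_CPUID");

                /* Expect 0: the CPUID instruction is currently not available. */
                printf("CPUID state: %ld\n",
                       syscall(SYS_arch_prctl, ARCH_GET_CPUID));
                return 0;
        }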
      
      Signed-off-by: Kyle Huey <khuey@kylehuey.com>
      Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: linux-kselftest@vger.kernel.org
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Robert O'Callahan <robert@ocallahan.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: user-mode-linux-user@lists.sourceforge.net
      Cc: David Matlack <dmatlack@google.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/20170320081628.18952-9-khuey@kylehuey.com
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/arch_prctl: Add do_arch_prctl_common() · b0b9b014
      Kyle Huey authored
      
      
      Add do_arch_prctl_common() to handle arch_prctls that are not specific to 64
      bit mode. Call it from the syscall entry point, but not any of the other
      callsites in the kernel, which all want one of the existing 64 bit only
      arch_prctls.
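
      Roughly, the dispatcher ends up with this shape once the
      ARCH_[GET|SET]_CPUID patch above is applied (a sketch; the helper names
      are as I recall them, and before that patch it simply returns -EINVAL):

        long do_arch_prctl_common(struct task_struct *task, int option,
                                  unsigned long cpuid_enabled)
        {
                switch (option) {
                case ARCH_GET_CPUID:
                        return get_cpuid_mode();
                case ARCH_SET_CPUID:
                        return set_cpuid_mode(task, cpuid_enabled);
                }

                return -EINVAL;
        }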
      
      Signed-off-by: Kyle Huey <khuey@kylehuey.com>
      Cc: Grzegorz Andrejczuk <grzegorz.andrejczuk@intel.com>
      Cc: kvm@vger.kernel.org
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: linux-kselftest@vger.kernel.org
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Robert O'Callahan <robert@ocallahan.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: user-mode-linux-user@lists.sourceforge.net
      Cc: David Matlack <dmatlack@google.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Dmitry Safonov <dsafonov@virtuozzo.com>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Link: http://lkml.kernel.org/r/20170320081628.18952-6-khuey@kylehuey.com
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  3. Mar 01, 2017
    • x86/asm: Tidy up TSS limit code · b7ceaec1
      Andy Lutomirski authored
      
      
      In an earlier version of the patch ("x86/kvm/vmx: Defer TR reload
      after VM exit") that introduced TSS limit validity tracking, I
      confused which helper was which.  On reflection, the names I chose
      sucked.  Rename the helpers to make it more obvious what's going on
      and add some comments.
      
      While I'm at it, clear __tss_limit_invalid when force-reloading as
      well as when conditionally reloading, since any TR reload fixes the
      limit.
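
      A condensed sketch of the renamed helpers:

        static inline void invalidate_tss_limit(void)
        {
                /* If the task uses an I/O bitmap, the limit matters right now. */
                if (unlikely(test_thread_flag(TIF_IO_BITMAP)))
                        force_reload_TR();
                else
                        this_cpu_write(__tss_limit_invalid, true);
        }

        static inline void refresh_tss_limit(void)
        {
                /* Any TR reload fixes the limit, so this also clears the flag. */
                if (unlikely(this_cpu_read(__tss_limit_invalid)))
                        force_reload_TR();
        }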
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
  4. Feb 21, 2017
    • x86/kvm/vmx: Defer TR reload after VM exit · b7ffc44d
      Andy Lutomirski authored
      
      
      Intel's VMX is daft and resets the hidden TSS limit register to 0x67
      on VMX reload, and the 0x67 is not configurable.  KVM currently
      reloads TR using the LTR instruction on every exit, but this is quite
      slow because LTR is serializing.
      
      The 0x67 limit is entirely harmless unless ioperm() is in use, so
      defer the reload until a task using ioperm() is actually running.
      
      Here's some poorly done benchmarking using kvm-unit-tests:
      
      Test                Before    After
      cpuid                 1313     1291
      vmcall                1195     1181
      mov_from_cr8            11       11
      mov_to_cr8              17       16
      inl_from_pmtimer      6770     6457
      inl_from_qemu         6856     6209
      inl_from_kernel       2435     2339
      outl_to_kernel        1402     1391
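
      The gist, as a sketch in terms of the helpers from the TSS limit
      tidy-up above: the VM-exit path merely marks the cached limit stale, and
      the serializing LTR is deferred to the one place it matters:

        /* KVM, after a VM exit: no unconditional LTR anymore. */
        invalidate_tss_limit();

        /* ioperm() path, before depending on the I/O bitmap: */
        refresh_tss_limit();    /* does the serializing LTR only if stale */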
      
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      [Force-reload TR in invalidate_tss_limit. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. Dec 09, 2016
    • x86: Remove empty idle.h header · 34bc3560
      Thomas Gleixner authored
      
      
      One include less is always a good thing(tm). Good riddance.
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/20161209182912.2726-6-bp@alien8.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/amd: Simplify AMD E400 aware idle routine · 07c94a38
      Borislav Petkov authored
      
      
      Reorganize the E400 detection now that we have everything in place:
      switch the CPUs to broadcast mode after the LAPIC has been initialized
      and remove the facilities that were used previously on the idle path.
      
      Unfortunately static_cpu_has_bug() cannot be used in the E400 idle routine,
      because alternatives have already been applied by the time the actual
      detection happens, so the static switching does not take effect and the
      test will stay false. Use boot_cpu_has_bug() instead, which is definitely
      an improvement over the RDMSR and the cpumask handling.
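
      The resulting idle routine, as a simplified sketch (the real code also
      toggles interrupts around the broadcast exit):

        static void amd_e400_idle(void)
        {
                /* boot_cpu_has_bug(), not static_cpu_has_bug(): the bug bit is
                 * only set after alternatives have already been applied. */
                if (!boot_cpu_has_bug(X86_BUG_AMD_APIC_C1E)) {
                        default_idle();
                        return;
                }

                tick_broadcast_enter();
                default_idle();
                tick_broadcast_exit();
        }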
      
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/20161209182912.2726-5-bp@alien8.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/amd: Check for the C1E bug post ACPI subsystem init · e7ff3a47
      Thomas Gleixner authored
      
      
      AMD CPUs affected by the E400 erratum suffer from the issue that the
      local APIC timer stops when the CPU goes into C1E. Unfortunately there
      is no way to detect the affected CPUs on early boot. It's only possible
      to determine the range of possibly affected CPUs from the family/model
      range.
      
      The actual decision whether to enter C1E and thus cause the bug is done
      by the firmware and we need to detect that case late, after ACPI has
      been initialized.
      
      The current solution is to check in the idle routine whether the CPU is
      affected by reading the MSR_K8_INT_PENDING_MSG MSR and checking for the
      K8_INTP_C1E_ACTIVE_MASK bits. If one of the bits is set then the CPU is
      affected and the system is switched into forced broadcast mode.
      
      This is inefficient, and on non-affected CPUs every entry to idle does
      the extra RDMSR.
      
      After doing some research it turns out that the bits are visible on the
      boot CPU right after the ACPI subsystem is initialized in the early
      boot process. So instead of polling for the bits in the idle loop, add
      a detection function after acpi_subsystem_init() and check for the MSR
      bits. If set, then X86_BUG_AMD_APIC_C1E is set on the boot CPU, and
      the TSC is marked unstable when X86_FEATURE_NONSTOP_TSC is not set, as it
      will stop in C1E state as well.
      
      The switch to broadcast mode cannot be done at this point because the
      boot CPU still uses HPET as a clockevent device and the local APIC timer
      is not yet calibrated and installed. The switch to broadcast mode on the
      affected CPUs needs to be done when the local APIC timer is actually set
      up.
      
      This allows cleaning up the amd_e400_idle() function in the next step.
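
      A condensed sketch of the late detection hook:

        void __init arch_post_acpi_subsys_init(void)
        {
                u32 lo, hi;

                if (!boot_cpu_has_bug(X86_BUG_AMD_E400))
                        return;

                /* The firmware's C1E decision is visible only after ACPI init. */
                rdmsr(MSR_K8_INT_PENDING_MSG, lo, hi);
                if (!(lo & K8_INTP_C1E_ACTIVE_MASK))
                        return;

                boot_cpu_set_bug(X86_BUG_AMD_APIC_C1E);

                /* The TSC stops in C1E as well, unless it is a nonstop TSC. */
                if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
                        mark_tsc_unstable("TSC halt in AMD C1E");
        }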
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/20161209182912.2726-4-bp@alien8.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • x86/bugs: Separate AMD E400 erratum and C1E bug · 3344ed30
      Thomas Gleixner authored
      
      
      The workaround for the AMD Erratum E400 (Local APIC timer stops in C1E
      state) is a two step process:
      
       - Selection of the E400 aware idle routine
      
       - Detection whether the platform is affected
      
      The idle routine selection happens for possibly affected CPUs depending on
      family/model/stepping information. This range of CPUs is not necessarily
      affected, as the decision whether to enable the C1E feature is made by the
      firmware. Unfortunately there is no way to query this at early boot.
      
      The current implementation polls an MSR in the E400 aware idle routine to
      detect whether the CPU is affected. This is inefficient on non-affected
      CPUs because every idle entry has to do the MSR read.
      
      There is a better way to detect this before going idle for the first time
      which requires separating the bug flags:

        X86_BUG_AMD_E400      - Selects the E400 aware idle routine and
                                enables the detection

        X86_BUG_AMD_APIC_C1E  - Set when the platform is affected by E400
      
      Replace the current X86_BUG_AMD_APIC_C1E usage by the new X86_BUG_AMD_E400
      bug bit to select the idle routine which currently does an unconditional
      detection poll. X86_BUG_AMD_APIC_C1E is going to be used in later patches
      to remove the MSR polling and simplify the handling of this misfeature.
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: http://lkml.kernel.org/r/20161209182912.2726-3-bp@alien8.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  6. Nov 29, 2016
    • x86/tsc: Verify TSC_ADJUST from idle · 1d0095fe
      Thomas Gleixner authored
      
      
      When entering idle, it's a good opportunity to verify that the TSC_ADJUST
      MSR has not been tampered with (BIOS hiding SMM cycles). If tampering is
      detected, emit a warning and restore it to the previous value.

      This is especially important for machines which mark the TSC reliable
      because there is no watchdog clocksource available (SoCs).

      This is not sufficient for HPC (NOHZ_FULL) situations where a CPU never
      goes idle, but adding a timer to do the check periodically is not an option
      either. On a machine which has this issue, the check triggers right
      during boot, so there is a decent chance that the sysadmin will notice.

      Rate limit the check to once per second and warn only once per CPU.
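
      A condensed sketch of the idle-time check:

        void tsc_verify_tsc_adjust(void)
        {
                struct tsc_adjust *adj = this_cpu_ptr(&tsc_adjust);
                s64 curval;

                if (!boot_cpu_has(X86_FEATURE_TSC_ADJUST))
                        return;

                /* Rate limit the MSR check to once per second. */
                if (time_before(jiffies, adj->nextcheck))
                        return;
                adj->nextcheck = jiffies + HZ;

                rdmsrl(MSR_IA32_TSC_ADJUST, curval);
                if (adj->adjusted == curval)
                        return;

                /* Tampering detected: restore the expected value ... */
                wrmsrl(MSR_IA32_TSC_ADJUST, adj->adjusted);

                /* ... and warn, but only once per CPU. */
                if (!adj->warned) {
                        pr_warn(FW_BUG "TSC ADJUST differs: CPU%u %lld --> %lld\n",
                                smp_processor_id(), adj->adjusted, curval);
                        adj->warned = true;
                }
        }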
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: http://lkml.kernel.org/r/20161119134017.732180441@linutronix.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  7. Jul 14, 2016
    • x86/kernel: Audit and remove any unnecessary uses of module.h · 186f4360
      Paul Gortmaker authored
      
      
      Historically a lot of these existed because we did not have
      a distinction between what was modular code and what was providing
      support to modules via EXPORT_SYMBOL and friends.  That changed
      when we forked out support for the latter into the export.h file.
      
      This means we should be able to reduce the usage of module.h
      in code that is obj-y in a Makefile or bool in Kconfig.  The advantage
      in doing so is that module.h itself sources about 15 other headers,
      adding significantly to what we feed cpp, and it can obscure which
      headers we are effectively using.
      
      Since module.h was the source for init.h (for __init) and for
      export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
      for the presence of either and replace as needed.  Build testing
      revealed some implicit header usage that was fixed up accordingly.
      
      Note that some bool/obj-y instances remain since module.h is
      the header for some exception table entry stuff, and for things
      like __init_or_module (code that is tossed when MODULES=n).
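
      The typical conversion looks like this (illustrative):

        -#include <linux/module.h>
        +#include <linux/export.h>      /* EXPORT_SYMBOL() */
        +#include <linux/init.h>        /* __init */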
      
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160714001901.31603-4-paul.gortmaker@windriver.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  8. May 21, 2016
    • exit_thread: accept a task parameter to be exited · e6464694
      Jiri Slaby authored
      
      
      We need to call exit_thread() from copy_process() in a failure path, so
      make it accept a task_struct as a parameter.
      
      [v2]
      * s390: exit_thread_runtime_instr doesn't make sense to be called for
        non-current tasks.
      * arm: fix the comment in vfp_thread_copy
      * change 'me' to 'tsk' for task_struct
      * now we can change only archs that actually have exit_thread
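
      The signature change in short (sketch):

        /* Before: always operated on current. */
        void exit_thread(void);

        /* After: copy_process() can exit a half-constructed, non-current task
         * on its failure path; existing callers simply pass current. */
        void exit_thread(struct task_struct *tsk);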
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. Jan 29, 2016
    • locking/x86: Use mb() around clflush() · ca59809f
      Michael S. Tsirkin authored
      
      
      The following commit:
      
        f8e617f4 ("sched/idle/x86: Optimize unnecessary mwait_idle() resched IPIs")
      
      adds memory barriers around clflush(), but this seems wrong for UP: there
      smp_mb() degrades to barrier(), which has no effect on clflush().  We
      really want MFENCE, so switch to mb() instead.
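
      In mwait_idle(), the monitored-cacheline flush then looks roughly like:

        if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
                mb();           /* MFENCE orders clflush even on UP ... */
                clflush((void *)&current_thread_info()->flags);
                mb();           /* ... where smp_mb() is just barrier() */
        }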
      
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <bitbucket@online.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization <virtualization@lists.linux-foundation.org>
      Link: http://lkml.kernel.org/r/1453921746-16178-5-git-send-email-mst@redhat.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>