  1. May 21, 2016
    • exit_thread: remove empty bodies · 5f56a5df
      Jiri Slaby authored
      
      
      Define HAVE_EXIT_THREAD for archs which want to do something in
      exit_thread. For others, let's define exit_thread as an empty inline.
      
      This is a cleanup before we change the prototype of exit_thread to
      accept a task parameter.
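
      As a sketch (not the literal diff), the resulting pattern in
      include/linux/sched.h would look roughly like this, with
      architectures that need the hook selecting HAVE_EXIT_THREAD in their
      Kconfig:

      	#ifdef CONFIG_HAVE_EXIT_THREAD
      	extern void exit_thread(void);
      	#else
      	static inline void exit_thread(void)
      	{
      	}
      	#endif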
      
      [akpm@linux-foundation.org: fix mips]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. May 17, 2016
    • bpf: arm64: remove callee-save registers use for tmp registers · 4c1cd4fd
      Yang Shi authored
      
      
      In the current implementation of the ARM64 eBPF JIT, R23 and R24 are
      used as tmp registers, but they are callee-saved registers. This
      leads to a variable-size JIT prologue and epilogue. The recent
      blinding constant change prefers a constant-size prologue and
      epilogue. The AAPCS reserves R9 ~ R15 as temp registers, which do
      not need to be saved/restored across function calls. So, replace R23
      and R24 with R10 and R11, and remove the tmp_used flag, saving 2
      instructions in some jitted BPF programs.
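
      Sketched against the JIT's register map in
      arch/arm64/net/bpf_jit_comp.c (array layout assumed for
      illustration):

      	/* before: callee-saved, forcing a save/restore in the
      	 * prologue/epilogue */
      	[TMP_REG_1] = A64_R(23),
      	[TMP_REG_2] = A64_R(24),

      	/* after: AAPCS temporaries, clobberable without spilling */
      	[TMP_REG_1] = A64_R(10),
      	[TMP_REG_2] = A64_R(11),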
      
      CC: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • perf core: Add a 'nr' field to perf_event_callchain_context · 3b1fff08
      Arnaldo Carvalho de Melo authored
      We will use it to count how many addresses are in the entry->ip[]
      array, excluding PERF_CONTEXT_{KERNEL,USER,etc} entries, so that we
      can really return the number of entries specified by the user via
      the relevant sysctl, kernel.perf_event_max_stack, or via the
      per-event perf_event_attr.sample_max_stack knob.
      
      This way we keep the meaning of perf_sample->ip_callchain->nr, i.e.
      the total number of entries, be they real addresses or PERF_CONTEXT_
      markers, while honouring the max_stack knobs: the end result will be
      max_stack entries if a given stack trace has at least that many
      entries.
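
      A minimal sketch of the resulting context structure (field order
      illustrative):

      	struct perf_callchain_entry_ctx {
      		struct perf_callchain_entry	*entry;
      		u32				max_stack;
      		u32				nr; /* real addresses only */
      	};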
      
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-s8teto51tdqvlfhefndtat9r@git.kernel.org
      
      
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf core: Pass max stack as a perf_callchain_entry context · cfbcf468
      Arnaldo Carvalho de Melo authored
      This makes perf_callchain_{user,kernel}() receive the max stack
      as context for the perf_callchain_entry, instead of accessing
      the global sysctl_perf_event_max_stack.
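
      Roughly, the signature change looks like this (sketch, not the full
      diff):

      	/* before: each arch consults the global sysctl */
      	void perf_callchain_kernel(struct perf_callchain_entry *entry,
      				   struct pt_regs *regs);

      	/* after: the limit travels with the entry context */
      	void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
      				   struct pt_regs *regs);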
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org
      
      
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  3. May 13, 2016
    • KVM: halt_polling: provide a way to qualify wakeups during poll · 3491caf2
      Christian Borntraeger authored
      
      
      Some wakeups should not be considered a successful poll. For
      example, on s390 I/O interrupts are usually floating, which means
      that _ALL_ CPUs would be considered runnable - letting all vCPUs
      poll all the time for transaction-like workloads, even if one vCPU
      would be enough. This can result in huge CPU usage for large guests.
      This patch lets architectures provide a way to qualify wakeups, i.e.
      whether they should be considered good or bad with regard to polls.
      
      For s390 the implementation will fence off halt polling for anything
      but known good, single-vCPU events. The s390 implementation for
      floating interrupts does a wakeup for one vCPU, but the interrupt
      will be delivered by whatever CPU checks first for a pending
      interrupt. We give preference to the woken-up CPU by marking its
      poll as a "good" poll. This code will also mark several other wakeup
      reasons, like IPIs or expired timers, as "good". It will of course
      also mark some events as not successful. As KVM on z always runs as
      a 2nd-level hypervisor, we prefer not to poll unless we are really
      sure, though.
      
      This patch successfully limits the CPU usage for cases like a uperf
      1-byte transactional ping-pong workload or wakeup-heavy workloads
      like OLTP, while still providing a proper speedup.

      This also introduces a new vcpu stat, "halt_poll_no_tuning", that
      marks wakeups that are considered not good for polling.
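
      The hook might be wired into the kvm_vcpu_block() poll loop roughly
      as below; the helper and stat names follow this message and may
      differ in the merged code after the rename noted in the tags:

      	if (kvm_vcpu_check_block(vcpu) < 0) {
      		++vcpu->stat.halt_successful_poll;
      		/* arch decides if this wakeup justifies future polling */
      		if (!vcpu_valid_wakeup(vcpu))
      			++vcpu->stat.halt_poll_no_tuning;
      		goto out;
      	}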
      
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Radim Krčmář <rkrcmar@redhat.com> (for an earlier version)
      Cc: David Matlack <dmatlack@google.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      [Rename config symbol. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. May 11, 2016
    • arm64: kernel: Fix incorrect brk randomization · 61462c8a
      Kees Cook authored
      
      
      This fixes two issues with the arm64 brk randomization. First,
      STACK_RND_MASK was being used incorrectly. The original code was:
      
      	unsigned long range_end = base + (STACK_RND_MASK << PAGE_SHIFT) + 1;
      
      STACK_RND_MASK is 0x7ff (32-bit) or 0x3ffff (64-bit), with 4K pages where
      PAGE_SHIFT is 12:
      
      	#define STACK_RND_MASK	(test_thread_flag(TIF_32BIT) ? \
      						0x7ff >> (PAGE_SHIFT - 12) : \
      						0x3ffff >> (PAGE_SHIFT - 12))
      
      This means the resulting offset from base would be 0x7ff001 or
      0x3ffff001, which is wrong since it creates an unaligned end
      address. It was likely intended to be:
      
      	unsigned long range_end = base + ((STACK_RND_MASK + 1) << PAGE_SHIFT)
      
      Which would result in offsets of 0x800000 (32-bit) and 0x40000000 (64-bit).
      
      However, even this corrected 32-bit compat offset (0x00800000) is much
      smaller than native ARM's brk randomization value (0x02000000):
      
      	unsigned long arch_randomize_brk(struct mm_struct *mm)
      	{
      	        unsigned long range_end = mm->brk + 0x02000000;
      	        return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
      	}
      
      So, instead of basing arm64's brk randomization on mistaken STACK_RND_MASK
      calculations, just use specific corrected values for compat (0x2000000)
      and native arm64 (0x40000000).
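
      The resulting arch_randomize_brk() would look roughly like this
      (sketch, using the randomize_range() helper of this era):

      	unsigned long arch_randomize_brk(struct mm_struct *mm)
      	{
      		unsigned long range_end = mm->brk;

      		if (is_compat_task())
      			range_end += 0x02000000;
      		else
      			range_end += 0x40000000;

      		return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
      	}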
      
      Reviewed-by: Jon Medhurst <tixy@linaro.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      [will: use is_compat_task() as suggested by tixy]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str · f228b494
      Julien Grall authored
      
      
      The loop that browses the compat_hwcap_str array stops when a NULL
      is encountered; however, the NULL is missing at the end of the
      array. This leads to an overrun until a NULL is found somewhere in
      the following memory. In practice this works out, because the
      compat_hwcap2_str array tends to follow immediately in memory, and
      that one *is* terminated correctly. Furthermore, the unsigned int
      compat_elf_hwcap is checked before printing each capability, so we
      end up doing the right thing because the combined size of the two
      arrays is less than 32. Still, this is an obvious mistake and should
      be fixed.
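
      The fix itself is one line; sketched with the array contents elided:

      	static const char *const compat_hwcap_str[] = {
      		"swp",
      		/* ... */
      		"evtstrm",
      		NULL	/* terminator the print loop relies on */
      	};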
      
      Note for backporting: commit 12d11817 ("arm64: Move
      /proc/cpuinfo handling code") moved this code in v4.4. Prior to that
      commit, the same change should be made in arch/arm64/kernel/setup.c.
      
      Fixes: 44b82b77 ("arm64: Fix up /proc/cpuinfo")
      Cc: <stable@vger.kernel.org> # v3.19+ (but see note above prior to v4.4)
      Signed-off-by: Julien Grall <julien.grall@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: secondary_start_kernel: Remove unnecessary barrier · 99aa0362
      Suzuki K Poulose authored
      
      
      Remove the unnecessary smp_wmb(), which was added to make sure that
      update_cpu_boot_status() completes before we mark the CPU online.
      update_cpu_boot_status() already issues a dsb() (required for the
      failing CPUs) to ensure the correct behaviour.
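
      In secondary_start_kernel() the change amounts to dropping the
      barrier between the two calls, roughly:

      	update_cpu_boot_status(CPU_BOOT_SUCCESS); /* ends with dsb() */
      	/* smp_wmb() removed: ordering is already guaranteed */
      	set_cpu_online(cpu, true);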
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Dennis Chen <dennis.chen@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. May 09, 2016
    • kvm: arm64: Enable hardware updates of the Access Flag for Stage 2 page tables · 06485053
      Catalin Marinas authored
      
      
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      VTCR_EL2.HA enabled (bit 21), when the CPU accesses an IPA with the
      PTE_AF bit cleared in the stage 2 page table, instead of raising an
      Access Flag fault to EL2, the CPU sets the actual page table entry
      bit (bit 10). To ensure that kernel modifications to the page table do not
      inadvertently revert a bit set by hardware updates, certain Stage 2
      software pte/pmd operations must be performed atomically.
      
      The main user of the AF bit is the kvm_age_hva() mechanism. The
      kvm_age_hva_handler() function performs a "test and clear young" action
      on the pte/pmd. This needs to be atomic with respect to automatic hardware
      updates of the AF bit. Since the AF bit is in the same position for both
      Stage 1 and Stage 2, the patch reuses the existing
      ptep_test_and_clear_young() functionality if
      __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG is defined. Otherwise, the
      existing pte_young/pte_mkold mechanism is preserved.
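
      The aging helper, sketched (function name hypothetical; the real
      patch wires this through the stage 2 accessors):

      	static int stage2_ptep_test_and_clear_young(pte_t *pte)
      	{
      	#ifdef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
      		/* atomic RMW, safe against hardware AF updates */
      		return __ptep_test_and_clear_young(pte);
      	#else
      		int young = pte_young(*pte);

      		/* non-atomic: only correct without hardware updates */
      		*pte = pte_mkold(*pte);
      		return young;
      	#endif
      	}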
      
      The kvm_set_s2pte_readonly() function (and the corresponding pmd
      equivalent) has to perform atomic modifications in order to avoid a
      race with updates of the AF bit. The arm64 implementation has been
      rewritten using exclusives.
      
      Currently, kvm_set_s2pte_writable() (and pmd equivalent) take a pointer
      argument and modify the pte/pmd in place. However, these functions are
      only used on local variables rather than actual page table entries, so
      it makes more sense to follow the pte_mkwrite() approach for stage 1
      attributes. The change to kvm_s2pte_mkwrite() makes it clear that these
      functions do not modify the actual page table entries.
      
      The (pte|pmd)_mkyoung() uses on Stage 2 entries (setting the AF bit
      explicitly) do not need to be modified since hardware updates of the
      dirty status are not supported by KVM, so there is no possibility of
      losing such information.
      
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
    • iommu/dma: Finish optimising higher-order allocations · 3b6b7e19
      Robin Murphy authored
      
      
      Now that we know exactly which page sizes our caller wants to use in the
      given domain, we can restrict higher-order allocation attempts to just
      those sizes, if any, and avoid wasting any time or effort on other sizes
      which offer no benefit. In the same vein, this also lets us accommodate
      a minimum order greater than 0 for special cases.
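
      The idea, condensed into a sketch (not the literal implementation):

      	/* one candidate order per supported block size, e.g. 4K/2M/1G
      	 * with 4K pages gives order_mask = BIT(0)|BIT(9)|BIT(18) */
      	unsigned long order_mask = domain->pgsize_bitmap >> PAGE_SHIFT;
      	unsigned int order;

      	while (count) {
      		struct page *page = NULL;

      		/* largest supported order that still fits, then smaller */
      		for (order = __fls(order_mask); !page && order; order--) {
      			if (!(order_mask & (1UL << order)) ||
      			    (1UL << order) > count)
      				continue;
      			page = alloc_pages(gfp | __GFP_NORETRY, order);
      		}
      		/* ... fall back to the minimum order, map and account ... */
      	}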
      
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Yong Wu <yong.wu@mediatek.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
    • iommu: of: enforce const-ness of struct iommu_ops · 53c92d79
      Robin Murphy authored
      
      
      As a set of driver-provided callbacks and static data, there is no
      compelling reason for struct iommu_ops to be mutable in core code, so
      enforce const-ness throughout.
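
      A representative signature change (one of several; sketch):

      	/* before */
      	struct iommu_ops *of_iommu_configure(struct device *dev,
      					     struct device_node *np);

      	/* after */
      	const struct iommu_ops *of_iommu_configure(struct device *dev,
      						   struct device_node *np);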
      
      Acked-by: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
  6. May 04, 2016
    • asm-generic: Drop renameat syscall from default list · b0da6d44
      James Hogan authored
      
      
      The newer renameat2 syscall provides all the functionality provided by
      the renameat syscall and adds flags, so future architectures won't need
      to include renameat.
      
      Therefore drop the renameat syscall from the generic syscall list unless
      __ARCH_WANT_RENAMEAT is defined by the architecture's unistd.h prior to
      including asm-generic/unistd.h, and adjust all architectures using the
      generic syscall list to define it so that no in-tree architectures are
      affected.
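
      The opt-in pattern, sketched:

      	/* arch/<arch>/include/uapi/asm/unistd.h */
      	#define __ARCH_WANT_RENAMEAT
      	#include <asm-generic/unistd.h>

      	/* include/uapi/asm-generic/unistd.h */
      	#ifdef __ARCH_WANT_RENAMEAT
      	/* renameat is superseded with flags by renameat2 */
      	#define __NR_renameat 38
      	__SYSCALL(__NR_renameat, sys_renameat)
      	#endif /* __ARCH_WANT_RENAMEAT */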
      
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-snps-arc@lists.infradead.org
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: linux-c6x-dev@linux-c6x.org
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: linux-hexagon@vger.kernel.org
      Cc: linux-metag@vger.kernel.org
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: linux@lists.openrisc.net
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: nios2-dev@lists.rocketboards.org
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  7. May 03, 2016
    • arm64: always use STRICT_MM_TYPECHECKS · 2326df55
      Yang Shi authored
      Inspired by the powerpc counterpart [1], which shows that enabling
      STRICT_MM_TYPECHECKS with a modern compiler has no negative effect
      on code generation.
      
      Arnd's comment [2] on that patch notes that STRICT_MM_TYPECHECKS
      could be the default as long as the architecture can pass structures
      in registers as function arguments. ARM64 can do so as long as the
      structure is at most 16 bytes, and all the page table value types
      are u64 on ARM64.
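
      For reference, the checked types are single-u64 wrapper structs,
      roughly:

      	/* STRICT_MM_TYPECHECKS: distinct types, still register-sized */
      	typedef struct { pteval_t pte; } pte_t;
      	#define pte_val(x)	((x).pte)
      	#define __pte(x)	((pte_t) { (x) })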
      
      The disassembly below demonstrates this; entry is of type pte_t:
      
                  entry = arch_make_huge_pte(entry, vma, page, writable);
         0xffff00000826fc38 <+80>:    and     x0, x0, #0xfffffffffffffffd
         0xffff00000826fc3c <+84>:    mov     w3, w21
         0xffff00000826fc40 <+88>:    mov     x2, x20
         0xffff00000826fc44 <+92>:    mov     x1, x19
         0xffff00000826fc48 <+96>:    orr     x0, x0, #0x400
         0xffff00000826fc4c <+100>:   bl      0xffff00000809bcc0 <arch_make_huge_pte>
      
      [1] http://www.spinics.net/lists/linux-mm/msg105951.html
      [2] http://www.spinics.net/lists/linux-mm/msg105969.html
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kvm: Fix kvm teardown for systems using the extended idmap · c612505f
      James Morse authored
      
      
      If memory is located above 1<<VA_BITS, kvm adds an extra level to its page
      tables, merging the runtime tables and boot tables that contain the idmap.
      This lets us avoid the trampoline dance during initialisation.
      
      This also means there is no trampoline page mapped, so
      __cpu_reset_hyp_mode() can't call __kvm_hyp_reset() in this page. The good
      news is the idmap is still mapped, so we don't need the trampoline page.
      The bad news is we can't call it directly as the idmap is above
      HYP_PAGE_OFFSET, so its address is masked by kvm_call_hyp.
      
      Add a function, __extended_idmap_trampoline, which branches into
      __kvm_hyp_reset in the idmap, and change kvm_hyp_reset_entry() to
      return this address when __kvm_cpu_uses_extended_idmap() is true. In
      this case __kvm_hyp_reset() will still switch to the boot tables
      (which are the merged tables that were already in use), and branch
      into the idmap (where it already was).
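
      The dispatch might look roughly like this (sketch; symbol types and
      address arithmetic simplified):

      	static inline unsigned long kvm_hyp_reset_entry(void)
      	{
      		if (!__kvm_cpu_uses_extended_idmap())
      			/* trampoline page is mapped: use it as before */
      			return TRAMPOLINE_VA +
      			       (__kvm_hyp_reset - __hyp_idmap_text_start);

      		/*
      		 * __kvm_hyp_reset sits in the idmap, above
      		 * HYP_PAGE_OFFSET, so it can't be called directly;
      		 * branch via the trampoline instead.
      		 */
      		return KERN_TO_HYP((unsigned long)__extended_idmap_trampoline);
      	}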
      
      This fixes boot failures on these systems, where we fail to execute the
      missing trampoline page when tearing down kvm in init_subsystems():
      [    2.508922] kvm [1]: 8-bit VMID
      [    2.512057] kvm [1]: Hyp mode initialized successfully
      [    2.517242] kvm [1]: interrupt-controller@e1140000 IRQ13
      [    2.522622] kvm [1]: timer IRQ3
      [    2.525783] Kernel panic - not syncing: HYP panic:
      [    2.525783] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    2.525783] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    2.525783] VCPU:          (null)
      [    2.525783]
      [    2.547667] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W       4.6.0-rc5+ #1
      [    2.555137] Hardware name: Default string Default string/Default string, BIOS ROD0084E 09/03/2015
      [    2.563994] Call trace:
      [    2.566432] [<ffffff80080888d0>] dump_backtrace+0x0/0x240
      [    2.571818] [<ffffff8008088b24>] show_stack+0x14/0x20
      [    2.576858] [<ffffff80083423ac>] dump_stack+0x94/0xb8
      [    2.581899] [<ffffff8008152130>] panic+0x10c/0x250
      [    2.586677] [<ffffff8008152024>] panic+0x0/0x250
      [    2.591281] SMP: stopping secondary CPUs
      [    3.649692] SMP: failed to stop secondary CPUs 0-2,4-7
      [    3.654818] Kernel Offset: disabled
      [    3.658293] Memory Limit: none
      [    3.661337] ---[ end Kernel panic - not syncing: HYP panic:
      [    3.661337] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    3.661337] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    3.661337] VCPU:          (null)
      [    3.661337]
      
      Reported-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>