  1. Sep 04, 2018
  2. Sep 02, 2018
  3. Sep 01, 2018
  4. Aug 31, 2018
    • x86/efi: Load fixmap GDT in efi_call_phys_epilog() · eeb89e2b
      Joerg Roedel authored
      
      
      When PTI is enabled on x86-32 the kernel uses the GDT mapped in the fixmap
      for the simple reason that this address is also mapped for user-space.
      
      The efi_call_phys_prolog()/efi_call_phys_epilog() wrappers change the GDT
      to call EFI runtime services and switch back to the kernel GDT when they
      return. But the switch-back uses the writable GDT, not the fixmap GDT.
      
When that happens and the CPU returns to user-space, it switches to the
user %cr3 and tries to restore the user segment registers. This fails
because the writable GDT is not mapped in the user page-table, and
without a GDT the fault handlers can't be launched either. The result
is a triple fault and a reboot of the machine.
      
      Fix that by restoring the GDT back to the fixmap GDT which is also mapped
      in the user page-table.
      
Fixes: 7757d607 ("x86/pti: Allow CONFIG_PAGE_TABLE_ISOLATION for x86_32")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: hpa@zytor.com
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/1535702738-10971-1-git-send-email-joro@8bytes.org
    • x86/nmi: Fix NMI uaccess race against CR3 switching · 4012e77a
      Andy Lutomirski authored
      
      
      
An NMI can hit in the middle of context switching or in the middle of
      switch_mm_irqs_off().  In either case, CR3 might not match current->mm,
      which could cause copy_from_user_nmi() and friends to read the wrong
      memory.
      
      Fix it by adding a new nmi_uaccess_okay() helper and checking it in
      copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
      
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Rik van Riel <riel@surriel.com>
      Cc: Nadav Amit <nadav.amit@gmail.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jann Horn <jannh@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org
      
    • x86: Allow generating user-space headers without a compiler · 829fe4aa
      Ben Hutchings authored
      
      
      
      When bootstrapping an architecture, it's usual to generate the kernel's
      user-space headers (make headers_install) before building a compiler.  Move
      the compiler check (for asm goto support) to the archprepare target so that
      it is only done when building code for the target.
      
      Fixes: e501ce95 ("x86: Force asm-goto")
Reported-by: Helmut Grohne <helmutg@debian.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180829194317.GA4765@decadent.org.uk
      
    • x86/dumpstack: Don't dump kernel memory based on usermode RIP · 342db04a
      Jann Horn authored
      
      
      
      show_opcodes() is used both for dumping kernel instructions and for dumping
      user instructions. If userspace causes #PF by jumping to a kernel address,
      show_opcodes() can be reached with regs->ip controlled by the user,
      pointing to kernel code. Make sure that userspace can't trick us into
      dumping kernel memory into dmesg.
      
      Fixes: 7cccf072 ("x86/dumpstack: Add a show_ip() function")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: security@kernel.org
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180828154901.112726-1-jannh@google.com
      
    • arm64: mm: always enable CONFIG_HOLES_IN_ZONE · f52bb98f
      James Morse authored
      Commit 6d526ee2 ("arm64: mm: enable CONFIG_HOLES_IN_ZONE for NUMA")
      only enabled HOLES_IN_ZONE for NUMA systems because the NUMA code was
      choking on the missing zone for nomap pages. This problem doesn't just
      apply to NUMA systems.
      
      If the architecture doesn't set HAVE_ARCH_PFN_VALID, pfn_valid() will
      return true if the pfn is part of a valid sparsemem section.
      
When working with multiple pages, the mm code uses pfn_valid_within()
to test that each page it uses within the sparsemem section is valid.
On most systems memory comes in MAX_ORDER_NR_PAGES chunks which all
have valid/initialised struct pages. In this case pfn_valid_within()
is optimised out.
      
      Systems where this isn't true (e.g. due to nomap) should set
      HOLES_IN_ZONE and provide HAVE_ARCH_PFN_VALID so that mm tests each
      page as it works with it.
      
      Currently non-NUMA arm64 systems can't enable HOLES_IN_ZONE, leading to
      a VM_BUG_ON():
      
      | page:fffffdff802e1780 is uninitialized and poisoned
      | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
      | raw: ffffffffffffffff ffffffffffffffff ffffffffffffffff ffffffffffffffff
      | page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
      | ------------[ cut here ]------------
      | kernel BUG at include/linux/mm.h:978!
      | Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
      [...]
      | CPU: 1 PID: 25236 Comm: dd Not tainted 4.18.0 #7
      | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      | pstate: 40000085 (nZcv daIf -PAN -UAO)
      | pc : move_freepages_block+0x144/0x248
      | lr : move_freepages_block+0x144/0x248
      | sp : fffffe0071177680
      [...]
      | Process dd (pid: 25236, stack limit = 0x0000000094cc07fb)
      | Call trace:
      |  move_freepages_block+0x144/0x248
      |  steal_suitable_fallback+0x100/0x16c
      |  get_page_from_freelist+0x440/0xb20
      |  __alloc_pages_nodemask+0xe8/0x838
      |  new_slab+0xd4/0x418
      |  ___slab_alloc.constprop.27+0x380/0x4a8
      |  __slab_alloc.isra.21.constprop.26+0x24/0x34
      |  kmem_cache_alloc+0xa8/0x180
      |  alloc_buffer_head+0x1c/0x90
      |  alloc_page_buffers+0x68/0xb0
      |  create_empty_buffers+0x20/0x1ec
      |  create_page_buffers+0xb0/0xf0
      |  __block_write_begin_int+0xc4/0x564
      |  __block_write_begin+0x10/0x18
      |  block_write_begin+0x48/0xd0
      |  blkdev_write_begin+0x28/0x30
      |  generic_perform_write+0x98/0x16c
      |  __generic_file_write_iter+0x138/0x168
      |  blkdev_write_iter+0x80/0xf0
      |  __vfs_write+0xe4/0x10c
      |  vfs_write+0xb4/0x168
      |  ksys_write+0x44/0x88
      |  sys_write+0xc/0x14
      |  el0_svc_naked+0x30/0x34
      | Code: aa1303e0 90001a01 91296421 94008902 (d4210000)
      | ---[ end trace 1601ba47f6e883fe ]---
      
      Remove the NUMA dependency.
      
      Link: https://www.spinics.net/lists/arm-kernel/msg671851.html
      
      
      Cc: <stable@vger.kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
    • m68k/mac: Use correct PMU response format · 0986b16a
      Finn Thain authored
      
      
      Now that the 68k Mac port has adopted the via-pmu driver, it must decode
      the PMU response accordingly otherwise the date and time will be wrong.
      
      Fixes: ebd72227 ("macintosh/via-pmu: Replace via-pmu68k driver with via-pmu driver")
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  5. Aug 30, 2018
  6. Aug 29, 2018
  7. Aug 28, 2018
  8. Aug 27, 2018
  9. Aug 25, 2018
    • crypto: arm64/aes-gcm-ce - fix scatterwalk API violation · c2b24c36
      Ard Biesheuvel authored
      
      
      Commit 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on
      two input blocks at a time") modified the granularity at which
      the AES/GCM code processes its input to allow subsequent changes
      to be applied that improve performance by using aggregation to
      process multiple input blocks at once.
      
      For this reason, it doubled the algorithm's 'chunksize' property
      to 2 x AES_BLOCK_SIZE, but retained the non-SIMD fallback path that
      processes a single block at a time. In some cases, this violates the
      skcipher scatterwalk API, by calling skcipher_walk_done() with a
      non-zero residue value for a chunk that is expected to be handled
in its entirety. This results in a WARN_ON() being hit by the TLS
self-test code, but it is likely to break other use cases as well.
Unfortunately, none of the current test cases exercises this exact
code path.
      
      Fixes: 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on two ...")
Reported-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Vakul Garg <vakul.garg@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aesni - Use unaligned loads from gcm_context_data · e5b954e8
      Dave Watson authored
      
      
A regression was reported bisecting to 1476db2d ("Move HashKey
computation from stack to gcm_context"). That diff moved HashKey
computation from the stack, which was explicitly aligned in the asm,
to a struct provided from the C code, depending on AESNI_ALIGN_ATTR
for alignment. It appears some compilers may not align this struct
correctly, resulting in a crash on the movdqa instruction when
attempting to encrypt or decrypt data.
      
      Fix by using unaligned loads for the HashKeys.  On modern
      hardware there is no perf difference between the unaligned and
      aligned loads.  All other accesses to gcm_context_data already use
      unaligned loads.
      
Reported-by: Mauro Rossi <issor.oruam@gmail.com>
Fixes: 1476db2d ("Move HashKey computation from stack to gcm_context")
Cc: <stable@vger.kernel.org>
Signed-off-by: Dave Watson <davejwatson@fb.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/sm4-ce - check for the right CPU feature bit · 7fa885e2
      Ard Biesheuvel authored
      
      
      ARMv8.2 specifies special instructions for the SM3 cryptographic hash
      and the SM4 symmetric cipher. While it is unlikely that a core would
      implement one and not the other, we should only use SM4 instructions
      if the SM4 CPU feature bit is set, and we currently check the SM3
      feature bit instead. So fix that.
      
      Fixes: e99ce921 ("crypto: arm64 - add support for SM4...")
      Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  10. Aug 24, 2018