  1. Jul 15, 2021
    • arm64: entry: fix KCOV suppression · e6f85cbe
      Mark Rutland authored
      
      
      We suppress KCOV for entry.o rather than entry-common.o. As entry.o is
      built from entry.S, this is pointless, and permits instrumentation of
      entry-common.o, which is built from entry-common.c.
      
      Fix the Makefile to suppress KCOV for entry-common.o, as we had intended
      to begin with. I've verified with objdump that this is working as
      expected.
      
      Fixes: bf6fa2c0 ("arm64: entry: don't instrument entry code with KCOV")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210715123049.9990-1-mark.rutland@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: entry: add missing noinstr · 31a7f0f6
      Mark Rutland authored
      
      
      We intend that all the early exception handling code is marked as
      `noinstr`, but we forgot this for __el0_error_handler_common(), which is
      called before we have completed entry from user mode. If it were
      instrumented, we could run into problems with RCU, lockdep, etc.
      
      Mark it as `noinstr` to prevent this.
      
      The few other functions in entry-common.c which do not have `noinstr` are
      called once we've completed entry, and are safe to instrument.
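
      As a hedged illustration of what `noinstr` does (hypothetical function,
      not the kernel's actual handler body), the attribute keeps a function out
      of the instrumentable text sections:

        #include <linux/compiler_types.h>	/* defines noinstr */

        /* Hypothetical early-entry handler: noinstr places it in .noinstr.text
         * and exempts it from KASAN/KCSAN/KCOV/tracing instrumentation. */
        static void noinstr example_early_entry_handler(void)
        {
        	/* Only other noinstr code should be called from here until the
        	 * entry sequence has completed (e.g. RCU is watching again). */
        }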
      
      Fixes: bb8e93a2 ("arm64: entry: convert SError handlers to C")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Joey Gouly <joey.gouly@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210714172801.16475-1-mark.rutland@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mte: fix restoration of GCR_EL1 from suspend · 59f44069
      Mark Rutland authored
      
      
      Since commit:
      
        bad1e1c6 ("arm64: mte: switch GCR_EL1 in kernel entry and exit")
      
      we saved/restored the user GCR_EL1 value at exception boundaries, and
      update_gcr_el1_excl() is no longer used for this. However, it is still used
      to restore the kernel's GCR_EL1 value when returning from a suspend state.
      Thus, the comment is misleading (and an ISB is necessary).
      
      When restoring the kernel's GCR value, we need an ISB to ensure this is
      used by subsequent instructions. We don't necessarily get an ISB by
      other means (e.g. if the kernel is built without support for pointer
      authentication). As __cpu_setup() initialised GCR_EL1.Exclude to 0xffff,
      until a context synchronization event, allocation tag 0 may be used
      rather than the desired set of tags.
      
      This patch drops the misleading comment, adds the missing ISB, and for
      clarity folds update_gcr_el1_excl() into its only user.
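
      To make the ordering requirement concrete, here is a minimal sketch of
      the idea (assumed helper name and structure, not the literal mte.c code):

        #include <linux/types.h>
        #include <asm/barrier.h>
        #include <asm/sysreg.h>

        /* Sketch: write the kernel's GCR_EL1 value on the suspend-exit path,
         * then issue an ISB so subsequent instructions allocate tags from the
         * intended exclude set rather than the 0xffff left by __cpu_setup(). */
        static void mte_restore_kernel_gcr(u64 kernel_gcr)
        {
        	write_sysreg_s(kernel_gcr, SYS_GCR_EL1);
        	isb();	/* context synchronization: new exclude set takes effect */
        }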
      
      Fixes: bad1e1c6 ("arm64: mte: switch GCR_EL1 in kernel entry and exit")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210714143843.56537-2-mark.rutland@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Avoid premature usercopy failure · 295cf156
      Robin Murphy authored
      
      
      Al reminds us that the usercopy API must only return complete failure
      if absolutely nothing could be copied. Currently, if userspace does
      something silly like giving us an unaligned pointer to Device memory,
      or a size which overruns MTE tag bounds, we may fail to honour that
      requirement when faulting on a multi-byte access even though a smaller
      access could have succeeded.
      
      Add a mitigation to the fixup routines to fall back to a single-byte
      copy if we faulted on a larger access before anything has been written
      to the destination, to guarantee making *some* forward progress. We
      needn't be too concerned about the overall performance since this should
      only occur when callers are doing something a bit dodgy in the first
      place. Particularly broken userspace might still be able to trick
      generic_perform_write() into an infinite loop by targeting write() at
      an mmap() of some read-only device register where the fault-in load
      succeeds but any store synchronously aborts such that copy_to_user() is
      genuinely unable to make progress, but, well, don't do that...
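
      The shape of the mitigation, rendered in C for clarity (the real change
      is in the arm64 usercopy fixup routines; the helper name below is
      assumed):

        #include <linux/uaccess.h>

        /* If a wide copy faults before making any progress, retry byte-by-byte
         * so that complete failure is only reported when not even one byte can
         * be copied. */
        static unsigned long copy_from_user_with_fallback(void *to,
        						  const void __user *from,
        						  unsigned long n)
        {
        	unsigned long left = raw_copy_from_user(to, from, n);

        	if (left == n && n) {
        		unsigned long i;

        		for (i = 0; i < n; i++) {
        			u8 byte;

        			if (get_user(byte, (const u8 __user *)from + i))
        				break;
        			((u8 *)to)[i] = byte;
        		}
        		left = n - i;
        	}
        	return left;
        }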
      
      CC: stable@vger.kernel.org
      Reported-by: Chen Huang <chenhuang5@huawei.com>
      Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
  2. Jul 13, 2021
  3. Jul 12, 2021
    • Revert "arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)" · c1132702
      Will Deacon authored
      This reverts commit 65688d2a.
      
      Unfortunately, the original Qualcomm Kryo cores integrated into the
      MSM8996 SoC feature an L2 cache with 128-byte lines which sits above
      the Point of Coherency. Consequently, we must restore ARCH_DMA_MINALIGN
      to its former ugly self so that non-coherent DMA can be performed safely
      on devices built using this SoC.
      
      Thanks to Jeffrey Hugo for confirming this with a hardware designer.
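
      For context, the values involved look like this (illustrative sketch of
      asm/cache.h after the revert, not a verbatim diff):

        /* L1 lines on arm64 are 64 bytes, but non-coherent DMA alignment must
         * also cover the 128-byte L2 lines above the PoC on MSM8996's Kryo. */
        #define L1_CACHE_SHIFT		6
        #define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
        #define ARCH_DMA_MINALIGN	(128)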
      
      Link: https://lore.kernel.org/r/CAOCk7NqdpUZFMSXfGjw0_1NaSK5gyTLgpS9kSdZn1jmBy-QkfA@mail.gmail.com/
      
      
      Reported-by: Yassine Oudjana <y.oudjana@protonmail.com>
      Link: https://lore.kernel.org/r/uHgsRacR8hJ7nW-I-pIcehzg-lNIn7NJvaL7bP9tfAftFsBjsgaY2qTjG9zyBgxHkjNL1WPNrD7YVv2JVD2_Wy-a5VTbcq-1xEi8ZnwrXBo=@protonmail.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Add missing header <asm/smp.h> in two files · e62e0748
      Carlos Bilbao authored
      
      
      Add the missing header <asm/smp.h> to include/asm/smp_plat.h, as it calls
      cpu_logical_map(). Also include it in kernel/cpufeature.c, which calls
      cpu_panic_kernel() and cpu_die_early().
      
      Both files call functions declared in this header; including it directly
      makes the header dependencies less fragile.
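
      In effect, both files simply gain the direct include for the declarations
      they rely on (placement shown for illustration):

        #include <asm/smp.h>	/* cpu_logical_map(), cpu_panic_kernel(), cpu_die_early() */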
      
      Signed-off-by: Carlos Bilbao <bilbao@vt.edu>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/4325940.LvFx2qVVIh@iron-maiden
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: fix strlen() with CONFIG_KASAN_HW_TAGS · 5f34b1eb
      Mark Rutland authored
      
      
      When the kernel is built with CONFIG_KASAN_HW_TAGS and the CPU supports
      MTE, memory accesses are checked at 16-byte granularity, and
      out-of-bounds accesses can result in tag check faults. Our current
      implementation of strlen() makes unaligned 16-byte accesses (within a
      naturally aligned 4096-byte window), and can trigger tag check faults.
      
      This can be seen at boot time, e.g.
      
      | BUG: KASAN: invalid-access in __pi_strlen+0x14/0x150
      | Read at addr f4ff0000c0028300 by task swapper/0/0
      | Pointer tag: [f4], memory tag: [fe]
      |
      | CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.13.0-09550-g03c2813535a2-dirty #20
      | Hardware name: linux,dummy-virt (DT)
      | Call trace:
      |  dump_backtrace+0x0/0x1b0
      |  show_stack+0x1c/0x30
      |  dump_stack_lvl+0x68/0x84
      |  print_address_description+0x7c/0x2b4
      |  kasan_report+0x138/0x38c
      |  __do_kernel_fault+0x190/0x1c4
      |  do_tag_check_fault+0x78/0x90
      |  do_mem_abort+0x44/0xb4
      |  el1_abort+0x40/0x60
      |  el1h_64_sync_handler+0xb0/0xd0
      |  el1h_64_sync+0x78/0x7c
      |  __pi_strlen+0x14/0x150
      |  __register_sysctl_table+0x7c4/0x890
      |  register_leaf_sysctl_tables+0x1a4/0x210
      |  register_leaf_sysctl_tables+0xc8/0x210
      |  __register_sysctl_paths+0x22c/0x290
      |  register_sysctl_table+0x2c/0x40
      |  sysctl_init+0x20/0x30
      |  proc_sys_init+0x3c/0x48
      |  proc_root_init+0x80/0x9c
      |  start_kernel+0x640/0x69c
      |  __primary_switched+0xc0/0xc8
      
      To fix this, we can reduce the (strlen-internal) MIN_PAGE_SIZE to 16
      bytes when CONFIG_KASAN_HW_TAGS is selected. This will cause strlen() to
      align the base pointer downwards to a 16-byte boundary, and to discard
      the additional prefix bytes without counting them. All subsequent
      accesses will be 16-byte aligned 16-byte LDPs. While the comments say
      the body of the loop will access 32 bytes, this is performed as two
      16-byte accesses, with the second made only if the first did not
      encounter a NUL byte, so the body of the loop will not over-read across
      a 16-byte boundary.
      
      No other string routines are affected. The other str*() routines will
      not make any access which straddles a 16-byte boundary, and the mem*()
      routines will only make accesses which straddle a 16-byte boundary when
      this is entirely within the bounds of the relevant base and size
      arguments.
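
      The fix amounts to the following preprocessor logic for the assembly
      routine (a sketch; the constant name follows the message above and the
      exact placement in strlen.S is assumed):

        /* With in-kernel MTE tag checks enabled, treat 16 bytes as the "page"
         * granule so strlen() never reads across a 16-byte tag granule beyond
         * the string it was handed. */
        #ifdef CONFIG_KASAN_HW_TAGS
        #define MIN_PAGE_SIZE 16
        #else
        #define MIN_PAGE_SIZE 4096
        #endif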
      
      Fixes: 325a1de8 ("arm64: Import updated version of Cortex Strings' strlen")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Link: https://lore.kernel.org/r/20210712090043.20847-1-mark.rutland@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>