  1. Mar 11, 2021
    • arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds · 7ba8f2b2
      Ard Biesheuvel authored
      
      
      52-bit VA kernels can run on hardware that is only 48-bit capable, but
      configure the ID map as 52-bit by default. This was not a problem until
      recently, because the special T0SZ value for a 52-bit VA space was never
      programmed into the TCR register anyway, and because a 52-bit ID map
      happens to use the same number of translation levels as a 48-bit one.
      
      This behavior was changed by commit 1401bef7 ("arm64: mm: Always update
      TCR_EL1 from __cpu_set_tcr_t0sz()"), which causes the unsupported T0SZ
      value for a 52-bit VA to be programmed into TCR_EL1. While some hardware
      simply ignores this, Mark reports that Amberwing systems choke on this,
      resulting in a broken boot. But even before that commit, the unsupported
      idmap_t0sz value was exposed to KVM and used to program TCR_EL2 incorrectly
      as well.
      
      Given that we already have to deal with address spaces being either 48-bit
      or 52-bit in size, the cleanest approach seems to be to simply default to
      a 48-bit VA ID map, and only switch to a 52-bit one if the placement of the
      kernel in DRAM requires it. This is guaranteed not to happen unless the
      system is actually 52-bit VA capable. (A small sketch of the T0SZ
      arithmetic follows this entry.)
      
      Fixes: 90ec95cd ("arm64: mm: Introduce VA_BITS_MIN")
      Reported-by: Mark Salter <msalter@redhat.com>
      Link: http://lore.kernel.org/r/20210310003216.410037-1-msalter@redhat.com
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20210310171515.416643-2-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      7ba8f2b2
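      A minimal standalone sketch of the T0SZ arithmetic behind this change
      (illustrative names only, not the kernel's actual idmap code): the
      TCR_ELx.T0SZ field encodes an address-space size as 64 - va_bits, so a
      48-bit ID map programs T0SZ = 16 and a 52-bit one programs T0SZ = 12,
      and the map only needs to widen to 52 bits when the kernel image sits
      above the 48-bit physical boundary.

        #include <stdint.h>
        #include <stdio.h>

        #define VA_BITS_DEFAULT 48u   /* default ID-map size */
        #define VA_BITS_52      52u   /* only if placement demands it */

        /* TCR_ELx.T0SZ field value for a given VA size. */
        static unsigned int t0sz(unsigned int va_bits)
        {
                return 64u - va_bits;
        }

        /* Pick the ID-map VA size from the kernel image's physical end. */
        static unsigned int idmap_va_bits(uint64_t kernel_phys_end)
        {
                if (kernel_phys_end > (UINT64_C(1) << VA_BITS_DEFAULT))
                        return VA_BITS_52;      /* needs a 52-bit ID map */
                return VA_BITS_DEFAULT;         /* safe on 48-bit-only HW */
        }

        int main(void)
        {
                uint64_t ends[] = { UINT64_C(0x80000000), UINT64_C(1) << 49 };

                for (int i = 0; i < 2; i++) {
                        unsigned int bits = idmap_va_bits(ends[i]);
                        printf("kernel end %#llx -> %u-bit ID map, T0SZ=%u\n",
                               (unsigned long long)ends[i], bits, t0sz(bits));
                }
                return 0;
        }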
  2. Feb 19, 2021
    • arm64: kexec_file: fix memory leakage in create_dtb() when fdt_open_into() fails · 656d1d58
      qiuguorui1 authored
      
      
      In create_dtb(), if fdt_open_into() fails, we need to vfree() buf
      before returning. (A sketch of the corrected error path follows this
      entry.)
      
      Fixes: 52b2a8af ("arm64: kexec_file: load initrd and device-tree")
      Cc: stable@vger.kernel.org # v5.0
      Signed-off-by: qiuguorui1 <qiuguorui1@huawei.com>
      Link: https://lore.kernel.org/r/20210218125900.6810-1-qiuguorui1@huawei.com
      Signed-off-by: Will Deacon <will@kernel.org>
      656d1d58
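      A simplified sketch of the error-handling pattern this fix restores
      (illustrative shape only, not the exact kexec_file code; the real
      create_dtb() does more work): once vmalloc() has succeeded, every
      early return must free the buffer, including the one taken when
      fdt_open_into() fails.

        #include <linux/errno.h>
        #include <linux/libfdt.h>
        #include <linux/of_fdt.h>
        #include <linux/vmalloc.h>

        /* Illustrative shape of the fixed error path. */
        static int create_dtb_sketch(void **dtb_out, unsigned long buf_size)
        {
                void *buf;
                int ret;

                buf = vmalloc(buf_size);
                if (!buf)
                        return -ENOMEM;

                ret = fdt_open_into(initial_boot_params, buf, buf_size);
                if (ret) {
                        vfree(buf);     /* the cleanup the fix adds */
                        return -EINVAL;
                }

                *dtb_out = buf;
                return 0;
        }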
    • arm64: spectre: Prevent lockdep splat on v4 mitigation enable path · a2c42bba
      Will Deacon authored
      
      
      The Spectre-v4 workaround is re-configured when resuming from suspend,
      as the firmware may have re-enabled the mitigation despite the user
      previously asking for it to be disabled.
      
      Enabling or disabling the workaround can result in an undefined
      instruction exception on CPUs which implement PSTATE.SSBS but only allow
      it to be configured by adjusting the SPSR on exception return. We handle
      this by installing an 'undef hook' which effectively emulates the access.
      
      Installing this hook requires us to take a couple of spinlocks, both to
      avoid corrupting the internal list of hooks and to ensure that we
      don't run into an unhandled exception. Unfortunately, when resuming from
      suspend, we haven't yet called rcu_idle_exit() and so lockdep gets angry
      about "suspicious RCU usage". In doing so, it tries to print a warning,
      which leads it to get even more suspicious, this time about itself:
      
       |  rcu_scheduler_active = 2, debug_locks = 1
       |  RCU used illegally from extended quiescent state!
       |  1 lock held by swapper/0:
       |   #0: (logbuf_lock){-.-.}-{2:2}, at: vprintk_emit+0x88/0x198
       |
       |  Call trace:
       |   dump_backtrace+0x0/0x1d8
       |   show_stack+0x18/0x24
       |   dump_stack+0xe0/0x17c
       |   lockdep_rcu_suspicious+0x11c/0x134
       |   trace_lock_release+0xa0/0x160
       |   lock_release+0x3c/0x290
       |   _raw_spin_unlock+0x44/0x80
       |   vprintk_emit+0xbc/0x198
       |   vprintk_default+0x44/0x6c
       |   vprintk_func+0x1f4/0x1fc
       |   printk+0x54/0x7c
       |   lockdep_rcu_suspicious+0x30/0x134
       |   trace_lock_acquire+0xa0/0x188
       |   lock_acquire+0x50/0x2fc
       |   _raw_spin_lock+0x68/0x80
       |   spectre_v4_enable_mitigation+0xa8/0x30c
       |   __cpu_suspend_exit+0xd4/0x1a8
       |   cpu_suspend+0xa0/0x104
       |   psci_cpu_suspend_enter+0x3c/0x5c
       |   psci_enter_idle_state+0x44/0x74
       |   cpuidle_enter_state+0x148/0x2f8
       |   cpuidle_enter+0x38/0x50
       |   do_idle+0x1f0/0x2b4
      
      Prevent these splats by running __cpu_suspend_exit() with RCU watching
      (a sketch of one way to do this follows this entry).
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Saravana Kannan <saravanak@google.com>
      Suggested-by: "Paul E. McKenney" <paulmck@kernel.org>
      Reported-by: Sami Tolvanen <samitolvanen@google.com>
      Fixes: c2876207 ("arm64: Rewrite Spectre-v4 mitigation code")
      Cc: <stable@vger.kernel.org>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>
      Acked-by: Marc Zyngier <maz@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20210218140346.5224-1-will@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
      a2c42bba
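      One way to run the resume path "with RCU watching", sketched below on
      the assumption that RCU_NONIDLE() is used (a guess at the shape, not
      the actual diff): RCU_NONIDLE() from <linux/rcupdate.h> momentarily
      exits the extended quiescent state around the wrapped statement, so
      the spinlocks taken by the Spectre-v4 re-enable path no longer trip
      lockdep.

        #include <linux/rcupdate.h>

        /* arm64 resume hook; forward-declared here just for the sketch. */
        void __cpu_suspend_exit(void);

        static void cpu_resume_fixup(int suspend_ret)
        {
                if (suspend_ret)
                        return;
                /* Re-apply Spectre-v4 state (takes spinlocks) with RCU watching. */
                RCU_NONIDLE(__cpu_suspend_exit());
        }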
  3. Feb 08, 2021
    • arm64: entry: consolidate Cortex-A76 erratum 1463225 workaround · 6459b846
      Mark Rutland authored
      
      
      The workaround for Cortex-A76 erratum 1463225 is split across the
      syscall and debug handlers in separate files. This structure currently
      forces us to do some redundant work for debug exceptions from EL0, is a
      little difficult to follow, and gets in the way of some future rework of
      the exception entry code as it requires exceptions to be unmasked late
      in the syscall handling path.
      
      To simplify things, and as a preparatory step for future rework of
      exception entry, this patch moves all the workaround logic into
      entry-common.c. As the debug handler only needs to run for EL1 debug
      exceptions, we no longer call it for EL0 debug exceptions, and no longer
      need to check user_mode(regs) as this is always false. For clarity,
      cortex_a76_erratum_1463225_debug_handler() is changed to return bool
      (a simplified sketch of this shape follows this entry).
      
      In the SVC path, the workaround is applied earlier, but this should have
      no functional impact as exceptions are still masked. In the debug path
      we run the fixup before explicitly disabling preemption, but we will not
      attempt to preempt before returning from the exception.
      
      There should be no functional change as a result of this patch.
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20210202120341.28858-1-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
      6459b846
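      A hypothetical sketch of the consolidated shape described above (names
      and bodies simplified, not the actual entry-common.c code): the syscall
      path flags a small per-CPU window in which the erratum can raise a
      spurious debug exception, and the EL1 debug path asks a bool-returning
      handler whether the exception was that spurious one and, if so, treats
      it as handled.

        #include <linux/percpu.h>
        #include <linux/ptrace.h>
        #include <linux/types.h>

        static DEFINE_PER_CPU(int, erratum_1463225_wa);

        /* Applied early in the SVC path, while exceptions are still masked. */
        static void erratum_1463225_svc_handler(void)
        {
                if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1463225))
                        return;
                __this_cpu_write(erratum_1463225_wa, 1);
                /* ... window in which the spurious debug exception can fire ... */
                __this_cpu_write(erratum_1463225_wa, 0);
        }

        /* Called only for EL1 debug exceptions, so no user_mode(regs) check. */
        static bool erratum_1463225_debug_handler(struct pt_regs *regs)
        {
                if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1463225))
                        return false;
                if (!__this_cpu_read(erratum_1463225_wa))
                        return false;

                /* Spurious exception inside the workaround window: consume it. */
                return true;
        }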