  2. May 12, 2021
    • Valentin Schneider's avatar
      sched/core: Initialize the idle task with preemption disabled · f1a0a376
      Valentin Schneider authored
      
      
      As pointed out by commit
      
        de9b8f5d ("sched: Fix crash trying to dequeue/enqueue the idle thread")
      
      init_idle() can and will be invoked more than once on the same idle
      task. At boot time, it is invoked for the boot CPU thread by
      sched_init(). Then smp_init() creates the threads for all the secondary
      CPUs and invokes init_idle() on them.
      
      As the hotplug machinery brings the secondaries to life, it will issue
      calls to idle_thread_get(), which itself invokes init_idle() yet again.
      In this case it's invoked twice more per secondary: at _cpu_up(), and at
      bringup_cpu().
      
      Given smp_init() already initializes the idle tasks for all *possible*
      CPUs, no further initialization should be required. Now, removing
      init_idle() from idle_thread_get() exposes some interesting expectations
      with regards to the idle task's preempt_count: the secondary startup always
      issues a preempt_disable(), requiring some reset of the preempt count to 0
      between hot-unplug and hotplug, which is currently served by
      idle_thread_get() -> idle_init().
      
      Given the idle task is supposed to have preemption disabled once and never
      see it re-enabled, it seems that what we actually want is to initialize its
      preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove
      init_idle() from idle_thread_get().
      
      Secondary startups were patched via coccinelle:
      
        @begone@
        @@
      
        -preempt_disable();
        ...
        cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
      
       Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
       Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
      f1a0a376
  4. May 05, 2021
    • Mark Rutland's avatar
      arm64: entry: always set GIC_PRIO_PSR_I_SET during entry · 4d6a38da
      Mark Rutland authored
      Zenghui reports that booting a kernel with "irqchip.gicv3_pseudo_nmi=1"
      on the command line hits a warning during kernel entry, due to the way
      we manipulate the PMR.
      
      Early in the entry sequence, we call lockdep_hardirqs_off() to inform
       lockdep that interrupts have been masked (as the HW sets DAIF when
      entering an exception). Architecturally PMR_EL1 is not affected by
      exception entry, and we don't set GIC_PRIO_PSR_I_SET in the PMR early in
      the exception entry sequence, so early in exception entry the PMR can
      indicate that interrupts are unmasked even though they are masked by
      DAIF.
      
      If DEBUG_LOCKDEP is selected, lockdep_hardirqs_off() will check that
      interrupts are masked, before we set GIC_PRIO_PSR_I_SET in any of the
      exception entry paths, and hence lockdep_hardirqs_off() will WARN() that
      something is amiss.
      
      We can avoid this by consistently setting GIC_PRIO_PSR_I_SET during
      exception entry so that kernel code sees a consistent environment. We
       must also update local_daif_inherit() to undo this, as it currently
       only touches DAIF. For other paths, local_daif_restore() will update both
      DAIF and the PMR. With this done, we can remove the existing special
      cases which set this later in the entry code.
      
      We always use (GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET) for consistency with
      local_daif_save(), as this will warn if it ever encounters
      (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET), and never sets this itself. This
      matches the gic_prio_kentry_setup that we have to retain for
      ret_to_user.
      
      The original splat from Zenghui's report was:
      
      | DEBUG_LOCKS_WARN_ON(!irqs_disabled())
      | WARNING: CPU: 3 PID: 125 at kernel/locking/lockdep.c:4258 lockdep_hardirqs_off+0xd4/0xe8
      | Modules linked in:
      | CPU: 3 PID: 125 Comm: modprobe Tainted: G        W         5.12.0-rc8+ #463
      | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      | pstate: 604003c5 (nZCv DAIF +PAN -UAO -TCO BTYPE=--)
      | pc : lockdep_hardirqs_off+0xd4/0xe8
      | lr : lockdep_hardirqs_off+0xd4/0xe8
      | sp : ffff80002a39bad0
      | pmr_save: 000000e0
      | x29: ffff80002a39bad0 x28: ffff0000de214bc0
      | x27: ffff0000de1c0400 x26: 000000000049b328
      | x25: 0000000000406f30 x24: ffff0000de1c00a0
      | x23: 0000000020400005 x22: ffff8000105f747c
      | x21: 0000000096000044 x20: 0000000000498ef9
      | x19: ffff80002a39bc88 x18: ffffffffffffffff
      | x17: 0000000000000000 x16: ffff800011c61eb0
      | x15: ffff800011700a88 x14: 0720072007200720
      | x13: 0720072007200720 x12: 0720072007200720
      | x11: 0720072007200720 x10: 0720072007200720
      | x9 : ffff80002a39bad0 x8 : ffff80002a39bad0
      | x7 : ffff8000119f0800 x6 : c0000000ffff7fff
      | x5 : ffff8000119f07a8 x4 : 0000000000000001
      | x3 : 9bcdab23f2432800 x2 : ffff800011730538
      | x1 : 9bcdab23f2432800 x0 : 0000000000000000
      | Call trace:
      |  lockdep_hardirqs_off+0xd4/0xe8
      |  enter_from_kernel_mode.isra.5+0x7c/0xa8
      |  el1_abort+0x24/0x100
      |  el1_sync_handler+0x80/0xd0
      |  el1_sync+0x6c/0x100
      |  __arch_clear_user+0xc/0x90
      |  load_elf_binary+0x9fc/0x1450
      |  bprm_execve+0x404/0x880
      |  kernel_execve+0x180/0x188
      |  call_usermodehelper_exec_async+0xdc/0x158
      |  ret_from_fork+0x10/0x18
      
      Fixes: 23529049 ("arm64: entry: fix non-NMI user<->kernel transitions")
      Fixes: 7cd1ea10 ("arm64: entry: fix non-NMI kernel<->kernel transitions")
      Fixes: f0cd5ac1 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")
      Fixes: 2a9b3e6a ("arm64: entry: fix EL1 debug transitions")
      Link: https://lore.kernel.org/r/f4012761-026f-4e51-3a0c-7524e434e8b3@huawei.com
      
      
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Reported-by: Zenghui Yu <yuzenghui@huawei.com>
       Cc: Marc Zyngier <maz@kernel.org>
       Cc: Will Deacon <will@kernel.org>
       Acked-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210428111555.50880-1-mark.rutland@arm.com
      
      
       Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4d6a38da
  14. Apr 01, 2021
    • Qi Liu's avatar
      arm64: perf: Remove redundant initialization in perf_event.c · 2c2e21e7
      Qi Liu authored
      
      
       The initialization of 'value' in armv8pmu_read_hw_counter()
       and armv8pmu_read_counter() is redundant, as it is overwritten
       before first use, so remove it.
      
       Signed-off-by: Qi Liu <liuqi115@huawei.com>
      Link: https://lore.kernel.org/r/1617275801-1980-1-git-send-email-liuqi115@huawei.com
      
      
       Signed-off-by: Will Deacon <will@kernel.org>
      2c2e21e7
    • Andrew Scull's avatar
      KVM: arm64: Log source when panicking from nVHE hyp · aec0fae6
      Andrew Scull authored
      
      
      To aid with debugging, add details of the source of a panic from nVHE
      hyp. This is done by having nVHE hyp exit to nvhe_hyp_panic_handler()
      rather than directly to panic(). The handler will then add the extra
      details for debugging before panicking the kernel.
      
      If the panic was due to a BUG(), look up the metadata to log the file
      and line, if available, otherwise log an address that can be looked up
      in vmlinux. The hyp offset is also logged to allow other hyp VAs to be
      converted, similar to how the kernel offset is logged during a panic.
      
      __hyp_panic_string is now inlined since it no longer needs to be
      referenced as a symbol and the message is free to diverge between VHE
      and nVHE.
      
      The following is an example of the logs generated by a BUG in nVHE hyp.
      
      [   46.754840] kvm [307]: nVHE hyp BUG at: arch/arm64/kvm/hyp/nvhe/switch.c:242!
      [   46.755357] kvm [307]: Hyp Offset: 0xfffea6c58e1e0000
      [   46.755824] Kernel panic - not syncing: HYP panic:
      [   46.755824] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
      [   46.755824] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
      [   46.755824] VCPU:0000d93a880d0000
      [   46.756960] CPU: 3 PID: 307 Comm: kvm-vcpu-0 Not tainted 5.12.0-rc3-00005-gc572b99cf65b-dirty #133
      [   46.757459] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
      [   46.758366] Call trace:
      [   46.758601]  dump_backtrace+0x0/0x1b0
      [   46.758856]  show_stack+0x18/0x70
      [   46.759057]  dump_stack+0xd0/0x12c
      [   46.759236]  panic+0x16c/0x334
      [   46.759426]  arm64_kernel_unmapped_at_el0+0x0/0x30
      [   46.759661]  kvm_arch_vcpu_ioctl_run+0x134/0x750
      [   46.759936]  kvm_vcpu_ioctl+0x2f0/0x970
      [   46.760156]  __arm64_sys_ioctl+0xa8/0xec
      [   46.760379]  el0_svc_common.constprop.0+0x60/0x120
      [   46.760627]  do_el0_svc+0x24/0x90
      [   46.760766]  el0_svc+0x2c/0x54
      [   46.760915]  el0_sync_handler+0x1a4/0x1b0
      [   46.761146]  el0_sync+0x170/0x180
      [   46.761889] SMP: stopping secondary CPUs
      [   46.762786] Kernel Offset: 0x3e1cd2820000 from 0xffff800010000000
      [   46.763142] PHYS_OFFSET: 0xffffa9f680000000
      [   46.763359] CPU features: 0x00240022,61806008
      [   46.763651] Memory Limit: none
      [   46.813867] ---[ end Kernel panic - not syncing: HYP panic:
      [   46.813867] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
      [   46.813867] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
      [   46.813867] VCPU:0000d93a880d0000 ]---
      
       Signed-off-by: Andrew Scull <ascull@google.com>
       Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20210318143311.839894-6-ascull@google.com
      aec0fae6