- Jun 18, 2021
Peter Zijlstra authored
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper: task_is_running(p).

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
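
A minimal sketch of what such a helper can look like; the exact definition and field access in the patch may differ:

  /* wraps the open-coded 'p->state == TASK_RUNNING' test */
  #define task_is_running(p)    (READ_ONCE((p)->state) == TASK_RUNNING)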
-
- May 12, 2021
Valentin Schneider authored
As pointed out by commit de9b8f5d ("sched: Fix crash trying to dequeue/enqueue the idle thread"), init_idle() can and will be invoked more than once on the same idle task. At boot time, it is invoked for the boot CPU thread by sched_init(). Then smp_init() creates the threads for all the secondary CPUs and invokes init_idle() on them. As the hotplug machinery brings the secondaries to life, it will issue calls to idle_thread_get(), which itself invokes init_idle() yet again. In this case it's invoked twice more per secondary: at _cpu_up(), and at bringup_cpu().

Given smp_init() already initializes the idle tasks for all *possible* CPUs, no further initialization should be required. Now, removing init_idle() from idle_thread_get() exposes some interesting expectations with regards to the idle task's preempt_count: the secondary startup always issues a preempt_disable(), requiring some reset of the preempt count to 0 between hot-unplug and hotplug, which is currently served by idle_thread_get() -> idle_init().

Given the idle task is supposed to have preemption disabled once and never see it re-enabled, it seems that what we actually want is to initialize its preempt_count to PREEMPT_DISABLED and leave it there. Do that, and remove init_idle() from idle_thread_get().

Secondary startups were patched via coccinelle:

  @begone@
  @@

  -preempt_disable();
  ...
  cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);

Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lore.kernel.org/r/20210512094636.2958515-1-valentin.schneider@arm.com
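
A sketch of the resulting initialization, assuming a PREEMPT_DISABLED constant of a single disable level (the exact definition in the patch may differ):

  #define PREEMPT_DISABLED      (1 + PREEMPT_ENABLED)   /* assumption */

  void init_idle(struct task_struct *idle, int cpu)
  {
          /* ... */
          /* disabled once at init, never re-enabled for the idle task */
          task_thread_info(idle)->preempt_count = PREEMPT_DISABLED;
          /* ... */
  }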
-
- May 06, 2021
Shaokun Zhang authored
Commit af391b15 ("arm64: kernel: rename __cpu_suspend to keep it aligned with arm") switched to using @index instead of @arg, but the comment is stale; update it.

Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/1620280462-21937-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- May 05, 2021
Mark Rutland authored
Zenghui reports that booting a kernel with "irqchip.gicv3_pseudo_nmi=1" on the command line hits a warning during kernel entry, due to the way we manipulate the PMR.

Early in the entry sequence, we call lockdep_hardirqs_off() to inform lockdep that interrupts have been masked (as the HW sets DAIF when entering an exception). Architecturally PMR_EL1 is not affected by exception entry, and we don't set GIC_PRIO_PSR_I_SET in the PMR early in the exception entry sequence, so early in exception entry the PMR can indicate that interrupts are unmasked even though they are masked by DAIF.

If DEBUG_LOCKDEP is selected, lockdep_hardirqs_off() will check that interrupts are masked; before we set GIC_PRIO_PSR_I_SET in any of the exception entry paths, that check sees the PMR indicating unmasked interrupts, and hence lockdep_hardirqs_off() will WARN() that something is amiss.

We can avoid this by consistently setting GIC_PRIO_PSR_I_SET during exception entry so that kernel code sees a consistent environment. We must also update local_daif_inherit() to undo this, as it currently only touches DAIF. For other paths, local_daif_restore() will update both DAIF and the PMR. With this done, we can remove the existing special cases which set this later in the entry code.

We always use (GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET) for consistency with local_daif_save(), as this will warn if it ever encounters (GIC_PRIO_IRQOFF | GIC_PRIO_PSR_I_SET), and never sets this itself. This matches the gic_prio_kentry_setup that we have to retain for ret_to_user.

The original splat from Zenghui's report was:

  | DEBUG_LOCKS_WARN_ON(!irqs_disabled())
  | WARNING: CPU: 3 PID: 125 at kernel/locking/lockdep.c:4258 lockdep_hardirqs_off+0xd4/0xe8
  | Modules linked in:
  | CPU: 3 PID: 125 Comm: modprobe Tainted: G        W 5.12.0-rc8+ #463
  | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  | pstate: 604003c5 (nZCv DAIF +PAN -UAO -TCO BTYPE=--)
  | pc : lockdep_hardirqs_off+0xd4/0xe8
  | lr : lockdep_hardirqs_off+0xd4/0xe8
  | sp : ffff80002a39bad0
  | pmr_save: 000000e0
  | x29: ffff80002a39bad0 x28: ffff0000de214bc0
  | x27: ffff0000de1c0400 x26: 000000000049b328
  | x25: 0000000000406f30 x24: ffff0000de1c00a0
  | x23: 0000000020400005 x22: ffff8000105f747c
  | x21: 0000000096000044 x20: 0000000000498ef9
  | x19: ffff80002a39bc88 x18: ffffffffffffffff
  | x17: 0000000000000000 x16: ffff800011c61eb0
  | x15: ffff800011700a88 x14: 0720072007200720
  | x13: 0720072007200720 x12: 0720072007200720
  | x11: 0720072007200720 x10: 0720072007200720
  | x9 : ffff80002a39bad0 x8 : ffff80002a39bad0
  | x7 : ffff8000119f0800 x6 : c0000000ffff7fff
  | x5 : ffff8000119f07a8 x4 : 0000000000000001
  | x3 : 9bcdab23f2432800 x2 : ffff800011730538
  | x1 : 9bcdab23f2432800 x0 : 0000000000000000
  | Call trace:
  |  lockdep_hardirqs_off+0xd4/0xe8
  |  enter_from_kernel_mode.isra.5+0x7c/0xa8
  |  el1_abort+0x24/0x100
  |  el1_sync_handler+0x80/0xd0
  |  el1_sync+0x6c/0x100
  |  __arch_clear_user+0xc/0x90
  |  load_elf_binary+0x9fc/0x1450
  |  bprm_execve+0x404/0x880
  |  kernel_execve+0x180/0x188
  |  call_usermodehelper_exec_async+0xdc/0x158
  |  ret_from_fork+0x10/0x18

Fixes: 23529049 ("arm64: entry: fix non-NMI user<->kernel transitions")
Fixes: 7cd1ea10 ("arm64: entry: fix non-NMI kernel<->kernel transitions")
Fixes: f0cd5ac1 ("arm64: entry: fix NMI {user, kernel}->kernel transitions")
Fixes: 2a9b3e6a ("arm64: entry: fix EL1 debug transitions")
Link: https://lore.kernel.org/r/f4012761-026f-4e51-3a0c-7524e434e8b3@huawei.com
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210428111555.50880-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
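
A sketch of the consistent early-entry PMR setup being described, using the kernel's existing pseudo-NMI helpers; its exact placement within the entry paths is an assumption:

  /* early in exception entry: make the PMR agree with the DAIF mask */
  if (system_uses_irq_prio_masking())
          gic_write_pmr(GIC_PRIO_IRQON | GIC_PRIO_PSR_I_SET);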
-
- Apr 30, 2021
kernel test robot authored
Use min and max to make the effect more clear.

Generated by: scripts/coccinelle/misc/minmax.cocci

Cc: Denis Efremov <efremov@linux.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Julia Lawall <julia.lawall@inria.fr>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/alpine.DEB.2.22.394.2104292246300.16899@hadrien
[catalin.marinas@arm.com: include <linux/minmax.h> explicitly]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
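
A hypothetical before/after of the kind of rewrite this semantic patch performs (variable names are illustrative, not from the patch):

  #include <linux/minmax.h>

  /* before: the intent is buried in a conditional expression */
  len = len < limit ? len : limit;

  /* after: the intent is explicit */
  len = min(len, limit);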
-
Mark Rutland authored
We removed the terminal frame records in commit:

  6106e111 ("arm64: remove EL0 exception frame record")

... on the assumption that as we no longer used them to find the pt_regs at exception boundaries, they were no longer necessary. However, Leo reports that as an unintended side-effect, this causes traces which cross secondary_start_kernel to terminate one entry too late, with a spurious "0" entry.

There are a few ways we could solve this, but as we're planning to use terminal records for RELIABLE_STACKTRACE, let's revert the logic change for now, keeping the updated comments and accounting for the changes in commit:

  3c026001 ("arm64: stacktrace: Report when we reach the end of the stack")

This is effectively a partial revert of commit:

  6106e111 ("arm64: remove EL0 exception frame record")

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 6106e111 ("arm64: remove EL0 exception frame record")
Reported-by: Leo Yan <leo.yan@linaro.org>
Tested-by: Leo Yan <leo.yan@linaro.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
Link: https://lore.kernel.org/r/20210429104813.GA33550@C02TD0UTHF1T.local
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
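
A hedged sketch of how an unwinder treats a terminal record (field names follow arm64's struct stackframe; the exact check is an assumption):

  /* a terminal record marks the true end of the stack, so stop the
   * trace here rather than emitting a spurious "0" entry */
  if (!frame->fp && !frame->pc)
          return -ENOENT;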
-
Bill Wendling authored
The arm64 assembler in binutils 2.32 and above generates a program property note in a note section, .note.gnu.property, to encode the ISAs and features used. But the kernel linker script only contains a single NOTE segment:

  PHDRS
  {
    text    PT_LOAD    FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
    dynamic PT_DYNAMIC FLAGS(4);               /* PF_R */
    note    PT_NOTE    FLAGS(4);               /* PF_R */
  }

The NOTE segment generated by the vDSO linker script is aligned to 4 bytes. But the .note.gnu.property section must be aligned to 8 bytes on arm64:

  $ readelf -n vdso64.so
  Displaying notes found in: .note
    Owner        Data size       Description
    Linux        0x00000004      Unknown note type: (0x00000000)
     description data: 06 00 00 00
  readelf: Warning: note with invalid namesz and/or descsz found at offset 0x20
  readelf: Warning:  type: 0x78, namesize: 0x00000100, descsize: 0x756e694c, alignment: 8

Since the .note.gnu.property section in the vDSO is not checked by the dynamic linker, discard the .note.gnu.property sections in the vDSO. Similar to commit 4caffe6a ("x86/vdso: Discard .note.gnu.property sections in vDSO"), but for arm64.

Signed-off-by: Bill Wendling <morbo@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210423205159.830854-1-morbo@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Apr 23, 2021
Matthew Wilcox (Oracle) authored
Displaying two registers per line takes 15 lines. That improves to just 10 lines if we display three registers per line, which reduces the amount of information lost when oopses are cut off. It stays within 80 columns and matches x86-64.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210420172245.3679077-1-willy@infradead.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
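
A sketch of the three-per-line layout; the exact format string is an assumption:

  static void print_regs_3up(const struct pt_regs *regs)
  {
          int i = 29;

          while (i >= 2) {
                  /* x29 down to x0: three registers per line */
                  printk("x%-2d: %016llx x%-2d: %016llx x%-2d: %016llx\n",
                         i, regs->regs[i], i - 1, regs->regs[i - 1],
                         i - 2, regs->regs[i - 2]);
                  i -= 3;
          }
  }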
-
Mark Rutland authored
In __apply_alternatives() we take a pointer to void which we later assign to a pointer to struct alt_region. As all callers are passing a pointer to struct alt_region to begin with, it's simpler and more robust to take a pointer to struct alt_region, so let's do so and avoid the need for a temporary variable.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210416163032.10857-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
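
A before/after sketch of the signature change; the second parameter is an assumption added to make the example complete:

  /* before: void pointer plus a temporary */
  static void __apply_alternatives(void *alt_region, bool is_module)
  {
          struct alt_region *region = alt_region;
          /* ... */
  }

  /* after: take the real type directly */
  static void __apply_alternatives(struct alt_region *region, bool is_module)
  {
          /* ... */
  }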
-
Nick Desaulniers authored
Clang can assemble these files just fine; this is a relic from the top level Makefile conditionally adding this. We no longer need --prefix, --gcc-toolchain, or -Qunused-arguments flags either with this change, so remove those too.

To test building:

  $ ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
    CROSS_COMPILE_COMPAT=arm-linux-gnueabi- make LLVM=1 LLVM_IAS=1 \
    defconfig arch/arm64/kernel/vdso32/

Suggested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Tested-by: Stephen Boyd <swboyd@chromium.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210420174427.230228-1-ndesaulniers@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Apr 16, 2021
Walter Wu authored
CONFIG_KASAN_STACK and CONFIG_KASAN_STACK_ENABLE both enable KASAN stack instrumentation, but only one config should be needed, so remove CONFIG_KASAN_STACK_ENABLE and make CONFIG_KASAN_STACK usable on its own; see [1]. When KASAN stack instrumentation is enabled, gcc gets no prompt and a default value of y, while clang gets a prompt and a default value of n.

This patch fixes the following compilation warning:

  include/linux/kasan.h:333:30: warning: 'CONFIG_KASAN_STACK' is not defined, evaluates to 0 [-Wundef]

[akpm@linux-foundation.org: fix merge snafu]

Link: https://bugzilla.kernel.org/show_bug.cgi?id=210221 [1]
Link: https://lkml.kernel.org/r/20210226012531.29231-1-walter-zh.wu@mediatek.com
Fixes: d9b571c8 ("kasan: fix KASAN_STACK dependency for HW_TAGS")
Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
Suggested-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
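
A simplified sketch of the preprocessor pattern behind the -Wundef warning (illustrative, not the exact kasan.h contents):

  /* a value-style test: -Wundef fires if CONFIG_KASAN_STACK is not
   * defined at all; the fix ensures it is always defined to 0 or 1 */
  #if CONFIG_KASAN_STACK
  static inline void stack_instrumentation_only(void) { }  /* hypothetical */
  #endif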
-
- Apr 15, 2021
Mark Brown authored
The FPSIMD code was relying on IS_ENABLED() checks in system_supports_sve() to cause the compiler to delete references to SVE functions in some places; add explicit IS_ENABLED() checks back.

Fixes: ef9c5d09 ("arm64/sve: Remove redundant system_supports_sve() tests")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210415121742.36628-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
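
A sketch of the restored pattern; the guarded callee is hypothetical:

  /* IS_ENABLED() is constant-folded, so when CONFIG_ARM64_SVE is off
   * the compiler deletes both the call and its reference to the SVE
   * symbol */
  if (IS_ENABLED(CONFIG_ARM64_SVE) && test_thread_flag(TIF_SVE))
          sve_handle_state();     /* hypothetical callee for illustration */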
-
- Apr 13, 2021
zhouchuangao authored
It can be optimized at compile time.

Signed-off-by: zhouchuangao <zhouchuangao@vivo.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/1617105472-6081-1-git-send-email-zhouchuangao@vivo.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
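
The message is terse; given the kprobes context, the usual shape of such a change is moving a runtime check to compile time. A generic, hypothetical illustration:

  /* evaluated entirely at compile time; no runtime code is emitted */
  BUILD_BUG_ON(sizeof(kprobe_opcode_t) != sizeof(u32));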
-
Peter Collingbourne authored
The kernel does not use any keys besides IA so we don't need to install IB/DA/DB/GA on kernel exit if we arrange to install them on task switch instead, which we can expect to happen an order of magnitude less often. Furthermore we can avoid installing the user IA in the case where the user task has IA disabled and just leave the kernel IA installed. This also lets us avoid needing to install IA on kernel entry.

On an Apple M1 under a hypervisor, the overhead of kernel entry/exit has been measured to be reduced by 15.6ns in the case where IA is enabled, and 31.9ns in the case where IA is disabled.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ieddf6b580d23c9e0bed45a822dabe72d2ffc9a8e
Link: https://lore.kernel.org/r/2d653d055f38f779937f2b92f8ddd5cf9e4af4f4.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Peter Collingbourne authored
This change introduces a prctl that allows the user program to control which PAC keys are enabled in a particular task. The main reason why this is useful is to enable a userspace ABI that uses PAC to sign and authenticate function pointers and other pointers exposed outside of the function, while still allowing binaries conforming to the ABI to interoperate with legacy binaries that do not sign or authenticate pointers. The idea is that a dynamic loader or early startup code would issue this prctl very early after establishing that a process may load legacy binaries, but before executing any PAC instructions.

This change adds a small amount of overhead to kernel entry and exit due to additional required instruction sequences. On a DragonBoard 845c (Cortex-A75) with the powersave governor, the overhead of similar instruction sequences was measured as 4.9ns when simulating the common case where IA is left enabled, or 43.7ns when simulating the uncommon case where IA is disabled. These numbers can be seen as the worst case scenario, since in more realistic scenarios a better performing governor would be used and a newer chip would be used that would support PAC unlike Cortex-A75 and would be expected to be faster than Cortex-A75. On an Apple M1 under a hypervisor, the overhead of the entry/exit instruction sequences introduced by this patch was measured as 0.3ns in the case where IA is left enabled, and 33.0ns in the case where IA is disabled.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://linux-review.googlesource.com/id/Ibc41a5e6a76b275efbaa126b31119dc197b927a5
Link: https://lore.kernel.org/r/d6609065f8f40397a4124654eb68c9f490b4d477.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
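
A hedged usage sketch for a dynamic loader, assuming the PR_PAC_SET_ENABLED_KEYS interface added by this series and the existing PR_PAC_AP*KEY constants:

  #include <sys/prctl.h>

  /* before executing any PAC instructions: disable all keys for this
   * task so legacy binaries that don't sign pointers can interoperate */
  prctl(PR_PAC_SET_ENABLED_KEYS,
        PR_PAC_APIAKEY | PR_PAC_APIBKEY | PR_PAC_APDAKEY | PR_PAC_APDBKEY,
        0 /* all named keys disabled */, 0, 0);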
-
Peter Collingbourne authored
In an upcoming change we are going to introduce per-task SCTLR_EL1 bits for PAC. Move the existing per-task SCTLR_EL1 field out of the MTE-specific code so that we will be able to use it from both the PAC and MTE code paths and make the task switching code more efficient.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ic65fac78a7926168fa68f9e8da591c9e04ff7278
Link: https://lore.kernel.org/r/13d725cb8e741950fb9d6e64b2cd9bd54ff7c3f9.1616123271.git.pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
Currently there are a number of places in the SVE code where we check both system_supports_sve() and TIF_SVE. This is a bit redundant given that we should never get into a situation where we have set TIF_SVE without having SVE support, and it is not clear that silently ignoring a mistakenly set TIF_SVE flag is the most sensible error handling approach. For now let's just drop the system_supports_sve() checks since this will at least reduce overhead a little.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210412172320.3315-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jisheng Zhang authored
If the instruction being single-stepped causes a page fault, the kprobe is cancelled to let the page fault handler continue as a normal page fault. But the local irqflags are disabled, so the CPU will restore the pstate with DAIF masked. After the page fault is serviced, the kprobe is triggered again and we overwrite the saved irqflag by calling kprobes_save_local_irqflag(). Note that DAIF is masked in this newly saved irqflag, so after the kprobe is serviced, the CPU pstate is restored with DAIF masked.

This patch is inspired by a similar patch for riscv from Liao Chang.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: https://lore.kernel.org/r/20210412174101.6bfb0594@xhacker.debian
Signed-off-by: Will Deacon <will@kernel.org>
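
A sketch of the fix's shape in the fault handler, using the helpers named in the message; the surrounding logic is abbreviated:

  if (kcb->kprobe_status == KPROBE_HIT_SS) {
          /* the probe is being cancelled: restore the irqflags saved
           * when single-stepping began, so the next trigger does not
           * capture a DAIF-masked pstate */
          kprobes_restore_local_irqflag(kcb, regs);
          reset_current_kprobe();
  }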
-
- Apr 12, 2021
Catalin Marinas authored
The entry from EL0 code checks the TFSRE0_EL1 register for any asynchronous tag check faults in user space and sets the TIF_MTE_ASYNC_FAULT flag. This is not done atomically, potentially racing with another CPU calling set_tsk_thread_flag(). Replace the non-atomic ORR+STR with an STSET instruction. While STSET requires ARMv8.1 and an assembler that understands LSE atomics, the MTE feature is part of ARMv8.5 and already requires an updated assembler.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 637ec831 ("arm64: mte: Handle synchronous and asynchronous tag check faults")
Cc: <stable@vger.kernel.org> # 5.10.x
Reported-by: Will Deacon <will@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210409173710.18582-1-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Ard Biesheuvel authored
Kernel mode NEON can be used in task or softirq context, but only in a non-nesting manner, i.e., softirq context is only permitted if the interrupt was not taken at a point where the kernel was using the NEON in task context. This means all users of kernel mode NEON have to be aware of this limitation, and either need to provide scalar fallbacks that may be much slower (up to 20x for AES instructions) and potentially less safe, or use an asynchronous interface that defers processing to a later time when the NEON is guaranteed to be available.

Given that grabbing and releasing the NEON is cheap, we can relax this restriction, by increasing the granularity of kernel mode NEON code, and always disabling softirq processing while the NEON is being used in task context.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210302090118.30666-4-ardb@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
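
For reference, the API whose granularity is being increased; the chunked loop is hypothetical:

  #include <asm/neon.h>
  #include <linux/minmax.h>

  static void do_work_in_chunks(size_t bytes_left, size_t chunk)
  {
          /* keep each NEON region short so softirq processing is only
           * briefly held off while the NEON is used in task context */
          while (bytes_left) {
                  kernel_neon_begin();
                  /* ... process up to 'chunk' bytes with NEON ... */
                  kernel_neon_end();
                  bytes_left -= min(bytes_left, chunk);
          }
  }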
-
- Apr 11, 2021
Vincenzo Frascino authored
When MTE async mode is enabled, TFSR_EL1 contains the accumulated asynchronous tag check faults for EL1 and EL0. During suspend/resume operations the firmware might perform some operations that could change the state of the register, resulting in a spurious tag check fault report. Report asynchronous tag faults before suspend and clear the TFSR_EL1 register after resume to prevent this from happening.

Cc: Will Deacon <will@kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-9-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Vincenzo Frascino authored
MTE provides a mode that asynchronously updates the TFSR_EL1 register when a tag check exception is detected. To take advantage of this mode the kernel has to verify the status of the register at:

  1. Context switching
  2. Return to user/EL0 (not required on entry from EL0 since the kernel did not run)
  3. Kernel entry from EL1
  4. Kernel exit to EL1

If the register is non-zero a trace is reported. Add the required features for EL1 detection and reporting.

Note: the ITFSB bit is set in the SCTLR_EL1 register, hence it guarantees that the indirect writes to TFSR_EL1 are synchronized at exception entry to EL1. On the context switch path the synchronization is guaranteed by the dsb() in __switch_to(). The dsb(nsh) in mte_check_tfsr_exit() is provisional pending confirmation by the architects.

Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-8-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
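
A sketch of the check described above, reconstructed only from this description (helper and bit names are assumptions):

  /* report and clear any accumulated asynchronous tag check fault */
  void mte_check_tfsr_el1(void)
  {
          u64 tfsr = read_sysreg_s(SYS_TFSR_EL1);

          if (unlikely(tfsr & SYS_TFSR_EL1_TF1)) {
                  write_sysreg_s(0, SYS_TFSR_EL1);
                  kasan_report_async();
          }
  }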
-
Vincenzo Frascino authored
mte_enable_kernel_*() are not needed if KASAN_HW is disabled. Add hash defines (#ifdef guards) around the functions to compile them conditionally.

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-7-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
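
The shape of the change, assuming the mte_enable_kernel_*() names used elsewhere in this series:

  #ifdef CONFIG_KASAN_HW_TAGS
  /* only KASAN_HW builds need the kernel-mode MTE enable hooks */
  void mte_enable_kernel_sync(void);
  void mte_enable_kernel_async(void);
  #endif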
-
Vincenzo Frascino authored
load_unaligned_zeropad() and __get/put_kernel_nofault() functions can read past some buffer limits, which may include an MTE granule with a different tag. When MTE async mode is enabled, if the load operation crosses a granule boundary and the next granule has a different tag, the PE sets the TFSR_EL1.TF1 bit as if an asynchronous tag fault had happened. Enable Tag Check Override (TCO) in these functions before the load and disable it afterwards to prevent this from happening.

Note: the same condition can be hit in MTE sync mode, but we deal with it through the exception handling. In the current implementation, the mte_async_mode flag is set only at boot time, but in future kasan might acquire some runtime features that change the mode dynamically; hence we disable TCO when sync mode is selected, for future-proofing.

Cc: Will Deacon <will@kernel.org>
Reported-by: Branislav Rankov <Branislav.Rankov@arm.com>
Tested-by: Branislav Rankov <Branislav.Rankov@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-6-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
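
A sketch of the TCO window around such a load; set_tco()/clear_tco() are hypothetical stand-ins for the PSTATE.TCO toggling the patch adds:

  /* a load that may read into a differently tagged MTE granule */
  static unsigned long load_with_tco(const unsigned long *p)
  {
          unsigned long v;

          set_tco();      /* hypothetical: PSTATE.TCO = 1, checks overridden */
          v = *p;         /* cannot record an async tag check fault here */
          clear_tco();    /* hypothetical: PSTATE.TCO = 0 */
          return v;
  }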
-
Vincenzo Frascino authored
MTE provides an asynchronous mode for detecting tag exceptions. In particular, instead of triggering a fault, the arm64 core updates a register which is checked by the kernel after the asynchronous tag check fault has occurred. Add support for MTE asynchronous mode. The exception handling mechanism will be added with a future patch.

Note: KASAN HW activates async mode via the kasan.mode kernel parameter. The default mode is set to synchronous. The code that verifies the status of TFSR_EL1 will be added with a future patch.

Cc: Will Deacon <will@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210315132019.33202-2-vincenzo.frascino@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Apr 08, 2021
Sami Tolvanen authored
With CONFIG_CFI_CLANG, the compiler replaces function pointers with jump table addresses, which breaks dynamic ftrace as the address of ftrace_call is replaced with the address of ftrace_call.cfi_jt. Use function_nocfi() to get the address of the actual function instead.

Suggested-by: Ben Dai <ben.dai@unisoc.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-17-samitolvanen@google.com
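
A sketch of the substitution; the declaration shown is illustrative:

  extern void ftrace_call(void);  /* asm patch site */

  /* take the address of the real function body, not the CFI jump
   * table entry the compiler would otherwise substitute */
  unsigned long addr = (unsigned long)function_nocfi(ftrace_call);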
-
Sami Tolvanen authored
__apply_alternatives makes indirect calls to functions whose address is taken in assembly code using the alternative_cb macro. With non-canonical CFI, the compiler won't replace these function references with the jump table addresses, which trips CFI. Disable CFI checking in the function to work around the issue.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-16-samitolvanen@google.com
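
The opt-out has this shape (the parameter list is abbreviated as an assumption):

  /* __nocfi disables CFI checking for this one function */
  static void __nocfi __apply_alternatives(struct alt_region *region)
  {
          /* indirect calls to alternative_cb callbacks happen here */
  }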
-
Sami Tolvanen authored
Disable CFI checking for functions that switch to linear mapping and make an indirect call to a physical address, since the compiler only understands virtual addresses and the CFI check for such indirect calls would always fail.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-15-samitolvanen@google.com
-
Sami Tolvanen authored
With CONFIG_CFI_CLANG, the compiler replaces function address references with the address of the function's CFI jump table entry. This means that __pa_symbol(function) returns the physical address of the jump table entry, which can lead to address space confusion as the jump table points to the function's virtual address. Therefore, use the function_nocfi() macro to ensure we are always taking the address of the actual function instead.

Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210408182843.1754385-14-samitolvanen@google.com
-
Marc Zyngier authored
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
It seems that the CPUs part of the SoC known as Apple M1 have the terrible habit of being stuck with HCR_EL2.E2H==1, in violation of the architecture. Try and work around this deplorable state of affairs by detecting the stuck bit early and short-circuiting the nVHE dance. Additional filtering code ensures that attempts at switching to nVHE from the command-line are also ignored. It is still unknown whether there are many more such nuggets to be found...

Reported-by: Hector Martin <marcan@marcan.st>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-3-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code.

Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jisheng Zhang authored
They are not needed after booting, so mark them as __init to move them to the .init section.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20210330135449.4dcffd7f@xhacker.debian
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Brown authored
When we enable SVE usage in userspace after taking an SVE access trap we need to ensure that the portions of the register state that are not shared with the FPSIMD registers are zeroed. Currently we do this by forcing the FPSIMD registers to be saved to the task struct and converting them there. This is wasteful in the common case where the task state is loaded into the registers and we will immediately return to userspace, since we can initialise the SVE state directly in registers instead of accessing multiple copies of the register state in memory.

Instead, in that common case, do the conversion in the registers and update the task metadata so that we can return to userspace without spilling the register state to memory unless there is some other reason to do so.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210312190313.24598-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Kees Cook authored
Allow for a randomized stack offset on a per-syscall basis, with roughly 5 bits of entropy. (And include AAPCS rationale, thanks to Mark Rutland.)

In order to avoid unconditional stack canaries on syscall entry (due to the use of alloca()), also disable stack protector to avoid triggering needless checks and slowing down the entry path. As there is no general way to control stack protector coverage with a function attribute[1], this must be disabled at the compilation unit level. This isn't a problem here, though, since stack protector was not triggered before: examining the resulting syscall.o, there are no changes in canary coverage (none before, none now).

[1] a working __attribute__((no_stack_protector)) has been added to GCC and Clang but has not been released in any version yet:
    https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=346b302d09c1e6db56d9fe69048acb32fbb97845
    https://reviews.llvm.org/rG4fbf84c1732fca596ad1d6e96015e19760eb8a9b

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210401232347.2791257-6-keescook@chromium.org
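
A sketch of the per-syscall pattern using the helpers this series relies on; the entropy mask shown is an assumption:

  #include <linux/randomize_kstack.h>
  #include <linux/random.h>

  static void invoke_syscall_sketch(void)
  {
          add_random_kstack_offset();     /* apply the previously chosen offset */

          /* ... dispatch the syscall ... */

          /* pick the offset the *next* syscall will use; mask illustrative */
          choose_random_kstack_offset(get_random_int() & 0x1ff);
  }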
-
- Apr 06, 2021
Suzuki K Poulose authored
For an nVHE host, the EL2 must allow the EL1&0 translation regime for TraceBuffer (MDCR_EL2.E2TB == 0b11). This setting must be saved/restored over a trip to the guest. Also, before entering the guest, we must flush any trace data if the TRBE was enabled, and we must prohibit the generation of trace while we are in EL1 by clearing TRFCR_EL1. For VHE, the EL2 must prevent the EL1 access to the Trace Buffer.

The MDCR_EL2 bit definitions for TRBE are available here:

  https://developer.arm.com/documentation/ddi0601/2020-12/AArch64-Registers/

Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210405164307.1720226-8-suzuki.poulose@arm.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
-
- Apr 01, 2021
Qi Liu authored
The initialization of 'value' in armv8pmu_read_hw_counter() and armv8pmu_read_counter() seems redundant, as it is overwritten soon after, so remove it.

Signed-off-by: Qi Liu <liuqi115@huawei.com>
Link: https://lore.kernel.org/r/1617275801-1980-1-git-send-email-liuqi115@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Andrew Scull authored
To aid with debugging, add details of the source of a panic from nVHE hyp. This is done by having nVHE hyp exit to nvhe_hyp_panic_handler() rather than directly to panic(). The handler will then add the extra details for debugging before panicking the kernel.

If the panic was due to a BUG(), look up the metadata to log the file and line, if available, otherwise log an address that can be looked up in vmlinux. The hyp offset is also logged to allow other hyp VAs to be converted, similar to how the kernel offset is logged during a panic.

__hyp_panic_string is now inlined since it no longer needs to be referenced as a symbol and the message is free to diverge between VHE and nVHE.

The following is an example of the logs generated by a BUG in nVHE hyp.

  [ 46.754840] kvm [307]: nVHE hyp BUG at: arch/arm64/kvm/hyp/nvhe/switch.c:242!
  [ 46.755357] kvm [307]: Hyp Offset: 0xfffea6c58e1e0000
  [ 46.755824] Kernel panic - not syncing: HYP panic:
  [ 46.755824] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
  [ 46.755824] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
  [ 46.755824] VCPU:0000d93a880d0000
  [ 46.756960] CPU: 3 PID: 307 Comm: kvm-vcpu-0 Not tainted 5.12.0-rc3-00005-gc572b99cf65b-dirty #133
  [ 46.757459] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
  [ 46.758366] Call trace:
  [ 46.758601]  dump_backtrace+0x0/0x1b0
  [ 46.758856]  show_stack+0x18/0x70
  [ 46.759057]  dump_stack+0xd0/0x12c
  [ 46.759236]  panic+0x16c/0x334
  [ 46.759426]  arm64_kernel_unmapped_at_el0+0x0/0x30
  [ 46.759661]  kvm_arch_vcpu_ioctl_run+0x134/0x750
  [ 46.759936]  kvm_vcpu_ioctl+0x2f0/0x970
  [ 46.760156]  __arm64_sys_ioctl+0xa8/0xec
  [ 46.760379]  el0_svc_common.constprop.0+0x60/0x120
  [ 46.760627]  do_el0_svc+0x24/0x90
  [ 46.760766]  el0_svc+0x2c/0x54
  [ 46.760915]  el0_sync_handler+0x1a4/0x1b0
  [ 46.761146]  el0_sync+0x170/0x180
  [ 46.761889] SMP: stopping secondary CPUs
  [ 46.762786] Kernel Offset: 0x3e1cd2820000 from 0xffff800010000000
  [ 46.763142] PHYS_OFFSET: 0xffffa9f680000000
  [ 46.763359] CPU features: 0x00240022,61806008
  [ 46.763651] Memory Limit: none
  [ 46.813867] ---[ end Kernel panic - not syncing: HYP panic:
  [ 46.813867] PS:400003c9 PC:0000d93a82c705ac ESR:f2000800
  [ 46.813867] FAR:0000000080080000 HPFAR:0000000000800800 PAR:0000000000000000
  [ 46.813867] VCPU:0000d93a880d0000 ]---

Signed-off-by: Andrew Scull <ascull@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210318143311.839894-6-ascull@google.com
-
- Mar 29, 2021
Lecopzer Chen authored
After KASAN_VMALLOC works on arm64, we can now randomize the module region into the vmalloc area.

Test:
  VMALLOC area: ffffffc010000000 fffffffdf0000000

  before the patch: module_alloc_base/end ffffffc008b80000 ffffffc010000000
  after the patch:  module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

Loading modules with insmod also works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Mar 28, 2021
Mark Brown authored
Currently start_backtrace() is a static inline function in the header. Since it really shouldn't be performance critical enough to need inlining, move it into a C file; this will save anyone else scratching their head about why it is defined in the header. As far as I can see it's only there because it was factored out of the various callers.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210319174022.33051-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
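
For context, the helper's shape once moved; the body is a minimal sketch that only seeds the unwinder state:

  /* now a plain function in stacktrace.c rather than a header inline */
  void start_backtrace(struct stackframe *frame, unsigned long fp,
                       unsigned long pc)
  {
          frame->fp = fp;
          frame->pc = pc;
          /* ... plus the unwinder's stack-tracking bookkeeping ... */
  }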
-