- Jul 15, 2021
-
-
Mark Rutland authored
We suppress KCOV for entry.o rather than entry-common.o. As entry.o is built from entry.S, this is pointless, and permits instrumentation of entry-common.o, which is built from entry-common.c. Fix the Makefile to suppress KCOV for entry-common.o, as we had intended to begin with. I've verified with objdump that this is working as expected. Fixes: bf6fa2c0 ("arm64: entry: don't instrument entry code with KCOV") Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210715123049.9990-1-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
We intend that all the early exception handling code is marked as `noinstr`, but we forgot this for __el0_error_handler_common(), which is called before we have completed entry from user mode. If it were instrumented, we could run into problems with RCU, lockdep, etc. Mark it as `noinstr` to prevent this. The few other functions in entry-common.c which do not have `noinstr` are called once we've completed entry, and are safe to instrument. Fixes: bb8e93a2 ("arm64: entry: convert SError handlers to C") Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210714172801.16475-1-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
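A minimal sketch of the idea, assuming the usual arm64 entry-code conventions; `noinstr` is the kernel's real attribute macro from <linux/compiler_types.h>, while the handler body below is illustrative rather than the kernel's exact code:

    static void noinstr __el0_error_handler_common(struct pt_regs *regs)
    {
            unsigned long esr = read_sysreg(esr_el1);

            /* Entry from user mode isn't complete yet: RCU may not be
             * watching and lockdep may be stale, so nothing called from
             * here may be instrumented. */
            do_serror(regs, esr);
    }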
-
Mark Rutland authored
Since commit: bad1e1c6 ("arm64: mte: switch GCR_EL1 in kernel entry and exit") we have saved/restored the user GCR_EL1 value at exception boundaries, and update_gcr_el1_excl() is no longer used for this. However, it is still used to restore the kernel's GCR_EL1 value when returning from a suspend state. Thus, the comment is misleading (and an ISB is necessary). When restoring the kernel's GCR value, we need an ISB to ensure it is used by subsequent instructions. We don't necessarily get an ISB by other means (e.g. if the kernel is built without support for pointer authentication). As __cpu_setup() initialised GCR_EL1.Exclude to 0xffff, until a context synchronization event, allocation tag 0 may be used rather than the desired set of tags. This patch drops the misleading comment, adds the missing ISB, and for clarity folds update_gcr_el1_excl() into its only user. Fixes: bad1e1c6 ("arm64: mte: switch GCR_EL1 in kernel entry and exit") Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210714143843.56537-2-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
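A sketch of the restore-with-ISB pattern being described; the register names are real, the helper shape is an assumption:

    /* Restore the kernel's GCR_EL1 exclusion mask on return from suspend. */
    static inline void mte_restore_kernel_gcr(u64 gcr_excl)
    {
            write_sysreg_s(gcr_excl & SYS_GCR_EL1_EXCL_MASK, SYS_GCR_EL1);
            isb();  /* context synchronization: later tag allocations must
                     * observe the restored exclude set, not the 0xffff one */
    }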
-
- Jul 12, 2021
-
-
Carlos Bilbao authored
Add the missing header <asm/smp.h> to include/asm/smp_plat.h, as it calls cpu_logical_map(). Also include it in kernel/cpufeature.c, which calls cpu_panic_kernel() and cpu_die_early(). Both files call functions defined in this header; including it directly makes the header dependencies less fragile. Signed-off-by:
Carlos Bilbao <bilbao@vt.edu> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/4325940.LvFx2qVVIh@iron-maiden Signed-off-by:
Will Deacon <will@kernel.org>
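Illustratively, the shape of the fix; the inline user below is hypothetical:

    /* include/asm/smp_plat.h: include what you use. */
    #include <asm/smp.h>    /* declares cpu_logical_map() */

    static inline u64 boot_cpu_hwid(void)  /* hypothetical example user */
    {
            return cpu_logical_map(0);
    }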
-
- Jul 08, 2021
-
-
Stephen Boyd authored
Let's use the new printk format to print the stacktrace entry when printing a backtrace to the kernel logs. This will include any module's build ID[1] in it so that offline/crash debugging can easily locate the debuginfo for a module via something like debuginfod[2]. Link: https://lkml.kernel.org/r/20210511003845.2429846-7-swboyd@chromium.org Link: https://fedoraproject.org/wiki/Releases/FeatureBuildId [1] Link: https://sourceware.org/elfutils/Debuginfod.html [2] Signed-off-by:
Stephen Boyd <swboyd@chromium.org> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Jessica Yu <jeyu@kernel.org> Cc: Evan Green <evgreen@chromium.org> Cc: Hsin-Yi Wang <hsinyi@chromium.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Baoquan He <bhe@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Young <dyoung@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sasha Levin <sashal@kernel.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
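For reference, the build-ID-aware specifier from that series is a 'b' suffix on the symbolic pointer formats; an abridged sketch of the changed backtrace print:

    /* "%pSb" prints the symbol plus the containing module's build ID,
     * so offline tools can locate the matching debuginfo. */
    printk("%s %pSb\n", loglvl, (void *)where);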
-
Kefeng Wang authored
Use setup_initial_init_mm() helper to simplify code. Link: https://lkml.kernel.org/r/20210608083418.137226-5-wangkefeng.wang@huawei.com Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
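The simplification, roughly; the open-coded form is what the helper replaces:

    /* Before: per-arch open-coded assignments. */
    init_mm.start_code = (unsigned long)_stext;
    init_mm.end_code   = (unsigned long)_etext;
    init_mm.end_data   = (unsigned long)_edata;
    init_mm.brk        = (unsigned long)_end;

    /* After: one call to the common helper. */
    setup_initial_init_mm(_stext, _etext, _edata, _end);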
-
Mike Rapoport authored
On arm64, set_direct_map_*() functions may return 0 without actually changing the linear map. This behaviour can be controlled using kernel parameters, so we need a way to determine at runtime whether calls to set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have any effect. Extend the set_memory API with a can_set_direct_map() function that allows checking whether calling set_direct_map_*() will actually change the page table, replace several occurrences of open-coded checks in arm64 with the new function, and provide a generic stub for architectures that always modify page tables upon calls to the set_direct_map APIs. [arnd@arndb.de: arm64: kfence: fix header inclusion ] Link: https://lkml.kernel.org/r/20210518072034.31572-4-rppt@kernel.org Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
David Hildenbrand <david@redhat.com> Acked-by:
James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Christopher Lameter <cl@linux.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Elena Reshetova <elena.reshetova@intel.com> Cc: Hagen Paul Pfeifer <hagen@jauu.net> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Bottomley <jejb@linux.ibm.com> Cc: "Kirill A. Shutemov" <kirill@shutemov.name> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Michael Kerrisk <mtk.manpages@gmail.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tycho Andersen <tycho@tycho.ws> Cc: Will Deacon <will@kernel.org> Cc: kernel test robot <lkp@intel.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
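A sketch of how a caller uses the new check; the function protect_page() is hypothetical:

    #include <linux/set_memory.h>

    static int protect_page(struct page *page)
    {
            /* On arm64 the linear map may be immutable at runtime, so
             * only rely on the call when it will actually take effect. */
            if (!can_set_direct_map())
                    return 0;

            return set_direct_map_invalid_noflush(page);
    }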
-
- Jul 01, 2021
-
-
Andy Shevchenko authored
kernel.h has been used as a dumping ground for all kinds of stuff for a long time. Here is an attempt to start cleaning it up by splitting out the panic and oops helpers. This serves several purposes: - dropping a dependency in bug.h - breaking an include loop by moving out panic_notifier.h - unburdening kernel.h of content that has its own domain. At the same time, convert users tree-wide to use the new headers, although for the time being the new header is included back into kernel.h to avoid twisted indirect includes for existing users. [akpm@linux-foundation.org: thread_info.h needs limits.h] [andriy.shevchenko@linux.intel.com: ia64 fix] Link: https://lkml.kernel.org/r/20210520130557.55277-1-andriy.shevchenko@linux.intel.com Link: https://lkml.kernel.org/r/20210511074137.33666-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by:
Bjorn Andersson <bjorn.andersson@linaro.org> Co-developed-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Acked-by:
Corey Minyard <cminyard@mvista.com> Acked-by:
Christian Brauner <christian.brauner@ubuntu.com> Acked-by:
Arnd Bergmann <arnd@arndb.de> Acked-by:
Kees Cook <keescook@chromium.org> Acked-by:
Wei Liu <wei.liu@kernel.org> Acked-by:
Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Sebastian Reichel <sre@kernel.org> Acked-by:
Luis Chamberlain <mcgrof@kernel.org> Acked-by:
Stephen Boyd <sboyd@kernel.org> Acked-by:
Thomas Bogendoerfer <tsbogend@alpha.franken.de> Acked-by: Helge Deller <deller@gmx.de> # parisc Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
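An illustrative user of the split-out header; the my_* names are hypothetical:

    #include <linux/notifier.h>
    #include <linux/panic_notifier.h>   /* now provides panic_notifier_list */

    static int my_panic_cb(struct notifier_block *nb, unsigned long ev, void *p)
    {
            /* last-gasp hardware quiescing would go here */
            return NOTIFY_DONE;
    }

    static struct notifier_block my_panic_nb = { .notifier_call = my_panic_cb };

    static int __init my_driver_init(void)
    {
            atomic_notifier_chain_register(&panic_notifier_list, &my_panic_nb);
            return 0;
    }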
-
- Jun 22, 2021
-
-
Steven Price authored
Define the new system registers that MTE introduces and context switch them. The MTE feature is still hidden from the ID register as it isn't supported in a VM yet. Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Steven Price <steven.price@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-4-steven.price@arm.com
-
Raphael Gault authored
This commit modifies the mask of the mrs_hook declared in arch/arm64/kernel/cpufeatures.c, which emulates only feature register accesses. This is necessary because the hook's mask was too broad and thus matched any mrs instruction, even those unrelated to the emulated registers, which made the PMU emulation inefficient. Signed-off-by:
Raphael Gault <raphael.gault@arm.com> Signed-off-by:
Rob Herring <robh@kernel.org> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
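The narrowed hook looks roughly like this (values as recalled from the kernel source; treat the details as illustrative):

    static struct undef_hook mrs_hook = {
            .instr_mask = 0xffff0000,   /* was 0xfff00000: matched every MRS */
            .instr_val  = 0xd5380000,   /* only MRS from the op0=3, op1=0,
                                         * CRn=0 ID-register space */
            /* ... pstate matching as before ... */
            .fn = emulate_mrs,
    };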
-
Steven Price authored
A KVM guest could store tags in a page even if the VMM hasn't mapped the page with PROT_MTE. So when restoring pages from swap, we will need to check whether there are any saved tags, even if !pte_tagged(). However, don't check pages for which pte_access_permitted() returns false, as these will not have been swapped out. Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Steven Price <steven.price@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210621111716.37157-2-steven.price@arm.com
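A sketch of the condition described (helper names from the arm64 MTE code as recalled; the exact call site differs):

    if (system_supports_mte() && pte_access_permitted(pte, false) &&
        !pte_special(pte))
            mte_sync_tags(ptep, pte);   /* restores tags saved at swap-out,
                                         * even for !pte_tagged() mappings */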
-
- Jun 18, 2021
-
-
Peter Zijlstra authored
Replace a bunch of 'p->state == TASK_RUNNING' with a new helper: task_is_running(p). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Davidlohr Bueso <dave@stgolabs.net> Acked-by:
Geert Uytterhoeven <geert@linux-m68k.org> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210611082838.222401495@infradead.org
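Per the description, the helper is a one-liner over the open-coded test:

    #define task_is_running(task)  (READ_ONCE((task)->state) == TASK_RUNNING)

    /* call sites then read as, e.g.: */
    WARN_ON_ONCE(!task_is_running(p));  /* hypothetical example caller */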
-
- Jun 17, 2021
-
-
Lee Jones authored
This sort of information is only generally useful when debugging. No need to have these sprinkled through the kernel log otherwise. Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by:
Lee Jones <lee.jones@linaro.org> Link: https://lore.kernel.org/r/20210617073059.315542-1-lee.jones@linaro.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Use cpuidle context helpers to switch to using DAIF.IF instead of PMR to mask interrupts, ensuring that we suspend with interrupts being able to reach the CPU interface. Signed-off-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/20210615111227.2454465-5-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
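A sketch of the helper pattern from this series (surrounding context abridged):

    struct arm_cpuidle_irq_context context;

    arm_cpuidle_save_irq_context(&context);  /* mask via DAIF.IF, not PMR */
    ret = psci_cpu_suspend_enter(state);     /* suspend with IRQs able to
                                              * reach the CPU interface */
    arm_cpuidle_restore_irq_context(&context);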
-
Marc Zyngier authored
Now that we have helpers that are aware of the pseudo-NMI feature, introduce them to cpu_do_idle(). This allows for some nice cleanup. No functional change intended. Tested-by:
Valentin Schneider <valentin.schneider@arm.com> Reviewed-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210615111227.2454465-3-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jun 15, 2021
-
-
Dong Aisheng authored
Up to this point, the CPU boot mode can be either EL1 or EL2. Correct the code comments accordingly. Signed-off-by:
Dong Aisheng <aisheng.dong@nxp.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-5-aisheng.dong@nxp.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Dong Aisheng authored
x5 is not used by the following map_memory. Instead, __pa(__idmap_text_start) is stored in x3, which is used later. Signed-off-by:
Dong Aisheng <aisheng.dong@nxp.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-4-aisheng.dong@nxp.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Dong Aisheng authored
'count - 1' is confusing and does not match the code as it actually runs: 'count' represents the number of extra entries required, so there is no need to subtract 1. Signed-off-by:
Dong Aisheng <aisheng.dong@nxp.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210518101405.1048860-3-aisheng.dong@nxp.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Anshuman Khandual authored
When using CONFIG_ARM64_SW_TTBR0_PAN, a task's thread_info::ttbr0 must be the TTBR0_EL1 value used to run userspace. With 52-bit PAs, the PA must be packed into the TTBR using phys_to_ttbr(), but we forget to do this in some of the SW PAN code. Thus, if the value is installed into TTBR0_EL1 (as may happen in the uaccess routines), this could result in UNPREDICTABLE behaviour. Since hardware with 52-bit PA support almost certainly has HW PAN, which will be used in preference, this shouldn't be a practical issue, but let's fix this for consistency. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: James Morse <james.morse@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Fixes: 529c4b05 ("arm64: handle 52-bit addresses in TTBR") Signed-off-by:
Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/1623749578-11231-1-git-send-email-anshuman.khandual@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
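A sketch close to the corrected code (abridged):

    static inline void update_saved_ttbr0(struct task_struct *tsk,
                                          struct mm_struct *mm)
    {
            u64 ttbr;

            if (mm == &init_mm)
                    ttbr = phys_to_ttbr(__pa_symbol(reserved_pg_dir));
            else
                    ttbr = phys_to_ttbr(virt_to_phys(mm->pgd)) |
                           ASID(mm) << 48;

            /* thread_info::ttbr0 now holds a value that is safe to
             * install in TTBR0_EL1, even with 52-bit PAs. */
            WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
    }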
-
Daniel Kiss authored
If the kernel is not compiled with CONFIG_ARM64_PTR_AUTH_KERNEL=y, then no PACI/AUTI instructions are expected while the kernel is running, so the kernel's key will not be used. Writes to system registers are expensive, so avoid them when they are not required. Signed-off-by:
Daniel Kiss <daniel.kiss@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210613092632.93591-3-daniel.kiss@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
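The idea, expressed as a C sketch; the real change is in the entry assembly macros, so this is a hypothetical equivalent:

    static inline void
    ptrauth_install_kernel_keys(struct ptrauth_keys_kernel *keys)
    {
            if (!IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL))
                    return;  /* no PACI/AUTI in kernel text: skip the
                              * expensive key-register writes */
            /* ... program APIAKeyLo/Hi_EL1 from *keys ... */
    }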
-
Daniel Kiss authored
This patch adds the ARM64_PTR_AUTH_KERNEL config option and deals with the build aspects of it. Userspace support has no dependency on the toolchain, therefore all toolchain checks and build flags are controlled by the new config option. The default config behaviour is unchanged. Signed-off-by:
Daniel Kiss <daniel.kiss@arm.com> Acked-by:
Will Deacon <will@kernel.org> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210613092632.93591-2-daniel.kiss@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jun 13, 2021
-
-
Guenter Roeck authored
All users of arm_pm_restart() have been converted to use the kernel restart handler. Acked-by:
Arnd Bergmann <arnd@arndb.de> Reviewed-by:
Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by:
Wolfram Sang <wsa+renesas@sang-engineering.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Thierry Reding <treding@nvidia.com> Signed-off-by:
Lee Jones <lee.jones@linaro.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
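For drivers, the replacement mechanism looks like this (my_* names hypothetical):

    #include <linux/reboot.h>

    static int my_restart(struct notifier_block *nb, unsigned long mode,
                          void *cmd)
    {
            /* trigger the hardware reset */
            return NOTIFY_DONE;
    }

    static struct notifier_block my_restart_nb = {
            .notifier_call = my_restart,
            .priority = 128,    /* default restart priority */
    };

    /* at probe/init time: */
    register_restart_handler(&my_restart_nb);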
-
- Jun 11, 2021
-
-
Will Deacon authored
Scheduling a 32-bit application on a 64-bit-only CPU is a bad idea. Ensure that 32-bit applications always take the slow-path when returning to userspace on a system with mismatched support at EL0, so that we can avoid trying to run on a 64-bit-only CPU and force a SIGKILL instead. Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210608180313.11502-5-will@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
When confronted with a mixture of CPUs, some of which support 32-bit applications and others which don't, we quite sensibly treat the system as 64-bit only for userspace and prevent execve() of 32-bit binaries. Unfortunately, some crazy folks have decided to build systems like this with the intention of running 32-bit applications, so relax our sanitisation logic to continue to advertise 32-bit support to userspace on these systems and track the real 32-bit capable cores in a cpumask instead. For now, the default behaviour remains but will be tied to a command-line option in a later patch. Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210608180313.11502-3-will@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
In preparation for late initialisation of the "sanitised" AArch32 register state, move the AArch32 registers out of 'struct cpuinfo' and into their own struct definition. Acked-by:
Mark Rutland <mark.rutland@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210608180313.11502-2-will@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
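The shape of the refactoring (field list abridged):

    struct cpuinfo_32bit {
            u32 reg_id_dfr0;
            u32 reg_id_isar0;
            u32 reg_mvfr0;
            /* ... the remaining AArch32 ID registers ... */
    };

    struct cpuinfo_arm64 {
            /* ... 64-bit ID registers ... */
            struct cpuinfo_32bit aarch32;
    };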
-
Mark Rutland authored
For historical reasons, we define AARCH64_INSN_SIZE in <asm/alternative-macros.h>, but it would make more sense to do so in <asm/insn.h>. Let's move it into <asm/insn.h>, and add the necessary include directives for this. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210609102301.17332-3-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
Currently, <asm/insn.h> includes <asm/patching.h>. We intend that <asm/insn.h> will be usable from userspace, so it doesn't make sense to include headers for kernel-only features such as the patching routines, and we'd intended to restrict <asm/insn.h> to instruction encoding details. Let's decouple the patching code from <asm/insn.h>, and explicitly include <asm/patching.h> where it is needed. Since <asm/patching.h> isn't included from assembly, we can drop the __ASSEMBLY__ guards. At the same time, sort the kprobes includes so that it's easier to see what is and isn't included. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210609102301.17332-2-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Qi Liu authored
Use the common macro PMU_EVENT_ATTR_ID to simplify ARMV8_EVENT_ATTR. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Qi Liu <liuqi115@huawei.com> Link: https://lore.kernel.org/r/1623220863-58233-8-git-send-email-liuqi115@huawei.com Signed-off-by:
Will Deacon <will@kernel.org>
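After the change, the arm64 macro can simply forward to the common helper (shape as recalled from the series):

    #define ARMV8_EVENT_ATTR(name, config) \
            PMU_EVENT_ATTR_ID(name, armv8pmu_events_sysfs_show, config)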
-
- Jun 08, 2021
-
-
Mark Brown authored
SMCCC v1.2 requires that all SVE state be preserved over SMC calls, which introduces substantial overhead in the common case where there is no SVE state in the registers. To avoid this, SMCCC v1.3 introduces a flag which allows the caller to say that there is no state that needs to be preserved in the registers. Make use of this flag, setting it if the SMCCC version indicates support for it and the TIF_ flags indicate that there is no live SVE state in the registers; this avoids placing any constraints on when SMCCC calls can be made, or triggering extra saving and reloading of SVE register state in the kernel. This would be straightforward enough except for the rather entertaining inline assembly we use to do SMCCC v1.1 calls to allow us to take advantage of the limited number of registers it clobbers. Deal with this by having a function which we call immediately before issuing the SMCCC call to make our checks and set the flag. Using alternatives, the overhead when SVE is supported by the kernel but not detected at runtime can be reduced to a single NOP. Signed-off-by:
Mark Brown <broonie@kernel.org> Reviewed-by:
Ard Biesheuvel <ardb@kernel.org> Reviewed-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210603184118.15090-1-broonie@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
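A sketch of the decision being described; ARM_SMCCC_1_3_SVE_HINT is the real flag, while the helper shape and the simplified TIF_ test are assumptions:

    static unsigned long smccc_maybe_set_sve_hint(unsigned long fn_id)
    {
            if (arm_smccc_get_version() >= ARM_SMCCC_VERSION_1_3 &&
                !test_thread_flag(TIF_SVE))  /* simplified "no live SVE
                                              * state" check */
                    fn_id |= ARM_SMCCC_1_3_SVE_HINT;
            return fn_id;
    }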
-
- Jun 07, 2021
-
-
Mark Rutland authored
The low-level idle code in arch_cpu_idle() and its callees runs at a time when portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by factoring these functions into a new idle.c so that we can disable KCOV for the entire compilation unit, as is done for the core idle code in kernel/sched/idle.c. We'd like to keep instrumentation of the rest of process.c, and for the existing code in cpuidle.c, so a new compilation unit is preferable. The arch_cpu_idle_dead() function in process.c is a cpu hotplug function that is safe to instrument, so it is left as-is in process.c. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-21-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
The code in entry-common.c runs at exception entry and return boundaries, where portions of the kernel environment aren't available. For example, RCU may not be watching, and lockdep state may be out-of-sync with the hardware. Due to this, it is not sound to instrument this code. We generally avoid instrumentation by marking the entry functions as `noinstr`, but currently this doesn't inhibit KCOV instrumentation. Prevent this by disabling KCOV for the entire compilation unit. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-20-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
Now that we only call arm64_enter_nmi() and arm64_exit_nmi() from within entry-common.c, let's make these static to ensure this remains the case. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-19-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure this is consistent, and free from any unsound instrumentation. Currently __sdei_handler() performs the NMI entry/exit sequences in sdei.c. Let's split the low-level entry sequence from the event handling, moving the former to entry-common.c and keeping the latter in sdei.c. The event handling function is renamed to do_sdei_event(), matching the do_${FOO}() pattern used for other exception handlers. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-18-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
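The renamed event-handling function then has a prototype along these lines (as recalled; treat as a sketch):

    unsigned long do_sdei_event(struct pt_regs *regs,
                                struct sdei_registered_event *arg);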
-
Mark Rutland authored
We'd like to keep all the entry sequencing in entry-common.c, as this will allow us to ensure this is consistent, and free from any unsound instrumentation. Currently handle_bad_stack() performs the NMI entry sequence in traps.c. Let's split the low-level entry sequence from the reporting, moving the former to entry-common.c and keeping the latter in traps.c. To make it clear that the reporting function never returns, it is renamed to panic_bad_stack(). Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-17-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
An unexpected synchronous exception from EL1h could happen at any time, and for robustness we should treat this as an NMI, making minimal assumptions about the context the exception was taken from. Currently el1_inv() assumes we can use enter_from_kernel_mode(), and also assumes that we should inherit the original DAIF value. Neither of these is desirable when we take an unexpected exception. Further, after el1_inv() calls __panic_unhandled(), the remainder of the function is unreachable, and therefore superfluous. Let's address this and simplify things by having el1h_64_sync_handler() call __panic_unhandled() directly, without any of the redundant logic. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reported-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-16-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
We have 16 architectural exception vectors, and depending on kernel configuration we handle 8 or 12 of these with C code, with the remaining 8 or 4 of these handled as special cases in the entry assembly. It would be nicer if the entry assembly were uniform for all exceptions, and we deferred any specific handling of the exceptions to C code. This way the entry assembly can be more easily templated without ifdeffery or special cases, and it's easier to modify the handling of these cases in future (e.g. to dump additional registers or other context). This patch reworks the entry code so that we always have a C handler for every architectural exception vector, with the entry assembly being completely uniform. We now have to handle exceptions from EL1t and EL1h, and also have to handle exceptions from AArch32 even when the kernel is built without CONFIG_COMPAT. To make this clear and to simplify templating, we rename the top-level exception handlers with a consistent naming scheme:

  asm: <el+sp>_<regsize>_<type>
  c:   <el+sp>_<regsize>_<type>_handler

... where <el+sp> is `el1t`, `el1h`, or `el0t`; <regsize> is `64` or `32`; and <type> is `sync`, `irq`, `fiq`, or `error`. For example:

  asm: el1h_64_sync
  c:   el1h_64_sync_handler

... with lower-level handlers simply using "el1" and "compat" as today. For unexpected exceptions, this information is passed to __panic_unhandled(), so it can report the specific vector an unexpected exception was taken from, e.g.

  | Unhandled 64-bit el1t sync exception

For vectors we never expect to enter legitimately, the C code is generated using a macro to avoid code duplication. The exceptions are handled via __panic_unhandled(), replacing bad_mode() (which is removed). The `kernel_ventry` and `entry_handler` assembly macros are updated to handle the new naming scheme. In theory it should be possible to generate the entry functions at the same time as the vectors using a single table, but this will require reworking the linker script to split the two into separate sections, so for now we have separate tables. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-15-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
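Instantiated as C prototypes, the scheme gives handlers such as:

    asmlinkage void noinstr el1t_64_sync_handler(struct pt_regs *regs);
    asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs);
    asmlinkage void noinstr el0t_64_irq_handler(struct pt_regs *regs);
    asmlinkage void noinstr el0t_32_fiq_handler(struct pt_regs *regs);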
-
Mark Rutland authored
Now that the majority of the exception triage logic has been converted to C, the entry assembly functions all have a uniform structure. Let's generate them all with an assembly macro to reduce the amount of code and to ensure they all remain in sync if we make changes in future. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-14-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
Our use of bad_mode() has a few rough edges: * AArch64 doesn't use the term "mode", and refers to "Execution states", "Exception levels", and "Selected stack pointer". * We log the exception type (SYNC/IRQ/FIQ/SError), but not the actual "mode" (though this can be decoded from the SPSR value). * We use bad_mode() as a second-level handler for unexpected synchronous exceptions, where the "mode" is legitimate, but the specific exception is not. * We dump the ESR value, but call this "code", and so it's not clear to all readers that this is the ESR. ... and all of this can be somewhat opaque to those who aren't extremely familiar with the code. Let's make this a bit clearer by having bad_mode() log "Unhandled ${TYPE} exception" rather than "Bad mode in ${TYPE} handler", using "ESR" rather than "code", and having the final panic() log "Unhandled exception" rather than "Bad mode". In future we'd like to log the specific architectural vector rather than just the type of exception, so we also split the core of bad_mode() out into a helper called __panic_unhandled(), which takes the vector as a string argument. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-13-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
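A sketch close to the helper described (exact argument types may differ):

    static void noinstr __panic_unhandled(struct pt_regs *regs,
                                          const char *vector,
                                          unsigned long esr)
    {
            arm64_enter_nmi(regs);
            console_verbose();

            pr_crit("Unhandled %s exception on CPU%d, ESR 0x%08lx -- %s\n",
                    vector, smp_processor_id(), esr,
                    esr_get_class_string(esr));
            __show_regs(regs);
            panic("Unhandled exception");
    }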
-
Mark Rutland authored
In subsequent patches we'll rework the way bad_mode() is called by exception entry code. In preparation for this, let's move bad_mode() itself into entry-common.c. Let's also mark it as noinstr (e.g. to prevent it being kprobed), and let's also make the `handler` array a local variable, as this is only used by bad_mode(), and will be removed entirely in a subsequent patch. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-12-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
Following the example of ret_to_user, let's consolidate all the EL1 return paths with a ret_to_kernel helper, rather than each entry point having its own copy of the return code. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Joey Gouly <joey.gouly@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210607094624.34689-11-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-