- Feb 09, 2021
-
-
Marc Zyngier authored
We can now move the initial SCTLR_EL1 setup to be used for both EL1 and EL2 setup. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As init_el2_state is now nVHE only, let's simplify it and drop the VHE setup. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
David Brazdil <dbrazdil@google.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
There isn't much that a VHE kernel needs on top of whatever has been done for nVHE, so let's move the little we need to the VHE stub (the SPE setup), and drop the init_el2_state macro. No expected functional change. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
David Brazdil <dbrazdil@google.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-8-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we are aiming to be able to control whether we enable VHE or not, let's always drop down to EL1 first, and only then upgrade to VHE if at all possible. This means that if the kernel is booted at EL2, we always start with an nVHE init, drop to EL1 to initialise the kernel, and only then upgrade the kernel EL to EL2 if possible (the process is obviously shortened for secondary CPUs). The resume path is handled similarly to a secondary CPU boot. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
David Brazdil <dbrazdil@google.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org [will: Avoid calling switch_to_vhe twice on kaslr path] Signed-off-by:
Will Deacon <will@kernel.org>
-
- Feb 08, 2021
-
-
Marc Zyngier authored
As we are about to change the way a VHE system boots, let's provide the core helper, in the form of a stub hypercall that enables VHE and replicates the full EL1 context at EL2, thanks to EL1 and VHE-EL2 being extremely similar. On exception return, the kernel carries on at EL2. Fancy! Nothing calls this new hypercall yet, so no functional change. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
David Brazdil <dbrazdil@google.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-5-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Turning the MMU on is a popular sport in the arm64 kernel, and we do it more than once, or even twice. As we are about to add even more, let's turn it into a macro. No expected functional change. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jan 15, 2021
-
-
Mark Rutland authored
The kbuild test robot reports that when building with W=1, GCC will warn for a couple of missing prototypes in syscall.c:

| arch/arm64/kernel/syscall.c:157:6: warning: no previous prototype for 'do_el0_svc' [-Wmissing-prototypes]
|   157 | void do_el0_svc(struct pt_regs *regs)
|       |      ^~~~~~~~~~
| arch/arm64/kernel/syscall.c:164:6: warning: no previous prototype for 'do_el0_svc_compat' [-Wmissing-prototypes]
|   164 | void do_el0_svc_compat(struct pt_regs *regs)
|       |      ^~~~~~~~~~~~~~~~~

While this isn't a functional problem, as a general policy we should include the prototype for functions wherever possible to catch any accidental divergence between the prototype and implementation. Here we can easily include <asm/exception.h>, so let's do so. While there are a number of warnings elsewhere and some warnings enabled under W=1 are of questionable benefit, this change helps to make the code more robust as it evolves and reduces the noise somewhat, so it seems worthwhile. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Reported-by:
kernel test robot <lkp@intel.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/202101141046.n8iPO3mw-lkp@intel.com Link: https://lore.kernel.org/r/20210114124812.17754-1-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
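A minimal sketch of the pattern being applied here, with both files heavily abbreviated (only the two handlers named in the warnings are shown):

  /* arch/arm64/include/asm/exception.h (excerpt, simplified) */
  struct pt_regs;

  void do_el0_svc(struct pt_regs *regs);
  void do_el0_svc_compat(struct pt_regs *regs);

  /* arch/arm64/kernel/syscall.c (excerpt, simplified): including the header
   * gives GCC a prototype to check the definition against, which silences
   * -Wmissing-prototypes under W=1. */
  #include <asm/exception.h>

  void do_el0_svc(struct pt_regs *regs)
  {
          /* ... dispatch the EL0 system call ... */
  }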
-
- Jan 13, 2021
-
-
Jianlin Lv authored
S_FRAME_SIZE is the size of the pt_regs structure and no longer the size of the kernel stack frame, so the name is misleading. In keeping with arm32, rename S_FRAME_SIZE to PT_REGS_SIZE. Signed-off-by:
Jianlin Lv <Jianlin.Lv@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20210112015813.2340969-1-Jianlin.Lv@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
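The constant itself comes from the asm-offsets machinery; roughly, as a sketch with every unrelated entry omitted:

  /* arch/arm64/kernel/asm-offsets.c (sketch): emit the renamed constant for
   * use from assembly; it is simply the size of struct pt_regs. */
  #include <linux/kbuild.h>
  #include <asm/ptrace.h>

  int main(void)
  {
          DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));   /* formerly S_FRAME_SIZE */
          return 0;
  }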
-
Will Deacon authored
This reverts commit 367c820e. lockup_detector_init() makes heavy use of per-cpu variables and must be called with preemption disabled. Usually, it's handled early during boot in kernel_init_freeable(), before SMP has been initialised. Since we do not know whether or not our PMU interrupt can be signalled as an NMI until considerably later in the boot process, the Arm PMU driver attempts to re-initialise the lockup detector off the back of a device_initcall(). Unfortunately, this is called from preemptible context and results in the following splat:

| BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
| caller is debug_smp_processor_id+0x20/0x2c
| CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.10.0+ #276
| Hardware name: linux,dummy-virt (DT)
| Call trace:
|  dump_backtrace+0x0/0x3c0
|  show_stack+0x20/0x6c
|  dump_stack+0x2f0/0x42c
|  check_preemption_disabled+0x1cc/0x1dc
|  debug_smp_processor_id+0x20/0x2c
|  hardlockup_detector_event_create+0x34/0x18c
|  hardlockup_detector_perf_init+0x2c/0x134
|  watchdog_nmi_probe+0x18/0x24
|  lockup_detector_init+0x44/0xa8
|  armv8_pmu_driver_init+0x54/0x78
|  do_one_initcall+0x184/0x43c
|  kernel_init_freeable+0x368/0x380
|  kernel_init+0x1c/0x1cc
|  ret_from_fork+0x10/0x30

Rather than bodge this with raw_smp_processor_id() or randomly disabling preemption, simply revert the culprit for now until we figure out how to do this properly. Reported-by:
Lecopzer Chen <lecopzer.chen@mediatek.com> Signed-off-by:
Will Deacon <will@kernel.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Cc: Sumit Garg <sumit.garg@linaro.org> Cc: Alexandru Elisei <alexandru.elisei@arm.com> Link: https://lore.kernel.org/r/20201221162249.3119-1-lecopzer.chen@mediatek.com Link: https://lore.kernel.org/r/20210112221855.10666-1-will@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
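For context, the rule the splat enforces is generic: smp_processor_id() is only meaningful while preemption is disabled, otherwise the task may migrate and the value is stale. A minimal sketch (illustrative only, not code from this patch):

  #include <linux/preempt.h>
  #include <linux/printk.h>
  #include <linux/smp.h>

  static void percpu_use_example(void)
  {
          int cpu;

          preempt_disable();
          cpu = smp_processor_id();       /* valid: the task cannot migrate here */
          pr_debug("running on CPU %d\n", cpu);
          preempt_enable();
  }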
-
Mark Rutland authored
All EL0 returns go via ret_to_user(), which masks IRQs and notifies lockdep and tracing before calling into do_notify_resume(). Therefore, there's no need for do_notify_resume() to call trace_hardirqs_off(), and the comment is stale. The call is simply redundant. In ret_to_user() we call exit_to_user_mode(), which notifies lockdep and tracing that IRQs will be enabled in userspace, so there's no need for el0_svc_common() to call trace_hardirqs_on() before returning. Further, at the start of ret_to_user() we call trace_hardirqs_off(), so not only is this redundant, but it is immediately undone. In addition to being redundant, the trace_hardirqs_on() in el0_svc_common() leaves lockdep inconsistent with the hardware state, and is liable to cause issues for any C code or instrumentation between this and the call to trace_hardirqs_off() which undoes it in ret_to_user(). This patch removes the redundant tracing calls and associated stale comments. Fixes: 23529049 ("arm64: entry: fix non-NMI user<->kernel transitions") Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Will Deacon <will@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210107145310.44616-1-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Jan 05, 2021
-
-
Peter Collingbourne authored
Currently with ld.lld we emit an empty .eh_frame_hdr section (and a corresponding program header) into the vDSO. With ld.bfd the section is not emitted but the program header is, with p_vaddr set to 0. This can lead to unwinders attempting to interpret the data at whichever location the program header happens to point to as an unwind info header. This happens to be mostly harmless as long as the byte at that location (interpreted as a version number) has a value other than 1, causing both libgcc and LLVM libunwind to ignore the section (in libunwind's case, after printing an error message to stderr), but it could lead to worse problems if the byte happened to be 1 or the program header points to non-readable memory (e.g. if the empty section was placed at a page boundary). Instead of disabling .eh_frame_hdr via --no-eh-frame-hdr (which also has the downside of being unsupported by older versions of GNU binutils), disable it by discarding the section, and stop emitting the program header that points to it. I understand that we intend to emit valid unwind info for the vDSO at some point. Once that happens this patch can be reverted. Signed-off-by:
Peter Collingbourne <pcc@google.com> Acked-by:
Ard Biesheuvel <ardb@kernel.org> Acked-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://linux-review.googlesource.com/id/If745fd9cadcb31b4010acbf5693727fe111b0863 Link: https://lore.kernel.org/r/20201230221954.2007257-1-pcc@google.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Tian Tao authored
asm/exception.h is included more than once. Remove the one that isn't necessary. Signed-off-by:
Tian Tao <tiantao6@hisilicon.com> Link: https://lore.kernel.org/r/1609139108-10819-1-git-send-email-tiantao6@hisilicon.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Shannon Zhao authored
Commit d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") deletes CONFIG_KVM_ARM_HOST option, it should use CONFIG_KVM instead. Just remove CONFIG_KVM_ARM_HOST here. Fixes: d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") Signed-off-by:
Shannon Zhao <shannon.zhao@linux.alibaba.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1609760324-92271-1-git-send-email-shannon.zhao@linux.alibaba.com
-
- Jan 04, 2021
-
-
Peter Collingbourne authored
This ISB is unnecessary because we will soon do an ERET. Signed-off-by:
Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/I69f1ee6bb09b1372dd744a0e01cedaf090c8d448 Link: https://lore.kernel.org/r/20201203073458.2675400-1-pcc@google.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Geert Uytterhoeven authored
| arch/arm64/kernel/smp.c: In function ‘arch_show_interrupts’:
| arch/arm64/kernel/smp.c:808:16: warning: unused variable ‘irq’ [-Wunused-variable]
|   808 |  unsigned int irq = irq_desc_get_irq(ipi_desc[i]);
|       |               ^~~

The removal of the last user forgot to remove the variable. Fixes: 5089bc51 ("arm64/smp: Use irq_desc_kstat_cpu() in arch_show_interrupts()") Signed-off-by:
Geert Uytterhoeven <geert+renesas@glider.be> Link: https://lore.kernel.org/r/20201215103026.2872532-1-geert+renesas@glider.be Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Dec 22, 2020
-
-
Andrey Konovalov authored
There's a config option CONFIG_KASAN_STACK that has to be enabled for KASAN to use stack instrumentation and perform validity checks for stack variables. There's no need to unpoison the stack when CONFIG_KASAN_STACK is not enabled. Only call kasan_unpoison_task_stack[_below]() when CONFIG_KASAN_STACK is enabled. Note that CONFIG_KASAN_STACK is an option that is currently always defined when CONFIG_KASAN is enabled, and therefore has to be tested with #if instead of #ifdef. Link: https://lkml.kernel.org/r/d09dd3f8abb388da397fd11598c5edeaa83fe559.1606162397.git.andreyknvl@google.com Link: https://linux-review.googlesource.com/id/If8a891e9fe01ea543e00b576852685afec0887e3 Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Marco Elver <elver@google.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Dmitry Vyukov <dvyukov@google.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
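The #if-versus-#ifdef point can be made concrete with a sketch of the resulting guard around one of the helpers named above (declaration simplified):

  struct task_struct;

  /* CONFIG_KASAN_STACK is defined to 0 or 1 whenever CONFIG_KASAN is set, so
   * its value must be tested with #if rather than its existence with #ifdef. */
  #if defined(CONFIG_KASAN) && CONFIG_KASAN_STACK
  void kasan_unpoison_task_stack(struct task_struct *task);
  #else
  static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
  #endif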
-
Andrey Konovalov authored
Provide implementation of KASAN functions required for the hardware tag-based mode. Those include core functions for memory and pointer tagging (tags_hw.c) and bug reporting (report_tags_hw.c). Also adapt common KASAN code to support the new mode. Link: https://lkml.kernel.org/r/cfd0fbede579a6b66755c98c88c108e54f9c56bf.1606161801.git.andreyknvl@google.com Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Alexander Potapenko <glider@google.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Andrey Konovalov authored
Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS. Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Alexander Potapenko <glider@google.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
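A sketch of the kind of change this describes, assuming a guard around code that only matters for shadow memory or compiler instrumentation:

  /* Before: guarded by KASAN as a whole. */
  #ifdef CONFIG_KASAN
  /* ... shadow-memory / instrumentation-specific code ... */
  #endif

  /* After: only the software modes, which are the ones that actually use
   * shadow memory and compiler instrumentation. */
  #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
  /* ... shadow-memory / instrumentation-specific code ... */
  #endif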
-
Vincenzo Frascino authored
When MTE is present, the GCR_EL1 register contains the tags mask that allows tags to be excluded from random generation via the IRG instruction. With the introduction of the new Tag-Based KASAN API that provides a mechanism to reserve tags for special reasons, the MTE implementation has to make sure that the GCR_EL1 setting for the kernel does not affect userspace processes and vice versa. Save and restore the kernel/user mask in GCR_EL1 on kernel entry and exit. Link: https://lkml.kernel.org/r/578b03294708cc7258fad0dc9c2a2e809e5a8214.1606161801.git.andreyknvl@google.com Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
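Conceptually the switch looks like the sketch below; the real code lives in the kernel entry/exit assembly, the helper names here are made up, and details such as the RRND bit and the exclude-field width are omitted:

  #include <linux/types.h>
  #include <asm/sysreg.h>

  /* Illustrative C equivalents of the assembly performed on kernel entry
   * (install the kernel's exclude mask) and on return to userspace
   * (reinstall the thread's user exclude mask). */
  static inline void gcr_switch_to_kernel(u64 kernel_gcr_excl)
  {
          write_sysreg_s(kernel_gcr_excl, SYS_GCR_EL1);
  }

  static inline void gcr_switch_to_user(u64 thread_gcr_excl)
  {
          write_sysreg_s(thread_gcr_excl, SYS_GCR_EL1);
  }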
-
Vincenzo Frascino authored
The gcr_user mask is a per-thread mask that represents the tags that are excluded from random generation when the Memory Tagging Extension is present and an 'irg' instruction is invoked. gcr_user affects the behavior on EL0 only. Currently that mask is an include mask controlled by the user via prctl(), while GCR_EL1 accepts an exclude mask. Convert the include mask into an exclude one to make setting the register easier. Note: this change will affect gcr_kernel (for EL1), introduced with a future patch. Link: https://lkml.kernel.org/r/946dd31be833b660334c4f93410acf6d6c4cf3c4.1606161801.git.andreyknvl@google.com Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
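The conversion itself is a one-liner; a sketch, assuming the 16-bit exclude-field mask used by the arm64 MTE code (the helper name is made up):

  #include <linux/types.h>
  #include <asm/sysreg.h>

  /* Hypothetical helper: turn the prctl() include mask into the exclude mask
   * that the GCR_EL1 register actually takes. */
  static u64 mte_incl_to_gcr_excl(u64 incl)
  {
          return ~incl & SYS_GCR_EL1_EXCL_MASK;
  }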
-
Vincenzo Frascino authored
Hardware tag-based KASAN relies on the Memory Tagging Extension (MTE) feature and requires it to be enabled. MTE supports both synchronous and asynchronous tag check fault modes. This patch adds a new mte_enable_kernel() helper that enables MTE in synchronous mode at EL1 and is intended to be called from the KASAN runtime during initialization. When MTE is configured in synchronous mode, a tag check fault causes a synchronous data abort. As part of this change, enable the match-all tag for EL1 to allow the kernel to access user pages without faulting. This is required because the kernel has no knowledge of the tags set by the user in a page. Note: for MTE, the TCF bit field in SCTLR_EL1 affects only EL1, in a similar way to how TCF0 affects EL0. MTE is built on top of the Top Byte Ignore (TBI) feature, hence we enable TBI as part of this patch as well. Link: https://lkml.kernel.org/r/7352b0a0899af65c2785416c8ca6bf3845b66fa1.1606161801.git.andreyknvl@google.com Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
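A sketch of the helper this adds, assuming the SCTLR_ELx TCF field definitions from the arm64 tree (simplified; the TBI and match-all configuration mentioned above happens elsewhere):

  #include <asm/barrier.h>
  #include <asm/sysreg.h>

  /* Select synchronous tag check faults for EL1 (kernel) accesses. */
  void mte_enable_kernel(void)
  {
          sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_SYNC);
          isb();
  }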
-
Vincenzo Frascino authored
For compatibility with the other modes, hardware tag-based KASAN stores the tag associated with a page in page->flags. Because of this, the kernel faults on access when it allocates a page with an initial tag and the user subsequently changes the tags. Reset the tag associated with a page by the kernel in all the meaningful places to prevent kernel faults on access. Note: an alternative to this approach could be to modify page_to_virt(). This could however end up being racy: if a CPU checks the PG_mte_tagged bit and decides that the page is not tagged, but another CPU maps the same page with PROT_MTE and it becomes tagged, the subsequent kernel access would fail. Link: https://lkml.kernel.org/r/9073d4e973747a6f78d5bdd7ebe17f290d087096.1606161801.git.andreyknvl@google.com Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
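The reset itself uses the existing page_kasan_tag_reset() helper; a sketch of the idea (the wrapper name is hypothetical, the real calls sit in the MTE tag-sync and page-copy paths):

  #include <linux/mm.h>

  /* Hypothetical wrapper for illustration. */
  static void prepare_user_mte_page(struct page *page)
  {
          /*
           * Forget the KASAN tag the kernel associated with this page so that
           * linear-map accesses keep working once userspace retags it.
           */
          page_kasan_tag_reset(page);
          /* ... then initialise or restore the page's MTE allocation tags ... */
  }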
-
Vincenzo Frascino authored
Provide helper functions to manipulate allocation and pointer tags for kernel addresses. Low-level helper functions (mte_assign_*, written in assembly) operate on tag values from the [0x0, 0xF] range. High-level helper functions (mte_get/set_*) use the [0xF0, 0xFF] range to preserve compatibility with normal kernel pointers that have 0xFF in their top byte. MTE_GRANULE_SIZE and related definitions are moved to the mte-def.h header, which doesn't have any dependencies and is safe to include into any low-level header. Link: https://lkml.kernel.org/r/c31bf759b4411b2d98cdd801eb928e241584fd1f.1606161801.git.andreyknvl@google.com Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
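The two tag representations can be illustrated with a small sketch (helper names and the exact masking here are illustrative, not the kernel's):

  #include <linux/types.h>

  #define MTE_TAG_SHIFT   56

  /* Low-level view: a 4-bit tag in the 0x0..0xF range. */
  static inline u8 tag4_from_ptr(const void *ptr)
  {
          return ((u64)(unsigned long)ptr >> MTE_TAG_SHIFT) & 0xf;
  }

  /* Pointer-facing view: keep the whole top byte in 0xF0..0xFF so that
   * untagged kernel pointers (top byte 0xFF) stay valid tagged pointers. */
  static inline void *ptr_with_tag4(void *ptr, u8 tag4)
  {
          u64 addr = (u64)(unsigned long)ptr & ~(0xffULL << MTE_TAG_SHIFT);

          return (void *)(unsigned long)(addr | ((u64)(0xf0 | tag4) << MTE_TAG_SHIFT));
  }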
-
Andrey Konovalov authored
Rename kasan_init_tags() to kasan_init_sw_tags() as the upcoming hardware tag-based KASAN mode will have its own initialization routine. Also similarly to kasan_init() mark kasan_init_tags() as __init. Link: https://lkml.kernel.org/r/71e52af72a09f4b50c8042f16101c60e50649fbb.1606161801.git.andreyknvl@google.com Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Alexander Potapenko <glider@google.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
David Brazdil authored
Computing the hyp VA layout is redundant when the kernel runs in EL2 and hyp shares its VA mappings. Make calling kvm_compute_layout() conditional on not just CONFIG_KVM but also !is_kernel_in_hyp_mode(). Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201208142452.87237-4-dbrazdil@google.com
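A sketch of the guard this describes at the call site (the wrapper function is only for illustration):

  #include <linux/kconfig.h>
  #include <asm/virt.h>

  void kvm_compute_layout(void);  /* provided by the KVM VA layout code */

  static void compute_hyp_layout_if_needed(void)  /* illustrative wrapper */
  {
          /* Only needed when KVM is built in and the kernel is not already
           * running at EL2 (VHE), where hyp shares the kernel's VA mappings. */
          if (IS_ENABLED(CONFIG_KVM) && !is_kernel_in_hyp_mode())
                  kvm_compute_layout();
  }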
-
- Dec 15, 2020
-
-
Dmitry Safonov authored
Don't allow splitting of vm_special_mappings. This affects the vdso/vvar areas. Uprobes have only one page in xol_area, so they aren't affected. Those restrictions were enforced by checks in .mremap() callbacks. Restrict resizing with the generic .split() callback instead. Link: https://lkml.kernel.org/r/20201013013416.390574-7-dima@arista.com Signed-off-by:
Dmitry Safonov <dima@arista.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andy Lutomirski <luto@kernel.org> Cc: Brian Geffon <bgeffon@google.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Hugh Dickins <hughd@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: John Hubbard <jhubbard@nvidia.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Ralph Campbell <rcampbell@nvidia.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
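A sketch of the .split() restriction described above (close in spirit to the generic special-mapping code, but simplified here):

  #include <linux/mm.h>

  /* Refuse to split a special mapping (e.g. vdso/vvar); this also rules out
   * partial mremap()/munmap() of it. */
  static int special_mapping_split(struct vm_area_struct *vma, unsigned long addr)
  {
          return -EINVAL;
  }

  static const struct vm_operations_struct special_mapping_vmops = {
          .split  = special_mapping_split,
          /* .fault, .mremap, .name, ... as before */
  };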
-
Thomas Gleixner authored
The irq descriptor is already there, no need to look it up again. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201210194043.546326568@linutronix.de
-
Viresh Kumar authored
The previous call to update_freq_counters_refs() has already updated the per-cpu variables; don't overwrite them with the same value again. Fixes: 4b9cf23c ("arm64: wrap and generalise counter read functions") Signed-off-by:
Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by:
Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by:
Sudeep Holla <sudeep.holla@arm.com> Link: https://lore.kernel.org/r/7a171f710cdc0f808a2bfbd7db839c0d265527e7.1607579234.git.viresh.kumar@linaro.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Dec 08, 2020
-
-
Marc Zyngier authored
Conflict resolution gone astray results in the kernel not booting on VHE-capable HW when VHE support is disabled. Thankfully spotted by David. Reported-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org>
-
- Dec 04, 2020
-
-
David Brazdil authored
When compiling with __KVM_NVHE_HYPERVISOR__, redefine per_cpu_offset() to __hyp_per_cpu_offset() which looks up the base of the nVHE per-CPU region of the given cpu and computes its offset from the .hyp.data..percpu section. This enables use of per_cpu_ptr() helpers in nVHE hyp code. Until now only this_cpu_ptr() was supported by setting TPIDR_EL2. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-14-dbrazdil@google.com
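The redefinition itself boils down to a couple of lines; a simplified sketch (the real version is also conditional on CONFIG_SMP):

  /* arch/arm64/include/asm/percpu.h (sketch) */
  #ifdef __KVM_NVHE_HYPERVISOR__
  unsigned long __hyp_per_cpu_offset(unsigned int cpu);
  #define per_cpu_offset(cpu)     __hyp_per_cpu_offset((cpu))
  #endif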
-
David Brazdil authored
Add rules for renaming the .data..ro_after_init ELF section in KVM nVHE object files to .hyp.data..ro_after_init, linking it into the kernel and mapping it in hyp at runtime. The section is RW to the host, then mapped RO in hyp. The expectation is that the host populates the variables in the section and they are never changed by hyp afterwards. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-13-dbrazdil@google.com
-
David Brazdil authored
MAIR_EL2 and TCR_EL2 are currently initialized from their _EL1 values. This will not work once KVM starts intercepting PSCI ON/SUSPEND SMCs and initializing EL2 state before EL1 state. Obtain the EL1 values during KVM init and store them in the init params struct. The struct will stay in memory and can be used when booting new cores. Take the opportunity to copy the T0SZ value from idmap_t0sz in KVM init rather than in .hyp.idmap.text, which avoids the need for the idmap_t0sz symbol alias. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-12-dbrazdil@google.com
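A sketch of the per-CPU parameter block this introduces (the exact field list and types should be treated as illustrative):

  #include <linux/types.h>

  struct kvm_nvhe_init_params {
          unsigned long mair_el2;         /* copied from MAIR_EL1 at KVM init */
          unsigned long tcr_el2;          /* derived from TCR_EL1 and idmap T0SZ */
          unsigned long tpidr_el2;
          unsigned long stack_hyp_va;
          phys_addr_t pgd_pa;
  };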
-
David Brazdil authored
Once we start initializing KVM on newly booted cores before the rest of the kernel, parameters to __do_hyp_init will need to be provided by EL2 rather than EL1. At that point it will not be possible to pass its three arguments directly because PSCI_CPU_ON only supports one context argument. Refactor __do_hyp_init to accept its parameters in a struct. This prepares the code for KVM booting cores as well as removes any limits on the number of __do_hyp_init arguments. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-11-dbrazdil@google.com
-
David Brazdil authored
When a CPU is booted in EL2, the kernel checks for VHE support and initializes the CPU core accordingly. For nVHE it also installs the stub vectors and drops down to EL1. Once KVM gains the ability to boot cores without going through the kernel entry point, it will need to initialize the CPU the same way. Extract the relevant bits of el2_setup into an init_el2_state macro with an argument specifying whether to initialize for VHE or nVHE. The following ifdefs are removed:

* CONFIG_ARM_GIC_V3 - always selected on arm64
* CONFIG_COMPAT - hstr_el2 can be set even without 32-bit support

No functional change intended. Size of el2_setup increased by 148 bytes due to duplication. Signed-off-by:
David Brazdil <dbrazdil@google.com> [maz: reworked to fit the new PSTATE initial setup code] Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-9-dbrazdil@google.com
-
David Brazdil authored
CPU index should never be negative. Change the signature of (set_)cpu_logical_map to take an unsigned int. This still works even if the users treat the CPU index as an int, and will allow the hypervisor's implementation to check that the index is valid with a single upper-bound check. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-8-dbrazdil@google.com
-
David Brazdil authored
Expose the boolean value whether the system is running with KVM in protected mode (nVHE + kernel param). CPU capability was selected over a global variable to allow use in alternatives. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-3-dbrazdil@google.com
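A sketch of the resulting helper (simplified; the real one also special-cases VHE hyp code, where protected mode can never be enabled):

  #include <linux/types.h>
  #include <asm/cpufeature.h>

  static inline bool is_protected_kvm_enabled(void)
  {
          return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
  }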
-
- Dec 03, 2020
-
-
Peter Collingbourne authored
Previously we were always returning a tag inclusion mask of zero via PR_GET_TAGGED_ADDR_CTRL if TCF0 was set to NONE. Fix it by making the code for the NONE case match the others. Signed-off-by:
Peter Collingbourne <pcc@google.com> Link: https://linux-review.googlesource.com/id/Iefbea66cf7d2b4c80b82f9639b9ea7f33f7fac53 Fixes: af5ce952 ("arm64: mte: Allow user control of the generated random tags via prctl()") Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20201203075110.2781021-1-pcc@google.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
Now that the PAN toggling has been removed, the only user of __system_matches_cap() is has_generic_auth(), which is only built when CONFIG_ARM64_PTR_AUTH is selected, and Qian reports that this results in a build-time warning when CONFIG_ARM64_PTR_AUTH is not selected:

| arch/arm64/kernel/cpufeature.c:2649:13: warning: '__system_matches_cap' defined but not used [-Wunused-function]
|  static bool __system_matches_cap(unsigned int n)
|              ^~~~~~~~~~~~~~~~~~~~

It's tricky to restructure things to prevent this, so let's mark __system_matches_cap() as __maybe_unused, as we used to do for the other user of __system_matches_cap() which we just removed. Reported-by:
Qian Cai <qcai@redhat.com> Suggested-by:
Qian Cai <qcai@redhat.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201203152403.26100-1-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Dec 02, 2020
-
-
Mark Rutland authored
Now that arm64 no longer uses UAO, remove the vestigial feature detection code and Kconfig text. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Some code (e.g. futex) needs to make privileged accesses to userspace memory, and uses uaccess_{enable,disable}_privileged() in order to permit this. All other uaccess primitives use LDTR/STTR, and never need to toggle PAN. Remove the redundant PAN toggling. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-