- Feb 09, 2021
Marc Zyngier authored
In order to be able to disable Pointer Authentication at runtime, whether it is for testing purposes or to work around HW issues, let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA} fields. This is further mapped onto the arm64.nopauth command-line alias.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-23-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
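For illustration, booting with the alias from this patch would look like the following (the fine-grained form follows the series' "cpureg.feature=value" scheme; treat those exact spellings as an assumption):

```
# Friendly alias:
arm64.nopauth

# Roughly equivalent raw overrides of the ID_AA64ISAR1_EL1 fields
# named above (illustrative spellings):
id_aa64isar1.gpi=0 id_aa64isar1.gpa=0 id_aa64isar1.api=0 id_aa64isar1.apa=0
```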
-
Srinivas Ramana authored
Defer enabling pointer authentication on the boot core until after it is required to be enabled by the cpufeature framework. This will help in controlling the feature dynamically with a boot parameter.
Signed-off-by: Ajay Patil <pajay@qti.qualcomm.com>
Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1610152163-16554-2-git-send-email-sramana@codeaurora.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-22-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
In order to be able to disable BTI at runtime, whether it is for testing purposes or to work around HW issues, let's add support for overriding the ID_AA64PFR1_EL1.BTI field. This is further mapped onto the arm64.nobti command-line alias.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Tested-by: Srinivas Ramana <sramana@codeaurora.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-21-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Given that the early cpufeature infrastructure has borrowed quite a lot of code from the kaslr implementation, let's reimplement the matching of the "nokaslr" option with it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-20-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Admittedly, passing id_aa64mmfr1.vh=0 on the command line isn't that easy to understand, and it is likely that users would much prefer to write "kvm-arm.mode=nvhe", or "...=protected". So here you go. This has the added advantage that we can now always honor the "kvm-arm.mode=protected" option, even when booting on a VHE system.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-18-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
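Concretely, the choices this gives a user on the kernel command line (values as named in the message above):

```
kvm-arm.mode=nvhe         # force nVHE even on VHE-capable hardware
kvm-arm.mode=protected    # now honoured even when booting on a VHE system

# The less friendly spelling the nvhe alias stands in for:
id_aa64mmfr1.vh=0
```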
-
Marc Zyngier authored
In order to map the override of idregs to options that a user can easily understand, let's introduce yet another option array, which maps an option to the corresponding idreg options.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-17-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
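A minimal C sketch of what such an alias array can look like; the structure and entry names here are illustrative assumptions, not the kernel's actual definitions:

```c
/* Sketch: map a human-friendly option to its raw idreg override(s). */
struct ftr_alias {
	const char *alias;    /* what the user types              */
	const char *feature;  /* "cpureg.feature=value" expansion */
};

static const struct ftr_alias aliases[] = {
	/* Both KVM modes imply running the kernel at EL1 (VH=0). */
	{ "kvm-arm.mode=nvhe",      "id_aa64mmfr1.vh=0" },
	{ "kvm-arm.mode=protected", "id_aa64mmfr1.vh=0" },
};
```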
-
Marc Zyngier authored
Finally we can check whether VHE is disabled on the command line, and not enable it if that's the user's wish.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-16-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we want to be able to disable VHE at runtime, let's match "id_aa64mmfr1.vh=" from the command line as an override. This doesn't have much effect yet as our boot code doesn't look at the cpufeature, but only at the HW registers.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-15-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
In order to be able to override CPU features at boot time, let's add a command line parser that matches options of the form "cpureg.feature=value", and store the corresponding value into the override val/mask pair. No features are currently defined, so no expected change in functionality.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-14-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
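A hedged sketch of the matching idea, compressed into plain C (the real early parser is more careful, since it runs long before the usual kernel infrastructure is up; helper and field names here are assumptions):

```c
#include <linux/kernel.h>
#include <linux/string.h>

/* Accumulated override for one ID register. */
struct reg_override {
	u64 val;
	u64 mask;
};

/*
 * Sketch: if @opt is "<reg>.<field>=<value>", fold the requested
 * 4-bit field value into the register's val/mask pair.
 */
static void match_option(const char *opt, const char *reg,
			 const char *field, unsigned int shift,
			 struct reg_override *ovr)
{
	char prefix[64];
	size_t len;
	u64 v;

	len = snprintf(prefix, sizeof(prefix), "%s.%s=", reg, field);
	if (strncmp(opt, prefix, len))
		return;
	if (kstrtou64(opt + len, 0, &v))
		return;

	ovr->val  |= (v & 0xf) << shift;	/* requested field value */
	ovr->mask |= 0xfUL << shift;		/* mark field as overridden */
}
```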
-
Marc Zyngier authored
As we want to parse more options very early in the kernel lifetime, let's always map the FDT early. This is achieved by moving that code out of kaslr_early_init(). No functional change expected.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-13-maz@kernel.org
[will: Ensure KASAN is enabled before running C code]
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers, which should take the feature override into account. Let's do that. For good measure (and because we are likely to need it further down the line), make this helper available to the rest of the non-modular kernel. Code that needs to know the *real* features of a CPU can still use read_sysreg_s(), and find the bare, ugly truth.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-12-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Add a facility to globally override a feature, no matter what the HW says. Yes, this sounds dangerous, but we do respect the "safe" value for a given feature. This doesn't mean the user doesn't need to know what they are doing. Nothing uses this yet, so we are pretty safe. For now.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-11-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
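The commit boils down to carrying a val/mask pair per ID register; a minimal sketch of the idea (the helper name is an assumption, and the "safe value" filtering lives in the cpufeature code rather than here):

```c
/* One override per ID register: "mask" marks the fields the user
 * asked to override, "val" holds the requested field values. */
struct arm64_ftr_override {
	u64 val;
	u64 mask;
};

/* Illustrative combination step: overridden fields replace the HW
 * value, everything else is left untouched. */
static inline u64 apply_override(u64 hw,
				 const struct arm64_ftr_override *o)
{
	return (hw & ~o->mask) | (o->val & o->mask);
}
```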
-
Marc Zyngier authored
We can now move the initial SCTLR_EL1 setup to be used for both EL1 and EL2 setup.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-10-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As init_el2_state is now nVHE only, let's simplify it and drop the VHE setup.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-9-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
There isn't much that a VHE kernel needs on top of whatever has been done for nVHE, so let's move the little we need to the VHE stub (the SPE setup), and drop the init_el2_state macro. No expected functional change.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-8-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
When running VHE, we set MDCR_EL2.TPMS very early on to force the trapping of EL1 SPE accesses to EL2. However:

- we are running with HCR_EL2.{E2H,TGE}={1,1}, meaning that there is no EL1 to trap from
- before entering a guest, we call kvm_arm_setup_debug(), which sets MDCR_EL2_TPMS in the per-vcpu shadow mdcr_el2, which gets applied on entry by __activate_traps_common()

The early setting of MDCR_EL2.TPMS is therefore useless and can be dropped.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210208095732.3267263-7-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we are aiming to be able to control whether we enable VHE or not, let's always drop down to EL1 first, and only then upgrade to VHE if at all possible. This means that if the kernel is booted at EL2, we always start with a nVHE init, drop to EL1 to initialise the kernel, and only then upgrade the kernel EL to EL2 if possible (the process is obviously shortened for secondary CPUs). The resume path is handled similarly to a secondary CPU boot.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-6-maz@kernel.org
[will: Avoid calling switch_to_vhe twice on kaslr path]
Signed-off-by: Will Deacon <will@kernel.org>
-
- Feb 08, 2021
Marc Zyngier authored
As we are about to change the way a VHE system boots, let's provide the core helper, in the form of a stub hypercall that enables VHE and replicates the full EL1 context at EL2, thanks to EL1 and VHE-EL2 being extremely similar. On exception return, the kernel carries on at EL2. Fancy! Nothing calls this new hypercall yet, so no functional change.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-5-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
Turning the MMU on is a popular sport in the arm64 kernel, and we do it more than once, or even twice. As we are about to add even more, let's turn it into a macro. No expected functional change.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-4-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
The arm64 kernel has long been able to use more than 39bit VAs. Since day one, actually. Let's rewrite the offending comment.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-3-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Marc Zyngier authored
If someone happens to write the following code:

	b	1f
	init_el2_state	vhe
1:
	[...]

they will be in for a long debugging session, as the label "1f" will be resolved *inside* the init_el2_state macro instead of after it. Not really what one expects. Instead, rewrite the EL2 setup macros to use unambiguous labels, thanks to the usual macro counter trick.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Link: https://lore.kernel.org/r/20210208095732.3267263-2-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
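The counter trick in question is the GNU assembler's `\@` pseudo-variable, which expands to a value unique to each macro invocation; a small sketch:

```asm
/* Sketch: \@ makes the label unique per expansion, so a caller's
 * "b 1f" can never land on a label inside the macro body. */
.macro	skip_if_zero, reg
	cbz	\reg, .Lskip_\@
	mov	x0, #1			// only runs when \reg != 0
.Lskip_\@:
.endm
```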
-
- Jan 15, 2021
Mark Rutland authored
The kbuild test robot reports that when building with W=1, GCC will warn for a couple of missing prototypes in syscall.c:

| arch/arm64/kernel/syscall.c:157:6: warning: no previous prototype for 'do_el0_svc' [-Wmissing-prototypes]
|   157 | void do_el0_svc(struct pt_regs *regs)
|       |      ^~~~~~~~~~
| arch/arm64/kernel/syscall.c:164:6: warning: no previous prototype for 'do_el0_svc_compat' [-Wmissing-prototypes]
|   164 | void do_el0_svc_compat(struct pt_regs *regs)
|       |      ^~~~~~~~~~~~~~~~~

While this isn't a functional problem, as a general policy we should include the prototype for functions wherever possible to catch any accidental divergence between the prototype and implementation. Here we can easily include <asm/exception.h>, so let's do so. While there are a number of warnings elsewhere and some warnings enabled under W=1 are of questionable benefit, this change helps to make the code more robust as it evolves and reduces the noise somewhat, so it seems worthwhile.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/202101141046.n8iPO3mw-lkp@intel.com
Link: https://lore.kernel.org/r/20210114124812.17754-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Jan 13, 2021
Arnd Bergmann authored
With UBSAN enabled and building with clang, there are occasionally warnings like

WARNING: modpost: vmlinux.o(.text+0xc533ec): Section mismatch in reference from the function arch_atomic64_or() to the variable .init.data:numa_nodes_parsed
The function arch_atomic64_or() references the variable __initdata numa_nodes_parsed.
This is often because arch_atomic64_or lacks a __initdata annotation or the annotation of numa_nodes_parsed is wrong.

for functions that end up not being inlined as intended but operating on __initdata variables. Mark these as __always_inline, along with the corresponding asm-generic wrappers.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210108092024.4034860-1-arnd@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
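The failure mode is easy to reproduce in miniature; a hedged sketch with hypothetical names (not the atomics code itself):

```c
#include <linux/init.h>

static unsigned long early_flags __initdata;

/*
 * If the compiler emits this helper out of line, it becomes a .text
 * function referencing .init.data, and modpost flags the mismatch.
 * __always_inline keeps the reference inside the __init caller.
 */
static __always_inline void set_early_flag(unsigned long bit)
{
	early_flags |= bit;
}

static int __init early_flags_setup(void)
{
	set_early_flag(1);
	return 0;
}
early_initcall(early_flags_setup);
```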
-
Jianlin Lv authored
S_FRAME_SIZE is the size of the pt_regs structure, no longer the size of the kernel stack frame; the name is misleading. In keeping with arm32, rename S_FRAME_SIZE to PT_REGS_SIZE.
Signed-off-by: Jianlin Lv <Jianlin.Lv@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210112015813.2340969-1-Jianlin.Lv@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
This reverts commit 367c820e.

lockup_detector_init() makes heavy use of per-cpu variables and must be called with preemption disabled. Usually, it's handled early during boot in kernel_init_freeable(), before SMP has been initialised. Since we do not know whether or not our PMU interrupt can be signalled as an NMI until considerably later in the boot process, the Arm PMU driver attempts to re-initialise the lockup detector off the back of a device_initcall(). Unfortunately, this is called from preemptible context and results in the following splat:

| BUG: using smp_processor_id() in preemptible [00000000] code: swapper/0/1
| caller is debug_smp_processor_id+0x20/0x2c
| CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.10.0+ #276
| Hardware name: linux,dummy-virt (DT)
| Call trace:
|  dump_backtrace+0x0/0x3c0
|  show_stack+0x20/0x6c
|  dump_stack+0x2f0/0x42c
|  check_preemption_disabled+0x1cc/0x1dc
|  debug_smp_processor_id+0x20/0x2c
|  hardlockup_detector_event_create+0x34/0x18c
|  hardlockup_detector_perf_init+0x2c/0x134
|  watchdog_nmi_probe+0x18/0x24
|  lockup_detector_init+0x44/0xa8
|  armv8_pmu_driver_init+0x54/0x78
|  do_one_initcall+0x184/0x43c
|  kernel_init_freeable+0x368/0x380
|  kernel_init+0x1c/0x1cc
|  ret_from_fork+0x10/0x30

Rather than bodge this with raw_smp_processor_id() or randomly disabling preemption, simply revert the culprit for now until we figure out how to do this properly.
Reported-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Sumit Garg <sumit.garg@linaro.org>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Link: https://lore.kernel.org/r/20201221162249.3119-1-lecopzer.chen@mediatek.com
Link: https://lore.kernel.org/r/20210112221855.10666-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
All EL0 returns go via ret_to_user(), which masks IRQs and notifies lockdep and tracing before calling into do_notify_resume(). Therefore, there's no need for do_notify_resume() to call trace_hardirqs_off(), and the comment is stale. The call is simply redundant. In ret_to_user() we call exit_to_user_mode(), which notifies lockdep and tracing the IRQs will be enabled in userspace, so there's no need for el0_svc_common() to call trace_hardirqs_on() before returning. Further, at the start of ret_to_user() we call trace_hardirqs_off(), so not only is this redundant, but it is immediately undone. In addition to being redundant, the trace_hardirqs_on() in el0_svc_common() leaves lockdep inconsistent with the hardware state, and is liable to cause issues for any C code or instrumentation between this and the call to trace_hardirqs_off() which undoes it in ret_to_user(). This patch removes the redundant tracing calls and associated stale comments.
Fixes: 23529049 ("arm64: entry: fix non-NMI user<->kernel transitions")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210107145310.44616-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Jan 12, 2021
Catalin Marinas authored
With the introduction of a dynamic ZONE_DMA range based on DT or IORT information, there's no need for CMA allocations from the wider ZONE_DMA32 since on most platforms ZONE_DMA will cover the 32-bit addressable range. Remove the arm64_dma32_phys_limit and set arm64_dma_phys_limit to cover the smallest DMA range required on the platform. CMA allocation and crashkernel reservation now go in the dynamically sized ZONE_DMA, allowing correct functionality on RPi4.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chen Zhou <chenzhou10@huawei.com>
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Tested-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de> # On RPi4B
-
- Jan 07, 2021
Catalin Marinas authored
For consistency with __uaccess_{disable,enable}_hw_pan(), move the PSTATE.TCO setting into dedicated __uaccess_{disable,enable}_tco() functions.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
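A sketch of the plausible shape of these helpers, patterned on the surrounding MTE uaccess code (treat the ALTERNATIVE operands as assumptions):

```c
/* PSTATE.TCO masks tag check faults around uaccess sequences;
 * only patched in when MTE tag checking is in use. */
static inline void __uaccess_disable_tco(void)
{
	asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(0),
				 ARM64_MTE, CONFIG_KASAN_HW_TAGS));
}

static inline void __uaccess_enable_tco(void)
{
	asm volatile(ALTERNATIVE("nop", SET_PSTATE_TCO(1),
				 ARM64_MTE, CONFIG_KASAN_HW_TAGS));
}
```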
-
- Jan 05, 2021
Catalin Marinas authored
Commit 49b3cf03 ("kasan: arm64: set TCR_EL1.TBID1 when enabled") set the TBID1 bit for the KASAN_SW_TAGS configuration, freeing up 8 bits to be used by PAC. With in-kernel MTE now in mainline, also set this bit for the KASAN_HW_TAGS configuration.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Acked-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Andrey Konovalov <andreyknvl@google.com>
-
Peter Collingbourne authored
Currently with ld.lld we emit an empty .eh_frame_hdr section (and a corresponding program header) into the vDSO. With ld.bfd the section is not emitted but the program header is, with p_vaddr set to 0. This can lead to unwinders attempting to interpret the data at whichever location the program header happens to point to as an unwind info header. This happens to be mostly harmless as long as the byte at that location (interpreted as a version number) has a value other than 1, causing both libgcc and LLVM libunwind to ignore the section (in libunwind's case, after printing an error message to stderr), but it could lead to worse problems if the byte happened to be 1 or the program header points to non-readable memory (e.g. if the empty section was placed at a page boundary). Instead of disabling .eh_frame_hdr via --no-eh-frame-hdr (which also has the downside of being unsupported by older versions of GNU binutils), disable it by discarding the section, and stop emitting the program header that points to it. I understand that we intend to emit valid unwind info for the vDSO at some point. Once that happens this patch can be reverted.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://linux-review.googlesource.com/id/If745fd9cadcb31b4010acbf5693727fe111b0863
Link: https://lore.kernel.org/r/20201230221954.2007257-1-pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
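In the vDSO linker script, the discard amounts to something like the following (a sketch; per the message, the actual patch also stops emitting the program header that pointed at the section):

```
/DISCARD/ : {
	*(.eh_frame_hdr)
}
```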
-
Tian Tao authored
asm/exception.h is included more than once. Remove the one that isn't necessary.
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Link: https://lore.kernel.org/r/1609139108-10819-1-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Shannon Zhao authored
Commit d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") deleted the CONFIG_KVM_ARM_HOST option; CONFIG_KVM should be used instead. Just remove the leftover CONFIG_KVM_ARM_HOST reference here.
Fixes: d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST")
Signed-off-by: Shannon Zhao <shannon.zhao@linux.alibaba.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/1609760324-92271-1-git-send-email-shannon.zhao@linux.alibaba.com
-
Nick Desaulniers authored
With GNU binutils 2.35+, linking with BFD produces warnings for vmlinux:

aarch64-linux-gnu-ld: warning: -z norelro ignored

BFD can produce this warning when the target emulation mode does not support RELRO program headers, and -z relro or -z norelro is passed. Alan Modra clarifies:

| The default linker emulation for an aarch64-linux ld.bfd is
| -maarch64linux, the default for an aarch64-elf linker is
| -maarch64elf. They are not equivalent. If you choose -maarch64elf
| you get an emulation that doesn't support -z relro.

The ARCH=arm64 kernel prefers -maarch64elf, but may fall back to -maarch64linux based on the toolchain configuration. LLD will always create a RELRO program header regardless of target emulation. To avoid the above warning when linking with BFD, pass -z norelro only when linking with LLD or with -maarch64linux.
Fixes: 3b92fa74 ("arm64: link with -z norelro regardless of CONFIG_RELOCATABLE")
Fixes: 3bbd3db8 ("arm64: relocatable: fix inconsistencies in linker script and options")
Cc: <stable@vger.kernel.org> # 5.0.x-
Reported-by: kernelci.org bot <bot@kernelci.org>
Reported-by: Quentin Perret <qperret@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Alan Modra <amodra@gmail.com>
Cc: Fāng-ruì Sòng <maskray@google.com>
Link: https://lore.kernel.org/r/20201218002432.788499-1-ndesaulniers@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
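One plausible Makefile shape for that rule, assuming a hypothetical `ld-emul` variable holding the selected emulation (a sketch, not the literal diff):

```make
# LLD always supports RELRO headers; BFD only does so with the
# aarch64linux emulation, so gate the flag accordingly.
ifeq ($(CONFIG_LD_IS_LLD),y)
LDFLAGS_vmlinux += -z norelro
else ifeq ($(ld-emul),aarch64linux)
LDFLAGS_vmlinux += -z norelro
endif
```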
-
- Jan 04, 2021
Marc Zyngier authored
KVM_ARM_PMU only existed for the benefit of 32bit ARM hosts, and makes no sense now that we are 64bit only. Get rid of it.
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Nicolas Saenz Julienne authored
Systems configured with CONFIG_ZONE_DMA32, CONFIG_ZONE_NORMAL and !CONFIG_ZONE_DMA will fail to properly setup ARCH_LOW_ADDRESS_LIMIT. The limit will default to ~0ULL, effectively spanning the whole memory, which is too high for a configuration that expects low memory to be capped at 4GB. Fix ARCH_LOW_ADDRESS_LIMIT by falling back to arm64_dma32_phys_limit when arm64_dma_phys_limit isn't set. arm64_dma32_phys_limit will honour CONFIG_ZONE_DMA32, or span the entire memory when not enabled.
Fixes: 1a8e1cef ("arm64: use both ZONE_DMA and ZONE_DMA32")
Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
Link: https://lore.kernel.org/r/20201218163307.10150-1-nsaenzjulienne@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
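The fallback reads naturally as a GNU C `?:` default; a sketch of the idea (the exact macro home is an assumption, and both limits are declared elsewhere in the arch code):

```c
/* arm64_dma_phys_limit is 0 without CONFIG_ZONE_DMA; fall back to
 * the ZONE_DMA32 limit, which itself spans all of memory when
 * ZONE_DMA32 is disabled too. */
#define ARCH_LOW_ADDRESS_LIMIT \
	(arm64_dma_phys_limit ? : arm64_dma32_phys_limit)
```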
-
Peter Collingbourne authored
This ISB is unnecessary because we will soon do an ERET.
Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/I69f1ee6bb09b1372dd744a0e01cedaf090c8d448
Link: https://lore.kernel.org/r/20201203073458.2675400-1-pcc@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Geert Uytterhoeven authored
arch/arm64/kernel/smp.c: In function ‘arch_show_interrupts’:
arch/arm64/kernel/smp.c:808:16: warning: unused variable ‘irq’ [-Wunused-variable]
  808 |   unsigned int irq = irq_desc_get_irq(ipi_desc[i]);
      |                ^~~

The removal of the last user forgot to remove the variable.
Fixes: 5089bc51 ("arm64/smp: Use irq_desc_kstat_cpu() in arch_show_interrupts()")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20201215103026.2872532-1-geert+renesas@glider.be
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Dec 31, 2020
Marc Zyngier authored
Although not a problem right now, it flared up while working on some other aspects of the code-base. Remove the useless semicolon.
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- Dec 30, 2020
Marc Zyngier authored
The __init annotations on hyp_cpu_pm_{init,exit} are obviously incorrect, and the build system shouts at you if you enable DEBUG_SECTION_MISMATCH. Nothing really bad happens as we never execute that code outside of the init context, but we can't label the callers as __init either, as kvm_init isn't __init itself. Oh well.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Link: https://lore.kernel.org/r/20201223120854.255347-1-maz@kernel.org
-
- Dec 29, 2020
Randy Dunlap authored
Make <asm-generic/local64.h> mandatory in include/asm-generic/Kbuild and remove all arch/*/include/asm/local64.h arch-specific files since they only #include <asm-generic/local64.h>. This fixes build errors on arch/c6x/ and arch/nios2/ for block/blk-iocost.c. Build-tested on 21 of 25 arches (tools problems on the others). Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h> and change all #includes to use <linux/local64.h> instead.
Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Reviewed-by: Masahiro Yamada <masahiroy@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
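Mechanically, the change is a one-liner in the asm-generic Kbuild list plus deletion of each arch's wrapper header (sketch using kbuild's `mandatory-y` convention):

```make
# include/asm-generic/Kbuild: generate the generic local64.h for
# every architecture that doesn't provide its own.
mandatory-y += local64.h
```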
-