- Jul 12, 2021
-
-
Carlos Bilbao authored
Add the missing header <asm/smp.h> to include/asm/smp_plat.h, as it calls cpu_logical_map(). Also include it in kernel/cpufeature.c, which calls cpu_panic_kernel() and cpu_die_early(). Both files call functions declared in this header, so including it directly makes the header dependencies less fragile. Signed-off-by:
Carlos Bilbao <bilbao@vt.edu> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/4325940.LvFx2qVVIh@iron-maiden Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jun 22, 2021
-
-
Raphael Gault authored
This commit tightens the mask of the mrs_hook declared in arch/arm64/kernel/cpufeature.c, which emulates feature register accesses only. This is necessary because the hook's mask was too broad and therefore trapped every mrs instruction, even those unrelated to the emulated registers, which made the PMU emulation inefficient. Signed-off-by:
Raphael Gault <raphael.gault@arm.com> Signed-off-by:
Rob Herring <robh@kernel.org> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210517180256.2881891-1-robh@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
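For reference, a minimal sketch of the kind of narrowing involved, based on the undef_hook in cpufeature.c; the mask/value encodings shown are illustrative rather than a quote of the actual diff:

```c
#include <asm/ptrace.h>
#include <asm/traps.h>

/* Sketch: restrict the hook to MRS of the ID/feature registers (op0 == 3, op1 == 0). */
static struct undef_hook mrs_hook = {
	.instr_mask	= 0xffff0000,		/* previously 0xfff00000: matched every MRS */
	.instr_val	= 0xd5380000,		/* MRS <Xt>, <feature reg> encodings only */
	.pstate_mask	= PSR_AA32_MODE_MASK,
	.pstate_val	= PSR_MODE_EL0t,
	.fn		= emulate_mrs,
};
```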
-
- Jun 11, 2021
-
-
Will Deacon authored
When confronted with a mixture of CPUs, some of which support 32-bit applications and others which don't, we quite sensibly treat the system as 64-bit only for userspace and prevent execve() of 32-bit binaries. Unfortunately, some crazy folks have decided to build systems like this with the intention of running 32-bit applications, so relax our sanitisation logic to continue to advertise 32-bit support to userspace on these systems and track the real 32-bit capable cores in a cpumask instead. For now, the default behaviour remains but will be tied to a command-line option in a later patch. Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210608180313.11502-3-will@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
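A minimal sketch of the cpumask approach, assuming names close to the upstream ones (cpu_32bit_el0_mask, system_32bit_el0_cpumask(), arm64_mismatched_32bit_el0); treat it as illustrative rather than the exact patch:

```c
#include <linux/cpumask.h>
#include <linux/jump_label.h>

/* Sketch: set for each CPU that really implements AArch32 at EL0. */
static cpumask_var_t cpu_32bit_el0_mask;
DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);

const struct cpumask *system_32bit_el0_cpumask(void)
{
	if (!system_supports_32bit_el0())
		return cpu_none_mask;

	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
		return cpu_32bit_el0_mask;	/* mixed system: only the 32-bit capable cores */

	return cpu_possible_mask;		/* homogeneous system: every core will do */
}
```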
-
Will Deacon authored
In preparation for late initialisation of the "sanitised" AArch32 register state, move the AArch32 registers out of 'struct cpuinfo' and into their own struct definition. Acked-by:
Mark Rutland <mark.rutland@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210608180313.11502-2-will@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Mark Rutland authored
For historical reasons, we define AARCH64_INSN_SIZE in <asm/alternative-macros.h>, but it would make more sense to do so in <asm/insn.h>. Let's move it into <asm/insn.h>, and add the necessary include directives for this. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210609102301.17332-3-mark.rutland@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- May 26, 2021
-
-
Catalin Marinas authored
The GMID_EL1.BS field determines the number of tags accessed by the LDGM/STGM instructions (EL1 and up), used by the kernel for copying or zeroing page tags. Taint the kernel if GMID_EL1.BS differs between CPUs, but only if CONFIG_ARM64_MTE is enabled. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com> Link: https://lore.kernel.org/r/20210526193621.21559-3-catalin.marinas@arm.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Apr 30, 2021
-
-
kernel test robot authored
Use min and max to make the effect clearer. Generated by: scripts/coccinelle/misc/minmax.cocci CC: Denis Efremov <efremov@linux.com> Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Julia Lawall <julia.lawall@inria.fr> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/alpine.DEB.2.22.394.2104292246300.16899@hadrien [catalin.marinas@arm.com: include <linux/minmax.h> explicitly] Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
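The kind of rewrite the minmax.cocci rule performs, shown on a made-up helper (the function and variable names are illustrative, not the actual hunk in cpufeature.c):

```c
#include <linux/minmax.h>	/* the patch also adds this include explicitly */

static unsigned int safe_gmid_bs(unsigned int boot_bs, unsigned int cpu_bs)
{
	/* Before the coccinelle rewrite: return boot_bs < cpu_bs ? boot_bs : cpu_bs; */
	return min(boot_bs, cpu_bs);
}
```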
-
- Apr 08, 2021
-
-
Sami Tolvanen authored
Disable CFI checking for functions that switch to linear mapping and make an indirect call to a physical address, since the compiler only understands virtual addresses and the CFI check for such indirect calls would always fail. Signed-off-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Tested-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210408182843.1754385-15-samitolvanen@google.com
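A hedged sketch of the annotation this relies on: a caller that branches to code through its physical (idmap) address is marked __nocfi so no CFI check is emitted for that indirect call. The function and variable names here are illustrative:

```c
#include <linux/compiler.h>
#include <linux/types.h>

/* Sketch: the target lives in the idmap, so its address is physical, not a kernel VA. */
static void __nocfi call_idmapped(phys_addr_t pa)
{
	typedef void (*idmap_fn_t)(void);
	idmap_fn_t fn = (idmap_fn_t)(unsigned long)pa;

	fn();	/* CFI would reject this target, hence __nocfi on the caller */
}
```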
-
Sami Tolvanen authored
With CONFIG_CFI_CLANG, the compiler replaces function address references with the address of the function's CFI jump table entry. This means that __pa_symbol(function) returns the physical address of the jump table entry, which can lead to address space confusion as the jump table points to the function's virtual address. Therefore, use the function_nocfi() macro to ensure we are always taking the address of the actual function instead. Signed-off-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Tested-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210408182843.1754385-14-samitolvanen@google.com
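The conversion pattern looks roughly like the following (the secondary_entry example matches the arm64 PSCI boot path, but take it as a sketch rather than the literal diff):

```c
#include <asm/memory.h>		/* __pa_symbol(), function_nocfi() */
#include <asm/smp.h>		/* secondary_entry */

static phys_addr_t secondary_entry_pa(void)
{
	/* Physical address of the function body itself, not of its CFI jump-table slot. */
	return __pa_symbol(function_nocfi(secondary_entry));
}
```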
-
Marc Zyngier authored
CONFIG_ARM64_VHE was introduced with ARMv8.1 (some 7 years ago), and has been enabled by default for almost all that time. Given that newer systems that are VHE capable are finally becoming available, and that some systems are even incapable of not running VHE, drop the configuration altogether. Anyone willing to stick to non-VHE on VHE hardware for obscure reasons should use the 'kvm-arm.mode=nvhe' command-line option. Suggested-by:
Will Deacon <will@kernel.org> Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-4-maz@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Some CPUs are broken enough that some overrides need to be rejected at the earliest opportunity. In some cases, that's right at cpu feature override time. Provide the necessary infrastructure to filter out overrides, and to report such filtered out overrides to the core cpufeature code. Acked-by:
Will Deacon <will@kernel.org> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210408131010.1109027-2-maz@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Mar 26, 2021
-
-
Vladimir Murzin authored
Enhanced Privileged Access Never (EPAN) allows Privileged Access Never to be used with Execute-only mappings. Absence of such support was a reason for 24cecc37 ("arm64: Revert support for execute-only user mappings"). Thus now it can be revisited and re-enabled. Cc: Kees Cook <keescook@chromium.org> Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210312173811.58284-2-vladimir.murzin@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Mar 25, 2021
-
-
Marc Zyngier authored
Now that the read_ctr macro has been specialised for nVHE, the whole CPU_FTR_REG_HYP_COPY infrastructure looks completely overengineered. Simplify it by populating the two u64 quantities (MMFR0 and 1) that the hypervisor needs. Reviewed-by:
Quentin Perret <qperret@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org>
-
Rich Wiley authored
On NVIDIA Carmel cores, CNP behaves differently than it does on standard ARM cores. On Carmel, if two cores have CNP enabled and share an L2 TLB entry created by core0 for a specific ASID, a non-shareable TLBI from core1 may still see the shared entry. On standard ARM cores, that TLBI will invalidate the shared entry as well. This causes issues with patchsets that attempt to do local TLBIs based on cpumasks instead of broadcast TLBIs. Avoid these issues by disabling CNP support for NVIDIA Carmel cores. Signed-off-by:
Rich Wiley <rwiley@nvidia.com> Link: https://lore.kernel.org/r/20210324002809.30271-1-rwiley@nvidia.com [will: Fix pre-existing whitespace issue] Signed-off-by:
Will Deacon <will@kernel.org>
-
- Mar 24, 2021
-
-
Suzuki K Poulose authored
Currently we advertise ID_AA64DFR0_EL1.TRACEVER to the guest even when trace register accesses are trapped (CPTR_EL2.TTA == 1). So the guest will get an undefined instruction if it trusts the ID registers and accesses one of the trace registers. Let's be nice to the guest and hide the feature to avoid unexpected behaviour. Even though this can be done at the KVM sysreg emulation layer, we do this by removing TRACEVER from the sanitised feature register field. This is fine as long as the ETM drivers can handle the individual trace units separately, even when there are differences among the CPUs. Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210323120647.454211-2-suzuki.poulose@arm.com
-
- Mar 19, 2021
-
-
Quentin Perret authored
Introduce the infrastructure in KVM to copy CPU feature registers into EL2-owned data structures, so that sanitised values can be read directly at EL2 in nVHE. Given that only a subset of these features are read by the hypervisor, the ones that need to be copied are to be listed under <asm/kvm_cpufeature.h> together with the name of the nVHE variable that will hold the copy. This introduces only the infrastructure enabling this copy. The first users will follow shortly. Signed-off-by:
Quentin Perret <qperret@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20210319100146.1149909-14-qperret@google.com
-
- Feb 12, 2021
-
-
Catalin Marinas authored
The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns -EIO. A newly created (PROT_MTE) mapping points to the zero page which had its tags zeroed during cpu_enable_mte(). If there were no prior writes to this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero page does not have the PG_mte_tagged flag set. Set PG_mte_tagged on the zero page when its tags are cleared during boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on !PROT_MTE mappings pointing to the zero page, change the __access_remote_tags() check to (vm_flags & VM_MTE) instead of PG_mte_tagged. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Fixes: 34bfeea4 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE") Cc: <stable@vger.kernel.org> # 5.10.x Cc: Will Deacon <will@kernel.org> Reported-by:
Luis Machado <luis.machado@linaro.org> Tested-by:
Luis Machado <luis.machado@linaro.org> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20210210180316.23654-1-catalin.marinas@arm.com
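A minimal sketch of the boot-time half of the fix, assuming the upstream helpers (ZERO_PAGE(), lm_alias(), mte_clear_page_tags()); the wrapper function is made up for illustration:

```c
#include <linux/mm.h>
#include <asm/mte.h>

/* Sketch: clear the zero page's tags via its (tagged) linear alias and record that fact,
 * so PTRACE_PEEKMTETAGS on a fresh PROT_MTE mapping no longer fails with -EIO. */
static void mte_tag_zero_page(void)
{
	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
		mte_clear_page_tags(lm_alias(empty_zero_page));
}
```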
-
- Feb 09, 2021
-
-
Marc Zyngier authored
In order to be able to disable Pointer Authentication at runtime, whether it is for testing purposes, or to work around HW issues, let's add support for overriding the ID_AA64ISAR1_EL1.{GPI,GPA,API,APA} fields. This is further mapped on the arm64.nopauth command-line alias. Signed-off-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Tested-by:
Srinivas Ramana <sramana@codeaurora.org> Link: https://lore.kernel.org/r/20210208095732.3267263-23-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
In order to be able to disable BTI at runtime, whether it is for testing purposes, or to work around HW issues, let's add support for overriding the ID_AA64PFR1_EL1.BTI field. This is further mapped on the arm64.nobti command-line alias. Signed-off-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Tested-by:
Srinivas Ramana <sramana@codeaurora.org> Link: https://lore.kernel.org/r/20210208095732.3267263-21-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
As we want to be able to disable VHE at runtime, let's match "id_aa64mmfr1.vh=" from the command line as an override. This doesn't have much effect yet as our boot code doesn't look at the cpufeature, but only at the HW registers. Signed-off-by:
Marc Zyngier <maz@kernel.org> Acked-by:
David Brazdil <dbrazdil@google.com> Acked-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-15-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Marc Zyngier authored
__read_sysreg_by_encoding() is used by a bunch of cpufeature helpers, which should take the feature override into account. Let's do that. For good measure (and because we are likely to need it further down the line), make this helper available to the rest of the non-modular kernel. Code that needs to know the *real* features of a CPU can still use read_sysreg_s(), and find the bare, ugly truth. Signed-off-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-12-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
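The override is folded into the raw value along these lines; the helper name below is made up, only struct arm64_ftr_override (a value/mask pair) is taken from the series:

```c
#include <asm/cpufeature.h>

/* Sketch: substitute the overridden fields into the value read from hardware. */
static u64 apply_ftr_override(u64 val, const struct arm64_ftr_override *ovr)
{
	val &= ~ovr->mask;		/* drop the fields being overridden...          */
	val |= ovr->val & ovr->mask;	/* ...and splice in the requested ("safe") bits */
	return val;
}
```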
-
Marc Zyngier authored
Add a facility to globally override a feature, no matter what the HW says. Yes, this sounds dangerous, but we do respect the "safe" value for a given feature. This doesn't mean the user doesn't need to know what they are doing. Nothing uses this yet, so we are pretty safe. For now. Signed-off-by:
Marc Zyngier <maz@kernel.org> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by:
David Brazdil <dbrazdil@google.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Link: https://lore.kernel.org/r/20210208095732.3267263-11-maz@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
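At its core the facility is a per-register value/mask pair that the sanitisation code consults; a sketch of the shape it takes:

```c
/* Sketch: an override only touches the fields selected by 'mask'. */
struct arm64_ftr_override {
	u64	val;	/* requested field values */
	u64	mask;	/* which fields the override applies to */
};
```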
-
- Feb 08, 2021
-
-
Suzuki K Poulose authored
Erratum 1024718 affects Cortex-A55 r0p0 to r2p0, but we only apply the workaround for r0p0 - r1p0. Unfortunately this won't be fixed in future revisions of the CPU, so extend the workaround to all versions of A55, covering r2p0 and any future revisions. Cc: stable@vger.kernel.org Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: James Morse <james.morse@arm.com> Cc: Kunihiko Hayashi <hayashi.kunihiko@socionext.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Link: https://lore.kernel.org/r/20210203230057.3961239-1-suzuki.poulose@arm.com [will: Update Kconfig help text] Signed-off-by:
Will Deacon <will@kernel.org>
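In effect, the affected-CPU list for the broken-DBM check widens from an explicit r0p0-r1p0 range to every A55 revision; a hedged sketch (list name and surrounding context approximate):

```c
/* Sketch: CPUs on which hardware DBM is considered broken (erratum 1024718). */
static const struct midr_range dbm_broken_list[] = {
#ifdef CONFIG_ARM64_ERRATUM_1024718
	MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),	/* was MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 1, 0) */
#endif
	{},
};
```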
-
- Jan 05, 2021
-
-
Shannon Zhao authored
Commit d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") deletes CONFIG_KVM_ARM_HOST option, it should use CONFIG_KVM instead. Just remove CONFIG_KVM_ARM_HOST here. Fixes: d82755b2 ("KVM: arm64: Kill off CONFIG_KVM_ARM_HOST") Signed-off-by:
Shannon Zhao <shannon.zhao@linux.alibaba.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/1609760324-92271-1-git-send-email-shannon.zhao@linux.alibaba.com
-
- Dec 22, 2020
-
-
Andrey Konovalov authored
Provide implementation of KASAN functions required for the hardware tag-based mode. Those include core functions for memory and pointer tagging (tags_hw.c) and bug reporting (report_tags_hw.c). Also adapt common KASAN code to support the new mode. Link: https://lkml.kernel.org/r/cfd0fbede579a6b66755c98c88c108e54f9c56bf.1606161801.git.andreyknvl@google.com Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Alexander Potapenko <glider@google.com> Tested-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Branislav Rankov <Branislav.Rankov@arm.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Marco Elver <elver@google.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Dec 04, 2020
-
-
David Brazdil authored
Expose the boolean value whether the system is running with KVM in protected mode (nVHE + kernel param). CPU capability was selected over a global variable to allow use in alternatives. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20201202184122.26046-3-dbrazdil@google.com
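Because the state is a CPU capability rather than a plain variable, the query can be a final-cap check (and the same cap can gate alternatives); a hedged sketch of such a helper:

```c
#include <asm/cpufeature.h>

/* Sketch: true once the system has committed to protected (nVHE) KVM. */
static inline bool is_protected_kvm_enabled(void)
{
	return cpus_have_final_cap(ARM64_KVM_PROTECTED_MODE);
}
```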
-
- Dec 03, 2020
-
-
Mark Rutland authored
Now that the PAN toggling has been removed, the only user of __system_matches_cap() is has_generic_auth(), which is only built when CONFIG_ARM64_PTR_AUTH is selected, and Qian reports that this results in a build-time warning when CONFIG_ARM64_PTR_AUTH is not selected: | arch/arm64/kernel/cpufeature.c:2649:13: warning: '__system_matches_cap' defined but not used [-Wunused-function] | static bool __system_matches_cap(unsigned int n) | ^~~~~~~~~~~~~~~~~~~~ It's tricky to restructure things to prevent this, so let's mark __system_matches_cap() as __maybe_unused, as we used to do for the other user of __system_matches_cap() which we just removed. Reported-by:
Qian Cai <qcai@redhat.com> Suggested-by:
Qian Cai <qcai@redhat.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20201203152403.26100-1-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
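The fix is just the attribute on the definition; a sketch of the idiom with the real body elided:

```c
#include <linux/compiler.h>

/* Sketch: __maybe_unused silences -Wunused-function when the only caller
 * (has_generic_auth()) is compiled out with CONFIG_ARM64_PTR_AUTH=n. */
static bool __maybe_unused __system_matches_cap(unsigned int n)
{
	return false;	/* real body elided; the attribute is the point here */
}
```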
-
- Dec 02, 2020
-
-
Mark Rutland authored
Now that arm64 no longer uses UAO, remove the vestigial feature detection code and Kconfig text. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Some code (e.g. futex) needs to make privileged accesses to userspace memory, and uses uaccess_{enable,disable}_privileged() in order to permit this. All other uaccess primitives use LDTR/STTR, and never need to toggle PAN. Remove the redundant PAN toggling. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Now that the uaccess primitives don't take addr_limit into account, we have no need to manipulate this via set_fs() and get_fs(). Remove support for these, along with some infrastructure this renders redundant. We no longer need to flip UAO to access kernel memory under KERNEL_DS, and head.S unconditionally clears UAO for all kernel configurations via an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO, nor do we need to context-switch it. However, we still need to adjust PAN during SDEI entry. Masking of __user pointers no longer needs to use the dynamic value of addr_limit, and can use a constant derived from the maximum possible userspace task size. A new TASK_SIZE_MAX constant is introduced for this, which is also used by core code. In configurations supporting 52-bit VAs, this may include a region of unusable VA space above a 48-bit TTBR0 limit, but never includes any portion of TTBR1. Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and KERNEL_DS were inclusive limits, and is converted to a mask by subtracting one. As the SDEI entry code repurposes the otherwise unnecessary pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted context, for now we rename that to pt_regs::sdei_ttbr1. In future we can consider factoring that out. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
James Morse <james.morse@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
To make callsites easier to read, add trivial C wrappers for the SET_PSTATE_*() helpers, and convert trivial uses over to these. The new wrappers will be used further in subsequent patches. There should be no functional change as a result of this patch. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Christoph Hellwig <hch@lst.de> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201113124937.20574-3-mark.rutland@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
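One plausible shape for the wrappers (the upstream ones may well be trivial macros rather than functions; SET_PSTATE_*() already exist in <asm/sysreg.h>):

```c
#include <asm/sysreg.h>

/* Sketch: hide the raw asm behind C-looking helpers. */
#define set_pstate_pan(x)	asm volatile(SET_PSTATE_PAN(x))
#define set_pstate_uao(x)	asm volatile(SET_PSTATE_UAO(x))
#define set_pstate_ssbs(x)	asm volatile(SET_PSTATE_SSBS(x))

/* Callsites then read as plain C, e.g. set_pstate_pan(1) instead of
 * asm(SET_PSTATE_PAN(1)). */
```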
-
- Nov 28, 2020
-
-
Marc Zyngier authored
Our Meltdown mitigation state isn't exposed outside of cpufeature.c, contrary to the rest of the Spectre mitigation state. As we are going to use it in KVM, expose an arm64_get_meltdown_state() helper which returns the same possible values as arm64_get_spectre_v?_state(). Signed-off-by:
Marc Zyngier <maz@kernel.org>
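A hedged sketch of what such a getter looks like, reusing the existing mitigation_state enum from <asm/spectre.h>; the internal __meltdown_safe flag is assumed from cpufeature.c:

```c
#include <asm/mmu.h>		/* arm64_kernel_unmapped_at_el0() */
#include <asm/spectre.h>	/* enum mitigation_state */

enum mitigation_state arm64_get_meltdown_state(void)
{
	if (__meltdown_safe)
		return SPECTRE_UNAFFECTED;	/* CPU not affected at all */

	if (arm64_kernel_unmapped_at_el0())
		return SPECTRE_MITIGATED;	/* KPTI is in use */

	return SPECTRE_VULNERABLE;
}
```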
-
- Nov 13, 2020
-
-
Ionela Voinescu authored
If Activity Monitors (AMUs) are present, two of the counters can be used to implement support for CPPC's (Collaborative Processor Performance Control) delivered and reference performance monitoring functionality using FFH (Functional Fixed Hardware). Given that counters for a certain CPU can only be read from that CPU, while FFH operations can be called from any CPU for any of the CPUs, use smp_call_function_single() to provide the requested values. Therefore, depending on the register addresses, the following values are returned: - 0x0 (DeliveredPerformanceCounterRegister): AMU core counter - 0x1 (ReferencePerformanceCounterRegister): AMU constant counter The use of Activity Monitors is hidden behind the generic cpu_read_{corecnt,constcnt}() functions. Read functionality for these two registers represents the only current FFH support for CPPC. Read operations for other register values or write operation for all registers are unsupported. Therefore, keep CPPC's FFH unsupported if no CPUs have valid AMU frequency counters. For this purpose, the get_cpu_with_amu_feat() is introduced. Signed-off-by:
Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by:
Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-4-ionela.voinescu@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
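A simplified sketch of the FFH read path described above: the cross-call helpers and error handling are pared down relative to the real arch/arm64/kernel/topology.c code, and read_corecnt()/read_constcnt() are the wrappers from the companion patch:

```c
#include <linux/smp.h>
#include <acpi/cppc_acpi.h>

static void cpu_read_corecnt(void *val)  { *(u64 *)val = read_corecnt(); }
static void cpu_read_constcnt(void *val) { *(u64 *)val = read_constcnt(); }

int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
{
	smp_call_func_t fn;

	switch ((u64)reg->address) {
	case 0x0: fn = cpu_read_corecnt;  break;	/* DeliveredPerformanceCounterRegister */
	case 0x1: fn = cpu_read_constcnt; break;	/* ReferencePerformanceCounterRegister */
	default:  return -EOPNOTSUPP;
	}

	/* An AMU counter can only be read from its own CPU, hence the cross-call. */
	return smp_call_function_single(cpu, fn, val, 1);
}
```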
-
Ionela Voinescu authored
In preparation for other uses of Activity Monitors (AMU) cycle counters, place counter read functionality in generic functions that can be reused: read_corecnt() and read_constcnt(). As a result, implement update_freq_counters_refs() to replace init_cpu_freq_invariance_counters() and both initialise and update the per-cpu reference variables. Signed-off-by:
Ionela Voinescu <ionela.voinescu@arm.com> Reviewed-by:
Sudeep Holla <sudeep.holla@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201106125334.21570-2-ionela.voinescu@arm.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Konrad Dybcio authored
QCOM KRYO2XX gold (big) and silver (LITTLE) CPU cores are based on Cortex-A73 and Cortex-A53 respectively and are Meltdown safe, hence add them to kpti_safe_list[]. Signed-off-by:
Konrad Dybcio <konrad.dybcio@somainline.org> Link: https://lore.kernel.org/r/20201104232218.198800-3-konrad.dybcio@somainline.org Signed-off-by:
Will Deacon <will@kernel.org>
-
- Nov 09, 2020
-
-
Will Deacon authored
Armv8.3 introduced the LDAPR instruction, which provides weaker memory ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an RCsc implementation when implementing the Linux memory model, but LDAPR can be used as a useful alternative to dependency ordering, particularly when the compiler is capable of breaking the dependencies. Since LDAPR is not available on all CPUs, add a cpufeature to detect it at runtime and allow the instruction to be used with alternative code patching. Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will@kernel.org>
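A hedged sketch of how such a capability pairs with alternatives patching (a made-up acquire-load helper; the real user is the __READ_ONCE() machinery in <asm/rwonce.h>):

```c
#include <asm/alternative-macros.h>
#include <asm/cpucaps.h>

/* Sketch: start with the RCsc LDAR, patch in the RCpc LDAPR once the cap is detected. */
static inline u64 load_acquire_rcpc(const u64 *p)
{
	u64 val;

	asm volatile(ALTERNATIVE("ldar	%0, %1",
				 ".arch_extension rcpc\n"
				 "ldapr	%0, %1",
				 ARM64_HAS_LDAPR)
		     : "=r" (val) : "Q" (*p) : "memory");
	return val;
}
```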
-
- Oct 01, 2020
-
-
Will Deacon authored
TCR_EL1.HD is permitted to be cached in a TLB, so invalidate the local TLB after setting the bit when support for the feature is detected. Although this isn't strictly necessary, since we can happily operate with the bit effectively clear, the current code uses an ISB in a half-hearted attempt to make the change effective, so let's just fix that up. Link: https://lore.kernel.org/r/20201001110405.18617-1-will@kernel.org Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will@kernel.org>
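A hedged sketch of the enable path after the change (helper name and exact sequence approximate):

```c
#include <asm/barrier.h>
#include <asm/sysreg.h>
#include <asm/tlbflush.h>

/* Sketch: turn on hardware dirty-bit management, then make the new TCR value take effect. */
static inline void __cpu_enable_hw_dbm(void)
{
	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;

	write_sysreg(tcr, tcr_el1);
	isb();
	local_flush_tlb_all();	/* TCR_EL1.HD may be cached in the TLB; an ISB alone isn't enough */
}
```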
-
- Sep 29, 2020
-
-
Will Deacon authored
Rewrite the Spectre-v4 mitigation handling code to follow the same approach as that taken by Spectre-v2. For now, report to KVM that the system is vulnerable (by forcing 'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in subsequent steps. Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
If all CPUs discovered during boot have SSBS, then spectre-v4 will be considered to be "mitigated". However, we still allow late CPUs without SSBS to be onlined, albeit with a "SANITY CHECK" warning. This is problematic for userspace because it means that the system can quietly transition to "Vulnerable" at runtime. Avoid this by treating SSBS as a non-strict system feature: if all of the CPUs discovered during boot have SSBS, then late arriving secondaries better have it as well. Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
The spectre mitigations are too configurable for their own good, leading to confusing logic trying to figure out when we should mitigate and when we shouldn't. Although the plethora of command-line options need to stick around for backwards compatibility, the default-on CONFIG options that depend on EXPERT can be dropped, as the mitigations only do anything if the system is vulnerable, a mitigation is available and the command-line hasn't disabled it. Remove CONFIG_HARDEN_BRANCH_PREDICTOR and CONFIG_ARM64_SSBD in favour of enabling this code unconditionally. Signed-off-by:
Will Deacon <will@kernel.org>
-