- Sep 04, 2020
-
-
Catalin Marinas authored
By default, even if PROT_MTE is set on a memory range, there is no tag check fault reporting (SIGSEGV). Introduce a set of options to the existing prctl(PR_SET_TAGGED_ADDR_CTRL) to allow user control of the tag check fault mode: PR_MTE_TCF_NONE - no reporting (default); PR_MTE_TCF_SYNC - synchronous tag check fault reporting; PR_MTE_TCF_ASYNC - asynchronous tag check fault reporting. These options translate into the corresponding SCTLR_EL1.TCF0 bitfield, context-switched by the kernel. Note that kernel accesses to the user address space (e.g. the read() system call) are not checked if the user thread's tag checking mode is PR_MTE_TCF_NONE or PR_MTE_TCF_ASYNC. If the tag checking mode is PR_MTE_TCF_SYNC, the kernel makes a best effort to check its user address accesses, but cannot always guarantee it. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
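As a usage illustration (not part of the commit), a thread could opt in to synchronous tag check fault reporting roughly as follows; the PR_* values come from the uapi prctl.h added by this series, with fallback defines in case the toolchain headers predate it:

```c
#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_TAGGED_ADDR_CTRL
#define PR_SET_TAGGED_ADDR_CTRL	55
#define PR_TAGGED_ADDR_ENABLE	(1UL << 0)
#endif
#ifndef PR_MTE_TCF_SYNC
#define PR_MTE_TCF_SYNC		(1UL << 1)
#define PR_MTE_TCF_ASYNC	(1UL << 2)
#endif

int main(void)
{
	/* Enable the tagged address ABI with synchronous tag check faults. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0)) {
		perror("PR_SET_TAGGED_ADDR_CTRL");
		return 1;
	}
	return 0;
}
```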
-
Catalin Marinas authored
When the Memory Tagging Extension is enabled, two pages are identical only if both their data and tags are identical. Make the generic memcmp_pages() a __weak function and add an arm64-specific implementation which returns non-zero if either of the two pages contains valid MTE tags (PG_mte_tagged set). There isn't much benefit in comparing the tags of two pages since these are normally used for heap allocations and likely to differ anyway. Co-developed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
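A condensed sketch of the arm64 override (simplified from the description above; since the generic memcmp_pages() is now __weak, this definition takes precedence):

```c
int memcmp_pages(struct page *page1, struct page *page2)
{
	char *addr1 = page_address(page1);
	char *addr2 = page_address(page2);
	int ret = memcmp(addr1, addr2, PAGE_SIZE);

	if (!system_supports_mte() || ret)
		return ret;

	/*
	 * The data is identical, but if either page carries valid MTE tags
	 * the pages cannot be considered identical; don't bother comparing
	 * the tags themselves.
	 */
	if (test_bit(PG_mte_tagged, &page1->flags) ||
	    test_bit(PG_mte_tagged, &page2->flags))
		return addr1 != addr2;

	return ret;
}
```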
-
Catalin Marinas authored
Pages allocated by the kernel are not guaranteed to have the tags zeroed, especially as the kernel does not (yet) use MTE itself. To ensure the user can still access such pages when mapped into its address space, clear the tags via set_pte_at(). A new page flag - PG_mte_tagged (PG_arch_2) - is used to track pages with valid allocation tags. Since the zero page is mapped as pte_special(), it won't be covered by the above set_pte_at() mechanism. Clear its tags during early MTE initialisation. Co-developed-by:
Steven Price <steven.price@arm.com> Signed-off-by:
Steven Price <steven.price@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
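A minimal sketch of the set_pte_at() hook, assuming the helper names used elsewhere in the MTE series (mte_clear_page_tags() being the tag-zeroing routine):

```c
/* Called from set_pte_at() when mapping tagged user memory. */
static void mte_sync_tags(pte_t pte)
{
	struct page *page = pte_page(pte);

	/* Zero the tags the first time the page becomes user-accessible. */
	if (!test_and_set_bit(PG_mte_tagged, &page->flags))
		mte_clear_page_tags(page_address(page));
}
```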
-
Vincenzo Frascino authored
The Memory Tagging Extension has two modes of notifying a tag check fault at EL0, configurable through the SCTLR_EL1.TCF0 field: 1. Synchronous raising of a Data Abort exception with DFSC 17. 2. Asynchronous setting of a cumulative bit in TFSRE0_EL1. Add the exception handler for the synchronous exception and handling of the asynchronous TFSRE0_EL1.TF0 bit setting via a new TIF flag in do_notify_resume(). On a tag check failure in user-space, whether synchronous or asynchronous, a SIGSEGV will be raised on the faulting thread. Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
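For the asynchronous case, the reporting on return to user space looks roughly like this (a sketch following the commit description; the TIF flag and si_code names are assumptions from the series):

```c
/* do_notify_resume(): deliver an accumulated asynchronous tag check fault */
if (thread_flags & _TIF_MTE_ASYNC_FAULT) {
	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
	/* SEGV_MTEAERR: asynchronous MTE fault, no faulting address known */
	send_sig_fault(SIGSEGV, SEGV_MTEAERR, (void __user *)NULL, current);
}
```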
-
- Sep 03, 2020
-
-
Vincenzo Frascino authored
Add the cpufeature and hwcap entries to detect the presence of MTE. Any secondary CPU not supporting the feature, if detected on the boot CPU, will be parked. Add the minimum SCTLR_EL1 and HCR_EL2 bits for enabling MTE. The Normal Tagged memory type is configured in MAIR_EL1 before the MMU is enabled in order to avoid disrupting other CPUs in the CnP domain. Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
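The detection follows the usual ID-register capability pattern; a sketch of the entry (field names are assumptions based on the series):

```c
{
	.desc			= "Memory Tagging Extension",
	.capability		= ARM64_MTE,
	/* strict boot-CPU feature: late secondaries without MTE are parked */
	.type			= ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
	.matches		= has_cpuid_feature,
	.sys_reg		= SYS_ID_AA64PFR1_EL1,
	.field_pos		= ID_AA64PFR1_MTE_SHIFT,
	.min_field_value	= ID_AA64PFR1_MTE,
	.sign			= FTR_UNSIGNED,
	.cpu_enable		= cpu_enable_mte,
},
```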
-
Vincenzo Frascino authored
Add Memory Tagging Extension system register definitions together with the relevant bitfields. Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Co-developed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org>
-
- Aug 28, 2020
-
-
James Morse authored
KVM has a one instruction window where it will allow an SError exception to be consumed by the hypervisor without treating it as a hypervisor bug. This is used to consume asynchronous external aborts that were caused by the guest. As we are about to add another location that survives unexpected exceptions, generalise this code to make it behave like the host's extable. KVM's version has to be mapped to EL2 to be accessible on nVHE systems. The SError vaxorcism code is a one instruction window, so has two entries in the extable. Because the KVM code is copied for VHE and nVHE, we end up with four entries, half of which correspond with code that isn't mapped. Signed-off-by:
James Morse <james.morse@arm.com> Reviewed-by:
Marc Zyngier <maz@kernel.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Aug 27, 2020
-
-
Gustavo A. R. Silva authored
Fallthrough annotations for consecutive default and case labels are not necessary. Reported-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Gustavo A. R. Silva <gustavoars@kernel.org>
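An illustrative (hypothetical) example of the redundant pattern being removed:

```c
switch (cmd) {
case CMD_RESET:
	do_reset();
	fallthrough;	/* still required: statements precede the next label */
case CMD_STOP:		/* adjacent labels: no annotation needed here */
default:
	do_stop();
	break;
}
```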
-
- Aug 26, 2020
-
-
Peter Zijlstra authored
Remove trace_cpu_idle() from the arch_cpu_idle() implementations and put it in the generic code, right before disabling RCU. Gets rid of more trace_*_rcuidle() users. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Steven Rostedt (VMware) <rostedt@goodmis.org> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by:
Marco Elver <elver@google.com> Link: https://lkml.kernel.org/r/20200821085348.428433395@infradead.org
-
- Aug 23, 2020
-
-
Gustavo A. R. Silva authored
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough[1]. Also remove fall-through markings where they are unnecessary. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by:
Gustavo A. R. Silva <gustavoars@kernel.org>
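The conversion is mechanical; an illustrative before/after on hypothetical code:

```c
/* before */
case OPT_VERBOSE:
	verbose = true;
	/* fall through */
case OPT_DEBUG:
	debug = true;
	break;

/* after: the pseudo-keyword lets the compiler verify the intent */
case OPT_VERBOSE:
	verbose = true;
	fallthrough;
case OPT_DEBUG:
	debug = true;
	break;
```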
-
- Aug 21, 2020
-
-
Stephen Boyd authored
Add the 32-bit vdso Makefile to the vdso_install rule so that 'make vdso_install' installs the 32-bit compat vdso when it is compiled. Fixes: a7f71a2c ("arm64: compat: Add vDSO") Signed-off-by:
Stephen Boyd <swboyd@chromium.org> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Acked-by:
Will Deacon <will@kernel.org> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20200818014950.42492-1-swboyd@chromium.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
As we can now switch from a system that isn't affected by 1418040 to a system that globally is affected, let's allow affected CPUs to come in at a later time. Signed-off-by:
Marc Zyngier <maz@kernel.org> Tested-by:
Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by:
Stephen Boyd <swboyd@chromium.org> Reviewed-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200731173824.107480-3-maz@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Instead of dealing with erratum 1418040 on each entry and exit, let's move the handling to __switch_to() instead, which has several advantages: - It can be applied when it matters (switching between 32 and 64 bit tasks). - It is written in C (yay!) - It can rely on static keys rather than alternatives Signed-off-by:
Marc Zyngier <maz@kernel.org> Tested-by:
Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org> Reviewed-by:
Stephen Boyd <swboyd@chromium.org> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200731173824.107480-2-maz@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
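A sketch of the resulting C handler called from __switch_to(), following the commit description (identifier names are assumptions based on the series):

```c
static void erratum_1418040_thread_switch(struct task_struct *prev,
					  struct task_struct *next)
{
	bool prev32, next32;
	u64 val;

	if (!IS_ENABLED(CONFIG_ARM64_ERRATUM_1418040) ||
	    !cpus_have_const_cap(ARM64_WORKAROUND_1418040))	/* static key */
		return;

	prev32 = is_compat_thread(task_thread_info(prev));
	next32 = is_compat_thread(task_thread_info(next));

	/* Only act when switching between 32-bit and 64-bit tasks. */
	if (prev32 == next32)
		return;

	val = read_sysreg(cntkctl_el1);
	if (!next32)
		val |= ARCH_TIMER_USR_VCT_ACCESS_EN;	/* 64-bit: allow CNTVCT */
	else
		val &= ~ARCH_TIMER_USR_VCT_ACCESS_EN;	/* 32-bit: trap it */
	write_sysreg(val, cntkctl_el1);
}
```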
-
- Aug 12, 2020
-
-
Christoph Hellwig authored
Add helpers to wrap the get_fs/set_fs magic for undoing any damage done by set_fs(KERNEL_DS). There is no real functional benefit, but this documents the intent of these calls better, and will allow stubbing the functions out easily for kernel builds that do not allow address space overrides in the future. [hch@lst.de: drop two incorrect hunks, fix a commit log typo] Link: http://lkml.kernel.org/r/20200714105505.935079-6-hch@lst.de Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Linus Torvalds <torvalds@linux-foundation.org> Acked-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Greentime Hu <green.hu@gmail.com> Acked-by:
Geert Uytterhoeven <geert@linux-m68k.org> Cc: Nick Hu <nickhu@andestech.com> Cc: Vincent Chen <deanbo422@gmail.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Link: http://lkml.kernel.org/r/20200710135706.537715-6-hch@lst.de Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
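The helpers themselves are thin wrappers; per the description they amount to:

```c
static inline mm_segment_t force_uaccess_begin(void)
{
	mm_segment_t fs = get_fs();

	set_fs(USER_DS);	/* undo a previous set_fs(KERNEL_DS) */
	return fs;
}

static inline void force_uaccess_end(mm_segment_t oldfs)
{
	set_fs(oldfs);
}
```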
-
- Aug 08, 2020
-
-
Kefeng Wang authored
The __cpu_logical_map undefined issue occurred when the new tegra194-cpufreq driver was built as a module. ERROR: modpost: "__cpu_logical_map" [drivers/cpufreq/tegra194-cpufreq.ko] undefined! The driver uses the cpu_logical_map() macro, which expands to __cpu_logical_map, and we can't access that symbol from a driver. Let's turn cpu_logical_map() into a C wrapper and export it to fix the build issue. Also create a function set_cpu_logical_map(cpu, hwid) for assigning a value to cpu_logical_map(cpu). Reported-by:
Hulk Robot <hulkci@huawei.com> Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
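A sketch of the resulting wrapper and setter (the export flavour is an assumption):

```c
/* arch/arm64/kernel/setup.c */
u64 cpu_logical_map(unsigned int cpu)
{
	return __cpu_logical_map[cpu];
}
EXPORT_SYMBOL(cpu_logical_map);

/* used wherever the old code assigned through the macro */
static inline void set_cpu_logical_map(unsigned int cpu, u64 hwid)
{
	__cpu_logical_map[cpu] = hwid;
}
```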
-
- Aug 07, 2020
-
-
Andrey Konovalov authored
This patch prepares Software Tag-Based KASAN for stack tagging support. With stack tagging enabled, KASAN tags stack variables in each function's prologue. In start_kernel() stack variables get tagged before KASAN is enabled via setup_arch()->kasan_init(). As a result, the tags for start_kernel()'s stack variables end up in the temporary shadow memory. Later, when KASAN gets enabled, switches to the normal shadow, and starts checking tags, this leads to false-positive reports, as proper tags are missing in the normal shadow. Disable KASAN instrumentation for start_kernel(). Also disable it for arm64's setup_arch() as a precaution (it doesn't have any stack variables right now). [andreyknvl@google.com: reorder attributes for start_kernel()] Link: http://lkml.kernel.org/r/26fb6165a17abcf61222eda5184c030fb6b133d1.1596544734.git.andreyknvl@google.com Signed-off-by:
Andrey Konovalov <andreyknvl@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64] Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Elena Petrova <lenaptr@google.com> Cc: Marco Elver <elver@google.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Walter Wu <walter-zh.wu@mediatek.com> Cc: Ard Biesheuvel <ardb@kernel.org> Link: http://lkml.kernel.org/r/55d432671a92e931ab8234b03dc36b14d4c21bfb.1596199677.git.andreyknvl@google.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
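The fix boils down to annotating the two functions so the compiler does not instrument their stack variables (signatures shown as a sketch):

```c
/* init/main.c: keep KASAN from tagging start_kernel()'s locals */
asmlinkage __visible void __init __no_sanitize_address start_kernel(void);

/* arch/arm64/kernel/setup.c: precautionary, no stack variables today */
void __init __no_sanitize_address setup_arch(char **cmdline_p);
```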
-
Mike Rapoport authored
Patch series "mm: cleanup usage of <asm/pgalloc.h>" Most architectures have very similar versions of pXd_alloc_one() and pXd_free_one() for intermediate levels of page table. These patches add generic versions of these functions in <asm-generic/pgalloc.h> and enable use of the generic functions where appropriate. In addition, functions declared and defined in <asm/pgalloc.h> headers are used mostly by core mm and early mm initialization in arch and there is no actual reason to have the <asm/pgalloc.h> included all over the place. The first patch in this series removes unneeded includes of <asm/pgalloc.h> In the end it didn't work out as neatly as I hoped and moving pXd_alloc_track() definitions to <asm-generic/pgalloc.h> would require unnecessary changes to arches that have custom page table allocations, so I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local to mm/. This patch (of 8): In most cases <asm/pgalloc.h> header is required only for allocations of page table memory. Most of the .c files that include that header do not use symbols declared in <asm/pgalloc.h> and do not require that header. As for the other header files that used to include <asm/pgalloc.h>, it is possible to move that include into the .c file that actually uses symbols from <asm/pgalloc.h> and drop the include from the header file. The process was somewhat automated using sed -i -E '/[<"]asm\/pgalloc\.h/d' \ $(grep -L -w -f /tmp/xx \ $(git grep -E -l '[<"]asm/pgalloc\.h')) where /tmp/xx contains all the symbols defined in arch/*/include/asm/pgalloc.h. [rppt@linux.ibm.com: fix powerpc warning] Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Pekka Enberg <penberg@kernel.org> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k] Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Joerg Roedel <joro@8bytes.org> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Satheesh Rajendran <sathnaga@linux.vnet.ibm.com> Cc: Stafford Horne <shorne@gmail.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Joerg Roedel <jroedel@suse.de> Cc: Matthew Wilcox <willy@infradead.org> Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Guenter Roeck authored
Commit 58552408 ("random: random.h should include archrandom.h, not the other way around") tries to fix a problem with recursive inclusion of linux/random.h and arch/archrandom.h for arm64. Unfortunately, this results in the following compile error if ARCH_RANDOM is disabled. arch/arm64/kernel/kaslr.c: In function 'kaslr_early_init': arch/arm64/kernel/kaslr.c:128:6: error: implicit declaration of function '__early_cpu_has_rndr'; did you mean '__early_pfn_to_nid'? [-Werror=implicit-function-declaration] if (__early_cpu_has_rndr()) { ^~~~~~~~~~~~~~~~~~~~ __early_pfn_to_nid arch/arm64/kernel/kaslr.c:131:7: error: implicit declaration of function '__arm64_rndr' [-Werror=implicit-function-declaration] if (__arm64_rndr(&raw)) ^~~~~~~~~~~~ The problem is that arch/archrandom.h is only included from linux/random.h if ARCH_RANDOM is enabled. If not, __arm64_rndr() and __early_cpu_has_rndr() are undeclared, causing the problem. Use arch_get_random_seed_long_early() instead of arm64 specific functions to solve the problem. Reported-by:
Qian Cai <cai@lca.pw> Fixes: 58552408 ("random: random.h should include archrandom.h, not the other way around") Cc: Qian Cai <cai@lca.pw> Cc: Mark Brown <broonie@kernel.org> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Reviewed-by:
Mark Brown <broonie@kernel.org> Tested-by:
Mark Brown <broonie@kernel.org> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
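The fix switches to the generic interface, which is declared regardless of ARCH_RANDOM; roughly:

```c
/* kaslr_early_init() */
unsigned long raw;

/*
 * arch_get_random_seed_long_early() is always declared and compiles
 * away cleanly when ARCH_RANDOM is disabled, unlike the arm64-private
 * __early_cpu_has_rndr()/__arm64_rndr() pair.
 */
if (arch_get_random_seed_long_early(&raw))
	seed ^= raw;
```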
-
- Aug 05, 2020
-
-
Linus Torvalds authored
This is hopefully the final piece of the crazy puzzle with random.h dependencies. And by "hopefully" I obviously mean "Linus is a hopeless optimist". Reported-and-tested-by:
Daniel Díaz <daniel.diaz@linaro.org> Acked-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Jul 31, 2020
-
-
Maninder Singh authored
IRQ_STACK_SIZE can be made different from THREAD_SIZE, and since IRQ_STACK_SIZE is used for IRQ stack allocation, the same define should be used when printing IRQ stack information. Signed-off-by:
Maninder Singh <maninder1.s@samsung.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1596196190-14141-1-git-send-email-maninder1.s@samsung.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Jul 28, 2020
-
-
David Brazdil authored
The HARDEN_EL2_VECTORS config maps vectors at a fixed location on cores which are susceptible to Spectre variant 3a (A57, A72) to prevent defeating hyp layout randomization by leaking the value of VBAR_EL2. Since this feature is only applicable when EL2 layout randomization is enabled, unify both behind the same RANDOMIZE_BASE Kconfig. The majority of the code remains conditional on a capability selected for the affected cores. Signed-off-by:
David Brazdil <dbrazdil@google.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200721094445.82184-3-dbrazdil@google.com
-
- Jul 27, 2020
-
-
Al Viro authored
not used anymore Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- Jul 24, 2020
-
-
Andrei Vagin authored
Forbid splitting the VVAR VMA, resulting in a stricter ABI and reducing the number of corner cases to consider while working further on VDSO time namespace support. As the offset from the timens page to the VVAR page is computed at compile time, the pages in VVAR should stay together and not be partially mremap()'ed. Signed-off-by:
Andrei Vagin <avagin@gmail.com> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-6-avagin@gmail.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
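A sketch of the enforcement via the special mapping's mremap hook (VVAR_NR_PAGES is assumed from the series):

```c
static int vvar_mremap(const struct vm_special_mapping *sm,
		       struct vm_area_struct *new_vma)
{
	unsigned long new_size = new_vma->vm_end - new_vma->vm_start;

	/* The timens offset math relies on the full VVAR area moving intact. */
	if (new_size != VVAR_NR_PAGES * PAGE_SIZE)
		return -EINVAL;

	return 0;
}
```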
-
Andrei Vagin authored
If a task belongs to a time namespace then the VVAR page which contains the system wide VDSO data is replaced with a namespace specific page which has the same layout as the VVAR page. Signed-off-by:
Andrei Vagin <avagin@gmail.com> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-5-avagin@gmail.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Andrei Vagin authored
Allocate the time namespace page among the VVAR pages. Provide the __arch_get_timens_vdso_data() helper for VDSO code to get the code-relative position of VVARs on that special page. If a task belongs to a time namespace then the VVAR page which contains the system wide VDSO data is replaced with a namespace specific page which has the same layout as the VVAR page. That page has vdso_data->seq set to 1 to enforce the slow path and vdso_data->clock_mode set to VCLOCK_TIMENS to enforce the time namespace handling path. The extra check in the case that vdso_data->seq is odd, e.g. a concurrent update of the VDSO data is in progress, is not really affecting regular tasks which are not part of a time namespace, as the task is spin waiting for the update to finish and vdso_data->seq to become even again. If a time namespace task hits that code path, it invokes the corresponding time getter function which retrieves the real VVAR page, reads host time and then adds the offset for the requested clock which is stored in the special VVAR page. The time-namespace page isn't allocated on !CONFIG_TIME_NAMESPACE, but the VMA is the same size, which simplifies criu/vdso migration between different kernel configs. Signed-off-by:
Andrei Vagin <avagin@gmail.com> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Dmitry Safonov <dima@arista.com> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/20200624083321.144975-4-avagin@gmail.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
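Inside the generic vDSO getters the redirect is just a couple of lines (a sketch; the clock_mode constant is named as in the commit message):

```c
/* lib/vdso: pick the namespace page when the slow path marker is seen */
if (IS_ENABLED(CONFIG_TIME_NS) && vd->clock_mode == VCLOCK_TIMENS)
	vd = __arch_get_timens_vdso_data();	/* namespace-specific VVAR page */
```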
-
Andrei Vagin authored
The order of vvar pages depends on whether a task belongs to the root time namespace or not. In the root time namespace, a task doesn't have a per-namespace page. In a non-root namespace, the VVAR page which contains the system-wide VDSO data is replaced with a namespace specific page that contains clock offsets. Whenever a task changes its namespace, the VVAR page tables are cleared and then they will be re-faulted with a corresponding layout. A task can switch its time namespace only if its ->mm isn't shared with another task. Signed-off-by:
Andrei Vagin <avagin@gmail.com> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Dmitry Safonov <dima@arista.com> Reviewed-by:
Christian Brauner <christian.brauner@ubuntu.com> Link: https://lore.kernel.org/r/20200624083321.144975-3-avagin@gmail.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Andrei Vagin authored
Currently the vdso has no awareness of time namespaces, which may apply distinct offsets to processes in different namespaces. To handle this within the vdso, we'll need to expose a per-namespace data page. As a preparatory step, this patch separates the vdso data page from the code pages, and has it faulted in via its own fault callback. Subsequent patches will extend this to support distinct pages per time namespace. The vvar vma has to be installed with the VM_PFNMAP flag to handle faults via its vma fault callback. Signed-off-by:
Andrei Vagin <avagin@gmail.com> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Reviewed-by:
Dmitry Safonov <dima@arista.com> Link: https://lore.kernel.org/r/20200624083321.144975-2-avagin@gmail.com Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
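A sketch of the separated data page as a special mapping with its own fault callback (helper names are assumptions):

```c
static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
			     struct vm_area_struct *vma, struct vm_fault *vmf)
{
	/* Map the vdso data page; VM_PFNMAP means no struct page is used. */
	return vmf_insert_pfn(vma, vmf->address, sym_to_pfn(vdso_data));
}

static const struct vm_special_mapping vvar_map = {
	.name  = "[vvar]",
	.fault = vvar_fault,
};

/* installed via _install_special_mapping(mm, addr, PAGE_SIZE,
 *			VM_READ | VM_MAYREAD | VM_PFNMAP, &vvar_map); */
```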
-
Catalin Marinas authored
While MTE is not supported in the upstream kernel yet, add a comment that HWCAP2_MTE (1 << 18) is reserved. Glibc makes use of it for resolving (via ifunc) the MTE-safe string routines. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
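Userspace (e.g. glibc's ifunc resolvers) can then probe the bit like this, using the value reserved here:

```c
#include <sys/auxv.h>

#ifndef HWCAP2_MTE
#define HWCAP2_MTE	(1 << 18)	/* reserved by this commit */
#endif

static int have_mte(void)
{
	return !!(getauxval(AT_HWCAP2) & HWCAP2_MTE);
}
```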
-
- Jul 23, 2020
-
-
Ard Biesheuvel authored
Factor the 12 copies of the SW PAN entry and exit code into callable subroutines, and use alternatives patching to either emit a 'bl' instruction to call them, or a NOP if h/w PAN is found to be available at runtime. Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20200721083315.4816-1-ardb@kernel.org Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Nathan Chancellor authored
Newer versions of clang only look for $(COMPAT_GCC_TOOLCHAIN_DIR)as [1], rather than $(COMPAT_GCC_TOOLCHAIN_DIR)$(CROSS_COMPILE_COMPAT)as, resulting in the following build error: $ make -skj"$(nproc)" ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \ CROSS_COMPILE_COMPAT=arm-linux-gnueabi- LLVM=1 O=out/aarch64 distclean \ defconfig arch/arm64/kernel/vdso32/ ... /home/nathan/cbl/toolchains/llvm-binutils/bin/as: unrecognized option '-EL' clang-12: error: assembler command failed with exit code 1 (use -v to see invocation) make[3]: *** [arch/arm64/kernel/vdso32/Makefile:181: arch/arm64/kernel/vdso32/note.o] Error 1 ... Adding the value of CROSS_COMPILE_COMPAT (adding notdir to account for a full path for CROSS_COMPILE_COMPAT) fixes this issue, which matches the solution done for the main Makefile [2]. [1]: https://github.com/llvm/llvm-project/commit/3452a0d8c17f7166f479706b293caf6ac76ffd90 [2]: https://lore.kernel.org/lkml/20200721173125.1273884-1-maskray@google.com/ Signed-off-by:
Nathan Chancellor <natechancellor@gmail.com> Cc: stable@vger.kernel.org Link: https://github.com/ClangBuiltLinux/linux/issues/1099 Link: https://lore.kernel.org/r/20200723041509.400450-1-natechancellor@gmail.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jul 21, 2020
-
-
Shaokun Zhang authored
Some new PMU events can be detected via PMCEID1_EL0 but cannot be listed. Let's expose these through sysfs. Signed-off-by:
Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Link: https://lore.kernel.org/r/1595328573-12751-2-git-send-email-zhangshaokun@hisilicon.com Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
Although vmlinux.lds.S smells like an assembly file and is compiled with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to create our linker script. This means that any assembly macros defined by headers that it includes will result in a helpful link error: | aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error In preparation for an arm64-private asm/rwonce.h implementation, which will end up pulling assembly macros into linux/compiler.h, reduce the number of headers we include directly and transitively in vmlinux.lds.S Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jul 20, 2020
-
-
Peter Zijlstra authored
This completes the ARM64 cap_user_time support. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-7-leo.yan@linaro.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Peter Zijlstra authored
When sched_clock is running on anything other than arch_timer, don't advertise cap_user_time*. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-5-leo.yan@linaro.org Requested-by:
Will Deacon <will@kernel.org> Signed-off-by:
Will Deacon <will@kernel.org>
-
Peter Zijlstra authored
As reported by Leo, the existing implementation is broken when the clock and counter don't intersect at 0. Use the sched_clock's struct clock_read_data information to correctly implement cap_user_time and cap_user_time_zero. Note that the ARM64 counter is architecturally only guaranteed to be 56 bits wide (implementations are allowed to be wider) and the existing perf ABI cannot deal with wrap-around. This implementation should also be faster than the old one, seeing how we don't need to recompute mult and shift all the time. [leoyan: Use mul_u64_u32_shr() to convert cyc to ns to avoid overflow] Reported-by:
Leo Yan <leo.yan@linaro.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Leo Yan <leo.yan@linaro.org> Link: https://lore.kernel.org/r/20200716051130.4359-4-leo.yan@linaro.org Signed-off-by:
Will Deacon <will@kernel.org>
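For reference, the userspace side of cap_user_time follows the conversion documented in the perf_event_mmap_page comments; given a raw counter value cyc and the mmap'ed first page pc:

```c
/* Convert a counter value to nanoseconds using the self-describing fields. */
u64 quot, rem, delta;

quot  = cyc >> pc->time_shift;
rem   = cyc & (((u64)1 << pc->time_shift) - 1);
delta = pc->time_offset + quot * pc->time_mult +
	((rem * pc->time_mult) >> pc->time_shift);
```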
-
Shaokun Zhang authored
When a PMU event ID is equal to or greater than 0x4000, it is reduced by 0x4000 and so is not the raw number shown in sysfs. Let's correct it and expose the raw event ID. Before this patch: cat /sys/bus/event_source/devices/armv8_pmuv3_0/events/sample_feed event=0x001 After this patch: cat /sys/bus/event_source/devices/armv8_pmuv3_0/events/sample_feed event=0x4001 Signed-off-by:
Shaokun Zhang <zhangshaokun@hisilicon.com> Cc: Will Deacon <will@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/1592487344-30555-3-git-send-email-zhangshaokun@hisilicon.com [will: fixed formatting of 'if' condition] Signed-off-by:
Will Deacon <will@kernel.org>
-
- Jul 16, 2020
-
-
Will Deacon authored
Rather than open-code test_tsk_thread_flag() at each callsite, simply replace the couple of offenders with calls to test_tsk_thread_flag() directly. Signed-off-by:
Will Deacon <will@kernel.org>
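The substitution itself is one-for-one (illustrative; the flag name is an assumption):

```c
/* before: open-coded */
if (test_ti_thread_flag(task_thread_info(task), TIF_SINGLESTEP))
	user_rewind_single_step(task);

/* after */
if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
	user_rewind_single_step(task);
```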
-
Will Deacon authored
Setting a system call number of -1 is special, as it indicates that the current system call should be skipped. Use NO_SYSCALL instead of -1 when checking for this scenario, which is different from the -1 returned due to a seccomp failure. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Keno Fischer <keno@juliacomputing.com> Cc: Luis Machado <luis.machado@linaro.org> Signed-off-by:
Will Deacon <will@kernel.org>
-
Will Deacon authored
If a task executes syscall(-1), we intercept this early and force x0 to be -ENOSYS so that we don't need to distinguish this scenario from one where the scno is -1 because a tracer wants to skip the system call using ptrace. With the return value set, the return path is the same as the skip case. Although there is a one-line comment noting this in el0_svc_common(), it misses out most of the detail. Expand the comment to describe a bit more about what is going on. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Keno Fischer <keno@juliacomputing.com> Cc: Luis Machado <luis.machado@linaro.org> Signed-off-by:
Will Deacon <will@kernel.org>
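A sketch of the interception the expanded comment documents:

```c
/* el0_svc_common(): a scno of -1 (NO_SYSCALL) skips dispatch entirely */
if (scno == NO_SYSCALL)
	syscall_set_return_value(current, regs, -ENOSYS, 0);
```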
-