- Apr 23, 2021
-
-
Stefano Stabellini authored
Newer Xen versions expose two Xen feature flags to tell us whether the domain is directly mapped or not. Only when a domain is directly mapped does it make sense to enable swiotlb-xen on ARM. Introduce a function on ARM to check the new Xen feature flags and also to deal with the legacy case. Call the function xen_swiotlb_detect. Signed-off-by:
Stefano Stabellini <stefano.stabellini@xilinx.com> Reviewed-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Link: https://lore.kernel.org/r/20210319200140.12512-1-sstabellini@kernel.org Signed-off-by:
Juergen Gross <jgross@suse.com>
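A rough sketch of what such a detection helper can look like; the feature-flag names and the dom0 fallback are assumptions based on the description above, not a verbatim copy of the upstream header:

```
/* Sketch only: flag names (XENFEAT_direct_mapped / XENFEAT_not_direct_mapped)
 * and the legacy dom0 fallback are assumptions from the commit description. */
static inline int xen_swiotlb_detect(void)
{
	if (!xen_domain())
		return 0;
	if (xen_feature(XENFEAT_direct_mapped))
		return 1;
	/* Legacy Xen: neither flag exists, so assume only dom0 is 1:1 mapped. */
	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
		return 1;
	return 0;
}
```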
-
- Mar 22, 2021
-
-
Ingo Molnar authored
Fix ~16 single-word typos in locking code comments. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-kernel@vger.kernel.org Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- Mar 11, 2021
-
-
Juergen Gross authored
The time pvops functions are the only ones left which might be used in 32-bit mode and which return a 64-bit value. Switch them to use the static_call() mechanism instead of pvops, as this allows quite some simplification of the pvops implementation. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210311142319.4723-5-jgross@suse.com
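The static_call() pattern the commit refers to looks roughly like this; a hedged sketch where the key name and stub function are illustrative rather than the exact ones in the patch:

```
#include <linux/types.h>
#include <linux/static_call.h>

/* Illustrative default implementation. */
static u64 native_steal_clock_stub(int cpu)
{
	return 0;
}

DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock_stub);

static inline u64 paravirt_steal_clock(int cpu)
{
	/* compiles to a patched direct call, no indirect pvops dispatch */
	return static_call(pv_steal_clock)(cpu);
}

/* A hypervisor guest re-targets the call once at boot: */
void hv_setup_steal_clock(u64 (*fn)(int cpu))
{
	static_call_update(pv_steal_clock, fn);
}
```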
-
- Feb 05, 2021
-
-
Russell King authored
Giancarlo Ferrari reports the following oops while trying to use kexec: Unable to handle kernel paging request at virtual address 80112f38 pgd = fd7ef03e [80112f38] *pgd=0001141e(bad) Internal error: Oops: 80d [#1] PREEMPT SMP ARM ... This is caused by machine_kexec() trying to set the kernel text to be read/write, so it can poke values into the relocation code before copying it, and an interrupt occurring which changes the page tables. The subsequent writes then hit read-only sections that trigger a data abort, resulting in the above oops. Fix this by copying the relocation code, and then writing the variables into the destination, thereby avoiding the need to make the kernel text read/write. Reported-by:
Giancarlo Ferrari <giancarlo.ferrari89@gmail.com> Tested-by:
Giancarlo Ferrari <giancarlo.ferrari89@gmail.com> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
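The reordering amounts to something like the following sketch; the *_offset names are hypothetical placeholders for the offsets of the parameters inside the relocation stub:

```
/* Sketch of the copy-then-patch order; offset names are hypothetical. */
static void load_relocation_stub(void *reloc_dst,
				 unsigned long start_addr,
				 unsigned long ind_page,
				 unsigned long boot_atags)
{
	/* 1. Copy the existing relocate_new_kernel stub to its destination... */
	memcpy(reloc_dst, relocate_new_kernel, relocate_new_kernel_size);

	/* 2. ...then write the parameters into the *copy*, so the kernel
	 *    text itself never has to be made read/write. */
	*(unsigned long *)((char *)reloc_dst + kexec_start_address_offset) = start_addr;
	*(unsigned long *)((char *)reloc_dst + kexec_indirection_page_offset) = ind_page;
	*(unsigned long *)((char *)reloc_dst + kexec_boot_atags_offset) = boot_atags;

	flush_icache_range((unsigned long)reloc_dst,
			   (unsigned long)reloc_dst + relocate_new_kernel_size);
}
```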
-
- Feb 01, 2021
-
-
Uwe Kleine-König authored
The driver core ignores the return value of struct bus_type::remove because there is little that can be done. To simplify the quest to make this function return void, let struct locomo_driver::remove return void, too. All users already unconditionally return 0; this commit makes it obvious that returning an error code is a bad idea and ensures future users behave accordingly. Link: https://lore.kernel.org/r/20201126110140.2021758-1-u.kleine-koenig@pengutronix.de Acked-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Acked-by:
Lee Jones <lee.jones@linaro.org> Signed-off-by:
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
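In practice the change boils down to the remove callback's return type, roughly as below; bus-private details are omitted and the example driver is hypothetical:

```
/* Sketch: only the remove() prototype changes. */
struct locomo_driver {
	struct device_driver	drv;
	unsigned int		devid;
	int  (*probe)(struct locomo_dev *dev);
	void (*remove)(struct locomo_dev *dev);	/* was: int (*remove)(...) */
};

/* A driver that used to end with "return 0;" now simply returns. */
static void example_locomo_remove(struct locomo_dev *dev)	/* hypothetical */
{
	/* free resources, unregister sub-devices, ... */
}
```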
-
Uwe Kleine-König authored
The driver core ignores the return value of struct device_driver::remove because there is little that can be done. To simplify the quest to make this function return void, let struct sa1111_driver::remove return void, too. All users already unconditionally return 0; this commit makes it obvious that returning an error code is a bad idea and ensures future users behave accordingly. Link: https://lore.kernel.org/r/20201126114724.2028511-1-u.kleine-koenig@pengutronix.de Acked-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
Alain VOLMAT authored
Update the sti platform LL_UART support to rely on CONFIG_DEBUG_UART_PHYS and CONFIG_DEBUG_UART_VIRT from Kconfig. Signed-off-by:
Alain Volmat <avolmat@me.com> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
Ard Biesheuvel authored
Take the 4 instruction byte swapping sequence from the decompressor's head.S, and turn it into a rev_l GAS macro for general use. While at it, make it use the 'rev' instruction when compiling for v6 or later. Reviewed-by:
Geert Uytterhoeven <geert+renesas@glider.be> Tested-by:
Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by:
Nicolas Pitre <nico@fluxnic.net> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
- Jan 29, 2021
-
-
Dmitry Osipenko authored
The tegra_uart_config of the DEBUG_LL code is now placed right at the start of the .text section after the commit which enabled debug output in the decompressor. Tegra devices no longer boot if DEBUG_LL is enabled, since the tegra_uart_config data is executed as code. Fix the misplaced tegra_uart_config storage by embedding it into the code. Cc: stable@vger.kernel.org Fixes: 2596a72d ("ARM: 9009/1: uncompress: Enable debug in head.S") Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Dmitry Osipenko <digetx@gmail.com> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
- Jan 21, 2021
-
-
Florian Fainelli authored
72116 has the same memory map as 7255 and the same physical address for the UART; alias the definition accordingly. Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Florian Fainelli <f.fainelli@gmail.com>
-
Andre Przywara authored
The ARM DEN0098 document describes an SMCCC based firmware service to deliver hardware generated random numbers. Its existence is advertised according to the SMCCC v1.1 specification. Add a (dummy) call to probe functions implemented in each architecture (ARM and arm64), to determine the existence of this interface. For now this returns false, but it will be overwritten by each architecture's support patch. Signed-off-by:
Andre Przywara <andre.przywara@arm.com> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Reviewed-by:
Sudeep Holla <sudeep.holla@arm.com> Signed-off-by:
Will Deacon <will@kernel.org>
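The dummy probe described above is essentially a stub that always reports the service as absent; a minimal sketch, with the function name assumed from the description:

```
/* Sketch: per-arch stub, replaced by a real SMCCC TRNG version query once
 * the architecture's support patch lands. */
static inline bool smccc_probe_trng(void)
{
	return false;	/* firmware TRNG service not (yet) detected */
}
```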
-
- Jan 20, 2021
-
-
Arnd Bergmann authored
The SiRF Prima2 and Atlas platform code was contributed by Cambridge Silicon Radio (CSR) after it acquired the original SiRF company, and maintained by Barry Song. CSR was subsequently acquired by Qualcomm, who no longer have an interest in maintaining the SoC platform but instead have released more recent SoCs for the same market in the Snapdragon family. As Barry is no longer working for the company, nobody else there wants to maintain it, and there are no third-party users, the best way forward seems to be to completely remove it. Thanks to Barry for maintaining the platform for the past ten years. Cc: Barry Song <baohua@kernel.org> Link: https://lore.kernel.org/lkml/c969392572604b98bcb3be44048c3165@hisilicon.com/ Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
- Jan 15, 2021
-
-
Uwe Kleine-König authored
I didn't touch this code since it served as a platform to introduce ARMv7-M support to Linux. The only known machine that runs Linux has only 4 MiB of RAM (that originally only exists to hold the display's framebuffer). There are no known users and no further use foreseeable, so drop the code. Signed-off-by:
Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Link: https://lore.kernel.org/r/20210115155130.185010-2-u.kleine-koenig@pengutronix.de Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
- Dec 29, 2020
-
-
Randy Dunlap authored
Make <asm-generic/local64.h> mandatory in include/asm-generic/Kbuild and remove all arch/*/include/asm/local64.h arch-specific files since they only #include <asm-generic/local64.h>. This fixes build errors on arch/c6x/ and arch/nios2/ for block/blk-iocost.c. Build-tested on 21 of 25 arch-es. (tools problems on the others) Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h> and change all #includes to use <linux/local64.h> instead. Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Suggested-by:
Christoph Hellwig <hch@infradead.org> Reviewed-by:
Masahiro Yamada <masahiroy@kernel.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Ley Foon Tan <ley.foon.tan@intel.com> Cc: Mark Salter <msalter@redhat.com> Cc: Aurelien Jacquiot <jacquiot.aurelien@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Dec 21, 2020
-
-
Anshuman Khandual authored
Macros used as functions can be problematic from the compiler perspective. There was a build failure report caused primarily by an argument variable not being referenced. Hence convert the PUD level pgtable helper macros into functions in order to avoid such problems in the future. In the process, this fixes the argument order in set_pud(), which probably remained hidden because it was a macro. https://lore.kernel.org/linux-mm/202011020749.5XQ3Hfzc-lkp@intel.com/ https://lore.kernel.org/linux-mm/5fa49698.Vu2O3r+dU20UoEJ+%25lkp@intel.com/ Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-kernel@vger.kernel.org Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
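The class of problem and the shape of the fix, as a hedged sketch (function bodies are illustrative only, since the PUD level is folded on ARM):

```
/* Before: a macro may never reference its argument, so a caller's
 * otherwise-unused variable can trigger unused-variable build failures.
 *
 *	#define pud_none(pud)		(0)
 *
 * After: a static inline type-checks and evaluates every argument. */
static inline int pud_none(pud_t pud)
{
	return 0;
}

/* Converting set_pud() also exposed (and fixed) a swapped argument order
 * that the macro expansion had been hiding; illustrative signature only: */
static inline void set_pud(pud_t *pudp, pud_t pud)
{
	*pudp = pud;
}
```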
-
- Dec 14, 2020
-
-
Ard Biesheuvel authored
Ensure that EFI_PHYS_ALIGN is an unsigned type, to prevent spurious warnings from the type checks in the definition of the max() macro. Link: https://lore.kernel.org/linux-efi/20201213151306.73558-1-ardb@kernel.org Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
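The warning comes from max()'s type check; a small illustration of the issue, with the constant values purely illustrative:

```
#include <linux/minmax.h>
#include <linux/const.h>

/* A plain integer constant is signed ... */
#define PHYS_ALIGN_SIGNED	0x200000
/* ... while an explicitly unsigned one matches the other operand's type. */
#define PHYS_ALIGN_UNSIGNED	UL(0x200000)

/* max(PHYS_ALIGN_SIGNED, some_unsigned_long)   -> "distinct types" warning
 * max(PHYS_ALIGN_UNSIGNED, some_unsigned_long) -> clean                   */
```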
-
- Dec 11, 2020
-
-
Palmer Dabbelt authored
This is exactly the same as the arm64 version, which I recently copied into lib/ for use by the RISC-V port. Reviewed-by:
Luis Chamberlain <mcgrof@kernel.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
-
- Dec 09, 2020
-
-
Ard Biesheuvel authored
Now that ARM has started following the example of arm64 and RISC-V, and no longer imposes any restrictions on the placement of the FDT in memory at boot, we no longer need per-arch implementations of efi_get_max_fdt_addr() to factor out the differences. So get rid of it. Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Reviewed-by:
Atish Patra <atish.patra@wdc.com> Link: https://lore.kernel.org/r/20201029134901.9773-1-ardb@kernel.org
-
Ard Biesheuvel authored
Now that we reduced the minimum relative alignment between PHYS_OFFSET and PAGE_OFFSET to 2 MiB, we can take this into account when allocating memory for the decompressed kernel when booting via EFI. This minimizes the amount of unusable memory we may end up with due to the base of DRAM being occupied by firmware. Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
Scatter-gather lists passed to UpdateCapsule() should be cleaned from the D-cache to ensure that they are visible to the CPU after a warm reboot before the MMU is enabled. On ARM and arm64 systems, this implies a D-cache clean by virtual address to the point of coherency. However, due to the fact that the firmware itself is not able to map physical addresses back to virtual addresses when running under the OS, this must be done by the caller. Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
- Dec 08, 2020
-
-
Nicolas Pitre authored
The ARM version of __div64_32() encapsulates a call to __do_div64 with non-standard argument passing. In particular, __n is a 64-bit input argument assigned to r0-r1 and __rem is an output argument sharing half of that r0-r1 register pair. With __n being an input argument, the compiler is in its right to presume that r0-r1 would still hold the value of __n past the inline assembly statement. Normally, the compiler would have assigned non overlapping registers to __n and __rem if the value for __n is needed again. However, here we enforce our own register assignment and gcc fails to notice the conflict. In practice this doesn't cause any problem as __n is considered dead after the asm statement and *n is overwritten. However this is not always guaranteed and clang rightfully complains. Let's fix it properly by making __n into an input-output variable. This makes it clear that those registers representing __n have been modified. Then we can extract __rem as the high part of __n with plain C code. This asm constraint "abuse" was likely relied upon back when gcc didn't handle 64-bit values optimally. Turns out that gcc is now able to optimize things and produces the same code with this patch applied. Reported-by:
Antony Yu <swpenim@gmail.com> Signed-off-by:
Nicolas Pitre <nico@fluxnic.net> Reviewed-by:
Ard Biesheuvel <ardb@kernel.org> Tested-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
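A hedged approximation of the resulting helper; the register choices and the __do_div64 calling convention follow the description above, and the exact names should be treated as illustrative:

```
static inline u32 __div64_32(u64 *n, u32 base)
{
	register u32 __base asm("r4") = base;
	register u64 __n    asm("r0") = *n;	/* input-output operand */
	register u64 __res  asm("r2");
	u32 __rem;

	asm(	"bl	__do_div64"
		: "+r" (__n), "=r" (__res)	/* "+r": r0-r1 are clobbered */
		: "r" (__base)
		: "ip", "lr", "cc");

	__rem = __n >> 32;	/* remainder: high word, extracted in plain C */
	*n = __res;
	return __rem;
}
```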
-
- Dec 04, 2020
-
-
Arnd Bergmann authored
The reference to cache_is_vivt() was moved into a header file, which now causes a build failure in rare randconfig builds: arch/arm/include/asm/highmem.h:52:43: error: implicit declaration of function 'cache_is_vivt' [-Werror,-Wimplicit-function-declaration] Add an explicit include to make it build reliably. Fixes: 2a15ba82 ("ARM: highmem: Switch to generic kmap atomic") Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ira Weiny <ira.weiny@intel.com> Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/r/20201204165930.3877571-1-arnd@kernel.org
-
- Nov 23, 2020
-
-
Peter Collingbourne authored
Previously we were not clearing non-uapi flag bits in sigaction.sa_flags when storing the userspace-provided sa_flags or when returning them via oldact. Start doing so. This allows userspace to detect missing support for flag bits and allows the kernel to use non-uapi bits internally, as we are already doing in arch/x86 for two flag bits. Now that this change is in place, we no longer need the code in arch/x86 that was hiding these bits from userspace, so remove it. This is technically a userspace-visible behavior change for sigaction, as the unknown bits returned via oldact.sa_flags are no longer set. However, we are free to define the behavior for unknown bits exactly because their behavior is currently undefined, so for now we can define the meaning of each of them to be "clear the bit in oldact.sa_flags unless the bit becomes known in the future". Furthermore, this behavior is consistent with OpenBSD [1], illumos [2] and XNU [3] (FreeBSD [4] and NetBSD [5] fail the syscall if unknown bits are set). So there is some precedent for this behavior in other kernels, and in particular in XNU, which is probably the most popular kernel among those that I looked at, which means that this change is less likely to be a compatibility issue. Link: [1] https://github.com/openbsd/src/blob/f634a6a4b5bf832e9c1de77f7894ae2625e74484/sys/kern/kern_sig.c#L278 Link: [2] https://github.com/illumos/illumos-gate/blob/76f19f5fdc974fe5be5c82a556e43a4df93f1de1/usr/src/uts/common/syscall/sigaction.c#L86 Link: [3] https://github.com/apple/darwin-xnu/blob/a449c6a3b8014d9406c2ddbdc81795da24aa7443/bsd/kern/kern_sig.c#L480 Link: [4] https://github.com/freebsd/freebsd/blob/eded70c37057857c6e23fae51f86b8f8f43cd2d0/sys/kern/kern_sig.c#L699 Link: [5] https://github.com/NetBSD/src/blob/3365779becdcedfca206091a645a0e8e22b2946e/sys/kern/sys_sig.c#L473 Signed-off-by:
Peter Collingbourne <pcc@google.com> Reviewed-by:
Dave Martin <Dave.Martin@arm.com> Acked-by:
"Eric W. Biederman" <ebiederm@xmission.com> Link: https://linux-review.googlesource.com/id/I35aab6f5be932505d90f3b3450c083b4db1eca86 Link: https://lkml.kernel.org/r/878dbcb5f47bc9b11881c81f745c0bef5c23f97f.1605235762.git.pcc@google.com Signed-off-by:
Eric W. Biederman <ebiederm@xmission.com>
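Mechanically this is a mask applied when sa_flags is stored; a hedged sketch where the mask name and its exact contents are illustrative (per-architecture bits such as SA_RESTORER vary):

```
/* Illustrative mask of the bits the kernel actually defines for userspace. */
#define UAPI_SA_FLAGS \
	(SA_NOCLDSTOP | SA_NOCLDWAIT | SA_SIGINFO | SA_ONSTACK | \
	 SA_RESTART | SA_NODEFER | SA_RESETHAND)

static void copy_user_sigaction(struct k_sigaction *k,
				const struct sigaction *act)
{
	k->sa = *act;
	/* Unknown bits are dropped here, so they can neither leak back out
	 * through oldact nor collide with kernel-internal flag bits. */
	k->sa.sa_flags &= UAPI_SA_FLAGS;
}
```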
-
Peter Collingbourne authored
Most architectures with the exception of alpha, mips, parisc and sparc use the same values for these flags. Move their definitions into asm-generic/signal-defs.h and allow the architectures with non-standard values to override them. Also, document the non-standard flag values in order to make it easier to add new generic flags in the future. A consequence of this change is that on powerpc and x86, the constants' values aside from SA_RESETHAND change signedness from unsigned to signed. This is not expected to impact realistic use of these constants. In particular the typical use of the constants where they are or'ed together and assigned to sa_flags (or another int variable) would not be affected. Signed-off-by:
Peter Collingbourne <pcc@google.com> Acked-by:
Geert Uytterhoeven <geert@linux-m68k.org> Acked-by:
"Eric W. Biederman" <ebiederm@xmission.com> Reviewed-by:
Dave Martin <Dave.Martin@arm.com> Link: https://linux-review.googlesource.com/id/Ia3849f18b8009bf41faca374e701cdca36974528 Link: https://lkml.kernel.org/r/b6d0d1ec34f9ee93e1105f14f288fba5f89d1f24.1605235762.git.pcc@google.com Signed-off-by:
Eric W. Biederman <ebiederm@xmission.com>
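The override mechanism is the usual #ifndef guard pattern in the generic header; a sketch with the common values most architectures share:

```
/* asm-generic/signal-defs.h (sketch): an architecture with non-standard
 * values simply defines the macro before this header is included. */
#ifndef SA_NOCLDSTOP
#define SA_NOCLDSTOP	0x00000001
#endif
#ifndef SA_ONSTACK
#define SA_ONSTACK	0x08000000
#endif
#ifndef SA_RESTART
#define SA_RESTART	0x10000000
#endif
#ifndef SA_NODEFER
#define SA_NODEFER	0x40000000
#endif
#ifndef SA_RESETHAND
#define SA_RESETHAND	0x80000000
#endif
```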
-
Thomas Gleixner authored
irq_cpustat_t is exactly the same as the asm-generic one. Define ack_bad_irq so the generic header does not emit the generic version of it. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Frederic Weisbecker <frederic@kernel.org> Reviewed-by:
Valentin Schneider <valentin.schneider@arm.com> Link: https://lore.kernel.org/r/20201113141733.276505871@linutronix.de
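The "define it so the generic header does not" convention looks like this; a sketch of the arch header with the details elided:

```
/* arch/arm/include/asm/hardirq.h (sketch) */
#define ack_bad_irq ack_bad_irq		/* tell asm-generic we provide our own */
extern void ack_bad_irq(unsigned int irq);

#include <asm-generic/hardirq.h>	/* supplies irq_cpustat_t and friends */
```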
-
- Nov 20, 2020
-
-
Kees Cook authored
To enable seccomp constant action bitmaps, we need to have a static mapping to the audit architecture and system call table size. Add these for arm. Signed-off-by:
Kees Cook <keescook@chromium.org>
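The static mapping amounts to a handful of constants in the arch's seccomp header; a hedged sketch whose macro names follow the generic seccomp bitmap series and should be treated as assumptions:

```
/* arch/arm/include/asm/seccomp.h (sketch) */
#include <asm-generic/seccomp.h>

#define SECCOMP_ARCH_NATIVE		AUDIT_ARCH_ARM
#define SECCOMP_ARCH_NATIVE_NR		NR_syscalls
#define SECCOMP_ARCH_NATIVE_NAME	"arm"
```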
-
- Nov 16, 2020
-
-
Arnd Bergmann authored
Stefan Agner reported a bug when using zsram on 32-bit Arm machines with RAM above the 4GB address boundary: Unable to handle kernel NULL pointer dereference at virtual address 00000000 pgd = a27bd01c [00000000] *pgd=236a0003, *pmd=1ffa64003 Internal error: Oops: 207 [#1] SMP ARM Modules linked in: mdio_bcm_unimac(+) brcmfmac cfg80211 brcmutil raspberrypi_hwmon hci_uart crc32_arm_ce bcm2711_thermal phy_generic genet CPU: 0 PID: 123 Comm: mkfs.ext4 Not tainted 5.9.6 #1 Hardware name: BCM2711 PC is at zs_map_object+0x94/0x338 LR is at zram_bvec_rw.constprop.0+0x330/0xa64 pc : [<c0602b38>] lr : [<c0bda6a0>] psr: 60000013 sp : e376bbe0 ip : 00000000 fp : c1e2921c r10: 00000002 r9 : c1dda730 r8 : 00000000 r7 : e8ff7a00 r6 : 00000000 r5 : 02f9ffa0 r4 : e3710000 r3 : 000fdffe r2 : c1e0ce80 r1 : ebf979a0 r0 : 00000000 Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user Control: 30c5383d Table: 235c2a80 DAC: fffffffd Process mkfs.ext4 (pid: 123, stack limit = 0x495a22e6) Stack: (0xe376bbe0 to 0xe376c000) As it turns out, zsram needs to know the maximum memory size, which is defined in MAX_PHYSMEM_BITS when CONFIG_SPARSEMEM is set, or in MAX_POSSIBLE_PHYSMEM_BITS on the x86 architecture. The same problem will be hit on all 32-bit architectures that have a physical address space larger than 4GB and happen to not enable sparsemem and include asm/sparsemem.h from asm/pgtable.h. After the initial discussion, I suggested just always defining MAX_POSSIBLE_PHYSMEM_BITS whenever CONFIG_PHYS_ADDR_T_64BIT is set, or provoking a build error otherwise. This addresses all configurations that can currently have this runtime bug, but leaves all other configurations unchanged. I looked up the possible number of bits in source code and datasheets, here is what I found: - on ARC, CONFIG_ARC_HAS_PAE40 controls whether 32 or 40 bits are used - on ARM, CONFIG_LPAE enables 40 bit addressing, without it we never support more than 32 bits, even though supersections in theory allow up to 40 bits as well. - on MIPS, some MIPS32r1 or later chips support 36 bits, and MIPS32r5 XPA supports up to 60 bits in theory, but 40 bits are more than anyone will ever ship - On PowerPC, there are three different implementations of 36 bit addressing, but 32-bit is used without CONFIG_PTE_64BIT - On RISC-V, the normal page table format can support 34 bit addressing. There is no highmem support on RISC-V, so anything above 2GB is unused, but it might be useful to eventually support CONFIG_ZRAM for high pages. Fixes: 61989a80 ("staging: zsmalloc: zsmalloc memory allocation library") Fixes: 02390b87 ("mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS") Acked-by:
Thomas Bogendoerfer <tsbogend@alpha.franken.de> Reviewed-by:
Stefan Agner <stefan@agner.ch> Tested-by:
Stefan Agner <stefan@agner.ch> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/linux-mm/bdfa44bf1c570b05d6c70898e2bbb0acf234ecdf.1604762181.git.stefan@agner.ch/ Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
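For ARM specifically, the numbers quoted above translate into a definition along these lines (a sketch; the exact header placement is omitted):

```
/* With LPAE the architecture can address 40 bits of physical memory;
 * without it, 32 bits is the practical limit. */
#ifdef CONFIG_ARM_LPAE
#define MAX_POSSIBLE_PHYSMEM_BITS	40
#else
#define MAX_POSSIBLE_PHYSMEM_BITS	32
#endif
```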
-
- Nov 12, 2020
-
-
Jens Axboe authored
Wire up TIF_NOTIFY_SIGNAL handling for arm. Cc: linux-arm-kernel@lists.infradead.org Acked-by:
Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by:
Jens Axboe <axboe@kernel.dk>
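Wiring up the flag is mostly a matter of defining the bit and including it in the return-to-user work mask; a hedged sketch in which the bit number and the mask contents are illustrative:

```
/* arch/arm/include/asm/thread_info.h (sketch) */
#define TIF_NOTIFY_SIGNAL	8			/* illustrative bit */
#define _TIF_NOTIFY_SIGNAL	(1 << TIF_NOTIFY_SIGNAL)

/* ...and the bit joins the mask checked before returning to userspace: */
#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
				 _TIF_NOTIFY_RESUME | _TIF_NOTIFY_SIGNAL)
```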
-
- Nov 06, 2020
-
-
Thomas Gleixner authored
No reason having the same code in every architecture. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <linux@armlinux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20201103095857.582196476@linutronix.de
-
- Oct 30, 2020
-
-
Arnd Bergmann authored
rpc is the only user of the timer_tick() function now, and can just call the newly added generic version instead. Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
- Oct 28, 2020
-
-
Ard Biesheuvel authored
Currently, the .alt.smp.init section contains the virtual addresses of the patch sites. Since patching may occur both before and after switching into virtual mode, this requires some manual handling of the address when applying the UP alternative. Let's simplify this by using relative offsets in the table entries: this allows us to simply add each entry's address to its contents, regardless of whether we are running in virtual mode or not. Reviewed-by:
Nicolas Pitre <nico@fluxnic.net> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
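With relative entries, applying the UP alternative no longer cares whether the MMU is on; a hedged sketch of the fix-up loop, with a hypothetical helper for the instruction rewrite:

```
/* Each table entry stores "patch_site - &entry", so adding the entry's own
 * address yields the site, whether we run from physical or virtual space. */
static void apply_up_alternatives(s32 *begin, s32 *end)
{
	s32 *entry;

	for (entry = begin; entry < end; entry++) {
		u32 *site = (u32 *)((unsigned long)entry + *entry);

		*site = convert_insn_to_up(*site);	/* hypothetical helper */
	}
}
```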
-
Ard Biesheuvel authored
The ARM kernel's linear map starts at PAGE_OFFSET, which maps to a physical address (PHYS_OFFSET) that is platform specific, and is discovered at boot. Since we don't want to slow down translations between physical and virtual addresses by keeping the offset in a variable in memory, we implement this by patching the code performing the translation, and putting the offset between PAGE_OFFSET and the start of physical RAM directly into the instruction opcodes. As we only patch up to 8 bits of offset, yielding 4 GiB >> 8 == 16 MiB of granularity, we have to round up PHYS_OFFSET to the next multiple if the start of physical RAM is not a multiple of 16 MiB. This wastes some physical RAM, since the memory that was skipped will now live below PAGE_OFFSET, making it inaccessible to the kernel. We can improve this by changing the patchable sequences and the patching logic to carry more bits of offset: 11 bits gives us 4 GiB >> 11 == 2 MiB of granularity, and so we will never waste more than that amount by rounding up the physical start of DRAM to the next multiple of 2 MiB. (Note that 2 MiB granularity guarantees that the linear mapping can be created efficiently, whereas less than 2 MiB may result in the linear mapping needing another level of page tables) This helps Zhen Lei's scenario, where the start of DRAM is known to be occupied. It also helps EFI boot, which relies on the firmware's page allocator to allocate space for the decompressed kernel as low as possible. And if the KASLR patches ever land for 32-bit, it will give us 3 more bits of randomization of the placement of the kernel inside the linear region. For the ARM code path, it simply comes down to using two add/sub instructions instead of one for the carryless version, and patching each of them with the correct immediate depending on the rotation field. For the LPAE calculation, which has to deal with a carry, it patches the MOVW instruction with up to 12 bits of offset (but we only need 11 bits anyway) For the Thumb2 code path, patching more than 11 bits of displacement would be somewhat cumbersome, but the 11 bits we need fit nicely into the second word of the u16[2] opcode, so we simply update the immediate assignment and the left shift to create an addend of the right magnitude. Suggested-by:
Zhen Lei <thunder.leizhen@huawei.com> Acked-by:
Nicolas Pitre <nico@fluxnic.net> Acked-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
In preparation for reducing the phys-to-virt minimum relative alignment from 16 MiB to 2 MiB, switch to patchable sequences involving MOVW instructions that can more easily be manipulated to carry a 12-bit immediate. Note that the non-LPAE ARM sequence is not updated: MOVW may not be supported on non-LPAE platforms, and the sequence itself can be updated more easily to apply the 12 bits of displacement. For Thumb2, which has many more versions of opcodes, switch to a sequence that can be patched by the same patching code for both versions. Note that the Thumb2 opcodes for MOVW and MVN are unambiguous, and have no rotation bits in their immediate fields, so there is no need to use placeholder constants in the asm blocks. While at it, drop the 'volatile' qualifiers from the asm blocks: the code does not have any side effects that are invisible to the compiler, so it is free to omit these sequences if the outputs are not used. Suggested-by:
Russell King <linux@armlinux.org.uk> Acked-by:
Nicolas Pitre <nico@fluxnic.net> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
Free up a register in the p2v patching code by switching to relative references, which don't require keeping the phys-to-virt displacement live in a register. Acked-by:
Nicolas Pitre <nico@fluxnic.net> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
We always pass the same value for 'type' so pull it into the __pv_stub macro itself. Acked-by:
Nicolas Pitre <nico@fluxnic.net> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
When using the new adr_l/ldr_l/str_l macros to refer to external symbols from modules, the linker may emit place relative ELF relocations that need to be fixed up by the module loader. So add support for these. Reviewed-by:
Nicolas Pitre <nico@fluxnic.net> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
Like arm64, ARM supports position independent code sequences that produce symbol references with a greater reach than the ordinary adr/ldr instructions. Since on ARM, the adrl pseudo-instruction is only supported in ARM mode (and not at all when using Clang), having a adr_l macro like we do on arm64 is useful, and increases symmetry as well. Currently, we use open coded instruction sequences involving literals and arithmetic operations. Instead, we can use movw/movt pairs on v7 CPUs, circumventing the D-cache entirely. E.g., on v7+ CPUs, we can emit a PC-relative reference as follows: movw <reg>, #:lower16:<sym> - (1f + 8) movt <reg>, #:upper16:<sym> - (1f + 8) 1: add <reg>, <reg>, pc For older CPUs, we can emit the literal into a subsection, allowing it to be emitted out of line while retaining the ability to perform arithmetic on label offsets. E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows: ldr <reg>, 2f 1: add <reg>, <reg>, pc .subsection 1 2: .long <sym> - (1b + 8) .previous This is allowed by the assembler because, unlike ordinary sections, subsections are combined into a single section in the object file, and so the label references are not true cross-section references that are visible as relocations. (Subsections have been available in binutils since 2004 at least, so they should not cause any issues with older toolchains.) So use the above to implement the macros mov_l, adr_l, ldr_l and str_l, all of which will use movw/movt pairs on v7 and later CPUs, and use PC-relative literals otherwise. Reviewed-by:
Nicolas Pitre <nico@fluxnic.net> Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
Ard Biesheuvel authored
Commit 149a3ffe62b9dbc3 ("9012/1: move device tree mapping out of linear region") created a permanent, read-only section mapping of the device tree blob provided by the firmware, and added a set of macros to get the base and size of the virtually mapped FDT based on the physical address. However, while the mapping code uses the SECTION_SIZE macro correctly, the macros use PMD_SIZE instead, which means something entirely different on ARM when using short descriptors, and is therefore not the right quantity to use here. So replace PMD_SIZE with SECTION_SIZE. While at it, change the names of the macro and its parameter to clarify that it returns the virtual address of the start of the FDT, based on the physical address in memory. Tested-by:
Joel Stanley <joel@jms.id.au> Tested-by:
Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
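In spirit, the corrected macro locates the blob within a section-sized window rather than a PMD-sized one; a hedged sketch with illustrative names:

```
/* Sketch only: FDT_FIXED_BASE stands for the fixed virtual base of the
 * permanent FDT mapping; the real macro and parameter names differ. */
#define fdt_virt_addr(fdt_phys) \
	((void *)(FDT_FIXED_BASE + ((unsigned long)(fdt_phys) & (SECTION_SIZE - 1))))
```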
-
- Oct 27, 2020
-
-
Andrew Jeffery authored
Setting both CONFIG_KPROBES=y and CONFIG_FORTIFY_SOURCE=y on ARM leads to a panic in memcpy() when injecting a kprobe despite the fixes found in commit e46daee5 ("ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE") and commit 0ac569bf ("ARM: 8834/1: Fix: kprobes: optimized kprobes illegal instruction"). arch/arm/include/asm/kprobes.h effectively declares the target type of the optprobe_template_entry assembly label as a u32 which leads memcpy()'s __builtin_object_size() call to determine that the pointed-to object is of size four. However, the symbol is used as a handle for the optimised probe assembly template that is at least 96 bytes in size. The symbol's use despite its type blows up the memcpy() in ARM's arch_prepare_optimized_kprobe() with a false-positive fortify_panic() when it should instead copy the optimised probe template into place: ``` $ sudo perf probe -a aspeed_g6_pinctrl_probe [ 158.457252] detected buffer overflow in memcpy [ 158.458069] ------------[ cut here ]------------ [ 158.458283] kernel BUG at lib/string.c:1153! [ 158.458436] Internal error: Oops - BUG: 0 [#1] SMP ARM [ 158.458768] Modules linked in: [ 158.459043] CPU: 1 PID: 99 Comm: perf Not tainted 5.9.0-rc7-00038-gc53ebf8167e9 #158 [ 158.459296] Hardware name: Generic DT based system [ 158.459529] PC is at fortify_panic+0x18/0x20 [ 158.459658] LR is at __irq_work_queue_local+0x3c/0x74 [ 158.459831] pc : [<8047451c>] lr : [<8020ecd4>] psr: 60000013 [ 158.460032] sp : be2d1d50 ip : be2d1c58 fp : be2d1d5c [ 158.460174] r10: 00000006 r9 : 00000000 r8 : 00000060 [ 158.460348] r7 : 8011e434 r6 : b9e0b800 r5 : 7f000000 r4 : b9fe4f0c [ 158.460557] r3 : 80c04cc8 r2 : 00000000 r1 : be7c03cc r0 : 00000022 [ 158.460801] Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none [ 158.461037] Control: 10c5387d Table: b9cd806a DAC: 00000051 [ 158.461251] Process perf (pid: 99, stack limit = 0x81c71a69) [ 158.461472] Stack: (0xbe2d1d50 to 0xbe2d2000) [ 158.461757] 1d40: be2d1d84 be2d1d60 8011e724 80474510 [ 158.462104] 1d60: b9e0b800 b9fe4f0c 00000000 b9fe4f14 80c8ec80 be235000 be2d1d9c be2d1d88 [ 158.462436] 1d80: 801cee44 8011e57c b9fe4f0c 00000000 be2d1dc4 be2d1da0 801d0ad0 801cedec [ 158.462742] 1da0: 00000000 00000000 b9fe4f00 ffffffea 00000000 be235000 be2d1de4 be2d1dc8 [ 158.463087] 1dc0: 80204604 801d0738 00000000 00000000 b9fe4004 ffffffea be2d1e94 be2d1de8 [ 158.463428] 1de0: 80205434 80204570 00385c00 00000000 00000000 00000000 be2d1e14 be2d1e08 [ 158.463880] 1e00: 802ba014 b9fe4f00 b9e718c0 b9fe4f84 b9e71ec8 be2d1e24 00000000 00385c00 [ 158.464365] 1e20: 00000000 626f7270 00000065 802b905c be2d1e94 0000002e 00000000 802b9914 [ 158.464829] 1e40: be2d1e84 be2d1e50 802b9914 8028ff78 804629d0 b9e71ec0 0000002e b9e71ec0 [ 158.465141] 1e60: be2d1ea8 80c04cc8 00000cc0 b9e713c4 00000002 80205834 80205834 0000002e [ 158.465488] 1e80: be235000 be235000 be2d1ea4 be2d1e98 80205854 80204e94 be2d1ecc be2d1ea8 [ 158.465806] 1ea0: 801ee4a0 80205840 00000002 80c04cc8 00000000 0000002e 0000002e 00000000 [ 158.466110] 1ec0: be2d1f0c be2d1ed0 801ee5c8 801ee428 00000000 be2d0000 006b1fd0 00000051 [ 158.466398] 1ee0: 00000000 b9eedf00 0000002e 80204410 006b1fd0 be2d1f60 00000000 00000004 [ 158.466763] 1f00: be2d1f24 be2d1f10 8020442c 801ee4c4 80205834 802c613c be2d1f5c be2d1f28 [ 158.467102] 1f20: 802c60ac 8020441c be2d1fac be2d1f38 8010c764 802e9888 be2d1f5c b9eedf00 [ 158.467447] 1f40: b9eedf00 006b1fd0 0000002e 00000000 be2d1f94 be2d1f60 802c634c 802c5fec [ 158.467812] 1f60: 00000000 00000000 00000000 80c04cc8 006b1fd0 
00000003 76f7a610 00000004 [ 158.468155] 1f80: 80100284 be2d0000 be2d1fa4 be2d1f98 802c63ec 802c62e8 00000000 be2d1fa8 [ 158.468508] 1fa0: 80100080 802c63e0 006b1fd0 00000003 00000003 006b1fd0 0000002e 00000000 [ 158.468858] 1fc0: 006b1fd0 00000003 76f7a610 00000004 006b1fb0 0026d348 00000017 7ef2738c [ 158.469202] 1fe0: 76f3431c 7ef272d8 0014ec50 76f34338 60000010 00000003 00000000 00000000 [ 158.469461] Backtrace: [ 158.469683] [<80474504>] (fortify_panic) from [<8011e724>] (arch_prepare_optimized_kprobe+0x1b4/0x1f8) [ 158.470021] [<8011e570>] (arch_prepare_optimized_kprobe) from [<801cee44>] (alloc_aggr_kprobe+0x64/0x70) [ 158.470287] r9:be235000 r8:80c8ec80 r7:b9fe4f14 r6:00000000 r5:b9fe4f0c r4:b9e0b800 [ 158.470478] [<801cede0>] (alloc_aggr_kprobe) from [<801d0ad0>] (register_kprobe+0x3a4/0x5a0) [ 158.470685] r5:00000000 r4:b9fe4f0c [ 158.470790] [<801d072c>] (register_kprobe) from [<80204604>] (__register_trace_kprobe+0xa0/0xa4) [ 158.471001] r9:be235000 r8:00000000 r7:ffffffea r6:b9fe4f00 r5:00000000 r4:00000000 [ 158.471188] [<80204564>] (__register_trace_kprobe) from [<80205434>] (trace_kprobe_create+0x5ac/0x9ac) [ 158.471408] r7:ffffffea r6:b9fe4004 r5:00000000 r4:00000000 [ 158.471553] [<80204e88>] (trace_kprobe_create) from [<80205854>] (create_or_delete_trace_kprobe+0x20/0x3c) [ 158.471766] r10:be235000 r9:be235000 r8:0000002e r7:80205834 r6:80205834 r5:00000002 [ 158.471949] r4:b9e713c4 [ 158.472027] [<80205834>] (create_or_delete_trace_kprobe) from [<801ee4a0>] (trace_run_command+0x84/0x9c) [ 158.472255] [<801ee41c>] (trace_run_command) from [<801ee5c8>] (trace_parse_run_command+0x110/0x1f8) [ 158.472471] r6:00000000 r5:0000002e r4:0000002e [ 158.472594] [<801ee4b8>] (trace_parse_run_command) from [<8020442c>] (probes_write+0x1c/0x28) [ 158.472800] r10:00000004 r9:00000000 r8:be2d1f60 r7:006b1fd0 r6:80204410 r5:0000002e [ 158.472968] r4:b9eedf00 [ 158.473046] [<80204410>] (probes_write) from [<802c60ac>] (vfs_write+0xcc/0x1e8) [ 158.473226] [<802c5fe0>] (vfs_write) from [<802c634c>] (ksys_write+0x70/0xf8) [ 158.473400] r8:00000000 r7:0000002e r6:006b1fd0 r5:b9eedf00 r4:b9eedf00 [ 158.473567] [<802c62dc>] (ksys_write) from [<802c63ec>] (sys_write+0x18/0x1c) [ 158.473745] r9:be2d0000 r8:80100284 r7:00000004 r6:76f7a610 r5:00000003 r4:006b1fd0 [ 158.473932] [<802c63d4>] (sys_write) from [<80100080>] (ret_fast_syscall+0x0/0x54) [ 158.474126] Exception stack(0xbe2d1fa8 to 0xbe2d1ff0) [ 158.474305] 1fa0: 006b1fd0 00000003 00000003 006b1fd0 0000002e 00000000 [ 158.474573] 1fc0: 006b1fd0 00000003 76f7a610 00000004 006b1fb0 0026d348 00000017 7ef2738c [ 158.474811] 1fe0: 76f3431c 7ef272d8 0014ec50 76f34338 [ 158.475171] Code: e24cb004 e1a01000 e59f0004 ebf40dd3 (e7f001f2) [ 158.475847] ---[ end trace 55a5b31c08a29f00 ]--- [ 158.476088] Kernel panic - not syncing: Fatal exception [ 158.476375] CPU0: stopping [ 158.476709] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G D 5.9.0-rc7-00038-gc53ebf8167e9 #158 [ 158.477176] Hardware name: Generic DT based system [ 158.477411] Backtrace: [ 158.477604] [<8010dd28>] (dump_backtrace) from [<8010dfd4>] (show_stack+0x20/0x24) [ 158.477990] r7:00000000 r6:60000193 r5:00000000 r4:80c2f634 [ 158.478323] [<8010dfb4>] (show_stack) from [<8046390c>] (dump_stack+0xcc/0xe8) [ 158.478686] [<80463840>] (dump_stack) from [<80110750>] (handle_IPI+0x334/0x3a0) [ 158.479063] r7:00000000 r6:00000004 r5:80b65cc8 r4:80c78278 [ 158.479352] [<8011041c>] (handle_IPI) from [<801013f8>] (gic_handle_irq+0x88/0x94) [ 158.479757] r10:10c5387d r9:80c01ed8 r8:00000000 
r7:c0802000 r6:80c0537c r5:000003ff [ 158.480146] r4:c080200c r3:fffffff4 [ 158.480364] [<80101370>] (gic_handle_irq) from [<80100b6c>] (__irq_svc+0x6c/0x90) [ 158.480748] Exception stack(0x80c01ed8 to 0x80c01f20) [ 158.481031] 1ec0: 000128bc 00000000 [ 158.481499] 1ee0: be7b8174 8011d3a0 80c00000 00000000 80c04cec 80c04d28 80c5d7c2 80a026d4 [ 158.482091] 1f00: 10c5387d 80c01f34 80c01f38 80c01f28 80109554 80109558 60000013 ffffffff [ 158.482621] r9:80c00000 r8:80c5d7c2 r7:80c01f0c r6:ffffffff r5:60000013 r4:80109558 [ 158.482983] [<80109518>] (arch_cpu_idle) from [<80818780>] (default_idle_call+0x38/0x120) [ 158.483360] [<80818748>] (default_idle_call) from [<801585a8>] (do_idle+0xd4/0x158) [ 158.483945] r5:00000000 r4:80c00000 [ 158.484237] [<801584d4>] (do_idle) from [<801588f4>] (cpu_startup_entry+0x28/0x2c) [ 158.484784] r9:80c78000 r8:00000000 r7:80c78000 r6:80c78040 r5:80c04cc0 r4:000000d6 [ 158.485328] [<801588cc>] (cpu_startup_entry) from [<80810a78>] (rest_init+0x9c/0xbc) [ 158.485930] [<808109dc>] (rest_init) from [<80b00ae4>] (arch_call_rest_init+0x18/0x1c) [ 158.486503] r5:80c04cc0 r4:00000001 [ 158.486857] [<80b00acc>] (arch_call_rest_init) from [<80b00fcc>] (start_kernel+0x46c/0x548) [ 158.487589] [<80b00b60>] (start_kernel) from [<00000000>] (0x0) ``` Fixes: e46daee5 ("ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE") Fixes: 0ac569bf ("ARM: 8834/1: Fix: kprobes: optimized kprobes illegal instruction") Suggested-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Andrew Jeffery <andrew@aj.id.au> Tested-by:
Luka Oreskovic <luka.oreskovic@sartura.hr> Tested-by:
Joel Stanley <joel@jms.id.au> Reviewed-by:
Joel Stanley <joel@jms.id.au> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Cc: Luka Oreskovic <luka.oreskovic@sartura.hr> Cc: Juraj Vijtiuk <juraj.vijtiuk@sartura.hr> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
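The underlying fix is a declaration change: treating the template labels as arrays of unknown size makes __builtin_object_size() report "unknown" instead of 4, so FORTIFY no longer flags the template memcpy(). A sketch:

```
/* Before: looks like a single 4-byte object to FORTIFY_SOURCE.
 *	extern __visible kprobe_opcode_t optprobe_template_entry;
 *
 * After: an incomplete array type, so the object size is indeterminate
 * and copying the full ~96-byte template is accepted. */
extern __visible kprobe_opcode_t optprobe_template_entry[];
extern __visible kprobe_opcode_t optprobe_template_end[];
```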
-
Linus Walleij authored
This patch initializes the KASan shadow region's page table and memory. There are two stages to KASan initialization: 1. At the early boot stage the whole shadow region is mapped to just one physical page (kasan_zero_page). This is done by the function kasan_early_init, which is called by __mmap_switched (arch/arm/kernel/head-common.S). 2. After the call to paging_init, we use kasan_zero_page as the zero shadow for some memory that KASan does not need to track, and we allocate a new shadow space for the other memory that KASan needs to track. This is done by the function kasan_init, which is called by setup_arch. When using KASan we also need to increase THREAD_SIZE_ORDER from 1 to 2, as the extra calls for shadow memory use quite a bit of stack. As we need to make a temporary copy of the PGD when setting up shadow memory, we create a helpful PGD_SIZE definition for both LPAE and non-LPAE setups. The KASan core code unconditionally calls pud_populate(), so this needs to be changed from BUG() to do {} while (0) when building with KASan enabled. After the initial development by Andrey Ryabinin several modifications have been made to this code: Abbott Liu <liuwenliang@huawei.com> - Add support for ARM LPAE: if LPAE is enabled, the KASan shadow region's mapping table needs to be copied in the pgd_alloc() function. - Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate, kasan_pgd_populate from the .meminit.text section to the .init.text section. Reported by Florian Fainelli <f.fainelli@gmail.com> Linus Walleij <linus.walleij@linaro.org>: - Drop the custom manipulation of TTBR0 and just use cpu_switch_mm() to switch the pgd table. - Adapt to handle 4th level page table folding. - Rewrite the entire page directory and page entry initialization sequence to be recursive, based on arm64's kasan_init.c. Ard Biesheuvel <ardb@kernel.org>: - Necessary underlying fixes. - Crucial bug fixes to the memory set-up code. Co-developed-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Co-developed-by:
Abbott Liu <liuwenliang@huawei.com> Co-developed-by:
Ard Biesheuvel <ardb@kernel.org> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: kasan-dev@googlegroups.com Cc: Mike Rapoport <rppt@linux.ibm.com> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Reviewed-by:
Ard Biesheuvel <ardb@kernel.org> Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q Reported-by:
Russell King - ARM Linux <rmk+kernel@armlinux.org.uk> Reported-by:
Florian Fainelli <f.fainelli@gmail.com> Signed-off-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by:
Abbott Liu <liuwenliang@huawei.com> Signed-off-by:
Florian Fainelli <f.fainelli@gmail.com> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
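The address arithmetic underlying both stages is the standard 1:8 shadow mapping; a sketch of the translation helper, where KASAN_SHADOW_OFFSET is a per-configuration build-time constant:

```
#define KASAN_SHADOW_SCALE_SHIFT	3	/* one shadow byte per 8 bytes */

static inline void *kasan_mem_to_shadow(const void *addr)
{
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}
```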
-