  1. Apr 23, 2021
  2. Mar 22, 2021
    • locking: Fix typos in comments · e2db7592
      Ingo Molnar authored
      
      
      Fix ~16 single-word typos in locking code comments.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. Mar 11, 2021
  4. Feb 05, 2021
    • ARM: kexec: fix oops after TLB are invalidated · 4d62e81b
      Russell King authored
      
      
      Giancarlo Ferrari reports the following oops while trying to use kexec:
      
       Unable to handle kernel paging request at virtual address 80112f38
       pgd = fd7ef03e
       [80112f38] *pgd=0001141e(bad)
       Internal error: Oops: 80d [#1] PREEMPT SMP ARM
       ...
      
      This is caused by machine_kexec() trying to set the kernel text to be
      read/write, so it can poke values into the relocation code before
      copying it - and an interrupt occurring which changes the page tables.
      The subsequent writes then hit read-only sections that trigger a
      data abort resulting in the above oops.
      
      Fix this by copying the relocation code, and then writing the variables
      into the destination, thereby avoiding the need to make the kernel text
      read/write.
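      
      A rough sketch of that approach (identifiers here are illustrative,
      not the kernel's; the real code lives in machine_kexec()):
      
      ```
      #include <stdint.h>
      #include <string.h>
      
      /*
       * Copy the relocation stub into a writable buffer first, then patch
       * the parameters into the copy. The kernel text itself is never made
       * writable, so an interrupt changing the page tables can no longer
       * cause writes to hit a read-only mapping.
       */
      static void setup_reloc_code(void *reboot_code_buffer,
                                   const void *relocate_new_kernel, size_t size,
                                   size_t start_addr_offset, uint32_t start_addr)
      {
              /* 1. copy the relocation code into the writable destination */
              memcpy(reboot_code_buffer, relocate_new_kernel, size);
      
              /* 2. write the variables into the copy, not the kernel text */
              memcpy((uint8_t *)reboot_code_buffer + start_addr_offset,
                     &start_addr, sizeof(start_addr));
      }
      ```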
      
      Reported-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com>
      Tested-by: Giancarlo Ferrari <giancarlo.ferrari89@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  5. Feb 01, 2021
  6. Jan 29, 2021
  7. Jan 21, 2021
  8. Jan 20, 2021
    • ARM: remove sirf prima2/atlas platforms · f3a73284
      Arnd Bergmann authored
      The SiRF Prima2 and Atlas platform code was contributed by Cambridge
      Silicon Radio (CSR) after acquiring the original SiRF company, and
      maintained by Barry Song. CSR was subsequently acquired by Qualcomm,
      who no longer have an interest in maintaining the SoC platform but
      instead have released more recent SoCs for the same market in the
      Snapdragon family.
      
      As Barry is no longer working for the company, nobody else there
      wants to maintain it, and there are no third-party users, the
      best way forward seems to be to completely remove it.
      
      Thanks to Barry for maintaining the platform for the past ten years.
      
      Cc: Barry Song <baohua@kernel.org>
      Link: https://lore.kernel.org/lkml/c969392572604b98bcb3be44048c3165@hisilicon.com/
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  9. Jan 15, 2021
  10. Dec 29, 2020
  11. Dec 21, 2020
  12. Dec 14, 2020
  13. Dec 11, 2020
  14. Dec 09, 2020
  15. Dec 08, 2020
    • ARM: 9034/1: __div64_32(): straighten up inline asm constraints · e64ab473
      Nicolas Pitre authored
      
      
      The ARM version of __div64_32() encapsulates a call to __do_div64 with
      non-standard argument passing. In particular, __n is a 64-bit input
      argument assigned to r0-r1 and __rem is an output argument sharing half
      of that r0-r1 register pair.
      
      With __n being an input argument, the compiler is within its rights
      to assume that r0-r1 still hold the value of __n past the inline
      assembly statement. Normally, the compiler would have assigned
      non-overlapping registers to __n and __rem if the value of __n were
      needed again.
      
      However, here we enforce our own register assignment and gcc fails to
      notice the conflict. In practice this doesn't cause any problem as __n
      is considered dead after the asm statement and *n is overwritten.
      However this is not always guaranteed and clang rightfully complains.
      
      Let's fix it properly by making __n into an input-output variable. This
      makes it clear that those registers representing __n have been modified.
      Then we can extract __rem as the high part of __n with plain C code.
      
      This asm constraint "abuse" was likely relied upon back when gcc didn't
      handle 64-bit values optimally. Turns out that gcc is now able to
      optimize things and produces the same code with this patch applied.
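      
      A simplified sketch of the fixed constraints (illustrative; the real
      code is in arch/arm/include/asm/div64.h and also carries __asmeq()
      sanity checks):
      
      ```
      #include <stdint.h>
      
      /* __n is now an input-output ("+r") operand, telling the compiler
       * that r0-r1 are modified; the remainder is then extracted from the
       * high half of __n in plain C. ARM 32-bit only. */
      static inline uint32_t __div64_32(uint64_t *n, uint32_t base)
      {
              register uint32_t __base asm("r4") = base;
              register uint64_t __n    asm("r0") = *n;
              register uint64_t __res  asm("r2");
      
              asm("bl __do_div64"
                  : "+r" (__n), "=r" (__res)  /* "+r": read and written */
                  : "r" (__base)
                  : "ip", "lr", "cc");
      
              *n = __res;
              return __n >> 32;               /* remainder, per the text above */
      }
      ```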
      
      Reported-by: Antony Yu <swpenim@gmail.com>
      Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  16. Dec 04, 2020
  17. Nov 23, 2020
  18. Nov 20, 2020
  19. Nov 16, 2020
    • arch: pgtable: define MAX_POSSIBLE_PHYSMEM_BITS where needed · cef39703
      Arnd Bergmann authored
      
      
      Stefan Agner reported a bug when using zram on 32-bit Arm machines
      with RAM above the 4GB address boundary:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        pgd = a27bd01c
        [00000000] *pgd=236a0003, *pmd=1ffa64003
        Internal error: Oops: 207 [#1] SMP ARM
        Modules linked in: mdio_bcm_unimac(+) brcmfmac cfg80211 brcmutil raspberrypi_hwmon hci_uart crc32_arm_ce bcm2711_thermal phy_generic genet
        CPU: 0 PID: 123 Comm: mkfs.ext4 Not tainted 5.9.6 #1
        Hardware name: BCM2711
        PC is at zs_map_object+0x94/0x338
        LR is at zram_bvec_rw.constprop.0+0x330/0xa64
        pc : [<c0602b38>]    lr : [<c0bda6a0>]    psr: 60000013
        sp : e376bbe0  ip : 00000000  fp : c1e2921c
        r10: 00000002  r9 : c1dda730  r8 : 00000000
        r7 : e8ff7a00  r6 : 00000000  r5 : 02f9ffa0  r4 : e3710000
        r3 : 000fdffe  r2 : c1e0ce80  r1 : ebf979a0  r0 : 00000000
        Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
        Control: 30c5383d  Table: 235c2a80  DAC: fffffffd
        Process mkfs.ext4 (pid: 123, stack limit = 0x495a22e6)
        Stack: (0xe376bbe0 to 0xe376c000)
      
      As it turns out, zram's zsmalloc allocator needs to know the maximum
      memory size, which is defined in MAX_PHYSMEM_BITS when
      CONFIG_SPARSEMEM is set, or in MAX_POSSIBLE_PHYSMEM_BITS on the x86
      architecture.
      
      The same problem will be hit on all 32-bit architectures that have a
      physical address space larger than 4GB and happen to not enable sparsemem
      and include asm/sparsemem.h from asm/pgtable.h.
      
      After the initial discussion, I suggested just always defining
      MAX_POSSIBLE_PHYSMEM_BITS whenever CONFIG_PHYS_ADDR_T_64BIT is
      set, or provoking a build error otherwise. This addresses all
      configurations that can currently have this runtime bug, but
      leaves all other configurations unchanged.
      
      I looked up the possible number of bits in source code and
      datasheets, here is what I found:
      
       - on ARC, CONFIG_ARC_HAS_PAE40 controls whether 32 or 40 bits are used
       - on ARM, CONFIG_ARM_LPAE enables 40-bit addressing; without it we
         never support more than 32 bits, even though supersections in theory
         allow up to 40 bits as well (see the sketch after this list)
       - on MIPS, some MIPS32r1 or later chips support 36 bits, and MIPS32r5
         XPA supports up to 60 bits in theory, but 40 bits are more than
         anyone will ever ship
       - On PowerPC, there are three different implementations of 36 bit
         addressing, but 32-bit is used without CONFIG_PTE_64BIT
       - On RISC-V, the normal page table format can support 34 bit
         addressing. There is no highmem support on RISC-V, so anything
         above 2GB is unused, but it might be useful to eventually support
         CONFIG_ZRAM for high pages.
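      
      For the ARM entry above, a hedged sketch of what such definitions
      look like (illustrative; the actual patch spreads them across the
      2-level and 3-level page table headers):
      
      ```
      #ifdef CONFIG_ARM_LPAE
      #define MAX_POSSIBLE_PHYSMEM_BITS   40   /* LPAE: 40-bit physical */
      #else
      #define MAX_POSSIBLE_PHYSMEM_BITS   32   /* classic page tables */
      #endif
      ```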
      
      Fixes: 61989a80 ("staging: zsmalloc: zsmalloc memory allocation library")
      Fixes: 02390b87 ("mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS")
      Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Reviewed-by: Stefan Agner <stefan@agner.ch>
      Tested-by: Stefan Agner <stefan@agner.ch>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Link: https://lore.kernel.org/linux-mm/bdfa44bf1c570b05d6c70898e2bbb0acf234ecdf.1604762181.git.stefan@agner.ch/
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  20. Nov 12, 2020
  21. Nov 06, 2020
  22. Oct 30, 2020
  23. Oct 28, 2020
    • ARM: kernel: use relative references for UP/SMP alternatives · 450abd38
      Ard Biesheuvel authored
      
      
      Currently, the .alt.smp.init section contains the virtual addresses
      of the patch sites. Since patching may occur both before and after
      switching into virtual mode, this requires some manual handling of
      the address when applying the UP alternative.
      
      Let's simplify this by using relative offsets in the table entries:
      this allows us to simply add each entry's address to its contents,
      regardless of whether we are running in virtual mode or not.
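      
      As a sketch of the pattern (helper name hypothetical): each table
      entry now stores (patch_site - &entry), so adding the entry's own
      address back recovers the site no matter where we execute from:
      
      ```
      #include <stdint.h>
      
      /* Resolve a relative .alt.smp.init entry to its patch site. Works
       * identically for physical (MMU off) and virtual addresses, since
       * only the distance between entry and site matters. */
      static inline uint32_t *alt_patch_site(const int32_t *entry)
      {
              return (uint32_t *)((uintptr_t)entry + *entry);
      }
      ```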
      
      Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: p2v: reduce p2v alignment requirement to 2 MiB · 9443076e
      Ard Biesheuvel authored
      
      
      The ARM kernel's linear map starts at PAGE_OFFSET, which maps to a
      physical address (PHYS_OFFSET) that is platform specific, and is
      discovered at boot. Since we don't want to slow down translations
      between physical and virtual addresses by keeping the offset in a
      variable in memory, we implement this by patching the code performing
      the translation, and putting the offset between PAGE_OFFSET and the
      start of physical RAM directly into the instruction opcodes.
      
      As we only patch up to 8 bits of offset, yielding 4 GiB >> 8 == 16 MiB
      of granularity, we have to round PHYS_OFFSET up to the next 16 MiB
      multiple if the start of physical RAM is not 16 MiB aligned. This
      wastes some physical RAM, since the memory that was skipped will now
      live below PAGE_OFFSET, making it inaccessible to the kernel.
      
      We can improve this by changing the patchable sequences and the patching
      logic to carry more bits of offset: 11 bits gives us 4 GiB >> 11 == 2 MiB
      of granularity, and so we will never waste more than that amount by
      rounding up the physical start of DRAM to the next multiple of 2 MiB.
      (Note that 2 MiB granularity guarantees that the linear mapping can be
      created efficiently, whereas less than 2 MiB may result in the linear
      mapping needing another level of page tables)
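      
      A small worked example of the saving (the DRAM base address here is
      hypothetical):
      
      ```
      #include <stdint.h>
      #include <stdio.h>
      
      /* RAM lost to rounding PHYS_OFFSET up to the patching granularity. */
      static uint32_t wasted(uint32_t dram_start, uint32_t gran)
      {
              uint32_t phys_offset = (dram_start + gran - 1) & ~(gran - 1);
      
              return phys_offset - dram_start;
      }
      
      int main(void)
      {
              uint32_t start = 0x80200000;  /* hypothetical DRAM base */
      
              /*  8 patched bits -> 16 MiB granularity: 14 MiB lost */
              printf("16 MiB: %u MiB\n", (unsigned)(wasted(start, 16U << 20) >> 20));
              /* 11 patched bits ->  2 MiB granularity:  0 MiB lost */
              printf(" 2 MiB: %u MiB\n", (unsigned)(wasted(start, 2U << 20) >> 20));
              return 0;
      }
      ```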
      
      This helps Zhen Lei's scenario, where the start of DRAM is known to be
      occupied. It also helps EFI boot, which relies on the firmware's page
      allocator to allocate space for the decompressed kernel as low as
      possible. And if the KASLR patches ever land for 32-bit, it will give
      us 3 more bits of randomization of the placement of the kernel inside
      the linear region.
      
      For the ARM code path, it simply comes down to using two add/sub
      instructions instead of one for the carryless version, and patching
      each of them with the correct immediate depending on the rotation
      field. For the LPAE calculation, which has to deal with a carry, it
      patches the MOVW instruction with up to 12 bits of offset (but we only
      need 11 bits anyway)
      
      For the Thumb2 code path, patching more than 11 bits of displacement
      would be somewhat cumbersome, but the 11 bits we need fit nicely into
      the second word of the u16[2] opcode, so we simply update the immediate
      assignment and the left shift to create an addend of the right magnitude.
      
      Suggested-by: Zhen Lei <thunder.leizhen@huawei.com>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Acked-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: p2v: switch to MOVW for Thumb2 and ARM/LPAE · e8e00f5a
      Ard Biesheuvel authored
      
      
      In preparation for reducing the phys-to-virt minimum relative alignment
      from 16 MiB to 2 MiB, switch to patchable sequences involving MOVW
      instructions that can more easily be manipulated to carry a 12-bit
      immediate. Note that the non-LPAE ARM sequence is not updated: MOVW
      may not be supported on non-LPAE platforms, and the sequence itself
      can be updated more easily to apply the 12 bits of displacement.
      
      For Thumb2, which has many more versions of opcodes, switch to a sequence
      that can be patched by the same patching code for both versions. Note
      that the Thumb2 opcodes for MOVW and MVN are unambiguous, and have no
      rotation bits in their immediate fields, so there is no need to use
      placeholder constants in the asm blocks.
      
      While at it, drop the 'volatile' qualifiers from the asm blocks: the
      code does not have any side effects that are invisible to the compiler,
      so it is free to omit these sequences if the outputs are not used.
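      
      A minimal illustration of that rule (not kernel code):
      
      ```
      /* An asm block with no effects beyond its outputs needs no
       * 'volatile': if 'res' is never used, the compiler may delete the
       * whole statement - exactly the freedom described above. */
      static inline unsigned int add_one(unsigned int x)
      {
              unsigned int res;
      
              asm("add %0, %1, #1" : "=r" (res) : "r" (x));
              return res;
      }
      ```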
      
      Suggested-by: Russell King <linux@armlinux.org.uk>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: p2v: use relative references in patch site arrays · 2730e8ea
      Ard Biesheuvel authored
      
      
      Free up a register in the p2v patching code by switching to relative
      references, which don't require keeping the phys-to-virt displacement
      live in a register.
      
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: p2v: drop redundant 'type' argument from __pv_stub · 0869f3b9
      Ard Biesheuvel authored
      
      
      We always pass the same value for 'type' so pull it into the __pv_stub
      macro itself.
      
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: module: add support for place relative relocations · 22f2d230
      Ard Biesheuvel authored
      
      
      When using the new adr_l/ldr_l/str_l macros to refer to external symbols
      from modules, the linker may emit place relative ELF relocations that
      need to be fixed up by the module loader. So add support for these.
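      
      A hedged sketch of such a fixup, assuming an R_ARM_REL32-style
      relocation (the kernel's real handler is in arch/arm/kernel/module.c):
      
      ```
      #include <stdint.h>
      
      /* Place-relative: the final value is (symbol - place), so add the
       * symbol address and subtract the address being patched. Assumes a
       * 32-bit target. */
      static void apply_place_relative(uint32_t *place, uint32_t sym_addr)
      {
              *place += sym_addr - (uint32_t)(uintptr_t)place;
      }
      ```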
      
      Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: assembler: introduce adr_l, ldr_l and str_l macros · 0b167463
      Ard Biesheuvel authored
      
      
      Like arm64, ARM supports position independent code sequences that
      produce symbol references with a greater reach than the ordinary
      adr/ldr instructions. Since on ARM, the adrl pseudo-instruction is
      only supported in ARM mode (and not at all when using Clang), having
      an adr_l macro like we do on arm64 is useful, and increases symmetry
      as well.
      
      Currently, we use open coded instruction sequences involving literals
      and arithmetic operations. Instead, we can use movw/movt pairs on v7
      CPUs, circumventing the D-cache entirely.
      
      E.g., on v7+ CPUs, we can emit a PC-relative reference as follows:
      
             movw         <reg>, #:lower16:<sym> - (1f + 8)
             movt         <reg>, #:upper16:<sym> - (1f + 8)
        1:   add          <reg>, <reg>, pc
      
      For older CPUs, we can emit the literal into a subsection, allowing it
      to be emitted out of line while retaining the ability to perform
      arithmetic on label offsets.
      
      E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows:
      
             ldr          <reg>, 2f
        1:   add          <reg>, <reg>, pc
             .subsection  1
        2:   .long        <sym> - (1b + 8)
             .previous
      
      This is allowed by the assembler because, unlike ordinary sections,
      subsections are combined into a single section in the object file, and
      so the label references are not true cross-section references that are
      visible as relocations. (Subsections have been available in binutils
      since 2004 at least, so they should not cause any issues with older
      toolchains.)
      
      So use the above to implement the macros mov_l, adr_l, ldr_l and str_l,
      all of which will use movw/movt pairs on v7 and later CPUs, and use
      PC-relative literals otherwise.
      
      Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    • ARM: 9020/1: mm: use correct section size macro to describe the FDT virtual address · fc2933c1
      Ard Biesheuvel authored
      
      
      Commit
      
        149a3ffe62b9dbc3 ("9012/1: move device tree mapping out of linear region")
      
      created a permanent, read-only section mapping of the device tree blob
      provided by the firmware, and added a set of macros to get the base and
      size of the virtually mapped FDT based on the physical address. However,
      while the mapping code uses the SECTION_SIZE macro correctly, the macros
      use PMD_SIZE instead, which means something entirely different on ARM when
      using short descriptors, and is therefore not the right quantity to use
      here. So replace PMD_SIZE with SECTION_SIZE. While at it, change the names
      of the macro and its parameter to clarify that it returns the virtual
      address of the start of the FDT, based on the physical address in memory.
      
      Tested-by: Joel Stanley <joel@jms.id.au>
      Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  24. Oct 27, 2020
    • ARM: 9019/1: kprobes: Avoid fortify_panic() when copying optprobe template · 9fa2e7af
      Andrew Jeffery authored
      
      
      Setting both CONFIG_KPROBES=y and CONFIG_FORTIFY_SOURCE=y on ARM leads
      to a panic in memcpy() when injecting a kprobe despite the fixes found
      in commit e46daee5 ("ARM: 8806/1: kprobes: Fix false positive with
      FORTIFY_SOURCE") and commit 0ac569bf ("ARM: 8834/1: Fix: kprobes:
      optimized kprobes illegal instruction").
      
      arch/arm/include/asm/kprobes.h effectively declares the target type of
      the optprobe_template_entry assembly label as a u32, which leads
      memcpy()'s __builtin_object_size() call to determine that the
      pointed-to object is of size four. However, the symbol is used as a
      handle for the optimised probe assembly template, which is at least 96
      bytes in size. Using the symbol in spite of its declared type blows up
      the memcpy() in ARM's arch_prepare_optimized_kprobe() with a
      false-positive fortify_panic() when it should instead copy the
      optimised probe template into place:
      
      ```
      $ sudo perf probe -a aspeed_g6_pinctrl_probe
      [  158.457252] detected buffer overflow in memcpy
      [  158.458069] ------------[ cut here ]------------
      [  158.458283] kernel BUG at lib/string.c:1153!
      [  158.458436] Internal error: Oops - BUG: 0 [#1] SMP ARM
      [  158.458768] Modules linked in:
      [  158.459043] CPU: 1 PID: 99 Comm: perf Not tainted 5.9.0-rc7-00038-gc53ebf8167e9 #158
      [  158.459296] Hardware name: Generic DT based system
      [  158.459529] PC is at fortify_panic+0x18/0x20
      [  158.459658] LR is at __irq_work_queue_local+0x3c/0x74
      [  158.459831] pc : [<8047451c>]    lr : [<8020ecd4>]    psr: 60000013
      [  158.460032] sp : be2d1d50  ip : be2d1c58  fp : be2d1d5c
      [  158.460174] r10: 00000006  r9 : 00000000  r8 : 00000060
      [  158.460348] r7 : 8011e434  r6 : b9e0b800  r5 : 7f000000  r4 : b9fe4f0c
      [  158.460557] r3 : 80c04cc8  r2 : 00000000  r1 : be7c03cc  r0 : 00000022
      [  158.460801] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
      [  158.461037] Control: 10c5387d  Table: b9cd806a  DAC: 00000051
      [  158.461251] Process perf (pid: 99, stack limit = 0x81c71a69)
      [  158.461472] Stack: (0xbe2d1d50 to 0xbe2d2000)
      [  158.461757] 1d40:                                     be2d1d84 be2d1d60 8011e724 80474510
      [  158.462104] 1d60: b9e0b800 b9fe4f0c 00000000 b9fe4f14 80c8ec80 be235000 be2d1d9c be2d1d88
      [  158.462436] 1d80: 801cee44 8011e57c b9fe4f0c 00000000 be2d1dc4 be2d1da0 801d0ad0 801cedec
      [  158.462742] 1da0: 00000000 00000000 b9fe4f00 ffffffea 00000000 be235000 be2d1de4 be2d1dc8
      [  158.463087] 1dc0: 80204604 801d0738 00000000 00000000 b9fe4004 ffffffea be2d1e94 be2d1de8
      [  158.463428] 1de0: 80205434 80204570 00385c00 00000000 00000000 00000000 be2d1e14 be2d1e08
      [  158.463880] 1e00: 802ba014 b9fe4f00 b9e718c0 b9fe4f84 b9e71ec8 be2d1e24 00000000 00385c00
      [  158.464365] 1e20: 00000000 626f7270 00000065 802b905c be2d1e94 0000002e 00000000 802b9914
      [  158.464829] 1e40: be2d1e84 be2d1e50 802b9914 8028ff78 804629d0 b9e71ec0 0000002e b9e71ec0
      [  158.465141] 1e60: be2d1ea8 80c04cc8 00000cc0 b9e713c4 00000002 80205834 80205834 0000002e
      [  158.465488] 1e80: be235000 be235000 be2d1ea4 be2d1e98 80205854 80204e94 be2d1ecc be2d1ea8
      [  158.465806] 1ea0: 801ee4a0 80205840 00000002 80c04cc8 00000000 0000002e 0000002e 00000000
      [  158.466110] 1ec0: be2d1f0c be2d1ed0 801ee5c8 801ee428 00000000 be2d0000 006b1fd0 00000051
      [  158.466398] 1ee0: 00000000 b9eedf00 0000002e 80204410 006b1fd0 be2d1f60 00000000 00000004
      [  158.466763] 1f00: be2d1f24 be2d1f10 8020442c 801ee4c4 80205834 802c613c be2d1f5c be2d1f28
      [  158.467102] 1f20: 802c60ac 8020441c be2d1fac be2d1f38 8010c764 802e9888 be2d1f5c b9eedf00
      [  158.467447] 1f40: b9eedf00 006b1fd0 0000002e 00000000 be2d1f94 be2d1f60 802c634c 802c5fec
      [  158.467812] 1f60: 00000000 00000000 00000000 80c04cc8 006b1fd0 00000003 76f7a610 00000004
      [  158.468155] 1f80: 80100284 be2d0000 be2d1fa4 be2d1f98 802c63ec 802c62e8 00000000 be2d1fa8
      [  158.468508] 1fa0: 80100080 802c63e0 006b1fd0 00000003 00000003 006b1fd0 0000002e 00000000
      [  158.468858] 1fc0: 006b1fd0 00000003 76f7a610 00000004 006b1fb0 0026d348 00000017 7ef2738c
      [  158.469202] 1fe0: 76f3431c 7ef272d8 0014ec50 76f34338 60000010 00000003 00000000 00000000
      [  158.469461] Backtrace:
      [  158.469683] [<80474504>] (fortify_panic) from [<8011e724>] (arch_prepare_optimized_kprobe+0x1b4/0x1f8)
      [  158.470021] [<8011e570>] (arch_prepare_optimized_kprobe) from [<801cee44>] (alloc_aggr_kprobe+0x64/0x70)
      [  158.470287]  r9:be235000 r8:80c8ec80 r7:b9fe4f14 r6:00000000 r5:b9fe4f0c r4:b9e0b800
      [  158.470478] [<801cede0>] (alloc_aggr_kprobe) from [<801d0ad0>] (register_kprobe+0x3a4/0x5a0)
      [  158.470685]  r5:00000000 r4:b9fe4f0c
      [  158.470790] [<801d072c>] (register_kprobe) from [<80204604>] (__register_trace_kprobe+0xa0/0xa4)
      [  158.471001]  r9:be235000 r8:00000000 r7:ffffffea r6:b9fe4f00 r5:00000000 r4:00000000
      [  158.471188] [<80204564>] (__register_trace_kprobe) from [<80205434>] (trace_kprobe_create+0x5ac/0x9ac)
      [  158.471408]  r7:ffffffea r6:b9fe4004 r5:00000000 r4:00000000
      [  158.471553] [<80204e88>] (trace_kprobe_create) from [<80205854>] (create_or_delete_trace_kprobe+0x20/0x3c)
      [  158.471766]  r10:be235000 r9:be235000 r8:0000002e r7:80205834 r6:80205834 r5:00000002
      [  158.471949]  r4:b9e713c4
      [  158.472027] [<80205834>] (create_or_delete_trace_kprobe) from [<801ee4a0>] (trace_run_command+0x84/0x9c)
      [  158.472255] [<801ee41c>] (trace_run_command) from [<801ee5c8>] (trace_parse_run_command+0x110/0x1f8)
      [  158.472471]  r6:00000000 r5:0000002e r4:0000002e
      [  158.472594] [<801ee4b8>] (trace_parse_run_command) from [<8020442c>] (probes_write+0x1c/0x28)
      [  158.472800]  r10:00000004 r9:00000000 r8:be2d1f60 r7:006b1fd0 r6:80204410 r5:0000002e
      [  158.472968]  r4:b9eedf00
      [  158.473046] [<80204410>] (probes_write) from [<802c60ac>] (vfs_write+0xcc/0x1e8)
      [  158.473226] [<802c5fe0>] (vfs_write) from [<802c634c>] (ksys_write+0x70/0xf8)
      [  158.473400]  r8:00000000 r7:0000002e r6:006b1fd0 r5:b9eedf00 r4:b9eedf00
      [  158.473567] [<802c62dc>] (ksys_write) from [<802c63ec>] (sys_write+0x18/0x1c)
      [  158.473745]  r9:be2d0000 r8:80100284 r7:00000004 r6:76f7a610 r5:00000003 r4:006b1fd0
      [  158.473932] [<802c63d4>] (sys_write) from [<80100080>] (ret_fast_syscall+0x0/0x54)
      [  158.474126] Exception stack(0xbe2d1fa8 to 0xbe2d1ff0)
      [  158.474305] 1fa0:                   006b1fd0 00000003 00000003 006b1fd0 0000002e 00000000
      [  158.474573] 1fc0: 006b1fd0 00000003 76f7a610 00000004 006b1fb0 0026d348 00000017 7ef2738c
      [  158.474811] 1fe0: 76f3431c 7ef272d8 0014ec50 76f34338
      [  158.475171] Code: e24cb004 e1a01000 e59f0004 ebf40dd3 (e7f001f2)
      [  158.475847] ---[ end trace 55a5b31c08a29f00 ]---
      [  158.476088] Kernel panic - not syncing: Fatal exception
      [  158.476375] CPU0: stopping
      [  158.476709] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G      D           5.9.0-rc7-00038-gc53ebf8167e9 #158
      [  158.477176] Hardware name: Generic DT based system
      [  158.477411] Backtrace:
      [  158.477604] [<8010dd28>] (dump_backtrace) from [<8010dfd4>] (show_stack+0x20/0x24)
      [  158.477990]  r7:00000000 r6:60000193 r5:00000000 r4:80c2f634
      [  158.478323] [<8010dfb4>] (show_stack) from [<8046390c>] (dump_stack+0xcc/0xe8)
      [  158.478686] [<80463840>] (dump_stack) from [<80110750>] (handle_IPI+0x334/0x3a0)
      [  158.479063]  r7:00000000 r6:00000004 r5:80b65cc8 r4:80c78278
      [  158.479352] [<8011041c>] (handle_IPI) from [<801013f8>] (gic_handle_irq+0x88/0x94)
      [  158.479757]  r10:10c5387d r9:80c01ed8 r8:00000000 r7:c0802000 r6:80c0537c r5:000003ff
      [  158.480146]  r4:c080200c r3:fffffff4
      [  158.480364] [<80101370>] (gic_handle_irq) from [<80100b6c>] (__irq_svc+0x6c/0x90)
      [  158.480748] Exception stack(0x80c01ed8 to 0x80c01f20)
      [  158.481031] 1ec0:                                                       000128bc 00000000
      [  158.481499] 1ee0: be7b8174 8011d3a0 80c00000 00000000 80c04cec 80c04d28 80c5d7c2 80a026d4
      [  158.482091] 1f00: 10c5387d 80c01f34 80c01f38 80c01f28 80109554 80109558 60000013 ffffffff
      [  158.482621]  r9:80c00000 r8:80c5d7c2 r7:80c01f0c r6:ffffffff r5:60000013 r4:80109558
      [  158.482983] [<80109518>] (arch_cpu_idle) from [<80818780>] (default_idle_call+0x38/0x120)
      [  158.483360] [<80818748>] (default_idle_call) from [<801585a8>] (do_idle+0xd4/0x158)
      [  158.483945]  r5:00000000 r4:80c00000
      [  158.484237] [<801584d4>] (do_idle) from [<801588f4>] (cpu_startup_entry+0x28/0x2c)
      [  158.484784]  r9:80c78000 r8:00000000 r7:80c78000 r6:80c78040 r5:80c04cc0 r4:000000d6
      [  158.485328] [<801588cc>] (cpu_startup_entry) from [<80810a78>] (rest_init+0x9c/0xbc)
      [  158.485930] [<808109dc>] (rest_init) from [<80b00ae4>] (arch_call_rest_init+0x18/0x1c)
      [  158.486503]  r5:80c04cc0 r4:00000001
      [  158.486857] [<80b00acc>] (arch_call_rest_init) from [<80b00fcc>] (start_kernel+0x46c/0x548)
      [  158.487589] [<80b00b60>] (start_kernel) from [<00000000>] (0x0)
      ```
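      
      A hedged sketch of the type problem and the shape of the fix
      (declarations paraphrased, not the verbatim patch):
      
      ```
      typedef unsigned int kprobe_opcode_t;
      
      /*
       * Before (sketch): a scalar declaration lets __builtin_object_size()
       * conclude the object is 4 bytes, so a fortified memcpy() of the
       * ~96+ byte template panics:
       *
       *     extern kprobe_opcode_t optprobe_template_entry;
       *
       * After (sketch): an incomplete array type leaves the object size
       * unknown to FORTIFY_SOURCE, so copying the whole template no longer
       * trips the overflow check (and users drop the '&' when taking the
       * symbol's address):
       */
      extern kprobe_opcode_t optprobe_template_entry[];
      ```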
      
      Fixes: e46daee5 ("ARM: 8806/1: kprobes: Fix false positive with FORTIFY_SOURCE")
      Fixes: 0ac569bf ("ARM: 8834/1: Fix: kprobes: optimized kprobes illegal instruction")
      Suggested-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
      Tested-by: Luka Oreskovic <luka.oreskovic@sartura.hr>
      Tested-by: Joel Stanley <joel@jms.id.au>
      Reviewed-by: Joel Stanley <joel@jms.id.au>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Luka Oreskovic <luka.oreskovic@sartura.hr>
      Cc: Juraj Vijtiuk <juraj.vijtiuk@sartura.hr>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9016/2: Initialize the mapping of KASan shadow memory · 5615f69b
      Linus Walleij authored
      
      
      This patch initializes the KASan shadow region's page table and
      memory. There are two stages to KASan initialization:
      
      1. At the early boot stage the whole shadow region is mapped to just
         one physical page (kasan_zero_page). This is done by the function
         kasan_early_init, which is called by __mmap_switched
         (arch/arm/kernel/head-common.S).
      
      2. After paging_init has been called, we use kasan_zero_page as the
         zero shadow for memory that KASan does not need to track, and we
         allocate new shadow space for the memory that KASan does need to
         track. This is done by the function kasan_init, which is called by
         setup_arch.
      
      When using KASan we also need to increase the THREAD_SIZE_ORDER
      from 1 to 2 as the extra calls for shadow memory uses quite a bit
      of stack.
      
      As we need to make a temporary copy of the PGD when setting up
      shadow memory we create a helpful PGD_SIZE definition for both
      LPAE and non-LPAE setups.
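      
      A hedged sketch of such a definition (the exact form in the patch may
      differ):
      
      ```
      /* Size of a full page directory, for memcpy()ing the PGD: LPAE has
       * a small top-level table, while classic ARM uses a 16 KiB
       * first-level table. */
      #ifdef CONFIG_ARM_LPAE
      #define PGD_SIZE   (PTRS_PER_PGD * sizeof(pgd_t))
      #else
      #define PGD_SIZE   (PAGE_SIZE << 2)
      #endif
      ```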
      
      The KASan core code unconditionally calls pud_populate() so this
      needs to be changed from BUG() to do {} while (0) when building
      with KASan enabled.
      
      After the initial development by Andrey Ryabinin, several
      modifications have been made to this code:
      
      Abbott Liu <liuwenliang@huawei.com>:
      - Add ARM LPAE support: if LPAE is enabled, the KASan shadow region's
        mapping table needs to be copied in the pgd_alloc() function.
      - Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate
        and kasan_pgd_populate from the .meminit.text section to the
        .init.text section. Reported by Florian Fainelli <f.fainelli@gmail.com>
      
      Linus Walleij <linus.walleij@linaro.org>:
      - Drop the custom manipulation of TTBR0 and just use
        cpu_switch_mm() to switch the pgd table.
      - Adapt to handle 4th-level page table folding.
      - Rewrite the entire page directory and page entry initialization
        sequence to be recursive, based on arm64's kasan_init.c.
      
      Ard Biesheuvel <ardb@kernel.org>:
      - Necessary underlying fixes.
      - Crucial bug fixes to the memory set-up code.
      
      Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
      Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
      
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
      Reported-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>