  1. Mar 24, 2016
    • arm64: mm: allow preemption in copy_to_user_page · 691b1e2e
      Mark Rutland authored
      
      
      Currently we disable preemption in copy_to_user_page; a behaviour that
      we inherited from the 32-bit arm code. This was necessary for older
      cores without broadcast data cache maintenance, and ensured that cache
      lines were dirtied and cleaned by the same CPU. On these systems dirty
      cache line migration was not possible, so this was sufficient to
      guarantee coherency.
      
      On contemporary systems, cache coherence protocols permit (dirty) cache
      lines to migrate between CPUs as a result of speculation, prefetching,
      and other behaviours. To account for this, in ARMv8 data cache
      maintenance operations are broadcast and affect all data caches in the
      domain associated with the VA (i.e. ISH for kernel and user mappings).
      
      In __switch_to we ensure that tasks can be safely migrated in the middle
      of a maintenance sequence, using a dsb(ish) to ensure prior explicit
      memory accesses are observed and cache maintenance operations are
      completed before a task can be run on another CPU.
      
      Given the above, it is not necessary to disable preemption in
      copy_to_user_page. This patch removes the preempt_{disable,enable}
      calls, permitting preemption.
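
      A minimal before/after sketch of the resulting function
      (arch/arm64/mm/flush.c), assuming the body is otherwise just a
      memcpy() followed by the cache maintenance call; the exact code in
      mainline may differ slightly:

      void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                             unsigned long uaddr, void *dst, const void *src,
                             unsigned long len)
      {
              /*
               * Previously bracketed by preempt_disable()/preempt_enable().
               * With broadcast (ISH) cache maintenance and the dsb(ish) in
               * __switch_to, migrating mid-sequence is safe, so the
               * bracketing is no longer needed.
               */
              memcpy(dst, src, len);
              flush_ptrace_access(vma, page, uaddr, dst, len);
      }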
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      691b1e2e
    • arm64: consistently use p?d_set_huge · c661cb1c
      Mark Rutland authored
      
      
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
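
      A hedged before/after sketch of the substitution at the pmd level
      (the pud level is analogous), assuming locals pmd, phys and prot
      from the surrounding mapping loop:

      /* Before: open-coded section (block) entry in the early page
       * table code. */
      set_pmd(pmd, __pmd(phys | pgprot_val(mk_sect_prot(prot))));

      /* After: the helper generates and sets the block entry, explicitly
       * including the PMD_TYPE_SECT bits; on arm64 it cannot fail. */
      pmd_set_huge(pmd, phys, prot);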
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c661cb1c
    • arm64: kaslr: use callee saved register to preserve SCTLR across C call · d5e57437
      Ard Biesheuvel authored
      
      
      The KASLR code incorrectly expects the contents of x18 to be preserved
      across a call into C code, and uses it to stash the contents of SCTLR_EL1
      before enabling the MMU. If the MMU needs to be disabled again to create
      the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
      likely to crash the system if x18 has been clobbered by kasan_early_init()
      or kaslr_early_init(). So use x22 instead, which is not otherwise
      used in head.S.
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d5e57437
  2. Mar 11, 2016
    • arm64: kasan: Fix zero shadow mapping overriding kernel image shadow · 2776e0e8
      Catalin Marinas authored
      
      
      With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
      PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
      kimg_shadow_end is not page aligned (_end shifted by
      KASAN_SHADOW_SCALE_SHIFT), the edges of the kernel image shadow
      previously mapped via vmemmap_populate() may be overridden by
      subsequent calls to kasan_populate_zero_shadow(), leading to kernel
      panics like the one below:
      
      ------------------------------------------------------------------------------
      Unable to handle kernel paging request at virtual address fffffc100135068c
      pgd = fffffc8009ac0000
      [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
      Internal error: Oops: 9600004f [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
      Hardware name: Juno (DT)
      task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
      PC is at __memset+0x4c/0x200
      LR is at kasan_unpoison_shadow+0x34/0x50
      pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
      sp : fffffe0900203db0
      x29: fffffe0900203db0 x28: 0000000000000000
      x27: 0000000000000000 x26: 0000000000000000
      x25: fffffc80099b69d0 x24: 0000000000000001
      x23: 0000000000000000 x22: 0000000000002000
      x21: dffffc8000000000 x20: 1fffff9001350a8c
      x19: 0000000000002000 x18: 0000000000000008
      x17: 0000000000000147 x16: ffffffffffffffff
      x15: 79746972100e041d x14: ffffff0000000000
      x13: ffff000000000000 x12: 0000000000000000
      x11: 0101010101010101 x10: 1fffffc11c000000
      x9 : 0000000000000000 x8 : fffffc100135068c
      x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 0000000000000040 x4 : 0000000000000004
      x3 : fffffc100134f651 x2 : 0000000000000400
      x1 : 0000000000000000 x0 : fffffc100135068c
      
      Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
      Call trace:
      [<fffffc800846f1cc>] __memset+0x4c/0x200
      [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
      [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
      [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
      [<fffffc80089e1948>] kernel_init+0x10/0xf8
      [<fffffc8008093a00>] ret_from_fork+0x10/0x50
      ------------------------------------------------------------------------------
      
      This patch aligns kimg_shadow_start and kimg_shadow_end to
      SWAPPER_BLOCK_SIZE in all configurations.
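
      A minimal sketch of the alignment, assuming kimg_shadow_start and
      kimg_shadow_end are derived from the image bounds as described
      above:

      kimg_shadow_start = (u64)kasan_mem_to_shadow(_text);
      kimg_shadow_end = (u64)kasan_mem_to_shadow(_end);

      /*
       * vmemmap_populate() maps the shadow at SWAPPER_BLOCK_SIZE
       * granularity, so round the image shadow out to the same
       * granularity; otherwise a later kasan_populate_zero_shadow()
       * call can replace the edge entries.
       */
      kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
      kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);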
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      2776e0e8
    • arm64: kasan: Use actual memory node when populating the kernel image shadow · 2f76969f
      Catalin Marinas authored
      
      
      With the 16KB or 64KB page configurations, the generic
      vmemmap_populate() implementation warns on potential offnode
      page_structs via vmemmap_verify() because the arm64 kasan_init() passes
      NUMA_NO_NODE instead of the actual node for the kernel image memory.
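
      A hedged sketch of the change in kasan_init(); deriving the node
      via pfn_to_nid(virt_to_pfn(_text)) is an assumption here:

      /* Before: no node affinity, so vmemmap_verify() may warn about
       * off-node page_structs for the image shadow. */
      vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);

      /* After: populate the image shadow on the node that actually
       * backs the kernel image. */
      vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
                       pfn_to_nid(virt_to_pfn(_text)));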
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: James Morse <james.morse@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      2f76969f
    • arm64: Update PTE_RDONLY in set_pte_at() for PROT_NONE permission · fdc69e7d
      Catalin Marinas authored
      
      
      The set_pte_at() function must update the hardware PTE_RDONLY bit
      depending on the state of the PTE_WRITE and PTE_DIRTY bits of the given
      entry value. However, it currently only performs this for pte_valid()
      entries, ignoring PTE_PROT_NONE. The side-effect is that PROT_NONE
      mappings would not have the PTE_RDONLY bit set. Without
      CONFIG_ARM64_HW_AFDBM, this is not an issue since such PROT_NONE pages
      are not accessible anyway.
      
      With commit 2f4b829c ("arm64: Add support for hardware updates of
      the access and dirty pte bits"), the ptep_set_wrprotect() function was
      re-written to cope with automatic hardware updates of the dirty state.
      As an optimisation, only PTE_RDONLY is checked to assess the "dirty"
      status. Since set_pte_at() does not set this bit for PROT_NONE mappings,
      such pages may be considered "dirty" as a result of
      ptep_set_wrprotect().
      
      This patch updates the pte_valid() check to pte_present() in
      set_pte_at(). It also adds PTE_PROT_NONE to the swap entry bits comment.
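
      A hedged sketch of the check in set_pte_at()
      (arch/arm64/include/asm/pgtable.h); the exact dirty/write helpers
      used by mainline may differ:

      /*
       * Was: if (pte_valid(pte)) - PROT_NONE entries were skipped,
       * leaving PTE_RDONLY clear, so ptep_set_wrprotect() could later
       * misread them as dirty. pte_present() is true for both PTE_VALID
       * and PTE_PROT_NONE entries.
       */
      if (pte_present(pte)) {
              if (pte_write(pte) && pte_dirty(pte))
                      pte_val(pte) &= ~PTE_RDONLY;  /* writable+dirty */
              else
                      pte_val(pte) |= PTE_RDONLY;   /* enforce read-only */
      }
      set_pte(ptep, pte);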
      
      Fixes: 2f4b829c ("arm64: Add support for hardware updates of the access and dirty pte bits")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
      Tested-by: Ganapatrao Kulkarni <gkulkarni@cavium.com>
      Cc: <stable@vger.kernel.org>
      fdc69e7d