  1. Jul 06, 2017
    • mm/hugetlb: add size parameter to huge_pte_offset() · 7868a208
      Punit Agrawal authored
      A poisoned or migrated hugepage is stored as a swap entry in the page
      tables.  On architectures that support hugepages consisting of
      contiguous page table entries (such as on arm64) this leads to ambiguity
      in determining the page table entry to return in huge_pte_offset() when
      a poisoned entry is encountered.
      
      Let's remove the ambiguity by adding a size parameter to convey
      additional information about the requested address.  Also fixup the
      definition/usage of huge_pte_offset() throughout the tree.
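      
      Schematically, the interface change looks like this (a sketch of the
      new prototype as described above, not the full diff):
      
        /* Before: huge_pte_offset() had to guess the mapping size. */
        pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr);
      
        /* After: callers convey the expected huge page size explicitly. */
        pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
                               unsigned long sz);
      
        /* A typical caller now passes the size from the hstate, e.g.: */
        /* pte = huge_pte_offset(mm, address, huge_page_size(h)); */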
      
      Link: http://lkml.kernel.org/r/20170522133604.11392-4-punit.agrawal@arm.com
      
      
      Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com> (odd fixer:METAG ARCHITECTURE)
      Cc: Ralf Baechle <ralf@linux-mips.org> (supporter:MIPS)
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc/mm/hugetlb: add support for 1G huge pages · 40692eb5
      Aneesh Kumar K.V authored
      POWER9 supports hugepages of size 2M and 1G in radix MMU mode.  This
      patch enables the use of the 1G page size for hugetlbfs and updates
      the helpers so that 1G pages can be allocated at runtime.
      
      We still don't enable the 1G page size on DD1 versions, to avoid
      having to apply the workaround described in commit 6d3a0379
      ("powerpc/mm: Add radix__tlb_flush_pte_p9_dd1()").
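      
      As a hedged userspace illustration (not part of the patch), mapping a
      1G hugetlb page at runtime looks like this, assuming 1G pages have
      been reserved via the generic hugetlb sysfs knob
      /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages; the
      fallback #defines mirror the kernel uapi values:
      
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
      
        #ifndef MAP_HUGETLB
        #define MAP_HUGETLB     0x40000
        #endif
        #ifndef MAP_HUGE_SHIFT
        #define MAP_HUGE_SHIFT  26
        #endif
        #ifndef MAP_HUGE_1GB
        #define MAP_HUGE_1GB    (30 << MAP_HUGE_SHIFT)
        #endif
      
        int main(void)
        {
                size_t len = 1UL << 30; /* one 1G page */
                void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS |
                               MAP_HUGETLB | MAP_HUGE_1GB, -1, 0);
      
                if (p == MAP_FAILED) {
                        perror("mmap(MAP_HUGE_1GB)");
                        return 1;
                }
                memset(p, 0, len);      /* touch it to fault the page in */
                munmap(p, len);
                return 0;
        }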
      
      Link: http://lkml.kernel.org/r/1494995292-4443-2-git-send-email-aneesh.kumar@linux.vnet.ibm.com
      
      
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc/hugetlb: enable hugetlb migration for ppc64 · f7fb506f
      Aneesh Kumar K.V authored
      Link: http://lkml.kernel.org/r/1494926612-23928-10-git-send-email-aneesh.kumar@linux.vnet.ibm.com
      
      
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Mike Kravetz <kravetz@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc/mm/hugetlb: remove follow_huge_addr for powerpc · 28c05716
      Aneesh Kumar K.V authored
      With generic code now handling hugetlb entries at the pgd level and
      also supporting the hugepage directory format, we can remove the
      powerpc-specific follow_huge_addr() implementation.
      
      Link: http://lkml.kernel.org/r/1494926612-23928-9-git-send-email-aneesh.kumar@linux.vnet.ibm.com
      
      
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Mike Kravetz <kravetz@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc/hugetlb: add follow_huge_pd implementation for ppc64 · 50791e6d
      Aneesh Kumar K.V authored
      Link: http://lkml.kernel.org/r/1494926612-23928-8-git-send-email-aneesh.kumar@linux.vnet.ibm.com
      
      
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Mike Kravetz <kravetz@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: replace for_device by want_memblock in arch_add_memory · 3d79a728
      Michal Hocko authored
      arch_add_memory gets a for_device argument which controls whether we
      want to create memblocks for the created memory sections.  Simplify the
      logic by saying directly whether we want memblocks, rather than going
      through a pointless negation.  This also makes the API easier to
      understand, because it is clear what we want rather than an opaque
      for_device flag which could mean anything.
      
      This shouldn't introduce any functional change.
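      
      A sketch of the resulting signature change (the commit touches every
      arch implementation; only the flag rename is shown here):
      
        /* Before: double negation via a "for_device" flag. */
        int arch_add_memory(int nid, u64 start, u64 size, bool for_device);
      
        /* After: say directly whether a memblock is wanted. */
        int arch_add_memory(int nid, u64 start, u64 size, bool want_memblock);
      
        /* Callers that passed for_device == true now pass
         * want_memblock == false, and vice versa. */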
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-13-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: do not associate hotadded memory to zones until online · f1dd2cd1
      Michal Hocko authored
      The current memory hotplug implementation relies on having all the
      struct pages associated with a zone/node during the physical hotplug
      phase (arch_add_memory->__add_pages->__add_section->__add_zone).  In the
      vast majority of cases this means that they are added to ZONE_NORMAL.
      This has been so since 9d99aaa3 ("[PATCH] x86_64: Support memory
      hotadd without sparsemem") and it wasn't a big deal back then because
      movable onlining didn't exist yet.
      
      Much later memory hotplug wanted to (ab)use ZONE_MOVABLE for movable
      onlining in 511c2aba ("mm, memory-hotplug: dynamic configure movable
      memory and portion memory") and then things got more complicated.
      Rather than reconsidering the zone association, which was no longer
      needed (because memory hotplug already depended on SPARSEMEM), a
      convoluted semantic of zone shifting was developed.  Only the
      currently last memblock, or the one adjacent to zone_movable, can be
      onlined movable.  This essentially means that the online type changes
      as new memblocks are added.
      
      Let's simulate memory hot-online manually:
        $ echo 0x100000000 > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory32/valid_zones
        Normal Movable
      
        $ echo $((0x100000000+(128<<20))) > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      
        $ echo $((0x100000000+2*(128<<20))) > /sys/devices/system/memory/probe
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal
        /sys/devices/system/memory/memory34/valid_zones:Normal Movable
      
        $ echo online_movable > /sys/devices/system/memory/memory34/state
        $ grep . /sys/devices/system/memory/memory3?/valid_zones
        /sys/devices/system/memory/memory32/valid_zones:Normal
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable Normal
      
      This is an awkward semantic because a udev event is sent as soon as the
      block is onlined, and a udev handler might want to online it based on
      some policy (e.g. association with a node), but it will inherently race
      with new blocks showing up.
      
      This patch changes the physical online phase to not associate pages with
      any zone at all.  All the pages are just marked reserved and wait for
      the onlining phase to be associated with a zone as per the online
      request.  There are only two requirements:
      
      	- existing ZONE_NORMAL and ZONE_MOVABLE cannot overlap
      
      	- ZONE_NORMAL precedes ZONE_MOVABLE in physical addresses
      
      The latter is not an inherent requirement and can be changed in the
      future; it preserves the current behavior and makes the code slightly
      simpler.
      
      This means that the same physical online steps as above will lead to the
      following state:
      
        Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Normal Movable
      
        /sys/devices/system/memory/memory32/valid_zones:Normal Movable
        /sys/devices/system/memory/memory33/valid_zones:Normal Movable
        /sys/devices/system/memory/memory34/valid_zones:Movable
      
      Implementation:
      The current move_pfn_range is reimplemented to check the above
      requirements (allow_online_pfn_range) and then update the respective
      zone (move_pfn_range_to_zone) and the pgdat, and link all the pages in
      the pfn range with the zone/node.  __add_pages is updated to not require
      the zone and only initializes sections in the range.  This allowed the
      arch_add_memory code to be simplified (s390 could get rid of quite a
      bit of code).
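      
      A condensed sketch of that onlining flow, using the helpers named
      above (locking, error paths and default-zone selection elided; the
      real online_pages() does more work):
      
        static int online_range_sketch(int nid, unsigned long start_pfn,
                                       unsigned long nr_pages, int online_type)
        {
                struct zone *zone;
      
                /* 1. Enforce the two requirements listed above. */
                if (!allow_online_pfn_range(nid, start_pfn, nr_pages,
                                            online_type))
                        return -EINVAL;
      
                /* 2. The zone now comes from the online request... */
                zone = &NODE_DATA(nid)->node_zones[
                        online_type == MMOP_ONLINE_MOVABLE ? ZONE_MOVABLE
                                                           : ZONE_NORMAL];
      
                /* 3. ...and pages are linked to it only at this point. */
                move_pfn_range_to_zone(zone, start_pfn, nr_pages);
                return 0;
        }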
      
      devm_memremap_pages is the only user of arch_add_memory which relies on
      the zone association, because it hooks into memory hotplug only half
      way.  It uses the association to put the new memory in ZONE_DEVICE but
      doesn't allow it to be {on,off}lined via sysfs.  This means that this
      particular code path has to call move_pfn_range_to_zone explicitly.
      
      The original zone shifting code is kept in place and will be removed in
      the follow-up patch, for easier review.
      
      Please note that this patch also changes the original behavior:
      offlining a memory block adjacent to another zone (Normal vs. Movable)
      used to allow changing its movable type.  This will be handled later.
      
      [richard.weiyang@gmail.com: simplify zone_intersects()]
        Link: http://lkml.kernel.org/r/20170616092335.5177-1-richard.weiyang@gmail.com
      [richard.weiyang@gmail.com: remove duplicate call for set_page_links]
        Link: http://lkml.kernel.org/r/20170616092335.5177-2-richard.weiyang@gmail.com
      [akpm@linux-foundation.org: remove unused local `i']
      Link: http://lkml.kernel.org/r/20170515085827.16474-12-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Tested-by: Reza Arbab <arbab@linux.vnet.ibm.com>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # For s390 bits
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, memory_hotplug: get rid of is_zone_device_section · 1b862aec
      Michal Hocko authored
      Device memory hotplug hooks into regular memory hotplug only half way.
      It needs memory sections to track struct pages, but there is no
      need/desire to associate those sections with memory blocks and export
      them to userspace via sysfs, because they cannot be onlined anyway.
      
      This is currently expressed by the for_device argument to
      arch_add_memory, which then makes sure to associate the given memory
      range with ZONE_DEVICE.  register_new_memory then relies on
      is_zone_device_section to distinguish this special memory hotplug from
      the regular one.  While this works now, later patches in this series
      want to move __add_zone outside of the arch_add_memory path, so we
      have to come up with something else.
      
      Pass want_memblock down the __add_pages path and use it to control
      whether the section->memblock association should be done.
      arch_add_memory then trivially wants a memblock for everything but
      for_device hotplug.
      
      remove_memory_section doesn't need is_zone_device_section either.  We
      can simply skip all the memblock-specific cleanup if there is no
      memblock for the given section.
      
      This shouldn't introduce any functional change.
      
      Link: http://lkml.kernel.org/r/20170515085827.16474-5-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Dan Williams <dan.j.williams@intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Tobias Regnery <tobias.regnery@gmail.com>
      Cc: Toshi Kani <toshi.kani@hpe.com>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. Jul 04, 2017
    • powerpc/Kconfig: Enable STRICT_KERNEL_RWX for some configs · 1e0fc9d1
      Balbir Singh authored
      
      
      All code that patches kernel text has been moved over to using
      patch_instruction() and patch_instruction() is able to cope with the
      kernel text being read only.
      
      The linker script has been updated to ensure the read only data ends
      on a large page boundary, so it and the preceding kernel text can be
      marked R_X. We also have implementations of mark_rodata_ro() for Hash
      and Radix MMU modes.
      
      There are some corner-cases missing when the kernel is built
      relocatable, so for now make it depend on !RELOCATABLE.
      
      There's also a temporary workaround to depend on !HIBERNATION to avoid
      a build failure; that will be removed once we've merged with the PM
      tree.
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Make it depend on !RELOCATABLE, munge change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Implement STRICT_RWX/mark_rodata_ro() for Radix · 7614ff32
      Balbir Singh authored
      
      
      The Radix linear mapping code (create_physical_mapping()) tries to use
      the largest page size it can at each step. Currently the only reason
      it steps down to a smaller page size is if the start addr is
      unaligned (never happens in practice), or the end of memory is not
      aligned to a huge page boundary.
      
      To support STRICT_RWX we need to break the mapping at __init_begin,
      so that the text and rodata prior to that can be marked R_X and the
      regular pages after can be marked RW.
      
      Having done that we can now implement mark_rodata_ro() for Radix,
      knowing that we won't need to split any mappings.
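      
      The heart of the change is small; roughly this, treating
      radix__change_memory_range() as clearing the given PTE bit over the
      range (a sketch, not the literal hunk):
      
        void radix__mark_rodata_ro(void)
        {
                unsigned long start = (unsigned long)_stext;
                unsigned long end = (unsigned long)__init_begin;
      
                /* Clear _PAGE_WRITE on the linear map from the start of
                 * text up to __init_begin; RW pages resume after that. */
                radix__change_memory_range(start, end, _PAGE_WRITE);
        }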
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Split down to PAGE_SIZE, not 2MB, rewrite change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/hash: Implement mark_rodata_ro() for hash · cd65d697
      Balbir Singh authored
      
      
      With hash we update the bolted pte to mark it read-only.  We rely
      on MMU_FTR_KERNEL_RO to generate the correct permissions for
      read-only text.  The radix implementation just prints a warning in
      this case.
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Make the warning louder when we don't have MMU_FTR_KERNEL_RO]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  3. Jul 03, 2017
    • powerpc/vmlinux.lds: Align __init_begin to 16M · d924cc3f
      Balbir Singh authored
      
      
      For CONFIG_STRICT_KERNEL_RWX align __init_begin to 16M.  We use 16M
      since it's the larger of 2M on radix and 16M on hash for our linear
      mapping.  The plan is to have .text, .rodata and everything up to
      __init_begin marked as RX.  Note we still have executable read-only
      data.  We could further align rodata to another 16M boundary.  I've
      kept text plus rodata as read-only-executable as a trade-off against
      making text read-only-executable and rodata read-only.
      
      We don't use multiple PT_LOAD entries in PHDRS because we are not sure
      all bootloaders support them.  This patch keeps the PHDRS in
      vmlinux.lds.S the same as they were, with just one PT_LOAD for all of
      the kernel, marked RWX (7).
      
      mpe: What this means is the added alignment bloats the resulting
      binary on disk; a powernv kernel goes from 17M to 22M.
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/lib/code-patching: Use alternate map for patch_instruction() · 37bc3e5f
      Balbir Singh authored
      
      
      This patch creates the window using text_poke_area, allocated via
      get_vm_area().  text_poke_area is per CPU to avoid locking.  The
      text_poke_area for each CPU is set up using a late_initcall; prior to
      the setup of these alternate mapping areas, we continue to use direct
      writes to change/modify kernel text.  The ability to use alternate
      mappings to write to kernel text gives us the freedom to then make
      the text read-only and implement CONFIG_STRICT_KERNEL_RWX.
      
      This code is CPU hotplug aware, to ensure that we have mappings for
      any new CPUs as they come online and tear down mappings for any CPUs
      that go offline.
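      
      A minimal sketch of the patching path; map_alias()/unmap_alias() are
      hypothetical stand-ins for the commit's map/unmap helpers, and the
      real code also disables interrupts around the write:
      
        /* One writable alias page per CPU, set up from a late_initcall. */
        static DEFINE_PER_CPU(struct vm_struct *, text_poke_area);
      
        static int patch_instruction_sketch(unsigned int *addr,
                                            unsigned int instr)
        {
                void *poke = this_cpu_read(text_poke_area)->addr;
                unsigned int *alias;
      
                if (map_alias(poke, addr))  /* hypothetical: map page RW */
                        return -EFAULT;
      
                alias = poke + offset_in_page(addr);
                *alias = instr;             /* write via the alias... */
                flush_icache_range((unsigned long)addr,
                                   (unsigned long)addr + sizeof(instr));
      
                unmap_alias(poke);          /* ...and tear it down again */
                return 0;
        }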
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/xmon: Add patch_instruction() support for xmon · efe4fbb1
      Balbir Singh authored
      
      
      Move xmon from mwrite() to patch_instruction() for breakpoint
      addition and removal.
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kprobes/optprobes: Use patch_instruction() · f3eca956
      Balbir Singh authored
      
      
      So that we can implement STRICT_RWX, use patch_instruction() in
      optprobes.
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/kprobes: Move kprobes over to patch_instruction() · d07df82c
      Balbir Singh authored
      
      
      arch_arm/disarm_probe() use direct assignment for copying
      instructions; replace that with patch_instruction().  We don't need to
      call flush_icache_range() because patch_instruction() does it for us.
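      
      The shape of the change for arming a probe, sketched with _old/_new
      suffixes added here purely for contrast:
      
        /* Before: a direct store plus an explicit icache flush. */
        void arch_arm_kprobe_old(struct kprobe *p)
        {
                *p->addr = BREAKPOINT_INSTRUCTION;
                flush_icache_range((unsigned long)p->addr,
                                   (unsigned long)p->addr +
                                   sizeof(kprobe_opcode_t));
        }
      
        /* After: patch_instruction() performs the write (via the
         * alternate mapping once available) and the flush for us. */
        void arch_arm_kprobe_new(struct kprobe *p)
        {
                patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
        }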
      
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Fix execute permissions for interrupt_vectors · 7f6d498e
      Balbir Singh authored
      
      
      Commit 9abcc981 ("powerpc/mm/radix: Only add X for pages
      overlapping kernel text") changed the linear mapping on Radix to only
      mark the kernel text executable.
      
      However if the kernel is run relocated, for example as a kdump kernel,
      then the exception vectors are split from the kernel text, ie. they
      remain at real address 0.
      
      We tend to get away with it, because the kernel itself will usually be
      below 1G, which means the 1G page at 0-1G is marked executable and
      everything works OK. However if the kernel is loaded above 1G, or the
      system has less than 1G in total (meaning we can't use a 1G page),
      then the exception vectors will not be marked executable and the
      kernel will fail to boot.
      
      Fix it by also checking if the address range overlaps the exception
      vectors when deciding if we should add PAGE_KERNEL_X.
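      
      Condensed, the permission decision becomes something like this (a
      sketch; the commit folds the new test into the existing check):
      
        static pgprot_t linear_map_prot(unsigned long vaddr, unsigned long end)
        {
                /* Kernel text and the exception vectors must be executable. */
                if (overlaps_kernel_text(vaddr, end) ||
                    overlaps_interrupt_vector_text(vaddr, end))
                        return PAGE_KERNEL_X;
      
                return PAGE_KERNEL;     /* everything else stays non-exec */
        }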
      
      Fixes: 9abcc981 ("powerpc/mm/radix: Only add X for pages overlapping kernel text")
      Cc: stable@vger.kernel.org # v4.7+
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Combine with the existing check, rewrite change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/pseries: Fix passing of pp0 in updatepp() and updateboltedpp() · e71ff982
      Balbir Singh authored
      
      
      Once upon a time there were only two PP (page protection) bits. In ISA
      2.03 an additional PP bit was added, but because of the layout of the
      HPTE it could not be made contiguous with the existing PP bits.
      
      The result is that we now have three PP bits, named pp0, pp1, pp2,
      where pp0 occupies bit 63 of dword 1 of the HPTE and pp1 and pp2
      occupy bits 1 and 0 respectively. Until recently Linux hasn't used
      pp0, however with the addition of _PAGE_KERNEL_RO we started using it.
      
      The problem arises in the LPAR code, where we need to translate the PP
      bits into the argument for the H_PROTECT hypercall. Currently the code
      only passes bits 0-2 of newpp, which covers pp1, pp2 and N (no
      execute), meaning pp0 is not passed to the hypervisor at all.
      
      We can't simply pass it through in bit 63, as that would collide with a
      different field in the flags argument, as defined in PAPR. Instead we
      have to shift it down to bit 8 (IBM bit 55).
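      
      As a sketch of the bit manipulation (HPTE_R_PP0 being bit 63, i.e.
      0x8000000000000000UL, so a right shift by 55 lands it at bit 8):
      
        /* Fold pp0 into the H_PROTECT flags argument: bit 63 -> bit 8. */
        static unsigned long pp_to_hcall_flags(unsigned long newpp)
        {
                return (newpp & 7) |                    /* pp1, pp2, N */
                       ((newpp & HPTE_R_PP0) >> 55);   /* pp0 at bit 8 */
        }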
      
      Fixes: e58e87ad ("powerpc/mm: Update _PAGE_KERNEL_RO")
      Cc: stable@vger.kernel.org # v4.7+
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Simplify the test, rework change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Blacklist rtas entry/exit from kprobes · 90653a84
      Naveen N. Rao authored
      
      
      We can't take traps with relocation off, so blacklist enter_rtas() and
      rtas_return_loc().  However, instead of blacklisting all of
      enter_rtas(), introduce a new symbol, __enter_rtas, marking the point
      from which we can't take a trap, and blacklist that.
      
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Blacklist functions invoked on a trap · 15770a13
      Naveen N. Rao authored
      
      
      Blacklist all functions involved while handling a trap. We:
      - convert some of the symbols into private symbols, and
      - blacklist most functions involved while handling a trap.
      
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Un-blacklist system_call() from kprobes · 3639d661
      Naveen N. Rao authored
      
      
      It is actually safe to probe system_call() in entry_64.S, but only
      until we unset MSR_RI.  To allow this, add a new symbol,
      system_call_exit(), after the mtmsrd and blacklist that.
      
      Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Move system_call() symbol to just after setting MSR_EE · 266de3a8
      Naveen N. Rao authored
      
      
      It is common to get a PMU interrupt right after the mtmsr instruction that
      enables interrupts. Due to this, the stack trace profile gets needlessly split
      across system_call_common() and system_call().
      
      Previously, the system_call() symbol was placed where it is to hide a
      few earlier symbols which have since been made private or removed
      entirely.
      
      So, let's move system_call() slightly higher up, right after the mtmsr
      instruction that enables interrupts. Convert existing references to
      system_call to a local syscall symbol.
      
      Suggested-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Blacklist system_call() and system_call_common() from kprobes · cf7d6fb0
      Naveen N. Rao authored
      
      
      Convert some of the symbols into private symbols and blacklist
      system_call_common() and system_call() from kprobes.  We can't take a
      trap at parts of these functions as either MSR_RI is unset or the
      kernel stack pointer is not yet set up.
      
      Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      [mpe: Don't convert system_call_common to _GLOBAL()]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: Convert .L__replay_interrupt_return to a local label · 9d6c4523
      Naveen N. Rao authored
      
      
      Commit b48bbb82 ("powerpc/64s: Don't unbalance the return branch
      predictor in __replay_interrupt()") introduced the
      __replay_interrupt_return symbol with a '.L' prefix in hopes of keeping
      it private.  However, due to the use of LOAD_REG_ADDR(), the assembler
      kept this symbol visible.  Fix this by using the local label '1'
      instead.
      
      Fixes: b48bbb82 ("powerpc/64s: Don't unbalance the return branch predictor in __replay_interrupt()")
      Suggested-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc64/elfv1: Only dereference function descriptor for non-text symbols · 83e840c7
      Naveen N. Rao authored
      
      
      Currently, we assume that the function pointer we receive in
      ppc_function_entry() points to a function descriptor. However, this is
      not always the case. In particular, assembly symbols without the right
      annotation do not have an associated function descriptor. Some of these
      symbols are added to the kprobe blacklist using _ASM_NOKPROBE_SYMBOL().
      
      When such addresses are subsequently processed through
      arch_deref_entry_point() in populate_kprobe_blacklist(), we see the
      below errors during bootup:
          [    0.663963] Failed to find blacklist at 7d9b02a648029b6c
          [    0.663970] Failed to find blacklist at a14d03d0394a0001
          [    0.663972] Failed to find blacklist at 7d5302a6f94d0388
          [    0.663973] Failed to find blacklist at 48027d11e8610178
          [    0.663974] Failed to find blacklist at f8010070f8410080
          [    0.663976] Failed to find blacklist at 386100704801f89d
          [    0.663977] Failed to find blacklist at 7d5302a6f94d00b0
      
      Fix this by checking if the function pointer we receive in
      ppc_function_entry() already points to kernel text. If so, we just
      return it as is. If not, we assume that this is a function descriptor
      and proceed to dereference it.
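      
      The resulting logic is roughly this (a sketch of the ELFv1 path):
      
        static unsigned long ppc_function_entry_sketch(void *func)
        {
                /* Already a text address: not a descriptor, use as-is. */
                if (kernel_text_address((unsigned long)func))
                        return (unsigned long)func;
      
                /* Otherwise assume an ELFv1 function descriptor and
                 * return the entry point it holds. */
                return ((func_descr_t *)func)->entry;
        }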
      
      Suggested-by: Nicholas Piggin <npiggin@gmail.com>
      Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • cxl: Export library to support IBM XSL · 3ced8d73
      Christophe Lombard authored
      
      
      This patch exports an in-kernel 'library' API which can be called by
      other drivers to help with interacting with an IBM XSL on a POWER9
      system.
      
      The XSL (Translation Service Layer) is a stripped down version of the
      PSL (Power Service Layer) used in some cards such as the Mellanox CX5.
      Like the PSL, it implements the CAIA architecture, but has a number
      of differences, mostly in its implementation-dependent registers.
      
      The XSL also uses a special DMA cxl mode, which uses a slightly
      different init sequence for the CAPP and PHB.
      
      Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
      Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
      Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/dts: Use #include "..." to include local DT · 5405c92b
      Masahiro Yamada authored
      
      
      Most DT files in PowerPC use #include "..." to make the pre-processor
      include DTs in the same directory, but we have 3 exceptional files
      that use #include <...> for that.
      
      Fix them so we can remove the -I$(srctree)/arch/$(SRCARCH)/boot/dts
      path from dtc_cpp_flags.
      
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • KVM: PPC: Book3S: Fix typo in XICS-on-XIVE state saving code · 00c14757
      Paul Mackerras authored
      
      
      This fixes a typo where the wrong loop index was used to index
      the kvmppc_xive_vcpu.queues[] array in xive_pre_save_scan().
      The variable i contains the vcpu number; we need to index queues[]
      using j, which iterates from 0 to KVMPPC_XIVE_Q_COUNT-1.
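      
      The shape of the bug and fix, sketched (i is the vcpu loop index, j
      the per-vcpu queue index):
      
        kvm_for_each_vcpu(i, vcpu, kvm) {
                struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu;
      
                for (j = 0; j < KVMPPC_XIVE_Q_COUNT; j++) {
                        /* Buggy: &xc->queues[i], indexed by vcpu number. */
                        xive_pre_save_queue(xive, &xc->queues[j]);
                }
        }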
      
      The effect of this bug is that things that save the interrupt
      controller state, such as "virsh dump", on a VM with more than
      8 vCPUs, result in xive_pre_save_queue() getting called on a
      bogus queue structure, usually resulting in a crash like this:
      
      [  501.821107] Unable to handle kernel paging request for data at address 0x00000084
      [  501.821212] Faulting instruction address: 0xc008000004c7c6f8
      [  501.821234] Oops: Kernel access of bad area, sig: 11 [#1]
      [  501.821305] SMP NR_CPUS=1024
      [  501.821307] NUMA
      [  501.821376] PowerNV
      [  501.821470] Modules linked in: vhost_net vhost tap xt_CHECKSUM ipt_MASQUERADE nf_nat_masquerade_ipv4 ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack libcrc32c iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables ses enclosure scsi_transport_sas ipmi_powernv ipmi_devintf ipmi_msghandler powernv_op_panel kvm_hv nfsd auth_rpcgss oid_registry nfs_acl lockd grace sunrpc kvm tg3 ptp pps_core
      [  501.822477] CPU: 3 PID: 3934 Comm: live_migration Not tainted 4.11.0-4.git8caa70f.el7.centos.ppc64le #1
      [  501.822633] task: c0000003f9e3ae80 task.stack: c0000003f9ed4000
      [  501.822745] NIP: c008000004c7c6f8 LR: c008000004c7c628 CTR: 0000000030058018
      [  501.822877] REGS: c0000003f9ed7980 TRAP: 0300   Not tainted  (4.11.0-4.git8caa70f.el7.centos.ppc64le)
      [  501.823030] MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE>
      [  501.823047]   CR: 28022244  XER: 00000000
      [  501.823203] CFAR: c008000004c7c77c DAR: 0000000000000084 DSISR: 40000000 SOFTE: 1
      [  501.823203] GPR00: c008000004c7c628 c0000003f9ed7c00 c008000004c91450 00000000000000ff
      [  501.823203] GPR04: c0000003f5580000 c0000003f559bf98 9000000000009033 0000000000000000
      [  501.823203] GPR08: 0000000000000084 0000000000000000 00000000000001e0 9000000000001003
      [  501.823203] GPR12: c00000000008a7d0 c00000000fdc1b00 000000000a9a0000 0000000000000000
      [  501.823203] GPR16: 00000000402954e8 000000000a9a0000 0000000000000004 0000000000000000
      [  501.823203] GPR20: 0000000000000008 c000000002e8f180 c000000002e8f1e0 0000000000000001
      [  501.823203] GPR24: 0000000000000008 c0000003f5580008 c0000003f4564018 c000000002e8f1e8
      [  501.823203] GPR28: 00003ff6e58bdc28 c0000003f4564000 0000000000000000 0000000000000000
      [  501.825441] NIP [c008000004c7c6f8] xive_get_attr+0x3b8/0x5b0 [kvm]
      [  501.825671] LR [c008000004c7c628] xive_get_attr+0x2e8/0x5b0 [kvm]
      [  501.825887] Call Trace:
      [  501.825991] [c0000003f9ed7c00] [c008000004c7c628] xive_get_attr+0x2e8/0x5b0 [kvm] (unreliable)
      [  501.826312] [c0000003f9ed7cd0] [c008000004c62ec4] kvm_device_ioctl_attr+0x64/0xa0 [kvm]
      [  501.826581] [c0000003f9ed7d20] [c008000004c62fcc] kvm_device_ioctl+0xcc/0xf0 [kvm]
      [  501.826843] [c0000003f9ed7d40] [c000000000350c70] do_vfs_ioctl+0xd0/0x8c0
      [  501.827060] [c0000003f9ed7de0] [c000000000351534] SyS_ioctl+0xd4/0xf0
      [  501.827282] [c0000003f9ed7e30] [c00000000000b8e0] system_call+0x38/0xfc
      [  501.827496] Instruction dump:
      [  501.827632] 419e0078 3b760008 e9160008 83fb000c 83db0010 80fb0008 2f280000 60000000
      [  501.827901] 60000000 60420000 419a0050 7be91764 <7d284c2c> 552a0ffe 7f8af040 419e003c
      [  501.828176] ---[ end trace 2d0529a5bbbbafed ]---
      
      Cc: stable@vger.kernel.org
      Fixes: 5af50993 ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>