  Mar 12, 2021
    • KVM: arm64: Fix exclusive limit for IPA size · 262b003d
      Marc Zyngier authored
      
      
      When registering a memslot, we check the size and location of that
      memslot against the IPA size to ensure that we can provide guest
      access to the whole of the memory.
      
      Unfortunately, this check rejects memslots that end up at the exact
      limit of the addressing capability for a given IPA size. For example,
      it refuses the creation of a 2GB memslot at 0x80000000 with a 32bit
      IPA space.
      
      Fix it by relaxing the check to accept a memslot reaching the
      limit of the IPA space.
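
      The relaxed check amounts to an off-by-one fix in the memslot bounds
      test. Roughly (a sketch only; the surrounding function and exact
      field names are assumed from the arm64 stage-2 MMU code):

        /* Reject the memslot only if it extends beyond the IPA space. */
        if ((memslot->base_gfn + memslot->npages) >
            (kvm_phys_size(kvm) >> PAGE_SHIFT))
                return -EFAULT;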
      
      Fixes: c3058d5d ("arm/arm64: KVM: Ensure memslots are within KVM_PHYS_SIZE")
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Cc: stable@vger.kernel.org
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Link: https://lore.kernel.org/r/20210311100016.3830038-3-maz@kernel.org
    • KVM: arm64: Reject VM creation when the default IPA size is unsupported · 7d717558
      Marc Zyngier authored
      
      
      KVM/arm64 has forever used a 40bit default IPA space, partially
      due to its 32bit heritage (where the only choice is 40bit).
      
      However, there are implementations in the wild that have a *cough*
      much smaller *cough* IPA space, which leads to a misprogramming of
      VTCR_EL2, and a guest that is stuck on its first memory access
      if userspace dares to ask for the default IPA setting (which most
      VMMs do).
      
      Instead, bluntly reject the creation of such a VM, as we can't
      satisfy the requirements from userspace (with a one-off warning).
      Also clarify the boot warning, and document that the VM creation
      will fail when an unsupported IPA size is provided.
      
      Although this is an ABI change, it doesn't really change much
      for userspace:
      
      - the guest couldn't run before this change, but no error was
        returned. At least userspace now knows what is happening.
      
      - a memory slot that was accepted because it did fit the default
        IPA space now doesn't even get a chance to be registered.
      
      The only thing left to do is to convince userspace to actually use
      the IPA space setting instead of relying on the antiquated default.
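
      A minimal sketch of the resulting check on the VM creation path
      (kvm_ipa_limit and KVM_PHYS_SHIFT are the existing arm64 KVM symbols;
      the exact structure and warning text here are assumptions):

        phys_shift = KVM_VM_TYPE_ARM_IPA_SIZE(type);
        if (!phys_shift)
                phys_shift = KVM_PHYS_SHIFT;    /* the 40bit default */

        /* Refuse to create a VM the hardware cannot provide stage-2 for. */
        if (phys_shift > kvm_ipa_limit) {
                pr_warn_once("%s attempted an unsupported IPA size\n",
                             current->comm);
                return -EINVAL;
        }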
      
      Fixes: 233a7cb2 ("kvm: arm64: Allow tuning the physical address size for VM")
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Cc: stable@vger.kernel.org
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Link: https://lore.kernel.org/r/20210311100016.3830038-2-maz@kernel.org
  Mar 08, 2021
    • arm64/mm: Reorganize pfn_valid() · 093bbe21
      Anshuman Khandual authored
      
      
      There are multiple instances of pfn_to_section_nr() and __pfn_to_section()
      when CONFIG_SPARSEMEM is enabled. This can be optimized if the memory
      section is fetched earlier and reused. While there, also replace the
      open coded PFN and ADDR conversions with the PFN_PHYS() and PHYS_PFN()
      helpers and add a comment. This does not cause any functional change.
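
      A condensed sketch of the reorganized shape of pfn_valid() after this
      change (simplified from arm64's mm/init.c; details may differ):

        int pfn_valid(unsigned long pfn)
        {
                phys_addr_t addr = PFN_PHYS(pfn);

                /*
                 * Ensure the upper PAGE_SHIFT bits are clear in the
                 * pfn. Else it might lead to false positives when some
                 * of the upper bits are set, but the lower bits match a
                 * valid pfn.
                 */
                if (PHYS_PFN(addr) != pfn)
                        return 0;

        #ifdef CONFIG_SPARSEMEM
                if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                        return 0;

                /* Fetch the memory section once; reused by later checks. */
                if (!valid_section(__pfn_to_section(pfn)))
                        return 0;
        #endif
                return memblock_is_map_memory(addr);
        }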
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/1614921898-4099-3-git-send-email-anshuman.khandual@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory · eeb0753b
      Anshuman Khandual authored
      
      
      pfn_valid() validates a pfn, but what it basically checks for is a
      valid struct page backing that pfn. It should always return true for
      memory ranges backed by a struct page mapping. But currently pfn_valid()
      fails for all ZONE_DEVICE based memory types, even though they do have
      struct page mappings.
      
      pfn_valid() asserts that there is a memblock entry for a given pfn
      without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE
      based memory is that it does not have memblock entries. Hence
      memblock_is_map_memory() will invariably fail via memblock_search() for
      a ZONE_DEVICE based address, which in turn makes pfn_valid() fail. This
      is wrong: memblock_is_map_memory() needs to be skipped for such memory
      ranges. As ZONE_DEVICE memory gets hotplugged into the system via
      memremap_pages() called from a driver, its memory sections will not
      have SECTION_IS_EARLY set.
      
      Normal hotplug memory will never have MEMBLOCK_NOMAP set in its
      memblock regions either, because that flag was specifically designed
      for firmware reserved memory regions. memblock_is_map_memory() can
      therefore be skipped for it as well, as it is always going to return
      true, which is an optimization for normal hotplug memory. Like
      ZONE_DEVICE based memory, normal hotplugged memory will not have
      SECTION_IS_EARLY set for its sections.
      
      Skipping memblock_is_map_memory() for all non-early memory sections
      therefore fixes the pfn_valid() problem for ZONE_DEVICE based memory
      and also improves its performance for normal hotplug memory.
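
      In code form, the idea is to key the memblock check off
      SECTION_IS_EARLY. A sketch of the resulting CONFIG_SPARSEMEM branch
      of pfn_valid() (using the generic early_section() and
      pfn_section_valid() helpers; the exact placement is assumed):

        struct mem_section *ms = __pfn_to_section(pfn);

        if (!valid_section(ms))
                return 0;

        /*
         * ZONE_DEVICE memory does not have memblock entries, and normal
         * hotplugged memory never has MEMBLOCK_NOMAP set, so only early
         * (boot-time) sections need the memblock_is_map_memory() check.
         */
        if (!early_section(ms))
                return pfn_section_valid(ms, pfn);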
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Acked-by: David Hildenbrand <david@redhat.com>
      Fixes: 73b20c84 ("arm64: mm: implement pte_devmap support")
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/1614921898-4099-2-git-send-email-anshuman.khandual@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64/mm: Drop THP conditionality from FORCE_MAX_ZONEORDER · 79cc2ed5
      Anshuman Khandual authored
      
      
      Currently, without THP being enabled, MAX_ORDER via FORCE_MAX_ZONEORDER
      gets reduced to 11, which falls below HUGETLB_PAGE_ORDER for certain
      16K and 64K page size configurations. This is problematic and throws
      up the following warning during boot, as pageblock_order (derived from
      HUGETLB_PAGE_ORDER) exceeds MAX_ORDER.
      
      WARNING: CPU: 7 PID: 127 at mm/vmstat.c:1092 __fragmentation_index+0x58/0x70
      Modules linked in:
      CPU: 7 PID: 127 Comm: kswapd0 Not tainted 5.12.0-rc1-00005-g0221e3101a1 #237
      Hardware name: linux,dummy-virt (DT)
      pstate: 20400005 (nzCv daif +PAN -UAO -TCO BTYPE=--)
      pc : __fragmentation_index+0x58/0x70
      lr : fragmentation_index+0x88/0xa8
      sp : ffff800016ccfc00
      x29: ffff800016ccfc00 x28: 0000000000000000
      x27: ffff800011fd4000 x26: 0000000000000002
      x25: ffff800016ccfda0 x24: 0000000000000002
      x23: 0000000000000640 x22: ffff0005ffcb5b18
      x21: 0000000000000002 x20: 000000000000000d
      x19: ffff0005ffcb3980 x18: 0000000000000004
      x17: 0000000000000001 x16: 0000000000000019
      x15: ffff800011ca7fb8 x14: 00000000000002b3
      x13: 0000000000000000 x12: 00000000000005e0
      x11: 0000000000000003 x10: 0000000000000080
      x9 : ffff800011c93948 x8 : 0000000000000000
      x7 : 0000000000000000 x6 : 0000000000007000
      x5 : 0000000000007944 x4 : 0000000000000032
      x3 : 000000000000001c x2 : 000000000000000b
      x1 : ffff800016ccfc10 x0 : 000000000000000d
      Call trace:
      __fragmentation_index+0x58/0x70
      compaction_suitable+0x58/0x78
      wakeup_kcompactd+0x8c/0xd8
      balance_pgdat+0x570/0x5d0
      kswapd+0x1e0/0x388
      kthread+0x154/0x158
      ret_from_fork+0x10/0x30
      
      Solve the problem by keeping FORCE_MAX_ZONEORDER unchanged with or
      without THP on 16K and 64K page size configurations, making sure that
      HUGETLB_PAGE_ORDER (and pageblock_order) never exceeds MAX_ORDER.
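
      The change itself is confined to the Kconfig defaults. Roughly (a
      sketch of the resulting arch/arm64/Kconfig entry; the dropped piece
      is the THP conditional on the 16K/64K defaults):

        config FORCE_MAX_ZONEORDER
                int
                default "14" if ARM64_64K_PAGES
                default "12" if ARM64_16K_PAGES
                default "11"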
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/1614597914-28565-1-git-send-email-anshuman.khandual@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64/mm: Drop redundant ARCH_WANT_HUGE_PMD_SHARE · 07fb6dc3
      Anshuman Khandual authored
      
      
      ARCH_WANT_HUGE_PMD_SHARE is already being selected for the applicable
      configurations. Hence just drop the other, redundant entry.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Link: https://lore.kernel.org/r/1614575192-21307-1-git-send-email-anshuman.khandual@arm.com
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: Drop support for CMDLINE_EXTEND · cae118b6
      Will Deacon authored
      The documented behaviour for CMDLINE_EXTEND is that the arguments from
      the bootloader are appended to the built-in kernel command line. This
      also matches the option parsing behaviour for the EFI stub and early ID
      register overrides.
      
      Bizarrely, the fdt behaviour is the other way around: it appends the
      built-in command line to the bootloader arguments, resulting in a
      command line that doesn't necessarily line up with the parsing order
      and definitely doesn't line up with the documented behaviour.
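
      For illustration, with hypothetical values: if CONFIG_CMDLINE is
      "console=ttyS0" and the bootloader passes "root=/dev/vda", then

        documented CMDLINE_EXTEND order:  console=ttyS0 root=/dev/vda
        actual fdt order:                 root=/dev/vda console=ttyS0

      so for any parameter where the last occurrence wins, the two
      orderings give different results.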
      
      As it turns out, there is a proposal [1] to replace CMDLINE_EXTEND with
      CMDLINE_PREPEND and CMDLINE_APPEND options which should hopefully make
      the intended behaviour much clearer. While we wait for those to land,
      drop CMDLINE_EXTEND for now as there appears to be little enthusiasm for
      changing the current FDT behaviour.
      
      [1] https://lore.kernel.org/lkml/20190319232448.45964-2-danielwa@cisco.com/
      
      Cc: Max Uvarov <muvarov@gmail.com>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Ard Biesheuvel <ardb@kernel.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Doug Anderson <dianders@chromium.org>
      Cc: Tyler Hicks <tyhicks@linux.microsoft.com>
      Cc: Frank Rowand <frowand.list@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Link: https://lore.kernel.org/r/CAL_JsqJX=TCCs7=gg486r9TN4NYscMTCLNfqJF9crskKPq-bTg@mail.gmail.com
      Link: https://lore.kernel.org/r/20210303134927.18975-3-will@kernel.org
      
      
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: cpufeatures: Fix handling of CONFIG_CMDLINE for idreg overrides · df304c2d
      Will Deacon authored
      
      
      The built-in kernel commandline (CONFIG_CMDLINE) can be configured in
      three different ways:
      
        1. CMDLINE_FORCE: Use CONFIG_CMDLINE instead of any bootloader args
        2. CMDLINE_EXTEND: Append the bootloader args to CONFIG_CMDLINE
        3. CMDLINE_FROM_BOOTLOADER: Only use CONFIG_CMDLINE if there aren't
           any bootloader args.
      
      The early cmdline parsing to detect idreg overrides gets (2) and (3)
      slightly wrong: in the case of (2) the bootloader args are parsed first
      and in the case of (3) the CMDLINE is always parsed.
      
      Fix these issues by moving the bootargs parsing out into a helper
      function and following the same logic as that used by the EFI stub.
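
      A sketch of the resulting selection logic (mirroring the EFI stub;
      get_bootargs_cmdline() stands for the new helper that fetches
      /chosen/bootargs from the FDT, and the exact names are assumptions):

        static __init void parse_cmdline(void)
        {
                const u8 *prop = get_bootargs_cmdline();

                /* Built-in CMDLINE: forced, extended, or no bootargs. */
                if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
                    IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
                    !prop)
                        __parse_cmdline(CONFIG_CMDLINE, true);

                /* Bootloader args last, unless CMDLINE_FORCE drops them. */
                if (!IS_ENABLED(CONFIG_CMDLINE_FORCE) && prop)
                        __parse_cmdline(prop, true);
        }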
      
      Reviewed-by: Marc Zyngier <maz@kernel.org>
      Fixes: 33200303 ("arm64: cpufeature: Add an early command-line cpufeature override facility")
      Link: https://lore.kernel.org/r/20210303134927.18975-2-will@kernel.org
      
      
      Signed-off-by: Will Deacon <will@kernel.org>