  Oct 13, 2022
    • KVM: selftests: Fix number of pages for memory slot in memslot_modification_stress_test · 05c2224d
      Gavin Shan authored
      
      
      vm_userspace_mem_region_add() requires the memory size to be aligned
      to the host page size. However, memslot_modification_stress_test
      provides only one guest page, so the test fails when the host uses
      64KB pages and the guest uses 4KB pages, as the following messages
      indicate.
      
       # ./memslot_modification_stress_test
       Testing guest mode: PA-bits:40,  VA-bits:48,  4K pages
       guest physical test memory: [0xffbfff0000, 0xffffff0000)
       Finished creating vCPUs
       Started all vCPUs
       ==== Test Assertion Failure ====
         lib/kvm_util.c:824: vm_adjust_num_guest_pages(vm->mode, npages) == npages
         pid=5712 tid=5712 errno=0 - Success
            1	0x0000000000404eeb: vm_userspace_mem_region_add at kvm_util.c:822
            2	0x0000000000401a5b: add_remove_memslot at memslot_modification_stress_test.c:82
            3	 (inlined by) run_test at memslot_modification_stress_test.c:110
            4	0x0000000000402417: for_each_guest_mode at guest_modes.c:100
            5	0x00000000004016a7: main at memslot_modification_stress_test.c:187
            6	0x0000ffffb8cd4383: ?? ??:0
            7	0x0000000000401827: _start at :?
         Number of guest pages is not compatible with the host. Try npages=16
      
      Fix the issue by backing the memory slot with 16 guest pages for this
      particular combination of a 64KB-page-size host and a 4KB-page-size
      guest on aarch64.
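      The sizing rule behind that number, as a rough sketch (an illustrative
      helper, not the selftest's code; min_slot_pages() and its argument are
      made-up names):

       #include <stdint.h>
       #include <unistd.h>

       /* Number of guest pages needed so the region size is host-page
        * aligned: at least one host page worth of guest pages
        * (64K host / 4K guest -> 65536 / 4096 = 16). */
       static uint64_t min_slot_pages(uint64_t guest_page_size)
       {
               uint64_t host_page_size = getpagesize();

               return host_page_size > guest_page_size ?
                      host_page_size / guest_page_size : 1;
       }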
      
      Fixes: ef4c9f4f ("KVM: selftests: Fix 32-bit truncation of vm_get_max_gfn()")
      Signed-off-by: Gavin Shan <gshan@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221013063020.201856-1-gshan@redhat.com
  Oct 09, 2022
    • KVM: arm64: Enable stack protection and branch profiling for VHE · 837d632a
      Vincent Donnefort authored
      
      
      For historical reasons, the VHE code inherited its build configuration
      from nVHE. Now that the two parts have their own folders and makefiles,
      we can enable stack protection and branch profiling for VHE.
      
      Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
      Reviewed-by: Quentin Perret <qperret@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221004154216.2833636-1-vdonnefort@google.com
    • KVM: arm64: Limit stage2_apply_range() batch size to largest block · 5994bc9e
      Oliver Upton authored
      
      
      Presently stage2_apply_range() works on a batch of memory addressed by a
      stage 2 root table entry for the VM. Depending on the IPA limit of the
      VM and PAGE_SIZE of the host, this could address a massive range of
      memory. Some examples:
      
        4 level, 4K paging -> 512 GB batch size
      
        3 level, 64K paging -> 4TB batch size
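
      For reference, those numbers follow directly from the table geometry;
      a quick self-contained check (illustrative arithmetic only):

       #include <stdio.h>

       int main(void)
       {
               /* 4K pages: 512 entries per table, so a 4-level root entry
                * covers 512^3 * 4KiB; 64K pages: 8192 entries per table,
                * so a 3-level root entry covers 8192^2 * 64KiB. */
               unsigned long long batch_4k  = 512ULL * 512 * 512 * 4096;
               unsigned long long batch_64k = 8192ULL * 8192 * 65536;

               printf("4K, 4-level:  %llu GiB\n", batch_4k >> 30);   /* 512 */
               printf("64K, 3-level: %llu GiB\n", batch_64k >> 30);  /* 4096 */
               return 0;
       }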
      
      Unsurprisingly, working on such a large range of memory can lead to soft
      lockups. When running dirty_log_perf_test:
      
        ./dirty_log_perf_test -m -2 -s anonymous_thp -b 4G -v 48
      
        watchdog: BUG: soft lockup - CPU#0 stuck for 45s! [dirty_log_perf_:16703]
        Modules linked in: vfat fat cdc_ether usbnet mii xhci_pci xhci_hcd sha3_generic gq(O)
        CPU: 0 PID: 16703 Comm: dirty_log_perf_ Tainted: G           O       6.0.0-smp-DEV #1
        pstate: 80400009 (Nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
        pc : dcache_clean_inval_poc+0x24/0x38
        lr : clean_dcache_guest_page+0x28/0x4c
        sp : ffff800021763990
        pmr_save: 000000e0
        x29: ffff800021763990 x28: 0000000000000005 x27: 0000000000000de0
        x26: 0000000000000001 x25: 00400830b13bc77f x24: ffffad4f91ead9c0
        x23: 0000000000000000 x22: ffff8000082ad9c8 x21: 0000fffafa7bc000
        x20: ffffad4f9066ce50 x19: 0000000000000003 x18: ffffad4f92402000
        x17: 000000000000011b x16: 000000000000011b x15: 0000000000000124
        x14: ffff07ff8301d280 x13: 0000000000000000 x12: 00000000ffffffff
        x11: 0000000000010001 x10: fffffc0000000000 x9 : ffffad4f9069e580
        x8 : 000000000000000c x7 : 0000000000000000 x6 : 000000000000003f
        x5 : ffff07ffa2076980 x4 : 0000000000000001 x3 : 000000000000003f
        x2 : 0000000000000040 x1 : ffff0830313bd000 x0 : ffff0830313bcc40
        Call trace:
         dcache_clean_inval_poc+0x24/0x38
         stage2_unmap_walker+0x138/0x1ec
         __kvm_pgtable_walk+0x130/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         __kvm_pgtable_walk+0x170/0x1d4
         kvm_pgtable_stage2_unmap+0xc4/0xf8
         kvm_arch_flush_shadow_memslot+0xa4/0x10c
         kvm_set_memslot+0xb8/0x454
         __kvm_set_memory_region+0x194/0x244
         kvm_vm_ioctl_set_memory_region+0x58/0x7c
         kvm_vm_ioctl+0x49c/0x560
         __arm64_sys_ioctl+0x9c/0xd4
         invoke_syscall+0x4c/0x124
         el0_svc_common+0xc8/0x194
         do_el0_svc+0x38/0xc0
         el0_svc+0x2c/0xa4
         el0t_64_sync_handler+0x84/0xf0
         el0t_64_sync+0x1a0/0x1a4
      
      Use the largest supported block mapping for the configured page size as
      the batch granularity. In so doing the walker is guaranteed to visit a
      leaf only once.
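
      Concretely, the stride becomes the size of one block mapping at the
      largest supported block level rather than the range under a root entry.
      A rough sketch of that granule arithmetic (names and the level constant
      are placeholders, not the pgtable code itself):

       #include <stdint.h>
       #include <stdio.h>

       #define PAGE_SHIFT      16                /* e.g. 64K pages */
       #define PTE_SHIFT       (PAGE_SHIFT - 3)  /* log2(entries per table) */
       #define MIN_BLOCK_LEVEL 2                 /* largest supported block level */

       /* Bytes mapped by one entry at @level (levels 0..3, leaves at 3). */
       static uint64_t granule_size(int level)
       {
               return 1ULL << (PAGE_SHIFT + (3 - level) * PTE_SHIFT);
       }

       int main(void)
       {
               /* 64K pages: a level-2 block maps 512 MiB, so the walk is
                * batched in 512 MiB strides rather than 4 TiB ones. */
               printf("stride: %llu MiB\n",
                      (unsigned long long)(granule_size(MIN_BLOCK_LEVEL) >> 20));
               return 0;
       }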
      
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221007234151.461779-3-oliver.upton@linux.dev
    • KVM: arm64: Work out supported block level at compile time · 3b5c082b
      Oliver Upton authored
      
      
      Work out the minimum page table level at which KVM supports block
      mappings at compile time. While at it, rewrite the comment around
      supported block mappings to directly describe what KVM supports
      instead of phrasing it in terms of what it does not.
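
      The compile-time selection presumably boils down to something like the
      following (a sketch; the macro and config names mirror the arm64 code
      but may not match it exactly):

       /* Minimum (closest to the root) level at which block mappings are
        * supported: level 1 (1GiB blocks) with 4K pages, level 2 otherwise
        * (32MiB blocks with 16K pages, 512MiB blocks with 64K pages). */
       #ifdef CONFIG_ARM64_4K_PAGES
       #define KVM_PGTABLE_MIN_BLOCK_LEVEL     1U
       #else
       #define KVM_PGTABLE_MIN_BLOCK_LEVEL     2U
       #endif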
      
      Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221007234151.461779-2-oliver.upton@linux.dev
  Oct 01, 2022
    • Merge branch kvm-arm64/misc-6.1 into kvmarm-master/next · b302ca52
      Marc Zyngier authored
      
      
      * kvm-arm64/misc-6.1:
        : .
        : Misc KVM/arm64 fixes and improvement for v6.1
        :
        : - Simplify the affinity check when moving a GICv3 collection
        :
        : - Tone down the shouting when kvm-arm.mode=protected is passed
        :   to a guest
        :
        : - Fix various comments
        :
        : - Advertise the new kvmarm@lists.linux.dev and deprecate the
        :   old Columbia list
        : .
        KVM: arm64: Advertise new kvmarm mailing list
        KVM: arm64: Fix comment typo in nvhe/switch.c
        KVM: selftests: Update top-of-file comment in psci_test
        KVM: arm64: Ignore kvm-arm.mode if !is_hyp_mode_available()
        KVM: arm64: vgic: Remove duplicate check in update_affinity_collection()
      
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • Merge branch kvm-arm64/dirty-log-ordered into kvmarm-master/next · 250012dd
      Marc Zyngier authored
      
      
      * kvm-arm64/dirty-log-ordered:
        : .
        : Retrofit some ordering into the existing dirty-ring API by:
        :
        : - relying on acquire/release semantics, which are the default on x86
        :   but need to be explicit on arm64 (a sketch of the pattern follows
        :   the shortlog below)
        :
        : - adding a new capability that indicates which flavor is supported,
        :   either with explicit ordering (arm64) or both implicit and
        :   explicit (x86), as suggested by Paolo at KVM Forum
        :
        : - documenting the requirements for this new capability on weakly ordered
        :   architectures
        :
        : - updating the selftests to do the right thing
        : .
        KVM: selftests: dirty-log: Use KVM_CAP_DIRTY_LOG_RING_ACQ_REL if available
        KVM: selftests: dirty-log: Upgrade flag accesses to acquire/release semantics
        KVM: Document weakly ordered architecture requirements for dirty ring
        KVM: x86: Select CONFIG_HAVE_KVM_DIRTY_RING_ACQ_REL
        KVM: Add KVM_CAP_DIRTY_LOG_RING_ACQ_REL capability and config option
        KVM: Use acquire/release semantics when accessing dirty ring GFN state
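
      As mentioned above, the acquire/release pattern on the ring entries
      looks roughly like this in userspace terms (a self-contained
      illustration; the struct layout and flag name only loosely mirror the
      dirty-ring ABI):

       #include <stdint.h>

       struct dirty_gfn {
               uint32_t flags;
               uint32_t slot;
               uint64_t offset;
       };

       #define DIRTY_GFN_F_DIRTY       (1u << 0)

       /* Producer: fill in the entry, then release-store the flag so a
        * reader that observes DIRTY also observes slot/offset. */
       static void mark_dirty(struct dirty_gfn *gfn, uint32_t slot,
                              uint64_t offset)
       {
               gfn->slot = slot;
               gfn->offset = offset;
               __atomic_store_n(&gfn->flags, DIRTY_GFN_F_DIRTY, __ATOMIC_RELEASE);
       }

       /* Consumer: acquire-load the flag before trusting the payload;
        * relying on an implicitly ordered plain load is only safe on x86. */
       static int is_dirty(struct dirty_gfn *gfn)
       {
               return __atomic_load_n(&gfn->flags, __ATOMIC_ACQUIRE) &
                      DIRTY_GFN_F_DIRTY;
       }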
      
      Signed-off-by: Marc Zyngier <maz@kernel.org>
    • KVM: arm64: Advertise new kvmarm mailing list · ac107abe
      Marc Zyngier authored
      As announced on the kvmarm list, we're moving the mailing list over
      to kvmarm@lists.linux.dev:
      
      <quote>
      As you probably all know, the kvmarm mailing list has been hosted on
      Columbia's machines for as long as the project has existed (over 13
      years). After all this time, the university has decided to retire the
      list infrastructure and asked us to find new hosting.
      
      A new mailing list has been created on lists.linux.dev[1], and I'm
      kindly asking everyone interested in following the KVM/arm64
      developments to start subscribing to it (and start posting your
      patches there). I hope that people will move over to it quickly enough
      that we can soon give Columbia the green light to turn their systems
      off.
      
      Note that the new list will only get archived automatically once we
      fully switch over, but I'll make sure we fill any gap and not lose any
      message. In the meantime, please Cc both lists.
      
      [...]
      
      [1] https://subspace.kernel.org/lists.linux.dev.html
      
      
      </quote>
      
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20221001091245.3900668-1-maz@kernel.org