  1. May 17, 2018
    • KVM: PPC: Book3S HV: Add 'online' register to ONE_REG interface · a1f15826
      Paul Mackerras authored
      
      
      This adds a new KVM_REG_PPC_ONLINE register which userspace can set
      to 0 or 1 via the GET/SET_ONE_REG interface to indicate whether it
      considers the VCPU to be offline (0), that is, not currently running,
      or online (1).  This will be used in a later patch to configure the
      register which controls PURR and SPURR accumulation.
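
      For illustration, userspace would flip this through the usual
      GET/SET_ONE_REG ioctls; a minimal sketch (the set_online() helper
      and vcpu_fd are ours, not part of the API):

          #include <stdint.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          /* Mark the vCPU online (1) or offline (0); vcpu_fd is an
           * open KVM vCPU file descriptor. */
          static int set_online(int vcpu_fd, uint32_t online)
          {
                  struct kvm_one_reg reg = {
                          .id   = KVM_REG_PPC_ONLINE,
                          .addr = (uintptr_t)&online,
                  };

                  return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
          }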
      
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book 3S HV: Do ptesync in radix guest exit path · df158189
      Paul Mackerras authored
      
      
      A radix guest can execute tlbie instructions to invalidate TLB entries.
      After a tlbie or a group of tlbies, it must then do the architected
      sequence eieio; tlbsync; ptesync to ensure that the TLB invalidation
      has been processed by all CPUs in the system before it can rely on
      no CPU using any translation that it just invalidated.
      
      In fact it is the ptesync which does the actual synchronization in
      this sequence, and hardware has a requirement that the ptesync must
      be executed on the same CPU thread as the tlbies which it is expected
      to order.  Thus, if a vCPU gets moved from one physical CPU to
      another after it has done some tlbies but before it can get to do the
      ptesync, the ptesync will not have the desired effect when it is
      executed on the second physical CPU.
      
      To fix this, we do a ptesync in the exit path for radix guests.  If
      there are any pending tlbies, this will wait for them to complete.
      If there aren't, then ptesync will just do the same as sync.
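
      Expressed as inline asm, the sequence described above is (a sketch;
      the actual change is in the assembly exit path):

          /* full architected completion sequence after tlbie(s) */
          asm volatile("eieio; tlbsync; ptesync" : : : "memory");

          /* the exit path only needs the trailing ptesync, to catch any
           * tlbies whose completion sequence was cut short by migration */
          asm volatile("ptesync" : : : "memory");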
      
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: XIVE: Resend re-routed interrupts on CPU priority change · 9dc81d6b
      Benjamin Herrenschmidt authored
      
      
      When a vcpu priority (CPPR) is set to a lower value (masking more
      interrupts), we stop processing interrupts already in the queue
      for the priorities that have now been masked.
      
      If those interrupts were previously re-routed to a different
      CPU, they might remain stuck until the old CPU that has
      them in its queue processes them. In the case of guest CPU
      unplug, that may never happen.
      
      To address that without creating additional overhead for
      the normal interrupt processing path, this changes H_CPPR
      handling so that when such a priority change occurs, we
      scan the interrupt queue for that vCPU, and for any
      interrupt in there that has been re-routed, we replace it
      with a dummy and force a re-trigger.
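
      As a self-contained toy model of that scan (all names and types here
      are illustrative, not the real XIVE structures):

          #include <stdint.h>

          #define XICS_DUMMY 2       /* stand-in IRQ number for dead entries */

          struct queue_entry {
                  uint32_t irq;      /* interrupt source number */
                  uint8_t  server;   /* CPU the source is now routed to */
          };

          static void retrigger(uint32_t irq) { /* make the source fire again */ }

          static void scan_for_rerouted(struct queue_entry *q, int n, uint8_t me)
          {
                  for (int i = 0; i < n; i++) {
                          if (q[i].irq != XICS_DUMMY && q[i].server != me) {
                                  uint32_t irq = q[i].irq;

                                  q[i].irq = XICS_DUMMY; /* neutralize stale entry */
                                  retrigger(irq);        /* redeliver on the new CPU */
                          }
                  }
          }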
      
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Make radix clear pte when unmapping · 7e3d9a1d
      Nicholas Piggin authored
      
      
      The current partition table unmap code clears the _PAGE_PRESENT bit
      out of the pte, which leaves pud_huge/pmd_huge true and does not
      clear pud_present/pmd_present.  This can confuse subsequent page
      faults and possibly lead to the guest looping doing continual
      hypervisor page faults.
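
      The fix is plausibly a one-line change to the clear mask (sketch;
      kvmppc_radix_update_pte() is the existing pte updater):

          /* before: leaves an entry that still looks present/huge */
          old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_PRESENT, 0, gpa, shift);

          /* after: clear every bit so pmd_present()/pud_huge() agree */
          old = kvmppc_radix_update_pte(kvm, ptep, ~0UL, 0, gpa, shift);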
      
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Make radix use correct tlbie sequence in kvmppc_radix_tlbie_page · e2560b10
      Nicholas Piggin authored
      
      
      The standard eieio ; tlbsync ; ptesync must follow tlbie to ensure it
      is ordered with respect to subsequent operations.
      
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Snapshot timebase offset on guest entry · 57b8daa7
      Paul Mackerras authored
      
      
      Currently, the HV KVM guest entry/exit code adds the timebase offset
      from the vcore struct to the timebase on guest entry, and subtracts
      it on guest exit.  This is fine, except that it is possible for
      userspace to change the offset using the SET_ONE_REG interface while
      the vcore is running, as there is only one timebase offset per vcore
      but potentially multiple VCPUs in the vcore.  If that were to happen,
      KVM would subtract a different offset on guest exit from that which
      it had added on guest entry, leading to the timebase being out of sync
      between cores in the host, which then leads to bad things happening
      such as hangs and spurious watchdog timeouts.
      
      To fix this, we add a new field 'tb_offset_applied' to the vcore struct
      which stores the offset that is currently applied to the timebase.
      This value is set from the vcore tb_offset field on guest entry, and
      is what is subtracted from the timebase on guest exit.  Since it is
      zero when the timebase offset is not applied, we can simplify the
      logic in kvmhv_start_timing and kvmhv_accumulate_time.
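
      A C-style sketch of the resulting entry/exit logic (the real code is
      in book3s_hv_rmhandlers.S; SPRN_TBU40 is the register used to apply
      the offset):

          /* guest entry: snapshot exactly what we apply */
          vc->tb_offset_applied = vc->tb_offset;
          if (vc->tb_offset)
                  mtspr(SPRN_TBU40, mftb() + vc->tb_offset);

          /* guest exit: subtract the snapshot, not the live vcore field */
          if (vc->tb_offset_applied) {
                  mtspr(SPRN_TBU40, mftb() - vc->tb_offset_applied);
                  vc->tb_offset_applied = 0;
          }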
      
      In addition, we had secondary threads reading the timebase while
      running concurrently with code on the primary thread which would
      eventually add or subtract the timebase offset from the timebase.
      This occurred while saving or restoring the DEC register value on
      the secondary threads.  Although no specific incorrect behaviour has
      been observed, this is a race which should be fixed.  To fix it, we
      move the DEC saving code to just before we call kvmhv_commence_exit,
      and the DEC restoring code to after the point where we have waited
      for the primary thread to switch the MMU context and add the timebase
      offset.  That way we are sure that the timebase contains the guest
      timebase value in both cases.
      
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
  2. May 05, 2018
    • KVM: x86: remove APIC Timer periodic/oneshot spikes · ecf08dad
      Anthoine Bourgeois authored
      
      
      Since the commit "8003c9ae: add APIC Timer periodic/oneshot mode VMX
      preemption timer support", a Windows 10 guest has some erratic timer
      spikes.
      
      Here the results on a 150000 times 1ms timer without any load:
      	  Before 8003c9ae | After 8003c9ae
      Max           1834us          |  86000us
      Mean          1100us          |   1021us
      Deviation       59us          |    149us
      Here the results on a 150000 times 1ms timer with a cpu-z stress test:
      	  Before 8003c9ae | After 8003c9ae
      Max          32000us          | 140000us
      Mean          1006us          |   1997us
      Deviation      140us          |  11095us
      
      The root cause of the problem is that starting an hrtimer with an
      expiry time already in the past can take more than 20 milliseconds
      to trigger the timer function.  This can be solved by expiring such
      past timers immediately, rather than submitting them to
      hrtimer_start().  If the timer is periodic, update the target
      expiration and call hrtimer_start() with it.
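
      Roughly, the periodic software-timer start path becomes (a sketch
      following the description above):

          static void start_sw_period(struct kvm_lapic *apic)
          {
                  if (!apic->lapic_timer.period)
                          return;

                  /* expiry already in the past: run the handler now rather
                   * than handing an expired timer to hrtimer_start() */
                  if (ktime_after(ktime_get(),
                                  apic->lapic_timer.target_expiration)) {
                          apic_timer_expired(apic);

                          if (apic_lvtt_oneshot(apic))
                                  return;

                          advance_periodic_target_expiration(apic);
                  }

                  hrtimer_start(&apic->lapic_timer.timer,
                                apic->lapic_timer.target_expiration,
                                HRTIMER_MODE_ABS_PINNED);
          }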
      
      v2: Check if the tsc deadline is already expired. Thank you Mika.
      v3: Execute the past timers immediately rather than submitting them to
      hrtimer_start().
      v4: Rearm the periodic timer with advance_periodic_target_expiration() a
      simpler version of set_target_expiration(). Thank you Paolo.
      
      Cc: Mika Penttilä <mika.penttila@nextfour.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Anthoine Bourgeois <anthoine.bourgeois@blade-group.com>
      Fixes: 8003c9ae ("KVM: LAPIC: add APIC Timer periodic/oneshot mode VMX preemption timer support")
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
  3. May 04, 2018
    • arm64: vgic-v2: Fix proxying of cpuif access · b220244d
      James Morse authored
      
      
      Proxying the cpuif accesses at EL2 makes use of vcpu_data_guest_to_host
      and co, which check the endianness, which call into vcpu_read_sys_reg...
      which isn't mapped at EL2 (it was inlined before, and got moved OoL
      with the VHE optimizations).
      
      The result is of course a nice panic. Let's add some specialized
      cruft to keep the broken platforms that require this hack alive.
      
      But, this code used vcpu_data_guest_to_host(), which expected us to
      write the value to host memory, instead we have trapped the guest's
      read or write to an mmio-device, and are about to replay it using the
      host's readl()/writel() which also perform swabbing based on the host
      endianness. This goes wrong when both host and guest are big-endian,
      as readl()/writel() will undo the guest's swabbing, causing the
      big-endian value to be written to device-memory.
      
      What needs doing?
      A big-endian guest will have pre-swabbed data before storing; undo this.
      If it's necessary for the host, writel() will re-swab it.
      
      For a read a big-endian guest expects to swab the data after the load.
      The host's readl() will correct for host endianness, giving us the
      device-memory's value in the register. For a big-endian guest, swab it
      as if we'd only done the load.
      
      For a little-endian guest, nothing needs doing as readl()/writel() leave
      the correct device-memory value in registers.
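
      Put together, the replay looks roughly like this (sketch; __is_be()
      is the EL2 helper that checks the guest's data endianness):

          if (kvm_vcpu_dabt_iswrite(vcpu)) {
                  u32 data = vcpu_get_reg(vcpu, rd);

                  if (__is_be(vcpu))
                          data = swab32(data);    /* undo the guest's pre-swab */
                  writel_relaxed(data, addr);
          } else {
                  u32 data = readl_relaxed(addr);

                  if (__is_be(vcpu))
                          data = swab32(data);    /* as if only the load ran */
                  vcpu_set_reg(vcpu, rd, data);
          }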
      
      Tested on Juno with that rarest of things: a big-endian 64K host.
      Based on a patch from Marc Zyngier.
      
      Reported-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Fixes: bf8feb39 ("arm64: KVM: vgic-v2: Add GICV access from HYP")
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
    • KVM: arm64: Fix order of vcpu_write_sys_reg() arguments · 1975fa56
      James Morse authored
      
      
      A typo in kvm_vcpu_set_be()'s call:
      | vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr)
      causes us to use the 32bit register value as an index into the sys_reg[]
      array, and sail off the end of the linear map when we try to bring up
      big-endian secondaries.
      
      | Unable to handle kernel paging request at virtual address ffff80098b982c00
      | Mem abort info:
      |  ESR = 0x96000045
      |  Exception class = DABT (current EL), IL = 32 bits
      |   SET = 0, FnV = 0
      |   EA = 0, S1PTW = 0
      | Data abort info:
      |   ISV = 0, ISS = 0x00000045
      |   CM = 0, WnR = 1
      | swapper pgtable: 4k pages, 48-bit VAs, pgdp = 000000002ea0571a
      | [ffff80098b982c00] pgd=00000009ffff8803, pud=0000000000000000
      | Internal error: Oops: 96000045 [#1] PREEMPT SMP
      | Modules linked in:
      | CPU: 2 PID: 1561 Comm: kvm-vcpu-0 Not tainted 4.17.0-rc3-00001-ga912e2261ca6-dirty #1323
      | Hardware name: ARM Juno development board (r1) (DT)
      | pstate: 60000005 (nZCv daif -PAN -UAO)
      | pc : vcpu_write_sys_reg+0x50/0x134
      | lr : vcpu_write_sys_reg+0x50/0x134
      
      | Process kvm-vcpu-0 (pid: 1561, stack limit = 0x000000006df4728b)
      | Call trace:
      |  vcpu_write_sys_reg+0x50/0x134
      |  kvm_psci_vcpu_on+0x14c/0x150
      |  kvm_psci_0_2_call+0x244/0x2a4
      |  kvm_hvc_call_handler+0x1cc/0x258
      |  handle_hvc+0x20/0x3c
      |  handle_exit+0x130/0x1ec
      |  kvm_arch_vcpu_ioctl_run+0x340/0x614
      |  kvm_vcpu_ioctl+0x4d0/0x840
      |  do_vfs_ioctl+0xc8/0x8d0
      |  ksys_ioctl+0x78/0xa8
      |  sys_ioctl+0xc/0x18
      |  el0_svc_naked+0x30/0x34
      | Code: 73620291 604d00b0 00201891 1ab10194 (957a33f8)
      |---[ end trace 4b4a4f9628596602 ]---
      
      Fix the order of the arguments.
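
      The corrected call passes the value first and the register index
      second:

          vcpu_write_sys_reg(vcpu, sctlr, SCTLR_EL1);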
      
      Fixes: 8d404c4c ("KVM: arm64: Rewrite system register accessors to read/write functions")
      CC: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  4. Apr 27, 2018
    • x86/headers/UAPI: Move DISABLE_EXITS KVM capability bits to the UAPI · 5e62493f
      KarimAllah Ahmed authored
      
      
      Move DISABLE_EXITS KVM capability bits to the UAPI just like the rest of
      capabilities.
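
      For reference, the bits being moved look roughly like this in the
      uapi header (sketch):

          #define KVM_X86_DISABLE_EXITS_MWAIT  (1 << 0)
          #define KVM_X86_DISABLE_EXITS_HLT    (1 << 1)
          #define KVM_X86_DISABLE_VALID_EXITS  (KVM_X86_DISABLE_EXITS_MWAIT | \
                                                KVM_X86_DISABLE_EXITS_HLT)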
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: x86@kernel.org
      Cc: kvm@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • kvm: apic: Flush TLB after APIC mode/address change if VPIDs are in use · a468f2db
      Junaid Shahid authored
      
      
      Currently, KVM flushes the TLB after a change to the APIC access page
      address or the APIC mode when EPT mode is enabled. However, even in
      shadow paging mode, a TLB flush is needed if VPIDs are being used, as
      specified in the Intel SDM Section 29.4.5.
      
      So replace vmx_flush_tlb_ept_only() with vmx_flush_tlb(), which will
      flush if either EPT or VPIDs are in use.
      
      Signed-off-by: Junaid Shahid <junaids@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • x86/entry/64/compat: Preserve r8-r11 in int $0x80 · 8bb2610b
      Andy Lutomirski authored
      
      
      32-bit user code that uses int $0x80 doesn't care about r8-r11.  There is,
      however, some 64-bit user code that intentionally uses int $0x80 to invoke
      32-bit system calls.  From what I've seen, basically all such code assumes
      that r8-r15 are all preserved, but the kernel clobbers r8-r11.  Since I
      doubt that there's any code that depends on int $0x80 zeroing r8-r11,
      change the kernel to preserve them.
      
      I suspect that very little user code is broken by the old clobber, since
      r8-r11 are only rarely allocated by gcc, and they're clobbered by function
      calls, so the only way we'd see a problem is if the same function that
      invokes int $0x80 also spills something important to one of these
      registers.
      
      The current behavior seems to date back to the historical commit
      "[PATCH] x86-64 merge for 2.6.4".  Before that, all regs were
      preserved.  I can't find any explanation of why this change was made.
      
      Update the test_syscall_vdso_32 testcase as well to verify the new
      behavior, and strengthen the test to make sure that the kernel doesn't
      accidentally permute r8..r15.
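
      For illustration, the kind of 64-bit user code affected (a minimal
      sketch; 20 is __NR_getpid in the 32-bit syscall table):

          #include <stdio.h>

          int main(void)
          {
                  long ret;

                  /* 64-bit code deliberately entering the 32-bit syscall
                   * path; with this change r8-r11 survive the trap */
                  asm volatile("int $0x80"
                               : "=a" (ret)
                               : "a" (20L)
                               : "memory");
                  printf("getpid via int $0x80: %ld\n", ret);
                  return 0;
          }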
      
      Suggested-by: Denys Vlasenko <dvlasenk@redhat.com>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dominik Brodowski <linux@dominikbrodowski.net>
      Link: https://lkml.kernel.org/r/d4c4d9985fbe64f8c9e19291886453914b48caee.1523975710.git.luto@kernel.org
    • x86/ipc: Fix x32 version of shmid64_ds and msqid64_ds · 1a512c08
      Arnd Bergmann authored
      A bugfix broke the x32 shmid64_ds and msqid64_ds data structure layout
      (as seen from user space) a few years ago: originally, __BITS_PER_LONG
      was defined as 64 on x32, so we did not have padding after the 64-bit
      __kernel_time_t fields.  After __BITS_PER_LONG got changed to 32,
      applications would observe extra padding.
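
      A trimmed excerpt of the generic header shows where the padding
      comes from:

          struct shmid64_ds {
                  struct ipc64_perm  shm_perm;   /* operation perms */
                  size_t             shm_segsz;  /* size of segment (bytes) */
                  __kernel_time_t    shm_atime;  /* last attach time */
          #if __BITS_PER_LONG != 64
                  unsigned long      __unused1;  /* padding x32 must not see */
          #endif
                  __kernel_time_t    shm_dtime;  /* last detach time */
          #if __BITS_PER_LONG != 64
                  unsigned long      __unused2;
          #endif
                  /* ... */
          };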
      
      In other parts of the uapi headers we have a mix of definitions
      expecting __BITS_PER_LONG to be either 32 or 64 on x32, so we can't
      easily revert the patch that broke these two structures.
      
      Instead, this patch decouples x32 from the other architectures and moves
      it back into arch specific headers, partially reverting the even older
      commit 73a2d096 ("x86: remove all now-duplicate header files").
      
      It's not clear whether this ever made any difference, since at least
      glibc carries its own (correct) copy of both of these header files,
      so possibly no application has ever observed the definitions here.
      
      Based on a suggestion from H.J. Lu, I tried out the tool from
      https://github.com/hjl-tools/linux-header to find other such bugs,
      which pointed out the same bug in statfs(), which also has a
      separate (correct) copy in glibc.
      
      Fixes: f4b4aae1 ("x86/headers/uapi: Fix __BITS_PER_LONG value for x32 builds")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H . J . Lu" <hjl.tools@gmail.com>
      Cc: Jeffrey Walton <noloader@gmail.com>
      Cc: stable@vger.kernel.org
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Link: https://lkml.kernel.org/r/20180424212013.3967461-1-arnd@arndb.de
    • x86/setup: Do not reserve a crash kernel region if booted on Xen PV · 3db3eb28
      Petr Tesarik authored
      
      
      Xen PV domains cannot shut down and start a crash kernel. Instead,
      the crashing kernel makes a SCHEDOP_shutdown hypercall with the
      reason code SHUTDOWN_crash, cf. xen_crash_shutdown() machine op in
      arch/x86/xen/enlighten_pv.c.
      
      A crash kernel reservation is merely a waste of RAM in this case. It
      may also confuse users of kexec_load(2) and/or kexec_file_load(2).
      When flags include KEXEC_ON_CRASH or KEXEC_FILE_ON_CRASH,
      respectively, these syscalls return success, which is technically
      correct, but the crash kexec image will never be actually used.
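
      The guard is presumably a short early return (sketch):

          /* early in reserve_crashkernel(), arch/x86/kernel/setup.c */
          if (xen_pv_domain()) {
                  pr_info("Ignoring crashkernel for a Xen PV domain\n");
                  return;
          }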
      
      Signed-off-by: Petr Tesarik <ptesarik@suse.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Dou Liyang <douly.fnst@cn.fujitsu.com>
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: xen-devel@lists.xenproject.org
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Jean Delvare <jdelvare@suse.de>
      Link: https://lkml.kernel.org/r/20180425120835.23cef60c@ezekiel.suse.cz
    • arm64: avoid instrumenting atomic_ll_sc.o · 3789c122
      Mark Rutland authored
      
      
      Our out-of-line atomics are built with a special calling convention,
      preventing pointless stack spilling, and allowing us to patch call sites
      with ARMv8.1 atomic instructions.
      
      Instrumentation inserted by the compiler may result in calls to
      functions not following this special calling convention, resulting in
      registers being unexpectedly clobbered, and various problems resulting
      from this.
      
      For example, if a kernel is built with KCOV and ARM64_LSE_ATOMICS, the
      compiler inserts calls to __sanitizer_cov_trace_pc in the prologues of
      the atomic functions. This has been observed to result in spurious
      cmpxchg failures, leading to a hang early on in the boot process.
      
      This patch avoids such issues by preventing instrumentation of our
      out-of-line atomics.
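
      kbuild's per-object knobs are the natural mechanism here; a sketch
      of the arch/arm64/lib/Makefile additions:

          # Do not instrument the out-of-line atomics
          GCOV_PROFILE_atomic_ll_sc.o     := n
          KASAN_SANITIZE_atomic_ll_sc.o   := n
          KCOV_INSTRUMENT_atomic_ll_sc.o  := n
          UBSAN_SANITIZE_atomic_ll_sc.o   := n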
      
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • powerpc/kvm/booke: Fix altivec related build break · b2d7ecbe
      Laurentiu Tudor authored
      
      
      Add the missing "altivec unavailable" interrupt injection helper,
      thus fixing the linker error below:
      
        arch/powerpc/kvm/emulate_loadstore.o: In function `kvmppc_check_altivec_disabled':
        arch/powerpc/kvm/emulate_loadstore.c: undefined reference to `.kvmppc_core_queue_vec_unavail'
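
      The helper presumably mirrors the existing FP-unavailable one
      (sketch):

          #ifdef CONFIG_ALTIVEC
          void kvmppc_core_queue_vec_unavail(struct kvm_vcpu *vcpu)
          {
                  kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_ALTIVEC_UNAVAIL);
          }
          #endif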
      
      Fixes: 09f98496 ("KVM: PPC: Book3S: Add MMIO emulation for VMX instructions")
      Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Fix deadlock with multiple calls to smp_send_stop · 6029755e
      Nicholas Piggin authored
      
      
      smp_send_stop can lock up the IPI path for any subsequent calls,
      because the receiving CPUs spin in their handler function. This
      started becoming a problem with the addition of an smp_send_stop
      call in the reboot path, because panics can reboot after doing
      their own smp_send_stop.
      
      The NMI IPI variant was fixed with ac61c115 ("powerpc: Fix
      smp_send_stop NMI IPI handling"), which leaves the smp_call_function
      variant.
      
      This is fixed by having smp_send_stop only ever do the
      smp_call_function once. This is a bit less robust than the NMI IPI
      fix, because any other call to smp_call_function after smp_send_stop
      could deadlock, but that has always been the case, and it has not
      been a problem before.
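
      The once-only behaviour can be carried by a static flag (sketch):

          void smp_send_stop(void)
          {
                  static bool stopped;

                  /* racy, but callers generally fire off only one stop */
                  if (stopped)
                          return;
                  stopped = true;

                  smp_call_function(stop_this_cpu, NULL, 0);
          }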
      
      Fixes: f2748bdf ("powerpc/powernv: Always stop secondaries before reboot/shutdown")
      Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  5. Apr 25, 2018
    • perf/x86/intel: Don't enable freeze-on-smi for PerfMon V1 · 4e949e9b
      Kan Liang authored
      
      
      The SMM freeze feature was introduced in PerfMon V2, but the current
      code unconditionally enables it for all platforms.  This can generate
      a #GP exception if the FREEZE_WHILE_SMM bit is set on a machine with
      PerfMon V1.
      
      To disable the feature for PerfMon V1, perf needs to
      - Remove the freeze_on_smi sysfs entry by moving intel_pmu_attrs to
        intel_pmu, which is only applied to PerfMon V2 and later.
      - Check the PerfMon version before flipping the SMM bit when starting a
        CPU (see the sketch below)
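
      The version check itself is a one-liner (sketch):

          /* in intel_pmu_cpu_starting() */
          if (x86_pmu.version > 1)
                  flip_smm_bit(&x86_pmu.attr_freeze_on_smi);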
      
      Fixes: 6089327f ("perf/x86: Add sysfs entry to freeze counters on SMI")
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ak@linux.intel.com
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Link: https://lkml.kernel.org/r/1524682637-63219-1-git-send-email-kan.liang@linux.intel.com
    • bpf, x64: fix JIT emission for dead code · 1612a981
      Gianluca Borello authored
      
      
      Commit 2a5418a1 ("bpf: improve dead code sanitizing") replaced dead
      code with a series of ja-1 instructions, for safety. That made JIT
      compilation much more complex for some BPF programs. One instance of such
      programs is, for example:
      
      bool flag = false;
      ...
      /* A bunch of other code */
      ...
      if (flag)
              do_something();
      
      In some cases llvm is not able to remove at compile time the code for
      do_something(), so the generated BPF program ends up with a large amount
      of dead instructions. In one specific real life example, there are two
      series of ~500 and ~1000 dead instructions in the program. When the
      verifier replaces them with a series of ja-1 instructions, it causes an
      interesting behavior at JIT time.
      
      During the first pass, since all the instructions are estimated at 64
      bytes, the ja-1 instructions end up being translated as 5 bytes JMP
      instructions (0xE9), since the jump offsets become increasingly large (>
      127) as each instruction gets discovered to be 5 bytes instead of the
      estimated 64.
      
      Starting from the second pass, the first N instructions of the ja-1
      sequence get translated into 2 bytes JMPs (0xEB) because the jump offsets
      become <= 127 this time. In particular, N is defined as roughly 127 / (5
      - 2) ~= 42. So, each further pass will make the subsequent N JMP
      instructions shrink from 5 to 2 bytes, making the image shrink every time.
      This means that in order to have the entire program converge, there need
      to be, in the real example above, at least ~1000 / 42 ~= 24 passes just
      for translating the dead code. If we add this number to the passes needed
      to translate the other non dead code, it brings such program to 40+
      passes, and JIT doesn't complete. Ultimately the userspace loader fails
      because such BPF program was supposed to be part of a prog array owner
      being JITed.
      
      While it is certainly possible to try to refactor such programs to help
      the compiler remove dead code, the behavior is not really intuitive and it
      puts further burden on the BPF developer who is not expecting such
      behavior. To make things worse, such programs are working just fine in all
      the kernel releases prior to the ja-1 fix.
      
      A possible approach to mitigate this behavior consists in noticing
      that for ja-1 instructions we don't really need to rely on the
      estimated size of the previous and current instructions; we know that
      a -1 BPF jump offset can be safely translated into a 0xEB instruction
      with a jump offset of -2.
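
      In the JIT's emit loop that could look like (sketch; is_imm8(),
      EMIT2() and EMIT1_off32() are existing bpf_jit_comp.c helpers):

          case BPF_JMP | BPF_JA:
                  if (insn->off == -1)
                          /* dead-code ja-1 is a jump-to-self: emit it as a
                           * 2-byte 0xEB jump, which is always offset -2 */
                          jmp_offset = -2;
                  else
                          jmp_offset = addrs[i + insn->off] - addrs[i];

                  if (is_imm8(jmp_offset))
                          EMIT2(0xEB, jmp_offset);
                  else
                          EMIT1_off32(0xE9, jmp_offset);
                  break;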
      
      This fix brings the BPF program in the previous example back to
      completing in ~9 passes.
      
      Fixes: 2a5418a1 ("bpf: improve dead code sanitizing")
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>