- May 23, 2019
-
-
Ard Biesheuvel authored
The R_AARCH64_PREL16 and R_AARCH64_PREL32 relocations are documented as permitting a range of [-2^15 .. 2^16), resp. [-2^31 .. 2^32). It is also documented that this means we cannot detect overflow in some cases, which is bad. Since we always interpret the targets of these relocations as signed quantities (e.g., in the ksymtab handling code), let's tighten the overflow checks so that targets that are out of range for our signed interpretation of the relocated quantity get flagged. Signed-off-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by:
Will Deacon <will.deacon@arm.com>
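A minimal user-space sketch of the tightened check described above (not the actual arm64 module loader code); a PREL32-style target that is always read back as signed must lie in [-2^31, 2^31), so anything outside that window is treated as overflow:

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: overflow check for a 32-bit place-relative relocation whose
     * result is always interpreted as a signed quantity. */
    static bool prel32_overflows(uint64_t place, uint64_t target)
    {
            int64_t offset = (int64_t)(target - place);

            /* Only the signed window [-2^31, 2^31) is acceptable. */
            return offset < INT32_MIN || offset > INT32_MAX;
    }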
-
Ard Biesheuvel authored
The following commit 7290d580 ("module: use relative references for __ksymtab entries") updated the ksymtab handling of some KASLR capable architectures so that ksymtab entries are emitted as pairs of 32-bit relative references. This reduces the size of the entries, but more importantly, it gets rid of statically assigned absolute addresses, which require fixing up at boot time if the kernel is self relocating (which takes a 24 byte RELA entry for each member of the ksymtab struct). Since ksymtab entries are always part of the same module as the symbol they export, it was assumed at the time that a 32-bit relative reference is always sufficient to capture the offset between a ksymtab entry and its target symbol. Unfortunately, this is not always true: in the case of per-CPU variables, a per-CPU variable's base address (which usually differs from the actual address of any of its per-CPU copies) is allocated in the vicinity of the .data..percpu section in the core kernel (i.e., in the per-CPU reserved region which follows the section containing the core kernel's statically allocated per-CPU variables). Since we randomize the module space over a 4 GB window covering the core kernel (based on the -/+ 4 GB range of an ADRP/ADD pair), we may end up putting the core kernel out of the -/+ 2 GB range of 32-bit relative references of module ksymtab entries that refer to per-CPU variables. So reduce the module randomization range a bit further. We lose 1 bit of randomization this way, but this is something we can tolerate. Cc: <stable@vger.kernel.org> # v4.19+ Signed-off-by:
Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
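For illustration only, a sketch of the idea with hypothetical names (this is not the arm64 kaslr.c code): halving the randomization window keeps every module within +/- 2 GB of the core kernel, a distance a signed 32-bit ksymtab reference can always span.

    #include <stdint.h>

    /* Sketch: pick a module region base no more than 2 GB below the kernel,
     * so 32-bit relative ksymtab entries always stay in range. */
    static uint64_t pick_module_base(uint64_t kernel_base, uint64_t random_seed)
    {
            const uint64_t range = 1ULL << 31;      /* 2 GB, was 4 GB */

            return kernel_base - (random_seed % range);
    }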
-
Will Deacon authored
Revisions of the Cortex-A76 CPU prior to r4p0 are affected by an erratum that can prevent interrupts from being taken when single-stepping. This patch implements a software workaround to prevent userspace from effectively being able to disable interrupts. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
During an oops, we print the name of the current task and its pid twice. We also helpfully advertise its stack limit as "0x(____ptrval____)". Drop these useless messages. Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- May 18, 2019
-
-
Feng Tang authored
Currently on panic, the kernel will lower the loglevel and print out pending printk messages only with console_flush_on_panic(). Add an option for users to configure "panic_print" to replay all dmesg in the buffer, some of which they may have never seen due to the loglevel setting; this will help panic debugging. [feng.tang@intel.com: keep the original console_flush_on_panic() inside panic()] Link: http://lkml.kernel.org/r/1556199137-14163-1-git-send-email-feng.tang@intel.com [feng.tang@intel.com: use logbuf lock to protect the console log index] Link: http://lkml.kernel.org/r/1556269868-22654-1-git-send-email-feng.tang@intel.com Link: http://lkml.kernel.org/r/1556095872-36838-1-git-send-email-feng.tang@intel.com Signed-off-by:
Feng Tang <feng.tang@intel.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Cc: Aaro Koskinen <aaro.koskinen@nokia.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Kees Cook <keescook@chromium.org> Cc: Borislav Petkov <bp@suse.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
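A sketch of the idea only, not the kernel/panic.c code; the flag name and bit value below are assumptions based on the description:

    /* Assumed: bit in the panic_print bitmask requesting a full replay of
     * the kernel log buffer on panic. */
    #define PANIC_PRINT_ALL_PRINTK_MSG      0x00000020UL

    static int panic_wants_full_replay(unsigned long panic_print)
    {
            /* When set, panic() replays every message in the log buffer,
             * including those previously suppressed by the console
             * loglevel, instead of only flushing the pending ones. */
            return (panic_print & PANIC_PRINT_ALL_PRINTK_MSG) != 0;
    }

On the command line this would look like panic_print=0x20, again assuming that bit assignment.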
-
Masahiro Yamada authored
Currently, the Kbuild core manipulates header search paths in a crazy way [1]. To fix this mess, I want all Makefiles to add explicit $(srctree)/ to the search paths in the srctree. Some Makefiles are already written in that way, but not all. The goal of this work is to make the notation consistent, and finally get rid of the gross hacks. Having whitespace after -I does not matter since commit 48f6e3cf ("kbuild: do not drop -I without parameter"). [1]: https://patchwork.kernel.org/patch/9632347/ Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
As of Linux 5.1, alpha and s390 are the last architectures that have defconfig in arch/*/ instead of arch/*/configs/:

  $ find arch -name defconfig | sort
  arch/alpha/defconfig
  arch/arm64/configs/defconfig
  arch/csky/configs/defconfig
  arch/nds32/configs/defconfig
  arch/riscv/configs/defconfig
  arch/s390/defconfig

The arch/$(ARCH)/defconfig is the hard-coded default in Kconfig, and I want to deprecate it after moving the remaining defconfig files into the standard location, arch/*/configs/. Define KBUILD_DEFCONFIG like other architectures, and move defconfig into the configs/ subdirectory. Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Reviewed-by:
Paul Walmsley <paul@pwsan.com>
-
Masahiro Yamada authored
These generic-y defines do not have the corresponding generic header in include/asm-generic/, so they are definitely invalid. Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
arch/sh/boot/.gitignore has the pattern "vmlinux*"; this is effective not only for the current directory, but also for any sub-directories. So, from the point of view of the .gitignore grammar, the following checked-in files are also considered to be ignored:

  arch/sh/boot/compressed/vmlinux.scr
  arch/sh/boot/romimage/vmlinux.scr

As the manual gitignore(5) says "Files already tracked by Git are not affected", this is not a problem as far as Git is concerned. However, Git is not the only program that parses .gitignore, because .gitignore is useful for distinguishing build artifacts from source files. For example, tar(1) supports the --exclude-vcs-ignore option. As of writing, this option does not work perfectly, but it intends to create a tarball excluding files specified by .gitignore. So, I believe it is better to fix this issue. Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
Nick Desaulniers authored
Towards the goal of removing cc-ldoption, it seems that --hash-style= was added to binutils 2.17.50.0.2 in 2006. The minimal required version of binutils for the kernel according to Documentation/process/changes.rst is 2.20. Link: https://gcc.gnu.org/ml/gcc/2007-01/msg01141.html Cc: clang-built-linux@googlegroups.com Suggested-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by:
Nick Desaulniers <ndesaulniers@google.com> Acked-by:
Yoshinori Sato <ysato@users.sourceforge.jp> Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
Nick Desaulniers authored
Towards the goal of removing cc-ldoption, it seems that --hash-style= was added to binutils 2.17.50.0.2 in 2006. The minimal required version of binutils for the kernel according to Documentation/process/changes.rst is 2.20. Link: https://gcc.gnu.org/ml/gcc/2007-01/msg01141.html Cc: clang-built-linux@googlegroups.com Suggested-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by:
Nick Desaulniers <ndesaulniers@google.com> Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
Having a symbolic link arch/*/boot/dts/include/dt-bindings was deprecated by commit d5d332d3 ("devicetree: Move include prefixes from arch to separate directory"). Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
- May 17, 2019
-
-
Tobin C. Harding authored
kfree() after kobject_put(). Whoever wrote this was on crack. Fixes: 7e803979 ("powerpc/cacheinfo: Fix kobject memleak") Signed-off-by:
Tobin C. Harding <tobin@kernel.org> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
Accesses by userspace to random addresses outside the user or kernel address range will generate an SLB fault. When we handle that fault we classify the effective address into several classes, e.g. user, kernel linear, kernel virtual etc. For addresses that are completely outside of any valid range, we should not insert an SLB entry at all, and instead immediately report an exception. In the past this was handled in two ways. Firstly we would check the top nibble of the address (using REGION_ID(ea)) and that would tell us if the address was user (0), kernel linear (c), kernel virtual (d), or vmemmap (f). If the address didn't match any of these it was invalid. Then for each type of address we would do a secondary check. For the user region we check against H_PGTABLE_RANGE, for kernel linear we would mask the top nibble of the address and then check the address against MAX_PHYSMEM_BITS. As part of commit 0034d395 ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") we replaced REGION_ID() with get_region_id() and changed the masking of the top nibble to only mask the top two bits, which introduced a bug. Addresses less than (4 << 60) are still handled correctly, they are either less than (1 << 60) in which case they are subject to the H_PGTABLE_RANGE check, or they are correctly checked against MAX_PHYSMEM_BITS. However addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly treated as kernel linear addresses in get_region_id(). Then the top two bits are cleared by EA_MASK in slb_allocate_kernel() and the address is checked against MAX_PHYSMEM_BITS, which it passes due to the masking. The end result is we incorrectly insert SLB entries for those addresses. That is not actually catastrophic: having inserted the SLB entry we will then go on to take a page fault for the address and at that point we detect the problem and report it as a bad fault. Still, we should not be inserting those entries, or treating them as kernel linear addresses in the first place. So fix get_region_id() to detect addresses in that range and return an invalid region id, which causes us to not insert an SLB entry and to directly report an exception. Fixes: 0034d395 ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") Signed-off-by:
Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Drop change to EA_MASK for now, rewrite change log] Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au>
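A simplified sketch of the intended classification; the region names, nibble values for the kernel regions, and return values are illustrative rather than the real powerpc definitions. The point is that a top nibble outside the known regions now yields an invalid id instead of being masked into the linear-map range:

    #include <stdint.h>

    enum region_id { USER_REGION, LINEAR_MAP_REGION, VMALLOC_REGION,
                     VMEMMAP_REGION, INVALID_REGION };

    static enum region_id get_region_id_sketch(uint64_t ea)
    {
            switch (ea >> 60) {
            case 0x0: return USER_REGION;        /* still range-checked later */
            case 0xc: return LINEAR_MAP_REGION;
            case 0xd: return VMALLOC_REGION;
            case 0xf: return VMEMMAP_REGION;
            default:  return INVALID_REGION;     /* no SLB entry; report a fault */
            }
    }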
-
Andreas Schwab authored
When a user mode process accesses an address in the vmalloc area, do_page_fault tries to unlock the mmap semaphore even though it isn't locked. Signed-off-by:
Andreas Schwab <schwab@suse.de> [Palmer: Duplicated code instead of a goto] Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Yash Shah authored
The driver currently supports only the SiFive FU540-C000 platform. The initial version of the L2 cache controller driver includes:
  - Initial configuration reporting at boot up.
  - Support for ECC-related functionality.
Signed-off-by:
Yash Shah <yash.shah@sifive.com> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Palmer Dabbelt authored
This is almost entirely a comment. Signed-off-by:
Palmer Dabbelt <palmer@sifive.com> Reviewed-by:
Anup Patel <anup@brainfault.org>
-
Vincent Chen authored
Kernel modules are loaded into the vmalloc region, which is located below PAGE_OFFSET. Hence the condition pc < PAGE_OFFSET in is_valid_bugaddr() will filter out all trap exceptions triggered by kernel modules. To support BUG() in kernel modules, the condition is changed to pc < VMALLOC_START. Signed-off-by:
Vincent Chen <vincentc@andestech.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
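A minimal sketch of the changed predicate; the boundary constant is made up, and only the comparison against VMALLOC_START instead of PAGE_OFFSET is the point:

    #include <stdbool.h>
    #include <stdint.h>

    #define VMALLOC_START_SKETCH    0xffffffd000000000ULL   /* illustrative value */

    /* Modules live in the vmalloc area below PAGE_OFFSET, so a trap pc is
     * plausible kernel text as soon as it is above VMALLOC_START. */
    static bool is_valid_bugaddr_sketch(uint64_t pc)
    {
            return pc >= VMALLOC_START_SKETCH;   /* was: pc >= PAGE_OFFSET */
    }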
-
Vincent Chen authored
The macro __BUG_INSN is currently defined as the "ebreak" opcode. The is_valid_bugaddr() function compares the instruction pointed to by $sepc with the macro __BUG_INSN to check whether the current trap exception is caused by an "ebreak" instruction. However, this check is possibly erroneous because, if the C extension is supported, the expected trap instruction "ebreak" may be translated to "c.ebreak" by the assembler. Therefore, a mechanism is needed to determine the length of the instruction at $sepc and compare it against the correct trap instruction. Signed-off-by:
Vincent Chen <vincentc@andestech.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
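A self-contained sketch of the distinction; the encodings come from the RISC-V specifications (a standard instruction has both low bits set, ebreak is 0x00100073, and the compressed c.ebreak is 0x9002):

    #include <stdbool.h>
    #include <stdint.h>

    static bool insn_is_ebreak(uint32_t insn)
    {
            if ((insn & 0x3) == 0x3)             /* 32-bit instruction */
                    return insn == 0x00100073;   /* ebreak */

            return (uint16_t)insn == 0x9002;     /* compressed c.ebreak */
    }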
-
Vincent Chen authored
The WARN()-related functions trigger a debug exception. This can help developers analyze the cause of a WARN() because, if a debugger is connected, control flow will be transferred to the debugging environment. Signed-off-by:
Vincent Chen <vincentc@andestech.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Gary Guo authored
Currently sbi_remote_sfence_vma{,_asid} do not pass their arguments to SBI at all, which is semantically incorrect. Neither BBL nor OpenSBI is using these arguments at the moment, and they just do a global flush instead. However, we still need to provide the correct arguments. Signed-off-by:
Gary Guo <gary@garyguo.net> Reviewed-by:
Anup Patel <anup@brainfault.org> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
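A rough sketch of a legacy (v0.1-style) SBI call that actually forwards the hart mask, start address, and size; the extension number and macro shape follow the old SBI convention but are illustrative rather than the exact kernel code:

    #define SBI_REMOTE_SFENCE_VMA   6   /* legacy SBI extension id */

    #define SBI_CALL_3(which, arg0, arg1, arg2)                              \
    ({                                                                       \
            register unsigned long a0 asm("a0") = (unsigned long)(arg0);     \
            register unsigned long a1 asm("a1") = (unsigned long)(arg1);     \
            register unsigned long a2 asm("a2") = (unsigned long)(arg2);     \
            register unsigned long a7 asm("a7") = (unsigned long)(which);    \
            asm volatile ("ecall"                                            \
                          : "+r" (a0)                                        \
                          : "r" (a1), "r" (a2), "r" (a7)                     \
                          : "memory");                                       \
            a0;                                                              \
    })

    static void sbi_remote_sfence_vma_sketch(const unsigned long *hart_mask,
                                             unsigned long start,
                                             unsigned long size)
    {
            /* Previously start and size were silently dropped. */
            SBI_CALL_3(SBI_REMOTE_SFENCE_VMA, hart_mask, start, size);
    }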
-
Gary Guo authored
switch_mm is an expensive operation that has two users. flush_icache_deferred is only called within switch_mm and can be moved together with it. The function is expected to become more complicated when ASID support is added, so clean up eagerly. By moving them to a separate file we also remove some excessive dependencies on tlbflush.h and cacheflush.h. Signed-off-by:
Gary Guo <gary@garyguo.net> Reviewed-by:
Anup Patel <anup@brainfault.org> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Gary Guo authored
Currently, flush_icache_all is macro-expanded into an SBI call, yet asm/sbi.h is not included in asm/cacheflush.h. It could be moved to mm/cacheflush.c instead (the SBI call will dominate performance-wise, so there is no concern about it not being inlined). Currently, flush_icache_mm stays in kernel/smp.c, which looks like a hack to prevent it from being compiled when CONFIG_SMP=n. It should also be in mm/cacheflush.c. Signed-off-by:
Gary Guo <gary@garyguo.net> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Anup Patel authored
We should prefer accessing CSRs using their CSR numbers because:
  1. It compiles fine with older toolchains.
  2. We can use the latest CSR names in the #define macro names of CSR numbers, as per the RISC-V spec.
  3. We can access newly added CSRs even if the toolchain does not recognize them by name.
Signed-off-by:
Anup Patel <anup.patel@wdc.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
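A minimal sketch of number-based CSR access; the CSR numbers 0x100 (sstatus) and 0x180 (satp) come from the RISC-V privileged spec, while the macro names are illustrative:

    #define CSR_SSTATUS     0x100
    #define CSR_SATP        0x180

    #define __csr_str_1(x)  #x
    #define __csr_str(x)    __csr_str_1(x)

    #define csr_read(csr)                                   \
    ({                                                      \
            unsigned long __v;                              \
            asm volatile ("csrr %0, " __csr_str(csr)        \
                          : "=r" (__v) : : "memory");       \
            __v;                                            \
    })

    static unsigned long read_sstatus(void)
    {
            /* Builds even if the assembler does not know the CSR name. */
            return csr_read(CSR_SSTATUS);
    }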
-
Anup Patel authored
This patch adds the SCAUSE interrupt flag and SCAUSE interrupt-related defines to asm/csr.h. We also use these defines in kernel/irq.c and express the SIE/SIP flags in terms of SCAUSE interrupt causes. Signed-off-by:
Anup Patel <anup.patel@wdc.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
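A sketch of the relationship being exploited; the numeric values follow the RISC-V privileged spec, the macro names are illustrative. The top bit of scause marks an interrupt, and the supervisor interrupt cause numbers double as the bit positions in SIE/SIP:

    #define SCAUSE_IRQ_FLAG         (1UL << (__riscv_xlen - 1))

    #define IRQ_S_SOFT              1
    #define IRQ_S_TIMER             5
    #define IRQ_S_EXT               9

    #define SIE_SSIE                (1UL << IRQ_S_SOFT)
    #define SIE_STIE                (1UL << IRQ_S_TIMER)
    #define SIE_SEIE                (1UL << IRQ_S_EXT)

    static int scause_is_interrupt(unsigned long scause)
    {
            return (scause & SCAUSE_IRQ_FLAG) != 0;
    }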
-
Anup Patel authored
The spacing between macro name and value is not consistent in asm/csr.h. This patch beautifies asm/csr.h by using tabs to align macro values instead of spaces. Signed-off-by:
Anup Patel <anup.patel@wdc.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Atish Patra authored
While working on the patches, I found some minor checkpatch issues. Signed-off-by:
Atish Patra <atish.patra@wdc.com> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
Atish Patra authored
If the nr_cpus command line option is set, the maximum number of possible CPUs should be set to that value. Signed-off-by:
Atish Patra <atish.patra@wdc.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Palmer Dabbelt <palmer@sifive.com>
-
- May 16, 2019
-
-
Baolin Wang authored
We've introduced power management logic for the Spreadtrum serial controller in commit 062ec2774c8a ("serial: sprd: Add power management for the Spreadtrum serial controller"), so add the related clock properties to support this feature. Signed-off-by:
Baolin Wang <baolin.wang@linaro.org> Signed-off-by:
Olof Johansson <olof@lixom.net>
-
YueHaibing authored
Remove duplicated include. Signed-off-by:
YueHaibing <yuehaibing@huawei.com> Signed-off-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Olof Johansson <olof@lixom.net>
-
David Howells authored
Wire up the mount API syscalls on non-x86 arches. Reported-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
David Howells authored
Fix the syscall numbering of the mount API syscalls so that the numbers match between i386 and x86_64 and that they're in the common numbering scheme space. Fixes: a07b2000 ("vfs: syscall: Add open_tree(2) to reference or clone a mount") Fixes: 2db154b3 ("vfs: syscall: Add move_mount(2) to move mounts around") Fixes: 24dcb3d9 ("vfs: syscall: Add fsopen() to prepare for superblock creation") Fixes: ecdab150 ("vfs: syscall: Add fsconfig() for configuring and managing a context") Fixes: 93766fbd ("vfs: syscall: Add fsmount() to create a mount for a superblock") Fixes: cf3cba4a ("vfs: syscall: Add fspick() to select a superblock for reconfiguration") Reported-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
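For reference, a sketch of the common numbering these syscalls ended up with; the values below are from memory and should be checked against include/uapi/asm-generic/unistd.h rather than taken as authoritative:

    /* Assumed final numbers in the common (asm-generic) syscall table. */
    #define __NR_open_tree          428
    #define __NR_move_mount         429
    #define __NR_fsopen             430
    #define __NR_fsconfig           431
    #define __NR_fsmount            432
    #define __NR_fspick             433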
-
Aneesh Kumar K.V authored
We call get_region_id() without validating the ea value. That means with a wrong ea value we hit the BUG as below.

  kernel BUG at arch/powerpc/include/asm/book3s/64/hash.h:129!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
  CPU: 0 PID: 3937 Comm: access_tests Not tainted 5.1.0
  ....
  NIP [c00000000007ba20] do_slb_fault+0x70/0x320
  LR [c00000000000896c] data_access_slb_common+0x15c/0x1a0

Fix this by removing the VM_BUG_ON. All callers make sure the returned region id is valid and error out otherwise. Fixes: 0034d395 ("powerpc/mm/hash64: Map all the kernel regions in the same 0xc range") Reported-by:
Andrew Donnellan <ajd@linux.ibm.com> Signed-off-by:
Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au>
-
Vincenzo Frascino authored
clock_getres in the vDSO library has to preserve the same behaviour as posix_get_hrtimer_res(). In particular, posix_get_hrtimer_res() does: sec = 0; ns = hrtimer_resolution; and hrtimer_resolution depends on whether high resolution timers are enabled, which can happen either at compile time or at run time. Fix the nds32 vdso implementation of clock_getres by keeping a copy of hrtimer_resolution in the vdso data and using that directly. Cc: Greentime Hu <green.hu@gmail.com> Cc: Vincent Chen <deanbo422@gmail.com> Signed-off-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Signed-off-by:
Greentime Hu <greentime@andestech.com>
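A minimal sketch of the approach with made-up struct and field names: the vDSO data page carries a copy of hrtimer_resolution, and clock_getres() returns that instead of a compile-time constant:

    #include <time.h>

    struct vdso_data_sketch {
            long hrtimer_res;       /* updated from hrtimer_resolution */
    };

    static int clock_getres_sketch(const struct vdso_data_sketch *vd,
                                   struct timespec *res)
    {
            res->tv_sec = 0;
            res->tv_nsec = vd->hrtimer_res;
            return 0;
    }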
-
Andy Lutomirski authored
The double fault ESPFIX path doesn't return to user mode at all -- it returns back to the kernel by simulating a #GP fault. prepare_exit_to_usermode() will run on the way out of general_protection before running user code. Signed-off-by:
Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@suse.de> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jon Masters <jcm@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 04dcbdb8 ("x86/speculation/mds: Clear CPU buffers on exit to user") Link: http://lkml.kernel.org/r/ac97612445c0a44ee10374f6ea79c222fe22a5c4.1557865329.git.luto@kernel.org Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
Christoph Hellwig authored
None of these are used by modules, nor should they be, as we have better high-level primitives. Signed-off-by:
Christoph Hellwig <hch@lst.de> Acked-by:
Greentime Hu <greentime@andestech.com> Signed-off-by:
Greentime Hu <greentime@andestech.com>
-
Tony Luck authored
Generic kernels feed many operations through the "machvec" logic to get the correct form of the operation for the current system. "mmiowb()" is one of those operations. Although machvec is initialized very early in boot, it isn't early enough for a recent upstream kernel change that added mmiowb to the spin_unlock() path. Statically initialize the mmiowb field of machvec so that we won't die with a call through a NULL pointer. This should be safe because we do the real initialization of machvec before bringing up any additional CPUs or doing any I/O. Fixes: 49ca6462 ("ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()") Signed-off-by:
Tony Luck <tony.luck@intel.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- May 15, 2019
-
-
Russell King authored
This reverts commit e8c24bbd. GCC 4.7, which is still permitted, emits code using the original syntax. This means we end up with lots of assembler warnings when building with a currently-supported version of gcc. Revert the commit (with fixups to keep the follow-on -mauto-it change) to avoid these warnings. Signed-off-by:
Russell King <rmk+kernel@armlinux.org.uk>
-
Manuel Lauss authored
Makes au1000-eth work again, tested on DB1500. Signed-off-by:
Manuel Lauss <manuel.lauss@gmail.com> Signed-off-by:
Paul Burton <paul.burton@mips.com> Cc: Linux-MIPS <linux-mips@linux-mips.org>
-
Sean Christopherson authored
The RDPMC-exiting control is dependent on the existence of the RDPMC instruction itself, i.e. is not tied to the "Architectural Performance Monitoring" feature. For all intents and purposes, the control exists on all CPUs with VMX support, since RDPMC also exists on all CPUs that support VMX. Per Intel's SDM: The RDPMC instruction was introduced into the IA-32 Architecture in the Pentium Pro processor and the Pentium processor with MMX technology. The earlier Pentium processors have performance-monitoring counters, but they must be read with the RDMSR instruction. Because RDPMC-exiting always exists, KVM requires the control and refuses to load if it's not available. As a result, hiding the PMU from a guest breaks nested virtualization if the guest attempts to use KVM. While it's not explicitly stated in the RDPMC pseudocode, the VM-Exit check for RDPMC-exiting follows standard fault vs. VM-Exit prioritization for privileged instructions, e.g. occurs after the CPL/CR0.PE/CR4.PCE checks, but before the counter referenced in ECX is checked for validity. In other words, the original KVM behavior of injecting a #GP was correct, and the KVM unit test needs to be adjusted accordingly, e.g. eat the #GP when the unit test guest (L3 in this case) executes RDPMC without RDPMC-exiting set in the unit test host (L2). This reverts commit e51bfdb6. Fixes: e51bfdb6 ("KVM: nVMX: Expose RDPMC-exiting only when guest supports PMU") Reported-by:
David Hill <hilld@binarystorm.net> Cc: Saar Amar <saaramar@microsoft.com> Cc: Mihai Carabas <mihai.carabas@oracle.com> Cc: Jim Mattson <jmattson@google.com> Cc: Liran Alon <liran.alon@oracle.com> Cc: stable@vger.kernel.org Signed-off-by:
Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-