- Sep 20, 2013
-
-
Steve Capper authored
Under arm64 elf_hwcap is a 32 bit quantity, but it is stored in a 64 bit auxiliary ELF field and glibc reads hwcap as 64 bit. This patch widens elf_hwcap to be 64 bit.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
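From userspace, the widened value is visible through the auxiliary vector; a minimal sketch using the standard getauxval(3) API (not part of the patch itself):

```c
#include <stdio.h>
#include <sys/auxv.h>   /* getauxval(), AT_HWCAP */

int main(void)
{
    /* glibc returns the AT_HWCAP auxv entry as unsigned long, which is
     * 64 bits on arm64 -- hence the kernel-side widening. */
    unsigned long hwcap = getauxval(AT_HWCAP);

    printf("hwcap: %#lx\n", hwcap);
    return 0;
}
```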
-
Catalin Marinas authored
When a task crashes and we print debugging information, ensure that compat tasks show the actual AArch32 LR and SP registers rather than the AArch64 ones.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
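A sketch of the register selection this implies (illustrative, not the exact patch; on arm64 the AArch32 banked SP and LR occupy the x13/x14 slots of the saved register file):

```c
#include <linux/kernel.h>
#include <asm/ptrace.h>   /* pt_regs, compat_user_mode() */

/* Sketch only: pick the AArch32 SP/LR for compat tasks when dumping regs. */
static void show_lr_sp(struct pt_regs *regs)
{
    unsigned long lr, sp;

    if (compat_user_mode(regs)) {
        lr = regs->regs[14];   /* AArch32 LR */
        sp = regs->regs[13];   /* AArch32 SP */
    } else {
        lr = regs->regs[30];   /* AArch64 LR (x30) */
        sp = regs->sp;
    }
    pr_info("lr : %016lx sp : %016lx\n", lr, sp);
}
```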
-
Catalin Marinas authored
This function is only called from arch/arm64/mm/fault.c.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Sep 13, 2013
-
-
Martin Schwidefsky authored
After the last architecture switched to generic hard irqs, the config options HAVE_GENERIC_HARDIRQS & GENERIC_HARDIRQS and the related code for !CONFIG_GENERIC_HARDIRQS can be removed.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- Sep 12, 2013
-
-
Johannes Weiner authored
Unlike global OOM handling, memory cgroup code will invoke the OOM killer in any OOM situation because it has no way of telling faults occurring in kernel context - which could be handled more gracefully - from user-triggered faults. Pass a flag that identifies faults originating in user space from the architecture-specific fault handlers to generic code so that memcg OOM handling can be improved.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
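The flag introduced by this series is FAULT_FLAG_USER; a shape-only sketch of the arch-side change:

```c
#include <linux/mm.h>
#include <asm/ptrace.h>

/* Sketch: the arch fault handler tells generic mm code whether the fault
 * originated in user context, so memcg OOM handling can tell the two apart. */
static unsigned int fault_flags(struct pt_regs *regs)
{
    unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

    if (user_mode(regs))
        flags |= FAULT_FLAG_USER;   /* user-triggered fault */

    return flags;   /* passed on to handle_mm_fault() */
}
```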
-
Johannes Weiner authored
Kernel faults are expected to handle OOM conditions gracefully (gup, uaccess etc.), so they should never invoke the OOM killer. Reserve this for faults triggered in user context when it is the only option. Most architectures already do this; fix up the remaining few.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: David Rientjes <rientjes@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: azurIt <azurit@pobox.sk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
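A sketch of the pattern the fix establishes in arch fault handlers (no_context() here is a stand-in for each architecture's exception-table fixup path):

```c
#include <linux/mm.h>
#include <asm/ptrace.h>

/* Sketch: only user-context faults may reach the OOM killer. */
static void handle_oom_result(struct pt_regs *regs, unsigned int fault)
{
    if (!(fault & VM_FAULT_OOM))
        return;
    if (!user_mode(regs)) {
        no_context(regs);           /* kernel fault: fail gracefully */
        return;
    }
    pagefault_out_of_memory();      /* user fault: OOM killer as last resort */
}
```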
-
- Sep 11, 2013
-
-
Naoya Horiguchi authored
Currently hugepage migration works well only for pmd-based hugepages (mainly due to lack of testing), so we had better not enable migration of other levels of hugepages until we are ready for it. Some users of hugepage migration (mbind, move_pages, and migrate_pages) do a page table walk and check pud/pmd_huge() there, so they are safe. But the other users (soft offline and memory hotremove) don't do this, so without this patch they can try to migrate unexpected types of hugepages. To prevent this, we introduce hugepage_migration_support() as an architecture-dependent check of whether hugepages are implemented on a pmd basis or not. And on some architectures multiple sizes of hugepages are available, so hugepage_migration_support() also checks hugepage size.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Rik van Riel <riel@redhat.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
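A sketch of the check described above, combining the per-arch pmd_huge_support() hook this series adds with the size test (close to, though not necessarily identical to, the merged code):

```c
#include <linux/hugetlb.h>

/* Sketch: migration is supported only if the architecture implements
 * hugepages at the pmd level AND this hstate's size is the pmd size. */
static inline int hugepage_migration_support(struct hstate *h)
{
    return pmd_huge_support() && huge_page_shift(h) == PMD_SHIFT;
}
```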
-
- Sep 03, 2013
-
-
Will Deacon authored
TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to 'tag' pointers with various pieces of metadata. This patch enables this bit for AArch64 Linux, and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
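A userspace sketch of the pointer-tagging idiom TBI0 enables (illustrative only; the new Documentation file covers the caveats, such as stripping tags before handing pointers to the kernel):

```c
#include <stdint.h>

/* With TCR.TBI0 set, hardware translation ignores bits 63:56 of userspace
 * addresses, so a JIT or runtime can stash metadata there. */
#define TAG_SHIFT 56
#define TAG_MASK  ((uintptr_t)0xff << TAG_SHIFT)

static inline void *tag_pointer(void *p, uint8_t tag)
{
    return (void *)(((uintptr_t)p & ~TAG_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

static inline uint8_t pointer_tag(const void *p)
{
    return (uintptr_t)p >> TAG_SHIFT;
}

static inline void *untag_pointer(void *p)
{
    return (void *)((uintptr_t)p & ~TAG_MASK);   /* strip before syscalls etc. */
}
```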
-
- Sep 02, 2013
-
-
Dan Aloni authored
Signed-off-by: Dan Aloni <alonid@stratoscale.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This string has been moved to arch/arm64/kernel/cputable.c.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Aug 30, 2013
-
-
Will Deacon authored
We always use a timer-backed delay loop for arm64, so don't bother reporting a bogomips value which appears to confuse some people.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Aug 28, 2013
-
-
Grant Likely authored
Most architectures use the same implementation. Collapse the common ones into a single weak function that can be overridden.
Signed-off-by: Grant Likely <grant.likely@linaro.org>
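The pattern looks like this (of_hook() is a made-up stand-in, since the message does not name the function being collapsed):

```c
#include <linux/types.h>

/* Hypothetical illustration of the weak-default pattern: a common weak
 * definition that an architecture may override. */
void __weak of_hook(u64 base, u64 size)
{
    /* common implementation shared by most architectures */
}

/* An architecture needing different behaviour defines a non-weak
 * of_hook(); the linker then prefers the strong symbol. */
```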
-
Catalin Marinas authored
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and not ending on a PMD_SIZE boundary, when the 4K page configuration is enabled, create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of such a bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet. The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
-
- Aug 27, 2013
-
-
Mark Salter authored
The current vmlinux.lds.S places the notes sections between the end of rw data and the start of bss. This means that _edata doesn't really point to the end of data. Since notes are read-only, this patch moves them to the read-only segment so that _edata does point to the end of initialized rw data.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Aug 22, 2013
-
-
Catalin Marinas authored
do_undefinstr() has to be called with interrupts disabled since it may read the instruction from the user address space, which could lead to a data abort and subsequent might_sleep() warning in do_page_fault().
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Roy Franz authored
Expand the arm64 image header to allow for co-existence with the PE/COFF header required by the EFI stub. The PE/COFF format requires the "MZ" header to be at offset 0, and the offset to the PE/COFF header to be at offset 0x3c. The image header is expanded to allow 2 instructions at the beginning, to accommodate a benign instruction at offset 0 that includes the "MZ" header, a magic number, and the offset to the PE/COFF header.
Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
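From a loader's point of view, the two PE/COFF constraints named in the message can be checked like this (a sketch; only the offsets come from the text):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch: validate the PE/COFF layout constraints described above --
 * "MZ" at offset 0 and the PE/COFF header offset stored at 0x3c. */
static int has_pe_coff_header(const uint8_t *image, size_t len)
{
    uint32_t pe_offset;

    if (len < 0x40 || memcmp(image, "MZ", 2) != 0)
        return 0;
    memcpy(&pe_offset, image + 0x3c, sizeof(pe_offset));
    return pe_offset != 0 && pe_offset < len;
}
```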
-
Chen Gang authored
Need include "asm/types.h", just like arm has done, or can not pass compiling, the related error: In file included from arch/arm64/include/asm/page.h:37:0, from drivers/staging/lustre/include/linux/lnet/linux/lib-lnet.h:42, from drivers/staging/lustre/include/linux/lnet/lib-lnet.h:44, from drivers/staging/lustre/lnet/lnet/api-ni.c:38: arch/arm64/include/asm/pgtable-2level-types.h:19:1: error: unknown type name ‘u64 arch/arm64/include/asm/pgtable-2level-types.h:20:1: error: unknown type name ‘u64’ Signed-off-by:
Chen Gang <gang.chen@asianux.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
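A sketch of the fix (typedef names illustrative):

```c
/* pgtable-2level-types.h uses u64, so it must pull in the type
 * definitions itself instead of relying on whoever includes it. */
#include <asm/types.h>   /* provides u64 */

typedef u64 pteval_t;    /* the typedefs that previously failed to build */
typedef u64 pgdval_t;
```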
-
- Aug 20, 2013
-
-
Ard Biesheuvel authored
Add <asm/neon.h> containing kernel_neon_begin/kernel_neon_end function declarations and corresponding definitions in fpsimd.c. These are needed to wrap uses of NEON in kernel mode. The names are identical to the ones used in arm/ so code using intrinsics or vectorized by GCC can be shared between arm and arm64.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
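A sketch of the usage pattern the new API enables (function name hypothetical):

```c
#include <linux/types.h>
#include <asm/neon.h>

/* Bracket any in-kernel NEON use so the current task's FP/SIMD state is
 * saved before and restored after. */
static void neon_memstream_sketch(u8 *dst, const u8 *src, int blocks)
{
    kernel_neon_begin();   /* save user FP/SIMD state, permit NEON use */
    /* ... NEON intrinsics or GCC-vectorized code operating on dst/src ... */
    kernel_neon_end();     /* restore state before returning */
}
```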
-
Will Deacon authored
This is a port of f2fe09b0 ("ARM: 7663/1: perf: fix ARMv7 EVTYPE_MASK to include NSH bit") to arm64, which fixes the broken evtype mask to include the NSH bit, allowing profiling at EL2.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
This is a port of cb2d8b34 ("ARM: 7698/1: perf: fix group validation when using enable_on_exec") to arm64, which fixes the event validation checking so that events in the OFF state are still considered when enable_on_exec is true.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
This is a port of c95eb318 ("ARM: 7809/1: perf: fix event validation for software group leaders") to arm64, which fixes a panic in the arm64 perf backend found as a result of Vince's fuzzing tool.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
This is a port of d9f96635 ("ARM: 7810/1: perf: Fix array out of bounds access in armpmu_map_hw_event()") to arm64, which fixes an oops in the arm64 perf backend found as a result of Vince's fuzzing tool.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Aug 16, 2013
-
-
Linus Torvalds authored
Ben Tebulin reported: "Since v3.7.2 on two independent machines a very specific Git repository fails in 9/10 cases on git-fsck due to an SHA1/memory failures. This only occurs on a very specific repository and can be reproduced stably on two independent laptops. Git mailing list ran out of ideas and for me this looks like some very exotic kernel issue" and bisected the failure to the backport of commit 53a59fc6 ("mm: limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").

That commit itself is not actually buggy, but what it does is to make it much more likely to hit the partial TLB invalidation case, since it introduces a new case in tlb_next_batch() that previously only ever happened when running out of memory. The real bug is that the TLB gather virtual memory range setup is subtly buggered. It was introduced in commit 597e1c35 ("mm/mmu_gather: enable tlb flush range in generic mmu_gather"), and the range handling was already fixed at least once in commit e6c495a9 ("mm: fix the TLB range flushed when __tlb_remove_page() runs out of slots"), but that fix was not complete.

The problem with the TLB gather virtual address range is that it isn't set up by the initial tlb_gather_mmu() initialization (which didn't get the TLB range information), but it is set up ad-hoc later by the functions that actually flush the TLB. And so any such case that forgot to update the TLB range entries would potentially miss TLB invalidates.

Rather than try to figure out exactly which particular ad-hoc range setup was missing (I personally suspect it's the hugetlb case in zap_huge_pmd(), which didn't have the same logic as zap_pte_range() did), this patch just gets rid of the problem at the source: make the TLB range information available to tlb_gather_mmu(), and initialize it when initializing all the other tlb gather fields. This makes the patch larger, but conceptually much simpler. And the end result is much more understandable; even if you want to play games with partial ranges when invalidating the TLB contents in chunks, now the range information is always there, and anybody who doesn't want to bother with it won't introduce subtle bugs.

Ben verified that this fixes his problem.
Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
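A sketch of the resulting interface shape:

```c
#include <linux/mm.h>
#include <asm/tlb.h>

/* Sketch: callers now hand the full virtual range to tlb_gather_mmu()
 * up front instead of patching it in ad hoc later. */
static void unmap_range_sketch(struct mm_struct *mm,
                               unsigned long start, unsigned long end)
{
    struct mmu_gather tlb;

    tlb_gather_mmu(&tlb, mm, start, end);   /* range known from the start */
    /* ... unmap_vmas()/zap work that queues pages for freeing ... */
    tlb_finish_mmu(&tlb, start, end);       /* flush exactly [start, end) */
}
```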
-
- Aug 09, 2013
-
-
Chen Gang authored
'target' will be set to -1 in kvm_arch_vcpu_init(), and kvm_vcpu_initialized() needs to check whether 'target' is less than zero. So 'target' needs to be defined as 'int' instead of 'u32', just like ARM has done. The related warning: arch/arm64/kvm/../../../arch/arm/kvm/arm.c:497:2: warning: comparison of unsigned expression >= 0 is always true [-Wtype-limits]
Signed-off-by: Chen Gang <gang.chen@asianux.com>
[Marc: reformatted the Subject line to fit the series]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
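A sketch of why the type matters:

```c
#include <linux/types.h>

/* Sketch: with a signed 'target' the -1 sentinel comparison is meaningful;
 * with u32 it is always true, which is exactly what -Wtype-limits flags. */
struct vcpu_arch_sketch {
    int target;   /* -1 until KVM_ARM_VCPU_INIT selects a CPU type */
};

static inline bool vcpu_initialized_sketch(const struct vcpu_arch_sketch *a)
{
    return a->target >= 0;
}
```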
-
Marc Zyngier authored
When performing a Stage-2 TLB invalidation, it is necessary to make sure the write to the page tables is observable by all CPUs. For this purpose, add dsb instructions to __kvm_tlb_flush_vmid_ipa and __kvm_flush_vm_context before doing the TLB invalidation itself.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
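The required ordering, expressed as a C-level sketch (the real code is hyp assembly, and the exact barrier domain and TLBI operands may differ):

```c
/* Illustrative ordering only. */
static inline void stage2_invalidate_order_sketch(void)
{
    /* 1. dsb: the Stage-2 page-table writes must be observable ... */
    asm volatile("dsb ish" : : : "memory");
    /* 2. ... before the TLB invalidation (tlbi) is issued. */
}
```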
-
Marc Zyngier authored
Not saving PAR_EL1 is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR_EL1, it could become corrupted by another guest or the host. Saving this register is made slightly more complicated as KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
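A sketch of the "stash and restore" idea on the permission-fault path (using the modern read_sysreg()/write_sysreg() helpers purely for illustration):

```c
#include <asm/barrier.h>
#include <asm/sysreg.h>

/* Sketch: preserve the guest's PAR_EL1 around KVM's own AT lookup. */
static inline u64 at_with_par_preserved(u64 va)
{
    u64 saved_par = read_sysreg(par_el1);   /* stash the guest's PAR_EL1 */
    u64 par;

    asm volatile("at s1e1r, %0" : : "r" (va));
    isb();
    par = read_sysreg(par_el1);             /* result of the translation */

    write_sysreg(saved_par, par_el1);       /* restore before guest re-entry */
    return par;
}
```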
-
- Jul 31, 2013
-
-
Stephen Boyd authored
We're going to introduce support to read and write the memory-mapped timer registers in the next patch, so push the cp15 read/write functions one level deeper. This simplifies the next patch and makes it clearer what's going on.
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
-
Stephen Boyd authored
Using an enum for the register we wish to access allows newer compilers to determine if we've forgotten a case in our switch statement. This allows us to remove the BUILD_BUG() instances in the arm64 port, avoiding problems where optimizations may not happen. To try and force better code generation we're currently marking the accessor functions as inline, but newer compilers can ignore the inline keyword unless it's marked __always_inline. Luckily on arm and arm64 inline is __always_inline, but let's make everything __always_inline to be explicit.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <Marc.Zyngier@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
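A sketch of the enum-plus-switch pattern (register names illustrative):

```c
#include <linux/types.h>

/* With no 'default' case the compiler can warn when an enum value is
 * unhandled, and __always_inline guarantees the switch is resolved at
 * compile time -- replacing the old BUILD_BUG() approach. */
enum arch_timer_reg { ARCH_TIMER_REG_CTRL, ARCH_TIMER_REG_TVAL };

static __always_inline u64 timer_reg_read_sketch(enum arch_timer_reg reg)
{
    u64 val = 0;

    switch (reg) {
    case ARCH_TIMER_REG_CTRL:
        asm volatile("mrs %0, cntv_ctl_el0" : "=r" (val));
        break;
    case ARCH_TIMER_REG_TVAL:
        asm volatile("mrs %0, cntv_tval_el0" : "=r" (val));
        break;
    }
    return val;
}
```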
-
- Jul 26, 2013
-
-
Feng Kan authored
Written by Catalin Marinas, tested by APM on the storm platform. This is needed because of failures encountered when running the SpecWeb benchmark test.
Signed-off-by: Feng Kan <fkan@apm.com>
Acked-by: Kumar Sankaran <ksankaran@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- Jul 24, 2013
-
-
Santosh Shilimkar authored
On some PAE architectures, the entire range of physical memory could reside outside the 32-bit limit. These systems need the ability to specify the initrd location using 64-bit numbers. This patch globally modifies the early_init_dt_setup_initrd_arch() function to use 64-bit numbers instead of the current unsigned long. There has been quite a bit of debate about whether to use u64 or phys_addr_t. It was concluded to stick to u64 to be consistent with the rest of the device tree code. As summarized by Geert, "The address to load the initrd is decided by the bootloader/user and set at that point later in time. The dtb should not be tied to the kernel you are booting." More details on the discussion can be found here: https://lkml.org/lkml/2013/6/20/690 https://lkml.org/lkml/2012/9/13/544
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
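A sketch of the converted per-arch hook (matching the shape several architectures use):

```c
#include <linux/init.h>
#include <linux/initrd.h>
#include <linux/types.h>

/* Sketch: the physical range now arrives as u64, so a >4GB initrd
 * location survives on 32-bit PAE systems. */
void __init early_init_dt_setup_initrd_arch(u64 start, u64 end)
{
    initrd_start = (unsigned long)__va(start);
    initrd_end   = (unsigned long)__va(end);
}
```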
-
- Jul 23, 2013
-
-
Catalin Marinas authored
Commit ff701306 (arm64: use common reboot infrastructure) converted the arm_pm_restart declaration to the new reboot infrastructure but missed the actual definition.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a cached value of __boot_cpu_mode may be incoherent with that in memory. This could lead to a failure to detect mismatched boot modes. This patch adds flushing to ensure that writes by secondaries to __boot_cpu_mode are made visible before we test against it.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <cdall@cs.columbia.edu>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
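A sketch of the coherency fix, assuming arm64's __flush_dcache_area() helper:

```c
#include <asm/cacheflush.h>

/* Sketch: clean the cache lines backing __boot_cpu_mode so the primary
 * reads what secondaries wrote with their caches disabled. */
extern u32 __boot_cpu_mode[2];

static void sync_boot_mode_sketch(void)
{
    __flush_dcache_area(__boot_cpu_mode, sizeof(__boot_cpu_mode));
    /* only now is it safe to compare the recorded boot modes */
}
```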
-
- Jul 19, 2013
-
-
Marc Zyngier authored
Commit 7b6d864b (reboot: arm: change reboot_mode to use enum reboot_mode) changed the way reboot is handled on arm, which has a direct impact on arm64 as we share the reset driver on the VE platform. The obvious fix is to move arm64 to use the same infrastructure.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: removed reboot_mode = REBOOT_HARD default setting]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
On arm64, cache maintenance faults appear as data aborts with the CM bit set in the ESR. The WnR bit, usually used to distinguish between faulting loads and stores, always reads as 1 and (slightly confusingly) the instructions are treated as reads by the architecture. This patch fixes our fault handling code to treat cache maintenance faults in the same way as loads.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
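The decode rule, sketched (bit positions per the ARM ARM: WnR is ESR ISS bit 6, CM is bit 8):

```c
#include <linux/types.h>

#define ESR_WNR  (1U << 6)   /* write-not-read */
#define ESR_CM   (1U << 8)   /* cache maintenance operation */

/* Sketch: CM faults report WnR == 1 but must be handled as reads. */
static inline bool fault_is_write(unsigned int esr)
{
    return (esr & ESR_WNR) && !(esr & ESR_CM);
}
```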
-
Chen Gang authored
If 'COMPAT' is not defined, the build fails with a redefinition error; since aarch32_break_handler() works independently of 'COMPAT', remove the dummy definition. The related error: arch/arm64/kernel/debug-monitors.c:249:5: error: redefinition of ‘aarch32_break_handler’ In file included from arch/arm64/kernel/debug-monitors.c:29:0: /root/linux-next/arch/arm64/include/asm/debug-monitors.h:89:12: note: previous definition of ‘aarch32_break_handler’ was here
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
There is a slight chance that (timer) interrupts are triggered before a secondary CPU has been marked online, with implications for softirq thread affinity.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Kirill Tkhai <tkhai@yandex.ru>
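A sketch of the ordering fix in the secondary bring-up path:

```c
#include <linux/cpumask.h>
#include <linux/irqflags.h>

/* Sketch: mark the CPU online before enabling interrupts, so incoming
 * (timer) IRQs never run on a CPU the kernel still considers offline. */
static void secondary_bringup_sketch(unsigned int cpu)
{
    set_cpu_online(cpu, true);   /* visible to the rest of the kernel first */
    local_irq_enable();          /* only now accept interrupts */
}
```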
-
- Jul 14, 2013
-
-
Paul Gortmaker authored
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. The fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() live in arch-independent code (kernel/cpu.c) and are flagged as __cpuinit -- so if we remove the __cpuinit from arch-specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the arch/arm64 uses of the __cpuinit macros from all C files. Currently arm64 does not have any __CPUINIT used in assembly files. [1] https://lkml.org/lkml/2013/5/20/589
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
- Jul 11, 2013
-
-
Michel Lespinasse authored
Since all architectures have been converted to use vm_unmapped_area(), there is no remaining use for the free_area_cache.
Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Jul 04, 2013
-
-
Marc Zyngier authored
Finally plug KVM/arm64 into the config system, making it possible to enable KVM support on AArch64 CPUs.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Stefano Stabellini authored
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-