- Feb 25, 2014
Will Deacon authored
After a bunch of benchmarking on the interaction between dmb and pldw, it turns out that issuing the pldw *after* the dmb instruction can give modest performance gains (~3% atomic_add_return improvement on a dual A15). This patch adds prefetchw invocations to our barriered atomic operations including cmpxchg, test_and_xxx and futexes.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
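For context, a condensed sketch of the shape this gives atomic_add_return() on ARMv7, with the pldw (via prefetchw()) issued after the dmb (via smp_mb()); kernel-style inline asm, reconstructed rather than quoted from the patch:

```c
static inline int atomic_add_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	smp_mb();			/* dmb */
	prefetchw(&v->counter);		/* pldw, issued after the dmb */

	__asm__ __volatile__("@ atomic_add_return\n"
"1:	ldrex	%0, [%3]\n"		/* load-exclusive the counter */
"	add	%0, %0, %4\n"
"	strex	%1, %0, [%3]\n"		/* store-exclusive, retry on failure */
"	teq	%1, #0\n"
"	bne	1b"
	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	smp_mb();
	return result;
}
```

The write prefetch pulls the cache line in exclusive state before the ldrex/strex loop, reducing the chance of a failed store-exclusive.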
-
- Feb 10, 2014
Will Deacon authored
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user which must trigger CoW appropriately when targeting user pages.

The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process).

This patch solves this problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are now also mapped as kernel r/o. Patch co-developed with Russell King.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Nicolas Pitre authored
Now that we select HAVE_EFFICIENT_UNALIGNED_ACCESS for ARMv6+ CPUs, replace the __LINUX_ARM_ARCH__ check in uaccess.h with the new symbol.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Christopher Covington authored
Add the trivial support necessary to get hardware breakpoints working for GDB on ARMv8 simulators running in AArch32 mode.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Jonathan Austin authored
The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does. Note that as the ACTLR cannot (usually) be written from the non-secure world, it is the responsibility of the bootloader/firmware to set this bit per core; it is done here in Linux only as a last resort, in case of bad firmware.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- Jan 28, 2014
Ezequiel Garcia authored
Some SoCs have MMIO regions that are shared across orthogonal subsystems. This commit implements a possible solution for thread-safe access to such regions through a spinlock-protected API. Concurrent access is protected with a single spinlock for the entire MMIO address space. While this protects shared registers, it also serializes access to unrelated/unshared registers. We add relaxed and non-relaxed variants, using writel_relaxed and writel, respectively. The rationale for this is that some users may not require register write completion but only thread-safe access to a register.
Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ezequiel Garcia <ezequiel.garcia@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
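Mainline merged this as atomic_io_modify()/atomic_io_modify_relaxed() on 32-bit ARM; a minimal sketch of the non-relaxed variant, assuming the single global lock described above (not the verbatim source):

```c
static DEFINE_RAW_SPINLOCK(__io_lock);	/* one lock for the whole MMIO space */

void atomic_io_modify(void __iomem *reg, u32 mask, u32 set)
{
	unsigned long flags;
	u32 value;

	raw_spin_lock_irqsave(&__io_lock, flags);
	value = readl_relaxed(reg) & ~mask;	/* read-modify-write under the lock */
	value |= (set & mask);
	writel(value, reg);			/* non-relaxed: includes the write barrier */
	raw_spin_unlock_irqrestore(&__io_lock, flags);
}
```

The relaxed variant differs only in using writel_relaxed(), for callers that need atomicity of the read-modify-write but not completion ordering.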
-
- Jan 22, 2014
Santosh Shilimkar authored
Introduce memblock memory allocation APIs which allow supporting PAE or LPAE extensions on 32-bit architectures where the physical memory start address can be beyond 4GB. In such cases, the existing bootmem APIs, which operate on 32-bit addresses, won't work, and we need the memblock layer, which operates on 64-bit addresses. So we add equivalent APIs so that we can replace usage of bootmem with memblock interfaces. Architectures already converted to NO_BOOTMEM use these new memblock interfaces. The architectures which are still not converted to NO_BOOTMEM continue to function as is, because we still maintain the fallback option of the bootmem back-end supporting these new interfaces. So there is no functional change as such.

In the long run, once all the architectures move to NO_BOOTMEM, we can get rid of the bootmem layer completely. This is one step towards removing the core code dependency on bootmem, and it also provides a path for architectures to move away from bootmem.

The proposed interface becomes active if both CONFIG_HAVE_MEMBLOCK and CONFIG_NO_BOOTMEM are specified by the arch. In the !CONFIG_NO_BOOTMEM case, the memblock() wrappers fall back to the existing bootmem APIs, so that architectures not converted to NO_BOOTMEM continue to work as is. The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE is kept the same.

[akpm@linux-foundation.org: s/depricated/deprecated/]
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
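A hypothetical early-boot caller of the new interface; memblock_virt_alloc() was the allocator entry point added by this series, while the surrounding function and sizes here are purely illustrative:

```c
/* Early-boot allocation sketch: memblock_virt_alloc() returns zeroed,
 * aligned memory via the 64-bit-capable physical allocator, replacing
 * an alloc_bootmem()-style call. Memory above 4GB on an LPAE system
 * is therefore reachable. */
static void __init example_table_init(void)
{
	void *table;

	table = memblock_virt_alloc(PAGE_SIZE, PAGE_SIZE);
	/* Like the bootmem APIs it replaces, this variant panics on
	 * failure, so no NULL check is needed here; a _nopanic
	 * variant exists for callers that can recover. */
}
```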
-
- Jan 13, 2014
Russell King authored
ARM's ffs/fls implementations are not type-compatible with x86, so when they're used in combination with min()/max(), they provoke warnings. Change these to be inline functions with the correct types, implemented in terms of the clz instruction, and document their individual behaviours.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
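A sketch of what the inline versions look like on ARMv5+, where clz yields fls directly (the constant-folding fast path of the real header is omitted):

```c
static inline int fls(int x)
{
	int ret;

	asm("clz\t%0, %1" : "=r" (ret) : "r" (x));	/* count leading zeros */
	return 32 - ret;				/* clz(0) == 32, so fls(0) == 0 */
}

static inline int ffs(int x)
{
	return fls(x & -x);	/* isolate the lowest set bit, then reuse fls */
}
```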
-
Dario Faggioli authored
Add the syscalls needed for supporting scheduling algorithms with extended scheduling parameters (e.g., SCHED_DEADLINE). In general, this makes it possible to specify a periodic/sporadic task that executes for a given amount of runtime at each instance, and is scheduled according to the urgency of its own timing constraints, i.e.:
- a (maximum/typical) instance execution time,
- a minimum interval between consecutive instances,
- a time constraint by which each instance must be completed.
Thus, both the data structure that holds the scheduling parameters of the tasks and the system calls dealing with it must be extended. Unfortunately, modifying the existing struct sched_param would break the ABI and result in potentially serious compatibility issues with legacy binaries. For these reasons, this patch:
- defines the new struct sched_attr, containing all the fields that are necessary for specifying a task in the computational model described above;
- defines and implements the new scheduling-related syscalls that manipulate it, i.e., sched_setattr() and sched_getattr().
Syscalls are introduced for x86 (32- and 64-bit) and ARM only, as a proof of concept and for developing and testing purposes. Making them available on other architectures is straightforward. Since no "user" for these new parameters is introduced in this patch, the implementation of the new system calls is identical to their already existing counterparts. Future patches that implement scheduling policies able to exploit the new data structure must also take care of modifying the sched_*attr() calls accordingly for their own purposes.
Signed-off-by: Dario Faggioli <raistlin@linux.it>
[ Rewrote to use sched_attr. ]
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
[ Removed sched_setscheduler2() for now. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1383831828-15501-3-git-send-email-juri.lelli@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
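A userspace sketch of invoking the new syscall; the struct layout follows the mainline uapi header, and the ARM syscall number is an assumption noted in the code:

```c
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Field layout as described in the patch; names follow the mainline
 * uapi header and may differ in detail. */
struct sched_attr {
	uint32_t size;		/* size of this structure, for ABI extension */
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;		/* SCHED_NORMAL, SCHED_BATCH */
	uint32_t sched_priority;	/* SCHED_FIFO, SCHED_RR */
	uint64_t sched_runtime;		/* deadline-class parameters, in ns */
	uint64_t sched_deadline;
	uint64_t sched_period;
};

#ifndef SYS_sched_setattr
#define SYS_sched_setattr 380	/* assumed ARM EABI number; check your arch */
#endif

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = 0;	/* SCHED_NORMAL: no new policy user exists yet */

	/* pid 0 means the calling thread; the flags argument is reserved */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");
	return 0;
}
```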
-
- Jan 12, 2014
Peter Zijlstra authored
A number of situations currently require the heavyweight smp_mb(), even though there is no need to order prior stores against later loads. Many architectures have much cheaper ways to handle these situations, but the Linux kernel currently has no portable way to make use of them. This commit therefore supplies smp_load_acquire() and smp_store_release() to remedy this situation. The new smp_load_acquire() primitive orders the specified load against any subsequent reads or writes, while the new smp_store_release() primitive orders the specified store against any prior reads or writes. These primitives allow array-based circular FIFOs to be implemented without an smp_mb(), and also allow a theoretical hole in rcu_assign_pointer() to be closed at no additional expense on most architectures.

In addition, the RCU experience transitioning from explicit smp_read_barrier_depends() and smp_wmb() to rcu_dereference() and rcu_assign_pointer(), respectively, resulted in substantial improvements in readability. It therefore seems likely that replacing other explicit barriers with smp_load_acquire() and smp_store_release() will provide similar benefits. It appears that roughly half of the explicit barriers in core kernel code might be so replaced.

[Changelog by PaulMck]
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Michael Ellerman <michael@ellerman.id.au>
Cc: Michael Neuling <mikey@neuling.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Victor Kaplansky <VICTORK@il.ibm.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
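A minimal message-passing sketch using the two new primitives (kernel-style code; the variable names are illustrative):

```c
/* The release orders the data store before the flag store; the acquire
 * orders the flag load before any later reads. */
static int msg;
static int ready;

static void producer(void)
{
	msg = 42;			/* plain store */
	smp_store_release(&ready, 1);	/* cannot move before 'msg = 42' */
}

static int consumer(void)
{
	while (!smp_load_acquire(&ready))	/* later reads cannot move above this */
		cpu_relax();
	return msg;		/* guaranteed to observe 42 */
}
```

This is exactly the array-based FIFO pattern the changelog mentions: each side needs a one-way fence, not a full smp_mb().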
-
- Jan 06, 2014
Konrad Rzeszutek Wilk authored
The 'xen_hvm_resume_frames' used to be an 'unsigned long' and contain the virtual address of the grants. That was OK for most architectures (PVHVM, ARM) where the grants are contiguous in memory. That, however, is not the case for PVH, where we will have to do a lookup of the PFN for each virtual address. Instead of doing that, let's make it a structure which will contain the array of PFNs, the virtual address and the count of said PFNs. Also provide generic functions, gnttab_setup_auto_xlat_frames and gnttab_free_auto_xlat_frames, to populate said structure with appropriate values for PVHVM and ARM. To round it off, change the name from 'xen_hvm_resume_frames' to a more descriptive one: 'xen_auto_xlat_grant_frames'. For PVH, in the patch "xen/pvh: Piggyback on PVHVM for grant driver", we will populate 'xen_auto_xlat_grant_frames' ourselves.

v2 moves the xen_remap into gnttab_setup_auto_xlat_frames and also introduces xen_unmap for gnttab_free_auto_xlat_frames.

Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v3: Based on top of 'asm/xen/page.h: remove redundant semicolon']
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
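The structure described above, sketched from memory of the merged header; field and function names should be treated as approximate:

```c
/* The array of PFNs, the virtual address and the count of said PFNs. */
struct grant_frames {
	xen_pfn_t *pfn;		/* one PFN per grant frame */
	unsigned int count;	/* number of frames */
	void *vaddr;		/* the xen_remap'd virtual address */
};
extern struct grant_frames xen_auto_xlat_grant_frames;

int gnttab_setup_auto_xlat_frames(phys_addr_t addr);
void gnttab_free_auto_xlat_frames(void);
```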
-
Wei Liu authored
Signed-off-by: Wei Liu <liuw@liuw.name>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- Jan 05, 2014
Rob Herring authored
ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64:

drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- Dec 29, 2013
Will Deacon authored
With commit 11ec50ca ("word-at-a-time: provide generic big-endian zero_bytemask implementation"), the asm-generic word-at-a-time code now provides a zero_bytemask implementation, allowing us to make use of DCACHE_WORD_ACCESS on big-endian CPUs, provided our load_unaligned_zeropad function is endianness-clean. This patch reworks the load_unaligned_zeropad fixup code to work for both big- and little-endian CPUs, then removes the !CPU_BIG_ENDIAN check when selecting DCACHE_WORD_ACCESS.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Laura Abbott authored
The definition of virt_addr_valid is that it should return true if and only if virt_to_page returns a valid pointer. The current definition of virt_addr_valid only checks against the virtual address range. There's no guarantee that just because a virtual address falls between PAGE_OFFSET and high_memory the associated physical memory has a valid backing struct page. Follow the example of other architectures and convert to pfn_valid to verify that the virtual address is actually valid. The check for an address between PAGE_OFFSET and high_memory is still necessary, as vmalloc/highmem addresses are not valid with virt_to_page.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
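A sketch of the strengthened macro, assuming the usual ARM definitions of PAGE_OFFSET, high_memory and __pa():

```c
/* Range check first (vmalloc/highmem addresses must be rejected before
 * __pa() is meaningful), then verify a backing struct page exists. */
#define virt_addr_valid(kaddr)						\
	(((unsigned long)(kaddr) >= PAGE_OFFSET &&			\
	  (unsigned long)(kaddr) < (unsigned long)high_memory) &&	\
	 pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
```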
-
Russell King authored
The IDE code used to specify the IDE IRQs for chipsets operating in legacy mode. This appears to no longer work, and this information must be provided by the arch. Do so. This partially fixes CY82C693 (and probably others) on Footbridge platforms.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Sebastian Hesselbarth authored
This adds support for the Marvell Tauros3 cache controller, which is compatible with the pl310 cache controller but broadcasts L1 cache operations to the L2 cache. While updating the binding documentation, clean up the list of possible compatibles. Also reorder the driver compatibles to allow non-ARM derivatives to be compatible with the ARM cache controller compatibles.
Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
There is a miscompilation of csum_tcpudp_magic() due to the way we pass the asm() operands in. Fortunately, this doesn't affect the IP code, but it can affect anyone who passes ntohs(udp->len) as the length argument, or protocol values wider than 8 bits. The problem stems from passing 16-bit operands into an asm(): GCC makes no guarantees about what may be in the high 16 bits of such a register passed into assembly, which is in the "HI" machine mode. Address this by changing the way we handle the 16-bit arguments. Since accumulating the protocol and length can never overflow, we can delegate this to the compiler, and then accumulate the result into the checksum inside the asm(), taking account of the endianness via an appropriate 32-bit rotation. While we are here, also realise that there's a chance to optimise this a little: several callers from IP code pass a constant zero as the initial sum. This is wasteful; if we detect this condition, we can optimise away one instruction.
Tested-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
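A condensed sketch of the resulting shape: the 16-bit len and proto are combined by the compiler into a full 32-bit value, and only that 32-bit operand reaches the asm(), rotated on little-endian so it lands in checksum byte order. The constant-zero-sum fast path is omitted and the exact asm may differ from the merged patch:

```c
static inline __wsum
csum_tcpudp_nofold(__be32 saddr, __be32 daddr, unsigned short len,
		   unsigned short proto, __wsum sum)
{
	u32 lenproto = len + proto;	/* compiler-side: cannot overflow 32 bits */

	__asm__(
	"adds	%0, %1, %2\n\t"
	"adcs	%0, %0, %3\n\t"
#ifdef __ARMEB__
	"adcs	%0, %0, %4\n\t"			/* BE: already in place */
#else
	"adcs	%0, %0, %4, ror #8\n\t"		/* LE: rotate into csum order */
#endif
	"adc	%0, %0, #0"			/* fold the final carry */
	: "=&r" (sum)
	: "r" (sum), "r" (daddr), "r" (saddr), "r" (lenproto)
	: "cc");

	return sum;
}
```

The ror #8 works because a ones'-complement sum with end-around carry is invariant under consistent byte rotation, so the 16-bit byte swap needed on little-endian reduces to a register rotate.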
-
Rob Herring authored
ioremap_cache is more aligned with other architectures. There are only 2 users of this in the kernel: pxa2xx-flash and Xen. This fixes Xen build failures on arm64 caused by commit c04e8e2f ("arm64: allow ioremap_cache() to use existing RAM mappings"):

drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- Dec 21, 2013
Andre Przywara authored
For migration to work we need to save (and later restore) the state of each core's virtual generic timer. Since this is per VCPU, we can use the [gs]et_one_reg ioctl and export the three needed registers (control, counter, compare value). Though they live in cp15 space, we don't use the existing list, since they need special accessor functions and the arch timer is optional.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- Dec 18, 2013
David S. Miller authored
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Dec 13, 2013
Russell King authored
Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is not defined:

In file included from arch/arm/include/asm/page.h:163:0,
                 from include/linux/mm_types.h:16,
                 from include/linux/sched.h:24,
                 from arch/arm/kernel/asm-offsets.c:13:
arch/arm/include/asm/memory.h: In function '__virt_to_phys':
arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
arch/arm/include/asm/memory.h: In function '__phys_to_virt':
arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)

Fixes: ca5a45c0 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Alexandre Courbot authored
Trusted Foundations is a TrustZone-based secure monitor for ARM that can be invoked using the same SMC-based API on supported platforms. This patch adds initial basic support for Trusted Foundations using the ARM firmware API. Current features are limited to the ability to boot secondary processors. Note: the API followed by Trusted Foundations does *not* follow the SMC calling conventions. It has nothing to do with PSCI either, and is only relevant to devices that use Trusted Foundations (like most Tegra-based retail devices).
Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
Reviewed-by: Tomasz Figa <t.figa@samsung.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
-
- Dec 11, 2013
Santosh Shilimkar authored
KVM initialisation fails on architectures implementing virt_to_idmap(), because virt_to_phys() on such architectures won't fetch the correct idmap page. So update the KVM ARM code to use virt_to_idmap() to fix the issue. Since the KVM code is shared between arm and arm64, we create kvm_virt_to_phys() and handle the redirection in the respective headers.
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Laura Abbott authored
Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define these in an appropriate manner for ARM.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
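The signatures follow the convention used by other architectures (page-aligned address, count in pages); a hypothetical caller:

```c
/* Flip a freshly written code page to read-only, then executable.
 * 'page_addr' is assumed page-aligned; this caller is illustrative. */
static int protect_code_page(unsigned long page_addr)
{
	int err;

	err = set_memory_ro(page_addr, 1);	/* second argument: number of pages */
	if (err)
		return err;
	return set_memory_x(page_addr, 1);
}
```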
-
Laura Abbott authored
Other architectures define pte_mkexec to mark a pte as executable. Add pte_mkexec for ARM to get the same functionality. Although no other architectures currently define it, also add pte_mknexec to explicitly allow a pte to be marked as non-executable.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
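An expanded sketch of the two helpers (the real header generates them with a PTE_BIT_FUNC()-style macro; L_PTE_XN is ARM's execute-never bit):

```c
static inline pte_t pte_mkexec(pte_t pte)
{
	pte_val(pte) &= ~L_PTE_XN;	/* clear execute-never: executable */
	return pte;
}

static inline pte_t pte_mknexec(pte_t pte)
{
	pte_val(pte) |= L_PTE_XN;	/* set execute-never: non-executable */
	return pte;
}
```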
-
Russell King authored
Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute it directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly with this change, and we can therefore do this without needing a config option.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Document the permissions which the various MT_MEMORY* mapping types will provide.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
This patch allows the kernel page tables to be dumped via a debugfs file, allowing kernel developers to check the layout of the kernel page tables and verify the various permission and type settings.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- Dec 04, 2013
Sylwester Nawrocki authored
This patch adds common __clk_get() and __clk_put() clkdev helpers that replace their platform-specific counterparts when the common clock API is used. The owner module pointer field is added to struct clk so a reference to the clock supplier module can be taken by the clock consumers. The owner module is assigned while the clock is being registered, in functions _clk_register() and __clk_register().
Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
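A sketch of the helpers under the scheme described, assuming clk->owner is assigned at registration time (condensed, not the verbatim source):

```c
int __clk_get(struct clk *clk)
{
	/* Pin the supplier module while the consumer holds the clock. */
	if (clk && !try_module_get(clk->owner))
		return 0;
	return 1;
}

void __clk_put(struct clk *clk)
{
	if (clk)
		module_put(clk->owner);	/* drop the supplier reference */
}
```

This prevents the clock provider module from being unloaded while any consumer still holds a struct clk obtained via clk_get().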
-
- Nov 30, 2013
Russell King authored
Commit f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page") required two pages for the vectors code. Although the code setting up the initial page tables was updated, the code which allocates page tables for new processes wasn't, and neither was the code which tears down the mappings. Fix this.
Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>
-
- Nov 15, 2013
Kirill A. Shutemov authored
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Nov 14, 2013
Bartlomiej Zolnierkiewicz authored
Remove support for DMA unmapping from drivers, as it is no longer needed (the DMA core code now handles it).
Cc: Vinod Koul <vinod.koul@intel.com>
Cc: Tomasz Figa <t.figa@samsung.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
[djbw: fix up chan2parent() unused warning in drivers/dma/dw/core.c]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Victor Kamensky authored
Make sure that inline assembler that expects an 'r' operand receives a 32-bit value. Before this fix, in the case of CONFIG_ARCH_PHYS_ADDR_T_64BIT and CONFIG_ARM_PATCH_PHYS_VIRT, the __phys_to_virt function passed a 64-bit value to the __pv_stub inline assembler where an 'r' operand is expected. Compiler behavior in such a case is not well specified. It worked in the little-endian case, but in the big-endian case incorrect code was generated, where the compiler confused which part of the 64-bit value it needed to modify. For example, the BE snippet looked like this:

N:0x80904E08 : MOV r2,#0
N:0x80904E0C : SUB r2,r2,#0x81000000

while the similar LE code looked like this:

N:0x808FCE2C : MOV r2,r0
N:0x808FCE30 : SUB r2,r2,#0xc0, 8 ; #0xc0000000

Note the 'r0' register is the va that has to be translated into phys. To avoid this situation, use an explicit cast to 'unsigned long', which explicitly discards the upper part of the phys address and converts the value to 32 bits. Also add a comment so such a cast will not be removed in the future.
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
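The fixed conversion, roughly as described (a sketch; __pv_stub is the patchable add/sub stub named in the message):

```c
static inline unsigned long __phys_to_virt(phys_addr_t x)
{
	unsigned long t;

	/*
	 * The 'unsigned long' cast discards the upper word of the
	 * 64-bit phys_addr_t, so the "r"-constrained asm operand in
	 * __pv_stub always receives a plain 32-bit value, on both
	 * endiannesses. Do not remove this cast.
	 */
	__pv_stub((unsigned long) x, t, "sub", __PV_BITS_31_24);
	return t;
}
```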
-
- Nov 13, 2013
Thomas Gleixner authored
No point in having this bit defined by architecture.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de
-
- Nov 11, 2013
Stefano Stabellini authored
Some common Xen drivers, like balloon.c, call pfn_to_mfn and mfn_to_pfn even for autotranslate guests, expecting the argument back. The following commit broke these drivers by changing the behavior of pfn_to_mfn and mfn_to_pfn:

commit 4a19138c ("arm/xen,arm64/xen: introduce p2m")
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu Oct 17 16:22:27 2013 +0000

They now return INVALID_P2M_ENTRY if Linux doesn't actually know which mfn backs a pfn or which pfn corresponds to an mfn. Fix the regression by switching back to the old behavior.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Tested-by: Ian Campbell <ian.campbell@citrix.com>
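A sketch of the restored behaviour; lookup_phys_to_mach() is a stand-in name for the real p2m lookup, which this log does not name:

```c
static inline unsigned long pfn_to_mfn(unsigned long pfn)
{
	unsigned long mfn = lookup_phys_to_mach(pfn);	/* assumed helper */

	/* Restored behaviour: if no translation is known, hand the
	 * argument back, as autotranslate callers expect. */
	return (mfn == INVALID_P2M_ENTRY) ? pfn : mfn;
}
```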
-
- Nov 09, 2013
Chen Gang authored
In the current kernel-wide source code, other architectures aside, only the s390 SCSI drivers use atomic_clear_mask(), and arm/arm64 need not support s390 drivers. So remove atomic_clear_mask() from "arm[64]/include/asm/atomic.h".
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Chen Gang authored
For atomic_cmpxchg(), the type of 'oldval' needs to be 'int' to match the type of "*ptr" (used by the 'ldrex' instruction) and 'old' (used by the 'teq' instruction).
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
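A condensed sketch of the function after the fix (barriers omitted for brevity; reconstructed rather than quoted from the patch):

```c
static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
{
	int oldval;		/* was 'unsigned long'; now matches *ptr and 'old' */
	unsigned long res;

	do {
		__asm__ __volatile__("@ atomic_cmpxchg\n"
		"ldrex	%1, [%3]\n"	/* oldval = *ptr */
		"mov	%0, #0\n"
		"teq	%1, %4\n"	/* compare with 'old' */
		"strexeq %0, %5, [%3]\n"	/* store 'new' iff equal */
		    : "=&r" (res), "=&r" (oldval), "+Qo" (ptr->counter)
		    : "r" (&ptr->counter), "Ir" (old), "r" (new)
		    : "cc");
	} while (res);		/* retry if the store-exclusive failed */

	return oldval;
}
```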
-
Chen Gang authored
atomic* values are signed, and the atomic* functions also need to process signed values (both parameters and return values), so 32-bit ARM needs to use 'long long' instead of 'u64'. This replacement also fixes a bug in atomic64_add_negative(): "u64 is never less than 0".

The modifications are:
- in vim, use the "1,% s/\<u64\>/long long/g" command;
- remove '__aligned(8)', which is useless for 64-bit;
- ensure the 80-column limit is respected after the replacement.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- Nov 08, 2013
Stefano Stabellini authored
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-