  4. Jan 22, 2014
    • mm/memblock: add memblock memory allocation apis · 26f09e9b
      Santosh Shilimkar authored
      
      
      Introduce memblock memory allocation APIs which allow supporting the PAE
      or LPAE extension on 32-bit archs, where the physical memory start address
      can be beyond 4GB.  In such cases, the existing bootmem APIs, which operate
      on 32-bit addresses, won't work, and the memblock layer, which operates on
      64-bit addresses, is needed.
      
      So we add equivalent APIs so that we can replace usage of bootmem with
      memblock interfaces.  Architectures already converted to NO_BOOTMEM use
      these new memblock interfaces.  The architectures which are still not
      converted to NO_BOOTMEM continue to function as is because we still
      maintain the fallback option of a bootmem back-end supporting these new
      interfaces.  So there is no functional change as such.
      
      In the long run, once all the architectures move to NO_BOOTMEM, we can
      get rid of the bootmem layer completely.  This is one step towards
      removing the core code's dependency on bootmem, and it also gives
      architectures a path to move away from bootmem.
      
      The proposed interfaces become active if both CONFIG_HAVE_MEMBLOCK and
      CONFIG_NO_BOOTMEM are specified by the arch.  In the !CONFIG_NO_BOOTMEM
      case, the memblock() wrappers fall back to the existing bootmem APIs so
      that arches not converted to NO_BOOTMEM continue to work as is.
      
      The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE
      is kept the same.
      
      [akpm@linux-foundation.org: s/depricated/deprecated/]
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Paul Walmsley <paul@pwsan.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
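
      As a rough sketch of how a boot-time caller might move from bootmem to the
      memblock-backed wrappers (illustrative only; the wrapper name below follows
      the memblock_virt_alloc() convention this series introduces, so check the
      exact signatures in the tree you are working on):

      /*
       * Hedged sketch: convert a boot-time allocation from bootmem to the
       * new memblock-backed wrapper.  It works with physical addresses
       * above 4GB on LPAE systems; with !CONFIG_NO_BOOTMEM the wrapper is
       * expected to fall back to the bootmem back-end.
       */
      #include <linux/bootmem.h>
      #include <linux/memblock.h>

      static unsigned long * __init setup_boot_table(unsigned long nr_entries)
      {
              size_t size = nr_entries * sizeof(unsigned long);

              /* Old style, limited to 32-bit physical addresses: */
              /* return alloc_bootmem(size); */

              /* New style, memblock-backed: */
              return memblock_virt_alloc(size, SMP_CACHE_BYTES);
      }
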
  5. Jan 13, 2014
    • ARM: fix ffs/fls implementations to match x86 · 23f6620a
      Russell King authored
      
      
      ARM's ffs/fls implementations are not type-compatible with x86's, so when
      they're used in combination with min()/max(), they provoke warnings.
      Change these to be inline functions with the correct types, providing the
      clz separately, and document their individual behaviours.
      
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
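
      For flavour, a typed fls()/ffs() pair along these lines (an illustration
      built on the compiler's builtins, not the CLZ-based assembly the ARM
      header actually uses):

      /* Illustrative only: x86-compatible semantics with explicit types. */
      static inline int demo_fls(unsigned int x)
      {
              /* fls(0) == 0, fls(1) == 1, fls(0x80000000) == 32 */
              return x ? 32 - __builtin_clz(x) : 0;
      }

      static inline int demo_ffs(int x)
      {
              /* ffs(0) == 0, ffs(1) == 1: index of least significant set bit */
              return __builtin_ffs(x);
      }
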
    • sched: Add new scheduler syscalls to support an extended scheduling parameters ABI · d50dde5a
      Dario Faggioli authored
      
      
      Add the syscalls needed for supporting scheduling algorithms
      with extended scheduling parameters (e.g., SCHED_DEADLINE).
      
      In general, they make it possible to specify a periodic/sporadic task
      that executes for a given amount of runtime at each instance and is
      scheduled according to the urgency of its own timing constraints,
      i.e.:
      
       - a (maximum/typical) instance execution time,
       - a minimum interval between consecutive instances,
       - a time constraint by which each instance must be completed.
      
      Thus, both the data structure that holds the scheduling parameters of
      the tasks and the system calls dealing with it must be extended.
      Unfortunately, modifying the existing struct sched_param would break
      the ABI and result in potentially serious compatibility issues with
      legacy binaries.
      
      For these reasons, this patch:
      
       - defines the new struct sched_attr, containing all the fields
         that are necessary for specifying a task in the computational
         model described above;
      
       - defines and implements the new scheduling related syscalls that
         manipulate it, i.e., sched_setattr() and sched_getattr().
      
      Syscalls are introduced for x86 (32 and 64 bits) and ARM only, as a
      proof of concept and for developing and testing purposes. Making them
      available on other architectures is straightforward.
      
      Since no "user" for these new parameters is introduced in this patch,
      the implementation of the new system calls is just identical to their
      already existing counterpart. Future patches that implement scheduling
      policies able to exploit the new data structure must also take care of
      modifying the sched_*attr() calls accordingly with their own purposes.
      
      Signed-off-by: Dario Faggioli <raistlin@linux.it>
      [ Rewrote to use sched_attr. ]
      Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
      [ Removed sched_setscheduler2() for now. ]
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1383831828-15501-3-git-send-email-juri.lelli@gmail.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
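
      A hedged userspace sketch of the new ABI (struct layout as described in
      this series; the raw syscall numbers are the x86-64 ones and will differ
      on other architectures):

      #define _GNU_SOURCE
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      struct sched_attr {
              uint32_t size;            /* sizeof(struct sched_attr) */
              uint32_t sched_policy;
              uint64_t sched_flags;
              int32_t  sched_nice;      /* SCHED_OTHER, SCHED_BATCH */
              uint32_t sched_priority;  /* SCHED_FIFO, SCHED_RR */
              uint64_t sched_runtime;   /* SCHED_DEADLINE, nanoseconds */
              uint64_t sched_deadline;
              uint64_t sched_period;
      };

      #ifndef __NR_sched_setattr
      #define __NR_sched_setattr 314    /* x86-64 */
      #define __NR_sched_getattr 315    /* x86-64 */
      #endif

      int main(void)
      {
              struct sched_attr attr;

              memset(&attr, 0, sizeof(attr));
              attr.size = sizeof(attr);
              attr.sched_policy = 1;    /* SCHED_FIFO */
              attr.sched_priority = 10;

              if (syscall(__NR_sched_setattr, getpid(), &attr, 0))
                      perror("sched_setattr");

              if (syscall(__NR_sched_getattr, getpid(), &attr, sizeof(attr), 0))
                      perror("sched_getattr");
              else
                      printf("policy=%u priority=%u\n",
                             (unsigned)attr.sched_policy,
                             (unsigned)attr.sched_priority);
              return 0;
      }
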
  6. Jan 12, 2014
    • arch: Introduce smp_load_acquire(), smp_store_release() · 47933ad4
      Peter Zijlstra authored
      
      
      A number of situations currently require the heavyweight smp_mb(),
      even though there is no need to order prior stores against later
      loads.  Many architectures have much cheaper ways to handle these
      situations, but the Linux kernel currently has no portable way
      to make use of them.
      
      This commit therefore supplies smp_load_acquire() and
      smp_store_release() to remedy this situation.  The new
      smp_load_acquire() primitive orders the specified load against
      any subsequent reads or writes, while the new smp_store_release()
      primitive orders the specified store against any prior reads or
      writes.  These primitives allow array-based circular FIFOs to be
      implemented without an smp_mb(), and also allow a theoretical
      hole in rcu_assign_pointer() to be closed at no additional
      expense on most architectures.
      
      In addition, the RCU experience transitioning from explicit
      smp_read_barrier_depends() and smp_wmb() to rcu_dereference()
      and rcu_assign_pointer(), respectively, resulted in substantial
      improvements in readability.  It therefore seems likely that
      replacing other explicit barriers with smp_load_acquire() and
      smp_store_release() will provide similar benefits.  It appears
      that roughly half of the explicit barriers in core kernel code
      might be so replaced.
      
      [Changelog by PaulMck]
      
      Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Victor Kaplansky <VICTORK@il.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Link: http://lkml.kernel.org/r/20131213150640.908486364@infradead.org
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
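
      As a hedged kernel-style sketch, the message-passing pattern these
      primitives target (conceptual, not taken from the patch):

      #include <asm/barrier.h>

      static int data;
      static int ready;

      /* Producer: make the payload visible before the flag. */
      static void publish(int value)
      {
              data = value;                   /* plain store of the payload */
              smp_store_release(&ready, 1);   /* orders the store to data before ready */
      }

      /* Consumer: the acquire orders the flag read before the data read. */
      static int consume(int *out)
      {
              if (!smp_load_acquire(&ready))
                      return 0;
              *out = data;                    /* guaranteed to see the published value */
              return 1;
      }
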
  9. Dec 29, 2013
    • ARM: 7927/1: dcache: select DCACHE_WORD_ACCESS for big-endian CPUs · cb601185
      Will Deacon authored
      
      
      With commit 11ec50ca ("word-at-a-time: provide generic big-endian
      zero_bytemask implementation"), the asm-generic word-at-a-time code now
      provides a zero_bytemask implementation, allowing us to make use of
      DCACHE_WORD_ACCESS on big-endian CPUs, providing our
      load_unaligned_zeropad function is endianness-clean.
      
      This patch reworks the load_unaligned_zeropad fixup code to work for
      both big- and little-endian CPUs, then removes the !CPU_BIG_ENDIAN check
      when selecting DCACHE_WORD_ACCESS.
      
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
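
      For background, the classic word-at-a-time zero-byte test that
      DCACHE_WORD_ACCESS builds on (an illustration of the idea, not the
      patch's fixup code; the endian-specific part that zero_bytemask handles
      is deciding which detected zero byte comes first in string order):

      #include <stdint.h>

      #define ONES  0x0101010101010101ULL
      #define HIGHS 0x8080808080808080ULL

      /* Non-zero iff 'word' contains at least one zero byte. */
      static inline uint64_t has_zero_byte(uint64_t word)
      {
              return (word - ONES) & ~word & HIGHS;
      }
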
    • ARM: 7931/1: Correct virt_addr_valid · efea3403
      Laura Abbott authored
      
      
      The definition of virt_addr_valid is that virt_addr_valid should
      return true if and only if virt_to_page returns a valid pointer.
      The current definition of virt_addr_valid only checks against the
      virtual address range. There's no guarantee that just because a
      virtual address falls between PAGE_OFFSET and high_memory the
      associated physical memory has a valid backing struct page. Follow
      the example of other architectures and convert to pfn_valid to
      verify that the virtual address is actually valid. The check for
      an address between PAGE_OFFSET and high_memory is still necessary
      as vmalloc/highmem addresses are not valid with virt_to_page.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
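
      A paraphrased sketch of the strengthened check (hypothetical macro name;
      the real definition lives in the ARM memory headers):

      /*
       * The address must lie in the lowmem window *and* the backing pfn must
       * have a valid struct page.
       */
      #define demo_virt_addr_valid(kaddr)                                  \
              ((unsigned long)(kaddr) >= PAGE_OFFSET &&                    \
               (unsigned long)(kaddr) < (unsigned long)high_memory &&      \
               pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
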
    • ARM: PCI: add legacy IDE IRQ implementation · a472b09d
      Russell King authored
      
      
      The IDE code used to specify the IDE IRQs for chipsets operating in
      legacy mode.  This appears to no longer work, and this information must
      be provided by the arch.  Do so.  This partially fixes CY82C693 (and
      probably others) on Footbridge platforms.
      
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 7922/1: l2x0: add Marvell Tauros3 support · e68f31f4
      Sebastian Hesselbarth authored
      
      
      This adds support for the Marvell Tauros3 cache controller, which
      is compatible with the pl310 cache controller but broadcasts L1 cache
      operations to the L2 cache.  While updating the binding documentation,
      clean up the list of possible compatibles.  Also reorder the driver
      compatibles to allow non-ARM derivatives to be compatible with the ARM
      cache controller compatibles.
      
      Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
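
      By way of illustration, the shape of an l2x0-style OF match table that
      covers both the ARM controllers and a compatible derivative (a sketch,
      not necessarily the exact driver table; listing the more specific
      non-ARM entry first lets it win when a node also carries an ARM
      fallback compatible):

      #include <linux/of.h>

      static const struct of_device_id demo_l2x0_ids[] = {
              { .compatible = "marvell,tauros3-cache" },  /* pl310-compatible derivative */
              { .compatible = "arm,pl310-cache" },
              { .compatible = "arm,l220-cache" },
              { .compatible = "arm,l210-cache" },
              { /* sentinel */ },
      };
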
    • ARM: fix csum_tcpudp_magic() miscompilation · d46cda12
      Russell King authored
      
      
      There is a miscompilation of csum_tcpudp_magic() due to the way we pass
      the asm() operands in.  Fortunately, this doesn't affect the IP code,
      but it can affect anyone who passes ntohs(udp->len) as the length
      argument, or who uses protocols with more than 8 bits.
      
      The problem stems from passing 16-bit operands into an asm(): GCC makes
      no guarantees about what may be in the high 16 bits of such a register
      when the operand passed into the assembly is in the "HI" machine mode.
      
      Address this by changing the way we handle the 16-bit arguments - since
      accumulating the protocol and length can never overflow, we can delegate
      this to the compiler to perform, and then accumulate it into the
      checksum inside the asm(), taking account of the endian-ness via an
      appropriate 32-bit rotation.
      
      While we are here, also realise that there's a chance to optimise this
      a little: several callers from IP code pass a constant zero as the
      initial sum.  This is wasteful - if we detect this condition, we can
      optimise away one instruction.
      
      Tested-by: Maxime Bizon <mbizon@freebox.fr>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
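
      As a plain-C illustration of the "let the compiler do the 16-bit
      additions" idea (conceptual only, not the ARM asm; the proto/len
      placement mirrors the big-endian pseudo-header layout, which is the
      part the ARM code handles with a 32-bit rotation on little-endian):

      #include <stdint.h>

      /* Fold a 32-bit one's-complement accumulator down to 16 bits. */
      static uint16_t fold32(uint32_t sum)
      {
              sum = (sum & 0xffff) + (sum >> 16);
              sum = (sum & 0xffff) + (sum >> 16);
              return (uint16_t)~sum;
      }

      static uint16_t demo_pseudo_hdr_csum(uint32_t saddr, uint32_t daddr,
                                           uint16_t len, uint16_t proto,
                                           uint32_t sum)
      {
              /* The compiler widens len and proto; this cannot overflow. */
              uint64_t s = sum;

              s += saddr;
              s += daddr;
              s += ((uint32_t)proto << 16) + len;

              /* Fold 64 -> 32 before the final 16-bit fold. */
              s = (s & 0xffffffff) + (s >> 32);
              s = (s & 0xffffffff) + (s >> 32);

              return fold32((uint32_t)s);
      }
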
    • ARM: 7896/1: rename ioremap_cached to ioremap_cache · 92341c83
      Rob Herring authored
      
      
      ioremap_cache is more aligned with other architectures. There are only
      2 users of this in the kernel: pxa2xx-flash and Xen.
      
      This fixes Xen build failures on arm64 caused by commit c04e8e2f ("arm64:
      allow ioremap_cache() to use existing RAM mappings"):
      
      drivers/tty/hvc/hvc_xen.c:233:2: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
      drivers/xen/grant-table.c:1174:3: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
      drivers/xen/xenbus/xenbus_probe.c:778:4: error: implicit declaration of function 'ioremap_cached' [-Werror=implicit-function-declaration]
      
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  12. Dec 13, 2013
    • ARM: fix asm/memory.h build error · b713aa0b
      Russell King authored
      
      
      Jason Gunthorpe reports a build failure when ARM_PATCH_PHYS_VIRT is
      not defined:
      
      In file included from arch/arm/include/asm/page.h:163:0,
                       from include/linux/mm_types.h:16,
                       from include/linux/sched.h:24,
                       from arch/arm/kernel/asm-offsets.c:13:
      arch/arm/include/asm/memory.h: In function '__virt_to_phys':
      arch/arm/include/asm/memory.h:244:40: error: 'PHYS_OFFSET' undeclared (first use in this function)
      arch/arm/include/asm/memory.h:244:40: note: each undeclared identifier is reported only once for each function it appears in
      arch/arm/include/asm/memory.h: In function '__phys_to_virt':
      arch/arm/include/asm/memory.h:249:13: error: 'PHYS_OFFSET' undeclared (first use in this function)
      
      Fixes: ca5a45c0 ("ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions")
      Tested-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: add basic support for Trusted Foundations · d9a1beaa
      Alexandre Courbot authored
      
      
      Trusted Foundations is a TrustZone-based secure monitor for ARM that
      can be invoked using the same SMC-based API on supported platforms.
      This patch adds initial basic support for Trusted Foundations using
      the ARM firmware API. Current features are limited to the ability to
      boot secondary processors.
      
      Note: The API followed by Trusted Foundations does *not* follow the SMC
      calling conventions. It has nothing to do with PSCI either, and is only
      relevant to devices that use Trusted Foundations (like most Tegra-based
      retail devices).
      
      Signed-off-by: Alexandre Courbot <acourbot@nvidia.com>
      Reviewed-by: Tomasz Figa <t.figa@samsung.com>
      Reviewed-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
  15. Nov 30, 2013
    • ARM: fix booting low-vectors machines · d8aa712c
      Russell King authored
      
      
      Commit f6f91b0d (ARM: allow kuser helpers to be removed from the
      vector page) required two pages for the vectors code.  Although the
      code setting up the initial page tables was updated, the code which
      allocates page tables for new processes wasn't, and neither was the code
      which tears down the mappings.  Fix this.
      
      Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>