1. Feb 18, 2014
• ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator · 6ea41c80
  Steven Capper authored
      
      
The Coherent DMA allocator allocates pages of high order, then splits
them up into smaller pages.
      
      This splitting logic would run into problems if the allocator was
given compound pages. Thus the Coherent DMA allocator was originally
      incompatible with compound pages existing and, by extension, huge
      pages. A compile #error was put in place whenever huge pages were
      enabled.
      
      Compatibility with compound pages has since been introduced by the
      following commit (which merely excludes GFP_COMP pages from being
requested by the coherent DMA allocator):
        ea2e7057 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations
      
      When huge page support was introduced to ARM, the compile #error in
      dma-mapping.c was replaced by a #warning when it should have been
      removed instead.
      
      This patch removes the compile #warning in dma-mapping.c when huge
      pages are enabled.
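
For context, that compatibility fix amounts to never requesting compound
pages in the first place; a minimal sketch of the idea (surrounding
allocator code elided):

	/*
	 * The coherent DMA allocator splits high-order allocations with
	 * split_page(), which cannot cope with compound pages, so mask
	 * out __GFP_COMP before allocating (the ea2e7057 approach).
	 */
	gfp &= ~__GFP_COMP;
	page = alloc_pages(gfp, order);	/* now safe to split_page() */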
      
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
• ARM: 7962/2: Make all mcpm functions notrace · ea36d2ab
  Dave Martin authored
      
      
The functions in mcpm_entry.c are mostly intended for use during
scary cache and coherency disabling sequences, or do other things
which confuse tracing, like powering a CPU down and not returning.
Similarly for the backend code.
      
      For simplicity, this patch just makes whole files notrace.
      There should be more than enough traceable points on the paths to
      these functions, but we can be more fine-grained later if there is
      a need for it.
      
Jon Medhurst:
Also added spc.o to the list of files as it contains functions used by
MCPM code which have comments like: "might be used in code paths where
normal cacheable locks are not working"
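
For reference, "notrace" can be applied per function or, as done here,
per file; a minimal sketch (the function name below is hypothetical):

	/* Per-function: the notrace attribute from <linux/compiler.h>
	 * keeps the function tracer out of one function. */
	static void notrace mcpm_power_down_sketch(void)
	{
		/* cache/coherency teardown that must not enter ftrace */
	}

	/* Per-file, which is what "whole files notrace" means in kbuild:
	 * strip the profiling flag from the whole object in the Makefile,
	 * e.g.
	 *	CFLAGS_REMOVE_mcpm_entry.o = -pg
	 */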
      
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2. Feb 07, 2014
• arm64: defconfig: Expand default enabled features · 55834a77
  Mark Rutland authored
      
      
      FPGA implementations of the Cortex-A57 and Cortex-A53 are now available
      in the form of the SMM-A57 and SMM-A53 Soft Macrocell Models (SMMs) for
Versatile Express. As these attach to a Motherboard Express V2M-P1, it
      would be useful to have support for some V2M-P1 peripherals enabled by
      default.
      
Additionally, a couple of features have been introduced since the last
defconfig update (CMA, jump labels) that would be good to have enabled
by default to ensure they are build- and boot-tested.
      
      This patch updates the arm64 defconfig to enable support for these
      devices and features. The arm64 Kconfig is modified to select
      HAVE_PATA_PLATFORM, which is required to enable support for the
      CompactFlash controller on the V2M-P1.
      
      A few options which don't need to appear in defconfig are trimmed:
      
      * BLK_DEV - selected by default
      * EXPERIMENTAL - otherwise gone from the kernel
      * MII - selected by drivers which require it
      * USB_SUPPORT - selected by default
      
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: asm: remove redundant "cc" clobbers · 95c41896
  Will Deacon authored
      
      
      cbnz/tbnz don't update the condition flags, so remove the "cc" clobbers
      from inline asm blocks that only use these instructions to implement
      conditional branches.
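
As an illustration, a branch built only from cbnz reads and writes no
condition flags, so the asm needs no "cc" clobber; a minimal sketch
(hypothetical helper):

	/* Spin until *p becomes zero; cbnz tests the register directly,
	 * so NZCV is untouched and "cc" need not be listed. */
	static inline void wait_for_zero(unsigned long *p)
	{
		unsigned long tmp;

		asm volatile(
		"1:	ldr	%0, %1\n"
		"	cbnz	%0, 1b"
		: "=&r" (tmp)
		: "Q" (*p)
		: "memory");
	}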
      
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: atomics: fix use of acquire + release for full barrier semantics · 8e86f0b4
  Will Deacon authored
      
      
Linux requires a number of atomic operations to provide full barrier
semantics; that is, no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.
      
      On arm64, these operations have been incorrectly implemented as follows:
      
      	// A, B, C are independent memory locations
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldaxr	x0, [B]		// Exclusive load with acquire
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      
      	<Access [C]>
      
The assumption here is that two half barriers are equivalent to a
full barrier, so the only permitted ordering would be A -> B -> C
(where B is the atomic operation involving both a load and a store).
      
      Unfortunately, this is not the case by the letter of the architecture
      and, in fact, the accesses to A and C are permitted to pass their
      nearest half barrier resulting in orderings such as Bl -> A -> C -> Bs
      or Bl -> C -> A -> Bs (where Bl is the load-acquire on B and Bs is the
      store-release on B). This is a clear violation of the full barrier
      requirement.
      
      The simple way to fix this is to implement the same algorithm as ARMv7
      using explicit barriers:
      
      	<Access [A]>
      
      	// atomic_op (B)
      	dmb	ish		// Full barrier
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stxr	w1, x0, [B]	// Exclusive store
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      but this has the undesirable effect of introducing *two* full barrier
      instructions. A better approach is actually the following, non-intuitive
      sequence:
      
      	<Access [A]>
      
      	// atomic_op (B)
      1:	ldxr	x0, [B]		// Exclusive load
      	<op(B)>
      	stlxr	w1, x0, [B]	// Exclusive store with release
      	cbnz	w1, 1b
      	dmb	ish		// Full barrier
      
      	<Access [C]>
      
      The simple observations here are:
      
        - The dmb ensures that no subsequent accesses (e.g. the access to C)
          can enter or pass the atomic sequence.
      
        - The dmb also ensures that no prior accesses (e.g. the access to A)
          can pass the atomic sequence.
      
        - Therefore, no prior access can pass a subsequent access, or
          vice-versa (i.e. A is strictly ordered before C).
      
        - The stlxr ensures that no prior access can pass the store component
          of the atomic operation.
      
      The only tricky part remaining is the ordering between the ldxr and the
      access to A, since the absence of the first dmb means that we're now
      permitting re-ordering between the ldxr and any prior accesses.
      
      From an (arbitrary) observer's point of view, there are two scenarios:
      
        1. We have observed the ldxr. This means that if we perform a store to
           [B], the ldxr will still return older data. If we can observe the
           ldxr, then we can potentially observe the permitted re-ordering
           with the access to A, which is clearly an issue when compared to
           the dmb variant of the code. Thankfully, the exclusive monitor will
           save us here since it will be cleared as a result of the store and
           the ldxr will retry. Notice that any use of a later memory
           observation to imply observation of the ldxr will also imply
           observation of the access to A, since the stlxr/dmb ensure strict
           ordering.
      
        2. We have not observed the ldxr. This means we can perform a store
           and influence the later ldxr. However, that doesn't actually tell
           us anything about the access to [A], so we've not lost anything
           here either when compared to the dmb variant.
      
      This patch implements this solution for our barriered atomic operations,
      ensuring that we satisfy the full barrier requirements where they are
      needed.
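
Concretely, the chosen sequence corresponds to inline asm of roughly
this shape (a sketch modelled on the arm64 atomic_add_return of this
era):

	static inline int atomic_add_return(int i, atomic_t *v)
	{
		unsigned long tmp;
		int result;

		asm volatile("// atomic_add_return\n"
	"1:	ldxr	%w0, %2\n"	// exclusive load, no acquire
	"	add	%w0, %w0, %w3\n"
	"	stlxr	%w1, %w0, %2\n"	// exclusive store with release
	"	cbnz	%w1, 1b\n"	// retry if the monitor was lost
	"	dmb	ish"		// trailing full barrier
		: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)
		: "Ir" (i)
		: "memory");

		return result;
	}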
      
      Cc: <stable@vger.kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3. Feb 05, 2014
• x86/efi: Allow mapping BGRT on x86-32 · 081cd62a
  Matt Fleming authored
      CONFIG_X86_32 doesn't map the boot services regions into the EFI memory
      map (see commit 70087011 ("x86, efi: Don't map Boot Services on
      i386")), and so efi_lookup_mapped_addr() will fail to return a valid
      address. Executing the ioremap() path in efi_bgrt_init() causes the
      following warning on x86-32 because we're trying to ioremap() RAM,
      
       WARNING: CPU: 0 PID: 0 at arch/x86/mm/ioremap.c:102 __ioremap_caller+0x2ad/0x2c0()
       Modules linked in:
       CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.13.0-0.rc5.git0.1.2.fc21.i686 #1
       Hardware name: DellInc. Venue 8 Pro 5830/09RP78, BIOS A02 10/17/2013
        00000000 00000000 c0c0df08 c09a5196 00000000 c0c0df38 c0448c1e c0b41310
        00000000 00000000 c0b37bc1 00000066 c043bbfd c043bbfd 00e7dfe0 00073eff
        00073eff c0c0df48 c0448ce2 00000009 00000000 c0c0df9c c043bbfd 00078d88
       Call Trace:
        [<c09a5196>] dump_stack+0x41/0x52
        [<c0448c1e>] warn_slowpath_common+0x7e/0xa0
        [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
        [<c043bbfd>] ? __ioremap_caller+0x2ad/0x2c0
        [<c0448ce2>] warn_slowpath_null+0x22/0x30
        [<c043bbfd>] __ioremap_caller+0x2ad/0x2c0
        [<c0718f92>] ? acpi_tb_verify_table+0x1c/0x43
        [<c0719c78>] ? acpi_get_table_with_size+0x63/0xb5
        [<c087cd5e>] ? efi_lookup_mapped_addr+0xe/0xf0
        [<c043bc2b>] ioremap_nocache+0x1b/0x20
        [<c0cb01c8>] ? efi_bgrt_init+0x83/0x10c
        [<c0cb01c8>] efi_bgrt_init+0x83/0x10c
        [<c0cafd82>] efi_late_init+0x8/0xa
        [<c0c9bab2>] start_kernel+0x3ae/0x3c3
        [<c0c9b53b>] ? repair_env_string+0x51/0x51
        [<c0c9b378>] i386_start_kernel+0x12e/0x131
      
      Switch to using early_memremap(), which won't trigger this warning, and
      has the added benefit of more accurately conveying what we're trying to
      do - map a chunk of memory.
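
The resulting pattern is roughly the following sketch (not the exact
patch; early_memremap()/early_memunmap() are the kernel's early fixmap
mapping helpers, and the header copy here is illustrative):

	static void __init bgrt_copy_header_sketch(phys_addr_t image_address)
	{
		struct bmp_header header;
		void *image;

		/* Temporarily map the image header out of RAM... */
		image = early_memremap(image_address, sizeof(header));
		if (!image)
			return;

		/* ...take a local copy, then drop the mapping. */
		memcpy(&header, image, sizeof(header));
		early_memunmap(image, sizeof(header));
	}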
      
      This patch addresses the following bug report,
      
        https://bugzilla.kernel.org/show_bug.cgi?id=67911
      
      
      
Reported-by: Adam Williamson <awilliam@redhat.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
• x86: Disable CONFIG_X86_DECODER_SELFTEST in allmod/allyesconfigs · f8f20234
  Ingo Molnar authored
      
      
It can take some time to validate the image, so make sure
{allyes|allmod}config doesn't enable it.
      
      I'd say randconfig will cover it often enough, and the failure is also
      borderline build coverage related: you cannot really make the decoder
      test fail via source level changes, only with changes in the build
      environment, so I agree with Andi that we can disable this one too.
      
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Suggested-and-acked-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• execve: use 'struct filename *' for executable name passing · c4ad8f98
  Linus Torvalds authored
      
      
      This changes 'do_execve()' to get the executable name as a 'struct
      filename', and to free it when it is done.  This is what the normal
      users want, and it simplifies and streamlines their error handling.
      
      The controlled lifetime of the executable name also fixes a
      use-after-free problem with the trace_sched_process_exec tracepoint: the
      lifetime of the passed-in string for kernel users was not at all
      obvious, and the user-mode helper code used UMH_WAIT_EXEC to serialize
      the pathname allocation lifetime with the execve() having finished,
which in turn meant that the tracepoint that happened after
mm_release() of the old process VM ended up using already-freed memory.
      
      To solve the kernel string lifetime issue, this simply introduces
      "getname_kernel()" that works like the normal user-space getname()
      function, except with the source coming from kernel memory.
      
      As Oleg points out, this also means that we could drop the tcomm[] array
      from 'struct linux_binprm', since the pathname lifetime now covers
      setup_new_exec().  That would be a separate cleanup.
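
A sketch of the resulting call pattern for kernel-internal callers
(helper name hypothetical, argument types simplified):

	static int kernel_exec_sketch(const char *path, void *argv, void *envp)
	{
		struct filename *name;

		/* Like getname(), but the source string lives in kernel
		 * memory rather than user space. */
		name = getname_kernel(path);
		if (IS_ERR(name))
			return PTR_ERR(name);

		/* do_execve() now owns the name and frees it when done, so
		 * its lifetime safely covers the exec tracepoints. */
		return do_execve(name, argv, envp);
	}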
      
Reported-by: Igor Zhbanov <i.zhbanov@samsung.com>
Tested-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
• arm64: compat: Wire up new AArch32 syscalls · 6290b53d
  Catalin Marinas authored
      
      
This patch enables sys_kcmp, sys_finit_module, sys_sched_setattr and
sys_sched_getattr for compat (AArch32) applications.
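
The wiring itself is a handful of new entries in the AArch32 syscall
table; a sketch of the shape of the change (syscall numbers follow the
ARM EABI table and are shown for illustration):

	/* arch/arm64/include/asm/unistd32.h */
	__SYSCALL(378, sys_kcmp)
	__SYSCALL(379, sys_finit_module)
	__SYSCALL(380, sys_sched_setattr)
	__SYSCALL(381, sys_sched_getattr)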
      
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE · d4022a33
  Nathan Lynch authored
      
      
      Update wall-to-monotonic fields in the VDSO data page
      unconditionally.  These are used to service CLOCK_MONOTONIC_COARSE,
      which is not guarded by use_syscall.
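
The fix amounts to moving two stores in the arm64 update_vsyscall() out
of the !use_syscall block; roughly (field names as in the arm64 vdso
data page of this era):

	/* Publish wall-to-monotonic unconditionally: the coarse clocks
	 * read these fields even when use_syscall forces the fallback. */
	vdso_data->wtm_clock_sec  = tk->wall_to_monotonic.tv_sec;
	vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec;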
      
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: vdso: fix coarse clock handling · 069b9186
  Nathan Lynch authored
      
      
      When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or
      CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the
      caller has placed in x2 ("ret x2" to return from the fast path).  Fix
      this by saving x30/LR to x2 only in code that will call
      __do_get_tspec, restoring x30 afterward, and using a plain "ret" to
      return from the routine.
      
      Also: while the resulting tv_nsec value for CLOCK_REALTIME and
      CLOCK_MONOTONIC must be computed using intermediate values that are
      left-shifted by cs_shift (x12, set by __do_get_tspec), the results for
      coarse clocks should be calculated using unshifted values
      (xtime_coarse_nsec is in units of actual nanoseconds).  The current
      code shifts intermediate values by x12 unconditionally, but x12 is
      uninitialized when servicing a coarse clock.  Fix this by setting x12
      to 0 once we know we are dealing with a coarse clock id.
      
Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: simplify pgd_alloc · 883d50a0
  Mark Rutland authored
      
      
      Currently pgd_alloc has a redundant NULL check in its return path that
      can be removed with no ill effects. With that removed it's also possible
      to return early and eliminate the new_pgd temporary variable.
      
      This patch applies said modifications, making the logic of pgd_alloc
      correspond 1-1 with that of pgd_free.
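
The routine then reduces to a direct two-way return that mirrors
pgd_free; a sketch of the post-patch shape:

	pgd_t *pgd_alloc(struct mm_struct *mm)
	{
		if (PGD_SIZE == PAGE_SIZE)
			return (pgd_t *)get_zeroed_page(GFP_KERNEL);
		else
			return kzalloc(PGD_SIZE, GFP_KERNEL);
	}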
      
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: fix typo: s/SERRROR/SERROR/ · bfb67a56
  Mark Rutland authored
      
      
      Somehow SERROR has acquired an additional 'R' in a couple of headers.
      This patch removes them before they spread further. As neither instance
      is in use yet, no other sites need to be fixed up.
      
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: Invalidate the TLB when replacing pmd entries during boot · a55f9929
  Catalin Marinas authored
      
      
With the 64K page size configuration, __create_page_tables in head.S
maps enough memory to get started using 64K pages rather than 512M
sections, with a single pgd/pud/pmd entry pointing to a pte table.
create_mapping() may later override the pgd/pud/pmd table entry with a
block (section) entry if the RAM size is more than 512MB and correctly
aligned.
      For the end of this block to be accessible, the old TLB entry must be
      invalidated.
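
Schematically, the fix is to remember the old entry and flush when it
was live; a sketch (helper and protection names are illustrative):

	static void set_section_mapping_sketch(pmd_t *pmd, phys_addr_t phys,
					       pmdval_t prot_sect)
	{
		pmd_t old_pmd = *pmd;

		set_pmd(pmd, __pmd(phys | prot_sect));

		/* The old pte-table walk may still be cached in the TLB;
		 * invalidate it before relying on the new block entry. */
		if (!pmd_none(old_pmd))
			flush_tlb_all();
	}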
      
      Cc: <stable@vger.kernel.org>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: Align CMA sizes to PAGE_SIZE · ccc9e244
  Laura Abbott authored
      
      
dma_alloc_from_contiguous takes a number of pages rather than a size
in bytes. Align the DMA size passed in up to PAGE_SIZE to avoid
truncation and allocation failures for sizes less than PAGE_SIZE.
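
The fix is a rounding step before the byte-to-page conversion; roughly:

	/* Round up first: for size < PAGE_SIZE, size >> PAGE_SHIFT would
	 * truncate to a request for zero pages. */
	struct page *page = dma_alloc_from_contiguous(dev,
					PAGE_ALIGN(size) >> PAGE_SHIFT,
					get_order(size));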
      
      Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
• arm64: add DSB after icache flush in __flush_icache_all() · 5044bad4
  Vinayak Kale authored
      
      
Add a DSB after the icache flush to complete the cache maintenance
operation. The function __flush_icache_all() is used only for user
space mappings, and an ISB is not required because an exception return
occurs before user instructions execute; an exception return behaves
like an ISB.
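
The resulting routine is tiny; a sketch with the barrier spelled out
(the kernel of this era used its dsb() macro, i.e. "dsb sy"):

	static inline void __flush_icache_all(void)
	{
		asm volatile(
		"ic	ialluis\n"	/* invalidate all I-caches, inner shareable */
		"dsb	sy"		/* wait for the invalidation to complete */
		: : : "memory");
	}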
      
Signed-off-by: Vinayak Kale <vkale@apm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>