- Apr 21, 2012
Linus Torvalds authored
This continues the theme started with vm_brk() and vm_munmap(): vm_mmap() does the same thing as do_mmap(), but additionally does the required VM locking. This uninlines (and rewrites it to be clearer) do_mmap(), which sadly duplicates it in mm/mmap.c and mm/nommu.c. But that way we don't have to export our internal do_mmap_pgoff() function. Some day we hopefully don't have to export do_mmap() either, if all modular users can become the simpler vm_mmap() instead. We're actually very close to that already, with the notable exception of the (broken) use in i810, and a couple of stragglers in binfmt_elf.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
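A minimal sketch of the wrapper, assuming the 2012-era mm API (down_write/up_write on mm->mmap_sem); this is the pattern described, not the verbatim commit:

    #include <linux/mm.h>
    #include <linux/sched.h>

    unsigned long vm_mmap(struct file *file, unsigned long addr,
                          unsigned long len, unsigned long prot,
                          unsigned long flag, unsigned long offset)
    {
            unsigned long ret;
            struct mm_struct *mm = current->mm;

            /* do_mmap() requires the caller to hold mmap_sem for writing */
            down_write(&mm->mmap_sem);
            ret = do_mmap(file, addr, len, prot, flag, offset);
            up_write(&mm->mmap_sem);
            return ret;
    }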
-
- Apr 11, 2012
Chris Metcalf authored
Until we push the unaligned access support for tilegx, it's silly to have arch/tile/kernel/proc.c generate a warning about an unused variable. Extend the #ifdef to cover all the code and data for now.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- Apr 09, 2012
Srivatsa S. Bhat authored
The scheduler depends on receiving the CPU_STARTING notification, without which we end up in a lot of trouble. So add the missing call to notify_cpu_starting() in the bringup code.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
- Apr 02, 2012
Chris Metcalf authored
The return path as we reload registers and core state requires that r30 hold a boolean indicating whether we are returning from an NMI, but in a couple of cases we weren't setting this properly, with the result that we could accidentally unmask the NMI interrupt(s), which could cause confusion. Now we set r30 in every place where we jump into the interrupt return path.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
We were re-homing the initial task's kernel stack on the boot cpu, but in fact it's better to let it stay globally homed, since that task isn't bound to the boot cpu anyway. This is more of a general cleanup than an actual performance optimization, but it removes code, which is a good thing. :-)
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Previously we were returning SIGSEGV in this case. It seems cleaner to return SIGBUS since the hardware figures out alignment traps before TLB violations, so SIGBUS is the "more correct" signal.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
There were some correctness issues with this code that are now fixed with this change. The change is likely less performant than it could be, but it should no longer be vulnerable to any races with memory operations on the memory network while invalidating a range of memory. This code is run infrequently so performance isn't critical, but correctness definitely is.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
This idiom is used elsewhere when we do an unlock by writing a zero, but I missed it here. Using an atomic operation avoids waiting on the write buffer for the unlocking write to be sent to the home cache.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
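A hedged sketch of the idiom, using the portable kernel xchg() in place of the tile-specific instruction (the helper name and lock field are illustrative, not the actual tile code):

    #include <linux/atomic.h>

    static inline void example_unlock(int *lock_word)
    {
            /*
             * A plain store ("*lock_word = 0;") can sit in this core's
             * write buffer until it drains to the lock's home cache.
             * An atomic exchange issues the write as a memory-network
             * operation, so the unlocking core doesn't have to wait.
             */
            (void)xchg(lock_word, 0);
    }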
-
Chris Metcalf authored
It causes "make clean" to fail, for example. Once we have KVM support complete, we'll reinstate the subdir reference. Signed-off-by:
Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
This avoids a bug in modules trying to use the function.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Pragmatically it couldn't be wrong to cast pointers to long to compare them (since all kernel addresses are in the top half of VA space), but it's more correct to cast to unsigned long.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
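The distinction in a small sketch (the helper name is illustrative):

    /* Sketch: why unsigned long is the right cast for address ordering. */
    static inline int addr_before(const void *a, const void *b)
    {
            /* (long)a is negative for kernel addresses (top half of the
             * VA space), so a signed compare is right only by accident. */
            return (unsigned long)a < (unsigned long)b;
    }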
-
Chris Metcalf authored
If we are single-stepping and make a syscall, we call ptrace_notify() explicitly on the return path back to user space, since we are returning to a pc value set artificially to the next instruction, and otherwise we won't register that we stepped over the syscall instruction (swint1).
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
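The return-path hook follows the generic pattern sketched below (TIF_SINGLESTEP and the placement are the usual cross-arch convention, not the verbatim tile code):

    /* On the syscall return path, after pc has been artificially
     * advanced past the swint1 instruction: */
    if (test_thread_flag(TIF_SINGLESTEP))
            ptrace_notify(SIGTRAP);   /* report the completed step */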
-
Chris Metcalf authored
This allows the later-panicking tiles to wait in a lower power state until they get interrupted with an smp_send_stop().
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
This avoids the hardware istream prefetcher doing unnecessary work.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
This is more standard and avoids having to remember what units the options actually take.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
We should be holding the init_mm.page_table_lock in shatter_huge_page() since we are modifying the kernel page tables. Then, only if we are walking the other root page tables to update them, do we want to take the pgd_lock. Add a comment noting that we always take the pgd_lock with interrupts disabled, and therefore are not at risk from the tlbflush IPI deadlock as is seen on x86.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
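The resulting lock ordering, sketched (simplified; pgd_lock here stands for the tile arch-private lock guarding the list of root page tables):

    unsigned long flags;

    spin_lock_irqsave(&init_mm.page_table_lock, flags);
    /* ... split the kernel huge page into small PTEs ... */

    spin_lock(&pgd_lock);           /* IRQs are already off, so the
                                     * x86-style tlbflush IPI deadlock
                                     * cannot happen here */
    /* ... update the copies in the other root page tables ... */
    spin_unlock(&pgd_lock);

    spin_unlock_irqrestore(&init_mm.page_table_lock, flags);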
-
Chris Metcalf authored
We were failing to track the memory when we allocated it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
We were carefully computing a value to use for the number of loops to spin for, and then ignoring it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Previously we only handled kernels up to a single huge page in size. Now we create additional PTEs appropriately.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
If we took a page fault while we had interrupts disabled, we shouldn't enable them in the page fault handler.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
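The guard, sketched generically; interrupts_enabled() is a hypothetical stand-in for the arch-specific test of the saved processor state in struct pt_regs:

    /* Early in the page fault handler, before anything that can sleep: */
    if (interrupts_enabled(regs))   /* hypothetical: did the faulting
                                     * context have IRQs on? */
            local_irq_enable();
    /* otherwise leave IRQs off, matching the interrupted context */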
-
Chris Metcalf authored
We make sure not to try to set the home for an MMIO PTE (on tilegx) or a PTE that isn't referencing memory managed by Linux.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Doing so raises the possibility of self-deadlock if we are waiting for a backtrace for an oprofile or perf interrupt while we are in the middle of migrating our own stack page.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Not associated with any code changes, so I'm just lumping these comment changes into a commit by themselves.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
We now respond to MEM_ERROR traps (e.g. an atomic instruction to non-cacheable memory) with a SIGBUS. We also no longer generate a console crash message if a user process dies due to a SIGTRAP.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
In certain circumstances we need to do a bunch of jump-and-link instructions to fill the hardware return-address stack with nonzero values.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Several fixes to the stack backtracer:
- Fix a long-standing bug where we would print garbage to the console instead of kernel function names if the kernel wasn't built with symbol support (e.g. mboot).
- Make sure to tag every line of userspace backtrace output if we actually have the mmap_sem, so that a missing tag means we couldn't trylock the semaphore.
- Stop doing a TLB flush and examining page tables during backtrace. Instead, just trust that __copy_from_user_inatomic() will properly fault and return a failure, which it should do in all cases.
- Fix a latent bug where the backtracer would directly examine a signal context in user space, rather than copying it safely to kernel memory first. This meant that a race with another thread could potentially have caused a kernel panic.
- Guard against an unaligned sp when trying to restart the backtrace at an interrupt or signal handler point in the kernel backtracer.
- Report kernel symbolic information for the call instruction rather than for the following instruction. We still report the actual numeric address corresponding to the instruction after the call, for the sake of consistency with the normal expectations for stack backtracers.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
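The safe-read pattern the backtracer now relies on, sketched (the frame layout and variable names are illustrative):

    struct frame { unsigned long sp, lr; } f;   /* illustrative layout */
    int bad;

    pagefault_disable();
    /* A faulting access returns nonzero instead of being serviced, so
     * no page-table walking or TLB flushing is needed beforehand. */
    bad = __copy_from_user_inatomic(&f, (void __user *)user_sp, sizeof(f));
    pagefault_enable();
    if (bad)
            return;   /* unreadable frame: stop the backtrace */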
-
Chris Metcalf authored
Add a comment explaining why this is important, and add a CFLAGS_REMOVE clause to the Makefile to make sure it happens.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
With lockstat we can end up trying to get a backtrace before "high_memory" is initialized, so don't worry about range testing if it is zero.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
This avoids assigning IRQ 0 to PCI devices, because we've seen that doesn't always work well.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Fix some signedness and variable usage warnings in change_bit() and test_and_change_bit().
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
It still returns whether @v was not @u, not the old value, unlike __atomic_add_unless().
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Arun Sharma <asharma@fb.com>
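The relationship between the two, as in the generic wrapper of the time:

    /* __atomic_add_unless() returns the old value of *v; the unprefixed
     * wrapper folds that into a boolean "did the add happen?". */
    static inline int atomic_add_unless(atomic_t *v, int a, int u)
    {
            return __atomic_add_unless(v, a, u) != u;
    }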
-
Chris Metcalf authored
We aren't yet using this definition in the kernel, but fix it up before someone goes looking for it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
It's fixed at half the VA space and there's no point in configuring it.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
We switched to using "tilepro" for the 32-bit stuff a while ago, but missed this one usage.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Chris Metcalf authored
Looks like a cut-and-paste bug from the x86 version.
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Paul Gortmaker authored
Commit bd119c69 "Disintegrate asm/system.h for Tile" created the asm/switch_to.h file, but did not add an include of it to all its users. Also, commit b4816afa "Move the asm-generic/system.h xchg() implementation to asm-generic/cmpxchg.h" introduced the concept of asm/cmpxchg.h, but the tile arch never got one. Fork the cmpxchg content out of the asm/atomic.h file to create one.
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
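The shape of the fix, sketched (the header-guard name is an assumption):

    /* Users of switch_to() now include the split-out header explicitly: */
    #include <asm/switch_to.h>

    /* arch/tile/include/asm/cmpxchg.h, forked from asm/atomic.h: */
    #ifndef _ASM_TILE_CMPXCHG_H
    #define _ASM_TILE_CMPXCHG_H
    /* xchg() and cmpxchg() definitions live here now; asm/atomic.h
     * keeps working by including this header. */
    #endif /* _ASM_TILE_CMPXCHG_H */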
-
- Mar 29, 2012
Rusty Russell authored
This has been obsolescent for a while; fix documentation and misc comments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
This has been obsolescent for a while; time for the final push. In adjacent context, replaced old cpus_* with cpumask_*.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Acked-by: David S. Miller <davem@davemloft.net> (arch/sparc)
Acked-by: Chris Metcalf <cmetcalf@tilera.com> (arch/tile)
Cc: user-mode-linux-devel@lists.sourceforge.net
Cc: Russell King <linux@arm.linux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: linux-hexagon@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Cc: sparclinux@vger.kernel.org
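The old-to-new mapping for the replaced calls, sketched:

    #include <linux/cpumask.h>

    static void cpumask_api_example(int cpu)
    {
            cpumask_t mask;
            int i;

            cpumask_clear(&mask);              /* was: cpus_clear(mask) */
            cpumask_set_cpu(cpu, &mask);       /* was: cpu_set(cpu, mask) */
            if (cpumask_test_cpu(cpu, &mask))  /* was: cpu_isset(cpu, mask) */
                    for_each_cpu(i, &mask)     /* was: for_each_cpu_mask() */
                            ;
    }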
-
Gilad Ben-Yossef authored
We have lots of infrastructure in place to partition multi-core systems such that we have a group of CPUs that are dedicated to a specific task: cgroups, scheduler and interrupt affinity, and the cpuisol= boot parameter. Still, kernel code will at times interrupt all CPUs in the system via IPIs for various needs. These IPIs are useful and cannot be avoided altogether, but in certain cases it is possible to interrupt only specific CPUs that have useful work to do and not the entire system. This patch set, inspired by discussions with Peter Zijlstra and Frederic Weisbecker when testing the nohz task patch set, is a first stab at trying to explore doing this by locating the places where such global IPI calls are being made and turning the global IPI into an IPI for a specific group of CPUs. The purpose of the patch set is to get feedback on whether this is the right way to go for dealing with this issue and indeed, whether the issue is even worth dealing with at all. Based on the feedback from this patch set I plan to offer further patches that address similar issues in other code paths.
This patch creates an on_each_cpu_mask() and on_each_cpu_cond() infrastructure API (the former derived from existing arch-specific versions in Tile and Arm) and uses them to turn several global IPI invocations into per-CPU-group invocations.
Core kernel: on_each_cpu_mask() calls a function on processors specified by cpumask, which may or may not include the local processor. You must not call this function with disabled interrupts or from a hardware interrupt handler or from a bottom half handler.
arch/arm: Note that the generic version is a little different than the Arm one: 1. It has the mask as first parameter. 2. It calls the function on the calling CPU with interrupts disabled, but this should be OK since the function is called on the other CPUs with interrupts disabled anyway.
arch/tile: The API is the same as the tile private one, but the generic version also calls the function on the calling CPU, with interrupts disabled, in the UP case. This is OK since the function is called on the other CPUs with interrupts disabled.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Sasha Levin <levinsasha928@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Avi Kivity <avi@redhat.com>
Acked-by: Michal Nazarewicz <mina86@mina86.org>
Cc: Kosaki Motohiro <kosaki.motohiro@gmail.com>
Cc: Milton Miller <miltonm@bga.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
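Usage of the new primitive, sketched (the callback and its purpose are hypothetical):

    #include <linux/smp.h>
    #include <linux/cpumask.h>

    static void drain_func(void *info)
    {
            /* runs on each CPU in the mask, in IPI context */
    }

    static void drain_some_cpus(const struct cpumask *mask)
    {
            /* Interrupt only the CPUs in "mask" rather than all of
             * them; wait=true blocks until every callback returns. */
            on_each_cpu_mask(mask, drain_func, NULL, true);
    }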
-