- Jun 15, 2010
Christoph Egger authored
CONFIG_HIGHPTE doesn't exist in Kconfig, so remove all references to it from the source code.
Signed-off-by: Christoph Egger <siccegge@cs.fau.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- May 21, 2010
Anton Blanchard authored
I've been told that the architected way to determine we are in form 1 affinity mode is by reading the ibm,architecture-vec-5 property, which mirrors the layout of the fifth vector of the ibm,client-architecture structure. Eventually we may want to parse ibm,architecture-vec-5 and create FW_FEATURE_* bits.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- May 17, 2010
Kumar Gala authored
When we build with ftrace enabled, it's possible that loadcam_entry would have used the stack pointer (even though the code doesn't need it). We call loadcam_entry in __secondary_start before the stack is set up. To ensure that loadcam_entry doesn't use the stack pointer, the easiest solution is to just have it in asm code.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Alexander Graf authored
We need to reserve a context from KVM to make sure we have our own segment space. While we did that split for Book3S_64 already, 32 bit is still outstanding. So let's split it now.
Signed-off-by: Alexander Graf <agraf@suse.de>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- May 06, 2010
Anton Blanchard authored
Convert the NUMA code to the new cpumask API. We shift the node-to-cpumask setup code until after we complete bootmem allocation so we can dynamically allocate the cpumasks.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
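As a hedged illustration of this kind of conversion (the symbol names below are assumptions, not taken from the patch), fixed-size cpumask_t arrays give way to cpumask_var_t masks that are allocated once bootmem is available and manipulated through the accessor functions:

    #include <linux/cpumask.h>

    /* Dynamically allocated masks, set up once bootmem is ready
     * (names here are illustrative, not the exact ones in the patch). */
    static cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];

    static void __init setup_node_to_cpumask_map(void)
    {
            unsigned int node;

            for (node = 0; node < MAX_NUMNODES; node++)
                    alloc_bootmem_cpumask_var(&node_to_cpumask_map[node]);
    }

    static void map_cpu_to_node(int cpu, int node)
    {
            cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
    }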
-
Benjamin Herrenschmidt authored
As explained in commit 1c0fe6e3, we want to call the architecture independent oom killer when getting an unexplained OOM from handle_mm_fault, rather than simply killing current.
Cc: linuxppc-dev@ozlabs.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Mark Nelson authored
We need to keep track of the backing pages that get allocated by vmemmap_populate() so that when we use kdump, the dump-capture kernel knows where these pages are. We use a simple linked list of structures that contain the physical address of the backing page and the corresponding virtual address to track the backing pages. To save space, we just use a pointer to the next struct vmemmap_backing. We can also do this because we never remove nodes. We call the pointer "list" to be compatible with changes made to the crash utility.

vmemmap_populate() is called either at boot time or on a memory hotplug operation. We don't have to worry about the boot-time calls because they will be inherently single-threaded, and for a memory hotplug operation vmemmap_populate() is called through:

    sparse_add_one_section()
      -> kmalloc_section_memmap()
        -> sparse_mem_map_populate()
          -> vmemmap_populate()

and in sparse_add_one_section() we're protected by pgdat_resize_lock(). So we don't need a spinlock to protect the vmemmap_list. We allocate space for the vmemmap_backing structs by allocating whole pages in vmemmap_list_alloc() and then handing out chunks of this to vmemmap_list_populate(). This means that we waste at most just under one page, but this keeps the code simple.
Signed-off-by: Mark Nelson <markn@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
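A minimal sketch of the tracking structure as described above (the exact field order is an assumption):

    /* One node per backing page; nodes are never removed, so a single
     * forward pointer suffices. The pointer is named "list" to match
     * what the crash utility expects. */
    struct vmemmap_backing {
            struct vmemmap_backing *list;   /* next node, never unlinked */
            unsigned long phys;             /* physical address of backing page */
            unsigned long virt_addr;        /* virtual address it backs */
    };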
-
Benjamin Herrenschmidt authored
So we tried to speed things up a bit using flush_hash_pages() directly, but that falls over on 603 of course, meaning we fail to flush the TLB properly and we may even end up having it corrupt memory randomly by accessing a hash table that doesn't exist. This removes the "optimization" by always going through flush_tlb_page() for now at least.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
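A hedged before/after sketch of the change (the argument values are illustrative, not taken from the patch):

    /* before: flush the hash PTEs directly - wrong on a hash-less 603 */
    flush_hash_pages(mm->context.id, addr, ptephys, 1);

    /* after: always take the generic path, which copes with hash-less CPUs */
    flush_tlb_page(vma, addr);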
-
- May 05, 2010
Dave Kleikamp authored
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- Apr 30, 2010
Wufei authored
The bypassing of this test is a leftover from 2.4 vintage kernels, and is no longer appropriate, or even used by KGDB. Currently KGDB uses probe_kernel_write() for all access to memory via the KGDB core, so it can simply be deleted. This fixes CVE-2010-1446.
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Wufei <fei.wu@windriver.com>
Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
-
- Apr 28, 2010
Anton Blanchard authored
Firmware changed the way it represents memory and cpu affinity on POWER7. Unfortunately the old method now caps the topology to work around issues with legacy operating systems. For Linux to get the correct topology we need to use the new form 1 affinity information. We set the form 1 field in the client architecture, and if we see "1" in the ibm,associativity-form property, firmware supports form 1 affinity and we should look at the first field in the ibm,associativity-reference-points array. If not, we use the second field as we always have.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
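A hedged sketch of the selection logic described here (the property names come from the text; the surrounding function and index handling are assumptions):

    #include <linux/of.h>

    static int form1_affinity;  /* assumed flag: set when firmware reports form 1 */

    /* Pick which entry of ibm,associativity-reference-points identifies
     * the NUMA domain: the first under form 1 affinity, else the second. */
    static int __init choose_reference_point(struct device_node *root)
    {
            const __be32 *refs;
            int len, index;

            index = form1_affinity ? 0 : 1;

            refs = of_get_property(root, "ibm,associativity-reference-points", &len);
            if (!refs || len < (int)((index + 1) * sizeof(__be32)))
                    return -1;

            return be32_to_cpu(refs[index]);
    }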
-
- Apr 20, 2010
Becky Bruce authored
The code was looking for this feature in cpu_features, not mmu_features. Fix this.
Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- Apr 07, 2010
K.Prasad authored
Data address breakpoint exceptions are currently handled along with page faults, which require interrupts to remain in an enabled state. Since exception handling for data breakpoints isn't pre-empt safe, we handle them separately.
Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Benjamin Herrenschmidt authored
We can't just clear the user read permission in a book3e pte, because that will also clear the supervisor read permission. This surely isn't desired. Fix the problem by adding the supervisor read back.
BenH: Slightly simplified the ifdef and applied to ppc64 too
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
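On Book3E the read permission is split into separate user and supervisor bits, so a sketch of the fix described here might look like the following (_PAGE_BAP_UR and _PAGE_BAP_SR are the Book3E permission bits; the helper itself is an assumption):

    /* Clearing "user read" must not take the kernel's read permission
     * with it: mask out the user BAP bit, then add supervisor read back. */
    static inline pte_t pte_user_rdprotect(pte_t pte)  /* assumed helper name */
    {
            unsigned long v = pte_val(pte);

            v &= ~_PAGE_BAP_UR;     /* drop user read */
            v |= _PAGE_BAP_SR;      /* keep supervisor read */
            return __pte(v);
    }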
-
- Mar 30, 2010
Tejun Heo authored
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h.

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, ie. if only gfp is used, gfp.h; if slab is used, slab.h.
* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).
   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
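A hedged before/after illustration of the kind of edit the sweep makes (the file and symbols are hypothetical):

    /* before: this compiled only because percpu.h (via sched.h) dragged
     * in slab.h and gfp.h implicitly */
    #include <linux/sched.h>

    /* after: state the dependencies directly */
    #include <linux/gfp.h>          /* GFP_KERNEL */
    #include <linux/slab.h>         /* kmalloc(), kfree() */

    static int *make_counter(void)
    {
            return kmalloc(sizeof(int), GFP_KERNEL);
    }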
-
- Mar 19, 2010
FUJITA Tomonori authored
powerpc initializes swiotlb before parsing the kernel boot options, so swiotlb options (e.g. specifying the swiotlb buffer size) are ignored. Any time before freeing bootmem works for swiotlb, so this patch moves powerpc's swiotlb initialization to after the kernel boot options are parsed, at mem_init time (as x86 does).
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Tested-by: Becky Bruce <beckyb@kernel.crashing.org>
Tested-by: Albert Herranz <albert_herranz@yahoo.es>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Mar 06, 2010
H Hartley Sweeten authored
The macro any_online_node() is prone to producing sparse warnings due to the local symbol 'node'. Since all the in-tree users are really requesting the first online node (the mask argument is either NODE_MASK_ALL or node_online_map), just use the first_online_node macro and remove the any_online_node macro since there are no users.
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Acked-by: David Rientjes <rientjes@google.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Milton Miller <miltonm@bga.com>
Cc: Nathan Fontenot <nfont@austin.ibm.com>
Cc: Geoff Levand <geoffrey.levand@am.sony.com>
Cc: Grant Likely <grant.likely@secretlab.ca>
Cc: J. Bruce Fields <bfields@fieldses.org>
Cc: Neil Brown <neilb@suse.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Benny Halevy <bhalevy@panasas.com>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Ricardo Labiaga <Ricardo.Labiaga@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
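A hedged before/after sketch of the substitution (the calling context is hypothetical):

    #include <linux/nodemask.h>

    static int pick_node_old(void)
    {
            /* before: the macro's hidden local 'node' triggers sparse warnings */
            return any_online_node(node_online_map);
    }

    static int pick_node_new(void)
    {
            /* after: first_online_node is first_node(node_states[N_ONLINE]) */
            return first_online_node;
    }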
-
- Feb 20, 2010
Russell King authored
On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable. This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache(): on MIPS update_mmu_cache() calls __update_tlb() which walks pagetables to construct a pointer to the pte again. Passing a pte_t * is much more elegant. Maybe we might even replace the pte argument with the pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC: passing the ptep in there is exactly what I want. I want that -instead- of the PTE value, because I have an issue on some ppc cases, for I$/D$ coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC.

So, pass the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit. Includes a fix from Stephen Rothwell: sparc: fix fallout from update_mmu_cache API change.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
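The API change itself can be stated as a hedged sketch of the two prototypes (the "before" form follows the text; the argument names are assumptions):

    /* before: implementations received the PTE by value */
    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t pte);

    /* after: implementations receive a pointer to the mapped PTE, so
     * architectures like MIPS need not re-walk the page tables, and
     * PowerPC no longer sees a stale value whose bits (e.g. _PAGE_EXEC)
     * set_pte_at() may already have masked out */
    void update_mmu_cache(struct vm_area_struct *vma,
                          unsigned long address, pte_t *ptep);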
-
- Feb 19, 2010
Thomas Gleixner authored
tlbivax_lock needs to be a real spinlock in RT. Convert it to raw_spinlock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Thomas Gleixner authored
native_tlbie_lock needs to be a real spinlock in RT. Convert it to raw_spinlock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Thomas Gleixner authored
context_lock needs to be a real spinlock in RT. Convert it to raw_spinlock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
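All three conversions above follow the same mechanical pattern; a minimal sketch, borrowing the lock name from the second entry:

    #include <linux/spinlock.h>

    /* before: an ordinary spinlock, which becomes sleepable under RT */
    static DEFINE_SPINLOCK(native_tlbie_lock);

    /* after: a raw_spinlock_t always busy-waits, which these low-level
     * TLB paths require even on RT */
    static DEFINE_RAW_SPINLOCK(native_tlbie_lock);

    static void tlbie_serialized(void)
    {
            raw_spin_lock(&native_tlbie_lock);
            /* ... issue the tlbie ... */
            raw_spin_unlock(&native_tlbie_lock);
    }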
-
- Feb 17, 2010
Anton Blanchard authored
Now that we have real bit locks, use them instead of open coding it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Feb 12, 2010
Stefan Roese authored
This patch adds support for boards with more than 512MByte RAM. Currently only 512MB of memory is enabled in the DCCR/ICCR real-mode cache control registers. This patch now enables caching in real mode for 2GByte.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- Feb 10, 2010
David Gibson authored
Commit f71dc176 'Make hpte_need_flush() correctly mask for multiple page sizes' introduced a bug, which is triggered when a kernel with a 64k base page size is run on a system whose hardware does not support 64k hash PTEs. In this case, we emulate 64k pages with multiple 4k hash PTEs; however, in hpte_need_flush() we incorrectly only mask the hardware page size from the address, instead of the logical page size. This causes things to go wrong when we later attempt to iterate through the hardware subpages of the logical page. This patch corrects the error. It has been tested on pSeries bare metal by Michael Neuling.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
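A sketch of the distinction, using the powerpc mmu_psize_defs table (the function wrapper and variable names are assumptions):

    /* psize is the logical page size index; with 4k hash PTEs emulating
     * a 64k page, the hardware shift is 12 but the logical shift is 16 */
    static unsigned long mask_logical_page(unsigned long vaddr, int psize)
    {
            unsigned int shift = mmu_psize_defs[psize].shift;

            /* correct: mask with the logical page size; the buggy version
             * masked with the hardware page size, leaving subpage bits in
             * the address and confusing the later subpage iteration */
            return vaddr & ~((1UL << shift) - 1);
    }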
-
- Feb 09, 2010
Anton Blanchard authored
We can use the much more lightweight ida allocator since we don't need the pointer storage idr provides.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
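A hedged sketch of an ida-based context allocator of the sort described, using the ida interface of this era (the names mmu_context_ida and alloc_context_id are assumptions):

    #include <linux/gfp.h>
    #include <linux/idr.h>
    #include <linux/spinlock.h>

    static DEFINE_IDA(mmu_context_ida);
    static DEFINE_SPINLOCK(mmu_context_lock);

    static int alloc_context_id(void)
    {
            int id, err;

    again:
            if (!ida_pre_get(&mmu_context_ida, GFP_KERNEL))
                    return -ENOMEM;

            spin_lock(&mmu_context_lock);
            err = ida_get_new_above(&mmu_context_ida, 1, &id);
            spin_unlock(&mmu_context_lock);

            if (err == -EAGAIN)
                    goto again;     /* raced with another allocator; retry */

            return err ? err : id;
    }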
-
- Feb 05, 2010
Uwe Kleine-König authored
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Thadeu Lima de Souza Cascardo authored
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- Feb 03, 2010
Thadeu Lima de Souza Cascardo authored
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Jan 15, 2010
Jiri Slaby authored
Make sure the compiler won't do weird things with limits. E.g. fetching them twice may return 2 different values after writable limits are implemented. I.e. either use the rlimit helpers added in 3e10e716, or ACCESS_ONCE if they are not applicable.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@ozlabs.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
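A hedged before/after sketch using the rlimit() helper (the RLIMIT_MEMLOCK check is a hypothetical usage site):

    #include <linux/resource.h>
    #include <linux/sched.h>

    static int check_locked(unsigned long bytes)
    {
            /* before: an open-coded read the compiler is free to re-fetch */
            if (bytes > current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur)
                    return -EPERM;

            /* after: rlimit() reads the current task's limit once */
            if (bytes > rlimit(RLIMIT_MEMLOCK))
                    return -EPERM;

            return 0;
    }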
-
- Dec 18, 2009
David Gibson authored
Commit d28513bc ("Fix bug in pagetable cache cleanup with CONFIG_PPC_SUBPAGE_PROT"), itself a fix for breakage caused by an earlier clean up patch of mine, contains a stupid bug. I changed the parameters of the subpage_protection() function, but failed to update one of the callers. This patch fixes it, and replaces a void * with a typed pointer so that the compiler will warn on such an error in future.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
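A hedged sketch of why the typed pointer helps (both prototypes here are assumptions, not the actual ones from the patch):

    /* before: a void * parameter silently matched stale call sites,
     * so the missed caller still compiled cleanly */
    static u32 subpage_protection(void *ctx, unsigned long ea);

    /* after: a typed pointer turns a mismatched caller into a compiler
     * warning instead of silent breakage */
    static u32 subpage_protection(struct mm_struct *mm, unsigned long ea);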
-
Li Yang authored
The function name of cpumask_clear_cpu was not correct. Fortunately nobody uses that code with hotplug yet :-)
Reported-by: Jin Qing <b24347@freescale.com>
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Sachin P. Sant authored
This time without the funny characters. Fix the following build errors generated with DEBUG=1:

    cc1: warnings being treated as errors
    arch/powerpc/mm/hash_utils_64.c: In function 'htab_dt_scan_page_sizes':
    arch/powerpc/mm/hash_utils_64.c:343: error: format '%04x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
    arch/powerpc/mm/hash_utils_64.c:343: error: format '%08x' expects type 'unsigned int', but argument 5 has type 'long unsigned int'
    arch/powerpc/mm/hash_utils_64.c: In function 'htab_initialize':
    arch/powerpc/mm/hash_utils_64.c:666: error: format '%x' expects type 'unsigned int', but argument 4 has type 'long unsigned int'
    ... SNIP ...

Signed-off-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Benjamin Herrenschmidt authored
We need to call __set_pte_at() and not set_pte_at() from __change_page_attr(), since the latter will perform checks with CONFIG_DEBUG_VM that aren't suitable to the way we override an existing PTE. (More specifically, it doesn't let you write over a present PTE.)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
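A hedged sketch of the distinction (the wrapper and call site are illustrative):

    /* set_pte_at() asserts under CONFIG_DEBUG_VM that no PTE is already
     * present; __change_page_attr() deliberately rewrites a live kernel
     * mapping, so it must use the unchecked variant */
    static void rewrite_kernel_pte(unsigned long address, pte_t *kpte,
                                   struct page *page, pgprot_t prot)
    {
            __set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
    }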
-
- Dec 14, 2009
Stephen Rothwell authored
Today's linux-next build (powerpc ppc44x_defconfig) failed like this:

    arch/powerpc/mm/pgtable_32.c: In function 'mapin_ram':
    arch/powerpc/mm/pgtable_32.c:318: error: too many arguments to function 'mmu_mapin_ram'

Caused by commit de32400d ("wii: use both mem1 and mem2 as ram").
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
- Dec 13, 2009
Albert Herranz authored
Add a flag to let a platform ioremap memory regions marked as reserved. This flag will be used later by the Nintendo Wii support code to allow ioremapping the I/O region sitting between MEM1 and MEM2 and marked as reserved RAM in the patch "wii: use both mem1 and mem2 as ram". This will no longer be needed when proper discontig memory support for 32-bit PowerPC is added to the kernel.
Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Albert Herranz authored
The Nintendo Wii video game console has two discontiguous RAM regions:
  MEM1: 24MB @ 0x00000000
  MEM2: 64MB @ 0x10000000

Unfortunately, the kernel currently does not support discontiguous RAM memory regions on 32-bit PowerPC platforms. This patch adds a series of workarounds to allow the use of the second memory region (MEM2) as RAM by the kernel. Basically, a single range of memory from the beginning of MEM1 to the end of MEM2 is reported to the kernel, and a memory reservation is created for the hole between MEM1 and MEM2. With this patch the system is able to use all the available RAM and not just ~27% of it. This will no longer be needed when proper discontig memory support for 32-bit PowerPC is added to the kernel.
Signed-off-by: Albert Herranz <albert_herranz@yahoo.es>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
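A hedged sketch of the reservation trick using the lmb interface of this era (the hole constants and function name are hypothetical, derived from the sizes quoted above):

    #include <linux/lmb.h>

    #define WII_HOLE_START  0x01800000      /* end of MEM1 (24MB) */
    #define WII_HOLE_END    0x10000000      /* start of MEM2 */

    static void __init wii_reserve_memory_hole(void)
    {
            /* a single 0 .. end-of-MEM2 range was reported as RAM; carve
             * out the gap so the allocator never hands it out */
            lmb_reserve(WII_HOLE_START, WII_HOLE_END - WII_HOLE_START);
    }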
-
- Dec 09, 2009
Joakim Tjernlund authored
The 8xx sometimes needs to load invalid/non-present TLBs in its DTLB asm handler. These must be invalidated separately, as the Linux mm doesn't do it.
Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Dec 08, 2009
David Gibson authored
Commit a0668cdc cleans up the handling of kmem_caches for allocating various levels of pagetables. Unfortunately, it conflicts badly with CONFIG_PPC_SUBPAGE_PROT, due to the latter's cleverly hidden technique of adding some extra allocation space to the top level page directory to store the extra information it needs. Since that extra allocation really doesn't fit into the cleaned up page directory allocating scheme, this patch alters CONFIG_PPC_SUBPAGE_PROT to instead allocate its struct subpage_prot_table as part of the mm_context_t.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
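A hedged sketch of the relocation described (the other context fields are placeholders):

    typedef struct {
            unsigned long id;               /* placeholder context fields */
            unsigned long vdso_base;
    #ifdef CONFIG_PPC_SUBPAGE_PROT
            /* moved here from the hidden tail of the top-level page
             * directory allocation */
            struct subpage_prot_table spt;
    #endif
    } mm_context_t;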
-
- Dec 01, 2009
Benjamin Herrenschmidt authored
This reverts commit c045256d. It breaks the build when CONFIG_PPC_SUBPAGE_PROT is not set. I will commit a fixed version separately.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-