- Sep 15, 2008
-
-
Paul Mackerras authored
This implements CONFIG_RELOCATABLE for 64-bit by building the kernel as a position-independent executable (PIE) when it is set. This involves processing the dynamic relocations in the image in the early stages of booting, even if the kernel is being run at the address it is linked at, since the linker does not necessarily fill in words in the image for which there are dynamic relocations. (In fact the linker does fill in such words for 64-bit executables, though not for 32-bit executables, so in principle we could avoid calling relocate() entirely when we're running a 64-bit kernel at the linked address.) The dynamic relocations are processed by a new function relocate(addr), sketched below, where the addr parameter is the virtual address where the image will be run. In fact we call it twice; once before calling prom_init, and again when starting the main kernel. This means that reloc_offset() returns 0 in prom_init (since it has been relocated to the address it is running at), which necessitated a few adjustments. This also changes __va and __pa to use an equivalent definition that is simpler. With the relocatable kernel, PAGE_OFFSET and MEMORY_START are constants (for 64-bit) whereas PHYSICAL_START is a variable (and KERNELBASE ideally should be too, but isn't yet). With this, relocatable kernels still copy themselves down to physical address 0 and run there. Signed-off-by:
Paul Mackerras <paulus@samba.org>
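A minimal C sketch of the kind of fixup pass relocate() performs. The in-kernel version is assembly that walks the .rela.dyn section; the helper below is purely illustrative and assumes r_offset and r_addend hold link-time virtual addresses and that the image is already accessible at its run-time address:

    #include <elf.h>

    /* Illustrative only: apply R_PPC64_RELATIVE entries so an image linked
     * at link_addr can run at run_addr. */
    static void apply_relative_relocs(Elf64_Rela *rela, unsigned long count,
                                      unsigned long run_addr, unsigned long link_addr)
    {
        unsigned long delta = run_addr - link_addr;
        unsigned long i;

        for (i = 0; i < count; i++) {
            if (ELF64_R_TYPE(rela[i].r_info) != R_PPC64_RELATIVE)
                continue;
            /* r_addend is the value the word would hold at the link address;
             * shift both the target location and the stored value by delta. */
            *(unsigned long *)(rela[i].r_offset + delta) = rela[i].r_addend + delta;
        }
    }

Note that when run_addr equals link_addr the delta is zero and the loop simply writes back the link-time values, which is why the pass is still needed even when the kernel runs at the address it was linked at.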
-
Paul Mackerras authored
Using LOAD_REG_IMMEDIATE to get the address of kernel symbols generates 5 instructions where LOAD_REG_ADDR can do it in one, and will generate R_PPC64_ADDR16_* relocations in the output when we get to making the kernel as a position-independent executable, which we'd rather not have to handle. This changes various bits of assembly code to use LOAD_REG_ADDR when we need to get the address of a symbol, or to use suitable position-independent code for cases where we can't access the TOC for various reasons, or if we're not running at the address we were linked at. It also cleans up a few minor things; there's no reason to save and restore SRR0/1 around RTAS calls, __mmu_off can get the return address from LR more conveniently than the caller can supply it in R4 (and we already assume elsewhere that EA == RA if the MMU is on in early boot), and enable_64b_mode was using 5 instructions where 2 would do. Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Paul Mackerras authored
This changes the way that the exception prologs transfer control to the handlers in 64-bit kernels with the aim of making it possible to have the prologs separate from the main body of the kernel. Now, instead of computing the address of the handler by taking the top 32 bits of the paca address (to get the 0xc0000000........ part) and ORing in something in the bottom 16 bits, we get the base address of the kernel by doing a load from the paca and add an offset. This also replaces an mfmsr and an ori to compute the MSR value for the handler with a load from the paca. That makes it unnecessary to have a separate version of EXCEPTION_PROLOG_PSERIES that forces 64-bit mode. We can no longer use direct branches in the exception prolog code, which means that the SLB miss handlers can't branch directly to .slb_miss_realmode any more. Instead we have to compute the address and do an indirect branch. This is conditional on CONFIG_RELOCATABLE; for non-relocatable kernels we use a direct branch as before. (A later change will allow CONFIG_RELOCATABLE to be set on 64-bit powerpc.) Since the secondary CPUs on pSeries start execution in the first 0x100 bytes of real memory and then have to get to wherever the kernel is, we can't use a direct branch to get there. Instead this changes __secondary_hold_spinloop from a flag to a function pointer. When it is set to a non-NULL value, the secondary CPUs jump to the function pointed to by that value. Finally this eliminates one code difference between 32-bit and 64-bit by making __secondary_hold be the text address of the secondary CPU spinloop rather than a function descriptor for it. Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Paul Mackerras authored
This rearranges head_64.S so that we have all the first-level exception prologs together starting at 0x100, followed by all the second-level handlers that are invoked from the first-level prologs, followed by other code. This doesn't make any functional change but will make following changes for relocatable kernel support easier. Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Chandru authored
Kdump kernel needs to use only those memory regions that it is allowed to use (crashkernel, rtas, tce, etc.). Each of these regions has its own size and is currently added under the 'linux,usable-memory' property under each memory@xxx node of the device tree. The ibm,dynamic-memory property of the ibm,dynamic-reconfiguration-memory node (on POWER6) now stores in it the representation for most of the logical memory blocks, with the size of each memory block being a constant (lmb_size). If one or more of the above-mentioned regions (or part of one) lies within an LMB described by the ibm,dynamic-memory property, those regions need to be identified within the given LMB. This makes the kernel recognize a new 'linux,drconf-usable-memory' property added by kexec-tools. Each entry in this property is of the form of a count followed by that many (base, size) pairs for the above-mentioned regions (a parsing sketch follows below). The number of cells in the count value is given by the #size-cells property of the root node. Signed-off-by:
Chandru Siddalingappa <chandru@in.ibm.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
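A rough C sketch of how one entry of the new property could be consumed. The helper and variable names are illustrative, a single cell is assumed for the count, and two cells each are assumed for base and size:

    #include <linux/types.h>

    /* Illustrative: concatenate n 32-bit cells into a 64-bit value and
     * advance the cursor. */
    static u64 read_cells(const u32 **p, int n)
    {
        u64 v = 0;

        while (n--)
            v = (v << 32) | *(*p)++;
        return v;
    }

    /* Illustrative: parse one LMB's entry of linux,drconf-usable-memory,
     * i.e. a count followed by 'count' (base, size) pairs. */
    static void parse_usable_entry(const u32 **usm)
    {
        u32 count, i;
        u64 base, size;

        count = *(*usm)++;              /* assumed: one cell for the count */
        for (i = 0; i < count; i++) {
            base = read_cells(usm, 2);
            size = read_cells(usm, 2);
            /* record [base, base + size) as usable for the kdump kernel */
        }
    }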
-
Nathan Fontenot authored
The return code from invocation of the notifier for pSeries_reconfig_chain during update of the device tree is not checked. This causes writes to /proc/ppc64/ofdt that update memory properties (i.e. ibm,dynamic-reconfiguration-memory) to always return success, instead of the result of the notifier chain (a sketch of the check follows below). This happens specifically when we remove/add memory from the device tree on machines using memory specified in the ibm,dynamic-reconfiguration-memory property of the device tree. Signed-off-by:
Nathan Fontenot <nfont@austin.ibm.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
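A minimal sketch of propagating the notifier result instead of discarding it; the function name, the notifier action constant and the error code chosen here are assumptions, not the verbatim patch:

    /* Illustrative: surface the notifier chain's verdict to the ofdt writer. */
    static int notify_memory_update(struct device_node *np)
    {
        int rc;

        rc = blocking_notifier_call_chain(&pSeries_reconfig_chain,
                                          PSERIES_DRCONF_MEM_ADD, np);
        if (rc == NOTIFY_BAD || rc == NOTIFY_STOP)
            return -ENOMEM;     /* previously this result was ignored */
        return 0;
    }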
-
Mark Nelson authored
This new copy_4K_page() function was originally tuned for the best performance on the Cell processor, but after testing on more 64bit powerpc chips it was found that with a small modification it either matched the performance offered by the current mainline version or bettered it by a small amount. It was found that on a Cell-based QS22 blade the amount of system time measured when compiling a 2.6.26 pseries_defconfig decreased by 4%. Using the same test, a 4-way 970MP machine saw a decrease of 2% in system time. No noticeable change was seen on Power4, Power5 or Power6. The 4096 byte page is copied in thirty-two 128 byte strides. An initial setup loop executes dcbt instructions for the whole source page and dcbz instructions for the whole destination page. To do this, the cache line size is retrieved from ppc64_caches. A new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, (introduced in the previous patch) is used to make the modification to this new copy routine - on Power4, 970 and Cell the feature bit is set so the setup loop is executed, but on all other 64bit chips the setup loop is nop'ed out. Signed-off-by:
Mark Nelson <markn@au1.ibm.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
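The routine itself is assembly; the C-with-inline-asm fragment below only illustrates what the setup loop does (a 4096-byte page, with the cache line size taken from ppc64_caches in the real code; names here are illustrative):

    /* Illustrative: touch every source line and zero every destination line
     * before the copy loop runs. */
    static inline void prefetch_and_zero(void *dst, const void *src,
                                         unsigned long line_size)
    {
        unsigned long i;

        for (i = 0; i < 4096; i += line_size) {
            asm volatile("dcbt 0,%0" : : "r" (src + i));            /* prefetch source line */
            asm volatile("dcbz 0,%0" : : "r" (dst + i) : "memory"); /* zero/establish dest line */
        }
    }

The benefit comes from dcbz establishing each destination line in the cache without first fetching it from memory, which is why only some chips gain from it.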
-
Mark Nelson authored
Add a new CPU feature bit, CPU_FTR_CP_USE_DCBTZ, to be added to the 64bit powerpc chips that benefit from having dcbt and dcbz instructions used in their memory copy routines. This will be used in a subsequent patch that updates copy_4K_page(). The new bit is added to Cell, PPC970 and Power4 because they show better performance with the new copy_4K_page() when dcbt and dcbz instructions are used. Signed-off-by:
Mark Nelson <markn@au1.ibm.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
roel kluin authored
Evidently MACIO_FLAG_SCCA_ON was meant. Signed-off-by:
Roel Kluin <roel.kluin@gmail.com> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
- Sep 09, 2008
-
-
James Bottomley authored
It was introduced by "vsprintf: add support for '%pS' and '%pF' pointer formats" in commit 0fe1ef24. However, the current way its coded doesn't work on parisc64. For two reasons: 1) parisc isn't in the #ifdef and 2) parisc has a different format for function descriptors Make dereference_function_descriptor() more accommodating by allowing architecture overrides. I put the three overrides (for parisc64, ppc64 and ia64) in arch/kernel/module.c because that's where the kernel internal linker which knows how to deal with function descriptors sits. Signed-off-by:
James Bottomley <James.Bottomley@HansenPartnership.com> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by:
Tony Luck <tony.luck@intel.com> Acked-by:
Kyle McMartin <kyle@mcmartin.ca> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
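For ppc64 the override can be a short routine like the sketch below: a 64-bit function "pointer" points at an OPD entry whose first doubleword is the real text address. This is a sketch of the idea, not the verbatim patch; it assumes struct ppc64_opd_entry from asm/elf.h and probe_kernel_address() from linux/uaccess.h:

    /* Illustrative ppc64-style override. */
    void *dereference_function_descriptor(void *ptr)
    {
        struct ppc64_opd_entry *desc = ptr;
        void *p;

        /* Read the entry point out of the descriptor; keep the original
         * pointer if the read faults (e.g. it wasn't a descriptor). */
        if (!probe_kernel_address(&desc->funcaddr, p))
            ptr = p;
        return ptr;
    }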
-
- Sep 08, 2008
-
-
Hugh Dickins authored
A make -j20 powerpc kernel build broke a couple of months ago saying:

    In file included from arch/powerpc/boot/gunzip_util.h:13,
                     from arch/powerpc/boot/prpmc2800.c:21:
    arch/powerpc/boot/zlib.h:85: error: expected ‘:’, ‘,’, ‘;’, ‘}’ or ‘__attribute__’ before ‘*’ token
    arch/powerpc/boot/zlib.h:630: warning: type defaults to ‘int’ in declaration of ‘Byte’
    arch/powerpc/boot/zlib.h:630: error: expected ‘;’, ‘,’ or ‘)’ before ‘*’ token

It happened again yesterday: too rare for me to confirm the fix, but it looks like the list of dependents on gunzip_util.h was incomplete. Signed-off-by:
Hugh Dickins <hugh@veritas.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
- Sep 07, 2008
-
-
Andre Detsch authored
We currently have a race when scheduling a context to a SPE - after we have found a runnable context in spusched_tick, the same context may have been scheduled by spu_activate(). This may result in a panic if we try to unschedule a context that has been freed in the meantime. This change exits spu_schedule() if the context has already been scheduled, so we don't end up scheduling it twice. Signed-off-by:
Andre Detsch <adetsch@br.ibm.com> Signed-off-by:
Jeremy Kerr <jk@ozlabs.org>
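A sketch of the check described above, following the spufs scheduler's structure; treat it as illustrative rather than the verbatim diff:

    static void spu_schedule(struct spu *spu, struct spu_context *ctx)
    {
        /* Called from the scheduler thread or from spu_deactivate. */
        mutex_lock(&ctx->state_mutex);
        if (ctx->state == SPU_STATE_SAVED)      /* bail out if it was scheduled already */
            __spu_schedule(spu, ctx);
        spu_release(ctx);
    }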
-
- Sep 05, 2008
-
-
Jeremy Kerr authored
We currently have a race for a free SPE. With one thread doing a spu_yield(), and another doing a spu_activate():

    thread 1                              thread 2
    spu_yield(oldctx)                     spu_activate(ctx)
      __spu_deactivate(oldctx)
        spu_unschedule(oldctx, spu)
        spu->alloc_state = SPU_FREE
                                          spu = spu_get_idle(ctx)
                                          - searches for a SPE in state
                                            SPU_FREE, gets the context
                                            just freed by thread 1
                                          spu_schedule(ctx, spu)
                                            spu->alloc_state = SPU_USED
      spu_schedule(newctx, spu)
      - assumes spu is still free
      - tries to schedule context on
        already-used spu

This change introduces a 'free_spu' flag to spu_unschedule, to indicate whether or not the function should free the spu after descheduling the context (see the sketch below). We only set this flag if we're not going to re-schedule another context on this SPU. Add a comment to document this behaviour. Signed-off-by:
Jeremy Kerr <jk@ozlabs.org>
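A sketch of the flagged unschedule; field and helper names follow the spufs scheduler but the body is illustrative, not the exact patch:

    static void spu_unschedule(struct spu *spu, struct spu_context *ctx,
                               int free_spu)
    {
        int node = spu->node;

        mutex_lock(&cbe_spu_info[node].list_mutex);
        cbe_spu_info[node].nr_active--;
        if (free_spu)
            /* Only mark the SPU free when the caller is not about to put
             * another context on it. */
            spu->alloc_state = SPU_FREE;
        spu_unbind_context(spu, ctx);
        mutex_unlock(&cbe_spu_info[node].list_mutex);
    }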
-
Jeremy Kerr authored
Commit 8d5636fb introduced a reference count on SPU contexts during find_victim, but this may cause a leak in the reference count if we later find a better contender for a context to unschedule. Take the reference only after we've found our victim context, so we don't do the extra get_spu_context(). Signed-off-by:
Jeremy Kerr <jk@ozlabs.org>
-
- Sep 03, 2008
-
-
Kumar Gala authored
The calculation to get TI_CPU based off of SPRG3 was just plain wrong, meaning that we were getting garbage for the CPU number on 6xx/G3/G4 based SMP boxes in this code. Just offset off the stack pointer (to get to thread_info) like all the other references to TI_CPU do. This was pointed out by Chen Gong <G.Chen@freescale.com> [paulus@samba.org - use rlwinm r12,r11,... instead of rlwinm r12,r1,...; tophys()] Signed-off-by:
Kumar Gala <galak@kernel.crashing.org> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
HAVE_ARCH_UNMAPPED_AREA and HAVE_ARCH_UNMAPPED_AREA_TOPDOWN must be defined whenever CONFIG_PPC_MM_SLICES is enabled, not just when CONFIG_HUGETLB_PAGE is. They used to be always defined together but this is no longer the case since 3a8247cc ("powerpc: Only demote individual slices rather than whole process"). Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
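The intended arrangement is roughly the following sketch (the exact header and surrounding context are not shown):

    #ifdef CONFIG_PPC_MM_SLICES
    /* The slice code supplies its own arch_get_unmapped_area*() variants,
     * so the generic ones must be disabled whenever slices are enabled,
     * not only when CONFIG_HUGETLB_PAGE is. */
    #define HAVE_ARCH_UNMAPPED_AREA
    #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
    #endif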
-
Tony Breeds authored
This bug is causing random crashes (http://bugzilla.kernel.org/show_bug.cgi?id=11414). -fno-omit-frame-pointer is only needed on powerpc when -pg is also supplied, and there is a gcc bug that causes incorrect code generation on 32-bit powerpc when -fno-omit-frame-pointer is used: it uses stack locations below the stack pointer, which is not allowed by the ABI because those locations can and sometimes do get corrupted by an interrupt. This ensures that CONFIG_FRAME_POINTER is only selected by ftrace. When CONFIG_FTRACE is enabled we also pass -mno-sched-epilog to work around the gcc codegen bug. Patch based on work by: Andreas Schwab <schwab@suse.de> Segher Boessenkool <segher@kernel.crashing.org> Signed-off-by:
Tony Breeds <tony@bakeyournoodle.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Stephen Rothwell authored
This makes core_kernel_text() (and therefore kernel_text_address()) return the correct result. Currently all the __devinit routines (at least) will not be considered to be kernel text. This is just a quick fix for 2.6.27 - hopefully we will be able to fix this better in 2.6.28. Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Paul Mackerras authored
Commit bc033b63 ("powerpc/mm: Fix attribute confusion with htab_bolt_mapping()") moved the check for whether we should make pages of the linear mapping executable from htab_bolt_mapping into its callers, including htab_initialize. A side-effect of this is that the decision is now made once for each contiguous section in the LMB array rather than for each page individually. This can often mean that the whole of the linear mapping ends up being executable. This reverts to the previous behaviour, where individual pages are checked for being part of the kernel text or not, by moving the check back down into htab_bolt_mapping. Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
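A sketch of the per-page decision inside htab_bolt_mapping(); overlaps_kernel_text() and HPTE_R_N follow the hash-MMU code, while bolt_one_page() is a hypothetical stand-in for the actual HPTE insert:

    static int bolt_range(unsigned long vstart, unsigned long vend,
                          unsigned long pstart, unsigned long step,
                          unsigned long prot)
    {
        unsigned long vaddr, paddr = pstart;

        for (vaddr = vstart; vaddr < vend; vaddr += step, paddr += step) {
            unsigned long tprot = prot | HPTE_R_N;      /* default: no-execute */

            /* Only pages that overlap the kernel text stay executable. */
            if (overlaps_kernel_text(vaddr, vaddr + step))
                tprot &= ~HPTE_R_N;

            if (bolt_one_page(vaddr, paddr, tprot) < 0) /* hypothetical insert helper */
                return -1;
        }
        return 0;
    }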
-
Michael Neuling authored
This fixes an uninitialised variable in the VSX alignment code. It can cause warnings from GCC (noticed with gcc-4.1.1). GCC is actually correct in this instance, and this bug could cause the alignment interrupt handler to send a SIGSEGV to the process on a legitimate access. Signed-off-by:
Michael Neuling <mikey@neuling.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
- Aug 27, 2008
-
-
Heiko Schocher authored
Signed-off-by:
Heiko Schocher <hs@denx.de> Signed-off-by:
Vitaly Bordug <vitb@kernel.crashing.org> Signed-off-by:
Kumar Gala <galak@kernel.crashing.org> Signed-off-by:
Jeff Garzik <jgarzik@redhat.com>
-
- Aug 26, 2008
-
-
Paul Mackerras authored
Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Andrew Morton authored
This fixes an error building powerpc allmodconfig: ERROR: "CMO_PageSize" [arch/powerpc/platforms/pseries/cmm.ko] undefined! Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
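The usual cure for an "undefined!" module link error like this is to export the symbol next to its definition, along the lines of the following (illustrative):

    EXPORT_SYMBOL(CMO_PageSize);    /* makes the symbol visible to cmm.ko */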
-
Masakazu Mokuno authored
Fix the ioremap of the spu shadow regs on the PS3. The current PS3 hypervisor requires the spu shadow regs to be mapped with the PTE page protection bits set as read-only (PP=3). This implementation uses the low level __ioremap() to bypass the page protection settings enforced by ioremap_flags() to get the needed PTE bits set for the shadow regs. This fixes a runtime failure on the PS3 introduced by the powerpc ioremap_prot rework of commit a1f242ff ("powerpc ioremap_prot"). Signed-off-by:
Masakazu Mokuno <mokuno@sm.sony.co.jp> CC: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Masakazu Mokuno authored
Rework the PS3 MMU hash table code to remove the need to ioremap the hash table by using the HV calls lv1_insert_htab_entry() and lv1_read_htab_entries(). This fixes a runtime failure on the PS3 introduced by the powerpc ioremap_prot rework of commit a1f242ff ("powerpc ioremap_prot"). Signed-off-by:
Masakazu Mokuno <mokuno@sm.sony.co.jp> CC: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Geoff Levand authored
Update ps3_defconfig. Signed-off-by:
Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
- Aug 23, 2008
-
-
Adrian Bunk authored
This patch lets the files using linux/version.h match the files that #include it. Signed-off-by:
Adrian Bunk <bunk@kernel.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Aug 21, 2008
-
-
Kumar Gala authored
Since we are updating defconfigs, I went ahead and moved the asp8347_defconfig under 83xx/ and the mpc8536_ds_defconfig under 85xx/ as that is where they should have been to start with. Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
-
Scott Wood authored
This patch fixes the following build error with mpc866_ads_defconfig:

    <-- snip -->
    ...
      WRAP    arch/powerpc/boot/cuImage.mpc866ads
    powerpc64-linux-ld: arch/powerpc/boot/cuboot-mpc866ads.o: No such file: No such file or directory
    make[2]: *** [arch/powerpc/boot/cuImage.mpc866ads] Error 1
    <-- snip -->

Reported-by:
Adrian Bunk <bunk@kernel.org> Signed-off-by:
Scott Wood <scottwood@freescale.com> Signed-off-by:
Adrian Bunk <bunk@kernel.org> Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
-
Laurent Pinchart authored
The CPM2 GPIO library code uses the non thread-safe clrbits32/setbits32 macros. This patch protects them with a spinlock. Signed-off-by:
Laurent Pinchart <laurentp@cse-semaphore.com> Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
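The pattern is roughly the following sketch (the lock and function names are hypothetical); clrbits32()/setbits32() are read-modify-write accessors and therefore need external locking:

    static DEFINE_SPINLOCK(cpm2_gpio_lock);     /* hypothetical lock name */

    static void cpm2_gpio32_set_bit(u32 __iomem *reg, u32 mask, int value)
    {
        unsigned long flags;

        spin_lock_irqsave(&cpm2_gpio_lock, flags);
        if (value)
            setbits32(reg, mask);
        else
            clrbits32(reg, mask);
        spin_unlock_irqrestore(&cpm2_gpio_lock, flags);
    }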
-
Timur Tabi authored
Fix two memory leaks in the Freescale QE library: add a missing kfree() in ucc_fast_init() and ucc_slow_init() if the ioremap() fails, and update ucc_fast_free() and ucc_slow_free() to call iounmap() if necessary. Based on a patch from Tony Breeds <tony@bakeyournoodle.com>. Signed-off-by:
Timur Tabi <timur@freescale.com> Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
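The shape of both fixes, shown as illustrative fragments (struct and field names follow the QE library, but the surrounding code is omitted):

    /* In ucc_fast_init(): free the private struct if ioremap() fails. */
    uccf->uf_regs = ioremap(uf_info->regs, sizeof(struct ucc_fast));
    if (uccf->uf_regs == NULL) {
        kfree(uccf);                    /* was leaked before the fix */
        return -ENOMEM;
    }

    /* In ucc_fast_free(): undo the mapping as well as the allocation. */
    if (uccf->uf_regs)
        iounmap(uccf->uf_regs);
    kfree(uccf);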
-
Wolfgang Grandegger authored
Due to the missing compatible property for the SOC, the MPC I2C buses are not found any more. This patch fixes this issue. Furthermore it corrects the name of the SOC node and adds the missing I2C node for the RTC. Signed-off-by:
Wolfgang Grandegger <wg@grandegger.com> Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
-
Kumar Gala authored
When we converted the .dts to v1 we lost a space between the irq and its polarity/sense information. This causes a bit of chaos as the rest of the blob is off by one cell. This was noticed by booting and getting errors w/ATA due to lack of interrupts. Signed-off-by:
Kumar Gala <galak@kernel.crashing.org>
-
- Aug 20, 2008
-
-
Stephen Rothwell authored
Now that we have removed all inclusions of asm/of_device.h, this compatibility include can be removed. Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
The file arch/powerpc/kernel/sysfs.c is currently only compiled for 64-bit kernels. It contains code to register CPU sysdevs in sysfs and add various properties such as cache topology and raw access by root to performance monitor counters (PMCs). A lot of that can be re-used as is on 32-bit. This makes the file be built for both, with appropriate ifdef'ing for the few bits that are really 64-bit specific, and adds some support for the raw PMCs for 75x and 74xx processors. Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Benjamin Herrenschmidt authored
They don't need to be macros, and having them as inline functions avoids warnings about unused variables on some configurations when the argument isn't evaluated. Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
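A generic illustration of the warning being avoided (these are not the actual powerpc definitions): a macro that never touches its argument leaves a variable that exists only to be passed in flagged as unused on configurations where the call compiles away, whereas a static inline always uses its argument at the call site and still compiles to nothing.

    struct device;

    /* Configuration where the operation is a no-op, as a macro: the
     * argument is discarded, so callers can get "unused variable". */
    #define flush_thing(dev)    do { } while (0)

    /* Same no-op as a static inline: the argument is genuinely used,
     * no warning, and the empty body is optimised away. */
    static inline void flush_thing_inline(struct device *dev)
    {
    }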
-
Nathan Lynch authored
When removing a directory, the sysfs core takes care of removing files in the directory (see sysfs_remove_dir()). So when we are about to delete a kobject (and thus cause its sysfs directory to be removed), we don't have to explicitly remove the files attached to it, although it's harmless to do so. Signed-off-by:
Nathan Lynch <ntl@pobox.com> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Adrian Bunk authored
This changes powerpc to use the new bcd2bin/bin2bcd functions instead of the obsolete BCD_TO_BIN/BIN_TO_BCD macros. Signed-off-by:
Adrian Bunk <bunk@kernel.org> Signed-off-by:
Paul Mackerras <paulus@samba.org>
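The conversion is mechanical; a sketch of one field in an RTC read path (function name is illustrative):

    #include <linux/bcd.h>
    #include <linux/rtc.h>

    static void example_decode_seconds(struct rtc_time *tm, u8 raw)
    {
        /* old: BCD_TO_BIN(raw); tm->tm_sec = raw;  (macro modified 'raw' in place) */
        tm->tm_sec = bcd2bin(raw);      /* new: plain function returning the value */
    }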
-
David Gibson authored
Some time ago, copies of the upstream dtc and libfdt sources were included in the kernel tree to avoid having these as external dependencies for building the kernel. Since then development on the upstream dtc and libfdt has continued. This updates the in-kernel versions to match the recently released upstream dtc version 1.2.0. This includes a number of bugfixes, many cleanups and a few new features. Signed-off-by:
David Gibson <david@gibson.dropbear.id.au> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-
Stephen Rothwell authored
Now that we have removed all inclusions of asm/of_platform.h, this compatibility include can be removed. Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Paul Mackerras <paulus@samba.org>
-