- Mar 12, 2010
-
-
Christoph Hellwig authored
On an architecture that supports 32-bit compat we need to override the reported machine in uname with the 32-bit value. Instead of doing this separately in every architecture, introduce a COMPAT_UTS_MACHINE define in <asm/compat.h> and apply it directly in sys_newuname(). Signed-off-by:
Christoph Hellwig <hch@lst.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: James Morris <jmorris@namei.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
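A minimal sketch of how the generic override could look; only COMPAT_UTS_MACHINE itself comes from the description above, while the override_architecture() helper name and the PER_LINUX32 check are assumptions:

#ifdef COMPAT_UTS_MACHINE
/* Report the 32-bit machine string to 32-bit personalities. */
#define override_architecture(name) \
	(personality(current->personality) == PER_LINUX32 && \
	 copy_to_user(name->machine, COMPAT_UTS_MACHINE, \
		      sizeof(COMPAT_UTS_MACHINE)))
#else
#define override_architecture(name)	0
#endif

SYSCALL_DEFINE1(newuname, struct new_utsname __user *, name)
{
	int errno = 0;

	down_read(&uts_sem);
	if (copy_to_user(name, utsname(), sizeof(*name)))
		errno = -EFAULT;
	up_read(&uts_sem);

	if (!errno && override_architecture(name))
		errno = -EFAULT;
	return errno;
}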
-
Christoph Hellwig authored
Add a generic implementation of the ipc demultiplexer syscall. Except for s390 and sparc64, all implementations of sys_ipc are nearly identical. There are slight differences in the types of the parameters: mips and powerpc, as the only 64-bit architectures with sys_ipc, use unsigned long for the "third" argument as it gets cast to a pointer later, while it traditionally is an "int" like most other parameters. frv goes even further and uses unsigned long for all parameters except for "ptr", which is a pointer type everywhere. The change from int to unsigned long for "third" and back to "int" for the others on frv should be fine due to the in-register calling conventions for syscalls (we already had a similar issue with the generic sys_ptrace), but I'd prefer to have the arch maintainers look over this in detail. Apart from that, h8300, m68k and m68knommu lack an implementation of the semtimedop sub call, which this patch adds, and on various architectures it is unclear whether that sub call ever gets used - at least on i386 it seems superfluous, as the compat code on x86-64 and ia64 doesn't even bother to implement it. [akpm@linux-foundation.org: add sys_ipc to sys_ni.c] Signed-off-by:
Christoph Hellwig <hch@lst.de> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mundt <lethal@linux-sh.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Hirokazu Takata <takata@linux-m32r.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@elte.hu> Reviewed-by:
H. Peter Anvin <hpa@zytor.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: James Morris <jmorris@namei.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Acked-by:
Jesper Nilsson <jesper.nilsson@axis.com> Acked-by:
Russell King <rmk+kernel@arm.linux.org.uk> Acked-by:
David Howells <dhowells@redhat.com> Acked-by:
Kyle McMartin <kyle@mcmartin.ca> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
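A sketch of the demultiplexer's shape, abridged to the semaphore sub calls; the full version dispatches all the SEM*/MSG*/SHM* calls:

SYSCALL_DEFINE6(ipc, unsigned int, call, int, first, unsigned long, second,
		unsigned long, third, void __user *, ptr, long, fifth)
{
	/* legacy version encoding; used by the MSGRCV/SHMAT variants (elided) */
	int version = call >> 16;

	switch (call & 0xffff) {
	case SEMOP:
		return sys_semtimedop(first, (struct sembuf __user *)ptr,
				      second, NULL);
	case SEMTIMEDOP:
		return sys_semtimedop(first, (struct sembuf __user *)ptr,
				      second,
				      (const struct timespec __user *)fifth);
	/* ... SEMGET, SEMCTL, MSGSND, MSGRCV, SHMAT, SHMDT, SHMGET, SHMCTL ... */
	default:
		return -ENOSYS;
	}
}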
-
- Mar 01, 2010
-
-
Liu Yu authored
The old method prematurely sets ESR and DEAR. Move this part to after we decide to inject the interrupt, which is closer to how the hardware behaves. Signed-off-by:
Liu Yu <yu.liu@freescale.com> Acked-by:
Hollis Blanchard <hollis@penguinppc.org> Acked-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
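A hypothetical sketch of the split: stash the fault details at queue time, and only commit them to the guest-visible registers once injection is decided (field and function names are assumptions):

static void kvmppc_core_queue_dtlb_miss(struct kvm_vcpu *vcpu,
					ulong dear, ulong esr)
{
	vcpu->arch.queued_dear = dear;	/* stash only -- guest state untouched */
	vcpu->arch.queued_esr = esr;
	kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_DTLB_MISS);
}

static void kvmppc_deliver_dtlb_miss(struct kvm_vcpu *vcpu)
{
	/* only now, when the interrupt is actually injected, does the
	 * guest see the fault details, like on real hardware */
	vcpu->arch.dear = vcpu->arch.queued_dear;
	vcpu->arch.esr = vcpu->arch.queued_esr;
}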
-
Liu Yu authored
commit 55fb1027c1cf9797dbdeab48180da530e81b1c39 doesn't update tlbcfg correctly. Fix it. And since a guest OS expects 'fixed' hardware, re-initializing tlbcfg on every guest access is useless, so move this part to the init code. Signed-off-by:
Liu Yu <yu.liu@freescale.com> Acked-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com>
-
Liu Yu authored
Recent kernels have started to access l1csr0 to control the L1 cache. We just tell the guest that no operation is ongoing. Signed-off-by:
Liu Yu <yu.liu@freescale.com> Acked-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com>
-
Alexander Graf authored
SRR1 stores more information than just the MSR value. It also stores valuable information about the type of interrupt we received, for example whether the storage interrupt we just got was because of a missing htab entry or not. We use that information to speed up the exit path. Now if we get preempted before we can interpret the shadow_msr values, we get into vcpu_put, which then calls the MSR handler, which then sets all the SRR1 information bits in shadow_msr to 0. Great. So let's preserve the SRR1-specific bits in shadow_msr whenever we set the MSR. They don't hurt. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
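A hypothetical sketch of the fix; SRR1_INFO_MASK stands in for whichever bits carry the interrupt details, and the value shown is illustrative only:

#define SRR1_INFO_MASK	0x783f0000UL	/* illustrative value */

static void kvmppc_set_shadow_msr(struct kvm_vcpu *vcpu, u64 msr)
{
	u64 old = vcpu->arch.shadow_msr;

	/* keep the SRR1-specific information bits, replace only the MSR part */
	vcpu->arch.shadow_msr = (old & SRR1_INFO_MASK) | (msr & ~SRR1_INFO_MASK);
}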
-
Alexander Graf authored
Commit 7d01b4c3ed2bb33ceaf2d270cb4831a67a76b51b introduced PACA backed vcpu values. With that commit, when a userspace app set GPRs before the vcpu was first loaded, the set values got discarded. This is because vcpu_load loads them from the vcpu backing store that we use whenever we're not owning the PACA. That behavior is not really a major problem, because we don't need it for qemu. Other users (like kvmctl) do have problems with it though, so let's better do it right. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
When our guest starts using either the FPU, Altivec or VSX we need to make sure Linux knows about it and sneak into its process switching code accordingly. This patch makes accesses to the above parts of the system work inside the VM. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Linux contains quite a few bits of code to load the FPU, Altivec and VSX lazily for a task. It calls those bits in real mode, coming from an interrupt handler. For KVM we'd better reuse those, so let's wrap a bit of trampoline magic around them; then we can call them from normal module code. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
An SLB entry contains two pieces of information related to size: 1) PTE size 2) SLB size. The L bit defines the PTE to be "large" (usually meaning 16MB), while SLB_VSID_B_1T defines that the SLB entry should span 1GB instead of the default 256MB. Apparently I messed things up and just put those two in one box, shook it heavily, and came up with the current code, which handles large pages incorrectly because it also treats large-page SLB entries as "1TB" segment entries. This patch splits those two features apart, making Linux guests boot even when they have more than 256MB. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
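A sketch of the disentangled flags; the bit values follow the SLB layout described above, but treat the helper itself as illustrative:

#define SLB_VSID_L	0x0000000000000100UL	/* large base page size */
#define SLB_VSID_B_1T	0x4000000000000000UL	/* 1TB segment size */

static u64 make_slbe_vsid(u64 vsid, bool large_page, bool seg_1t)
{
	u64 v = vsid << 12;

	if (large_page)
		v |= SLB_VSID_L;	/* affects only the PTE/page size */
	if (seg_1t)
		v |= SLB_VSID_B_1T;	/* affects only the segment span */
	return v;
}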
-
Alexander Graf authored
Book3S needs some flags in SRR1 to get to know details about an interrupt. One such example is the trap instruction. It tells the guest kernel that a program interrupt is due to a trap, using a bit in SRR1. This patch implements the above behavior, making WARN_ON inside the guest actually behave like WARN_ON. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
Currently we're racy when doing the transition from IR=1 to IR=0, from the module memory entry code to the real mode SLB switching code. To work around that I took a look at the RTAS entry code, which is faced with a similar problem, and did the same thing: a small helper in linear mapped memory that does mtmsr with IR=0 and then RFIs into the actual handler. Thanks to that trick we can safely take page faults in the entry code and only need to be really wary of what we do from the SLB switching part onwards. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
To fetch the last instruction we were interrupted on, we enable DR in the early exit code, where we are still in a very transitional phase between guest and host state. Most of the time this seemed to work, but another CPU can easily flush our TLB and HTAB, which makes us go into the Linux page fault handler, which totally breaks because we still use the guest's SLB entries. To work around that, let's introduce a second KVM guest mode that defines that whenever we get a trap, we don't call the Linux handler or go into the KVM exit code, but just jump over the faulting instruction. That way a potentially bad lwz doesn't trigger any faults and we can later on interpret the invalid instruction we fetched as "fetch didn't work". Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
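A hypothetical sketch of the trap-side logic for that second guest mode (constant and function names are assumptions):

#define KVM_GUEST_MODE_NONE	0
#define KVM_GUEST_MODE_GUEST	1
#define KVM_GUEST_MODE_SKIP	2	/* faults just skip the instruction */

static bool kvmppc_skip_fault(struct pt_regs *regs, int guest_mode)
{
	if (guest_mode != KVM_GUEST_MODE_SKIP)
		return false;		/* normal Linux/KVM fault handling */

	regs->nip += 4;			/* jump over the faulting lwz */
	return true;			/* caller reads back "fetch didn't work" */
}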
-
Alexander Graf authored
We're being horribly racy right now. All the entry and exit code hijacks random fields from the PACA that could easily be used by different code in case we get interrupted, for example by a #MC or even a page fault. After discussing this with Ben, we figured it's best to reserve some more space in the PACA and just shove some vcpu state off to there. That way we can drastically improve the readability of the code, make it less racy and less complex. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
We now have helpers for the GPRs, so let's also add some for CR and XER. Having them in the PACA simplifies code a lot, as we don't need to care about where to store CC or about overflowing any integers. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
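Plausibly the new helpers mirror the GPR ones; a sketch of the trivial form (a PACA-backed variant would redirect these accesses):

static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
	vcpu->arch.cr = val;
}

static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
	return vcpu->arch.cr;
}

/* ...plus the same pair for XER, e.g. kvmppc_set_xer()/kvmppc_get_xer() */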
-
Alexander Graf authored
All code in PPC KVM currently accesses gprs in the vcpu struct directly. While there's nothing wrong with that wrt the current way gprs are stored and loaded, it doesn't suffice for the PACA acceleration that will follow in this patchset. So let's just create little wrapper inline functions that we call whenever a GPR needs to be read from or written to. The compiled code shouldn't really change at all for now. Signed-off-by:
Alexander Graf <agraf@suse.de> Signed-off-by:
Avi Kivity <avi@redhat.com>
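A sketch of the wrappers in their trivial form, before any PACA acceleration; written this way, the compiled code indeed shouldn't change:

static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
	vcpu->arch.gpr[num] = val;
}

static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
	return vcpu->arch.gpr[num];
}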
-
Alexander Graf authored
We treated the DEC interrupt like an edge-triggered one. This is not true for Book3s. The DEC keeps firing until mtdec is issued again and thus clears the interrupt line. So let's implement this logic in KVM too. This patch moves the line clearing from the firing of the interrupt to the mtdec emulation. This makes PPC64 guests work without AGGRESSIVE_DEC defined. Signed-off-by:
Alexander Graf <agraf@suse.de> Acked-by:
Hollis Blanchard <hollis@penguinppc.org> Signed-off-by:
Avi Kivity <avi@redhat.com>
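A hypothetical fragment of the mtdec emulation after the move (helper names are assumptions):

static void kvmppc_emulate_mtdec(struct kvm_vcpu *vcpu, ulong val)
{
	vcpu->arch.dec = val;		/* reload the decrementer */
	kvmppc_emulate_dec(vcpu);	/* rearm the host-side timer */
	kvmppc_core_dequeue_dec(vcpu);	/* level drops: clear the pending line */
}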
-
- Feb 26, 2010
-
-
Benjamin Herrenschmidt authored
Anton's commit enabling the use of the lwsync fixup mechanism on 64-bit breaks modules. The lwsync fixup section uses .long instead of the FTR_ENTRY_OFFSET macro used by other fixups sections, and thus will generate 32-bit relocations that our module loader cannot resolve. This changes it to use the same type as other feature sections. Note however that we might want to consider using 32-bit for all the feature fixup offsets and add support for R_PPC_REL32 to module_64.c instead as that would reduce the size of the kernel image. I'll leave that as an exercise for the reader for now... Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Feb 20, 2010
-
-
Russell King authored
On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable. This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache(). Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache(): On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables to construct a pointer to the pte again. Passing a pte_t * is much more elegant. Maybe we might even replace the pte argument with the pte_t? Ben Herrenschmidt would also like the pte pointer for PowerPC: Passing the ptep in there is exactly what I want. I want that -instead- of the PTE value, because I have issue on some ppc cases, for I$/D$ coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC. So, pass in the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit. Includes a fix from Stephen Rothwell: sparc: fix fallout from update_mmu_cache API change Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
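The interface change in a nutshell (sketch):

/* Old interface: callers passed the PTE value, which may no longer be
 * modifiable (or even mapped, with highpte) by the time it is used:
 *
 *	void update_mmu_cache(struct vm_area_struct *vma,
 *			      unsigned long address, pte_t pte);
 */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
		      pte_t *ptep);	/* new: pointer into the page table */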
-
- Feb 19, 2010
-
-
Thomas Gleixner authored
mpic_lock, irq_rover_lock and fixup_lock need to be real spinlocks in RT. Convert them to raw_spinlock. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
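The conversion pattern, sketched for one of the locks; raw_spinlock_t stays a spinning lock on RT instead of becoming a sleeping one:

static DEFINE_RAW_SPINLOCK(fixup_lock);

static void fixup_example(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&fixup_lock, flags);
	/* ... short, non-sleeping critical section ... */
	raw_spin_unlock_irqrestore(&fixup_lock, flags);
}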
-
Thomas Gleixner authored
feature_lock needs to be a real spinlock in RT. Convert it to raw_spinlock. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Feb 17, 2010
-
-
Anatolij Gustschin authored
MPC5121 has 12 PSC devices. Enable UART support for all of them by making the maximum number of PSCs depend on whether PPC_MPC512x platform support is selected. Signed-off-by:
Anatolij Gustschin <agust@denx.de> Acked-by:
Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by:
Grant Likely <grant.likely@secretlab.ca>
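Plausibly the change amounts to something like this in the driver (the macro name is assumed from the MPC52xx UART code; treat the else-branch value as illustrative):

#ifdef CONFIG_PPC_MPC512x
#define MPC52xx_PSC_MAXNUM	12	/* MPC5121: 12 PSCs */
#else
#define MPC52xx_PSC_MAXNUM	6	/* MPC5200 family */
#endif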
-
Dave Kleikamp authored
powerpc/booke: Add support for advanced debug registers From: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Based on patches originally written by Torez Smith. This patch defines context switch and trap related functionality for BookE specific debug registers. It adds support to ptrace() for setting and getting BookE related debug registers. Signed-off-by:
Dave Kleikamp <shaggy@linux.vnet.ibm.com> Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: David Gibson <dwg@au1.ibm.com> Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Sergio Durigan Junior <sergiodj@br.ibm.com> Cc: Thiago Jung Bauermann <bauerman@br.ibm.com> Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Dave Kleikamp authored
powerpc/booke: Add definitions for advanced debug registers From: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Based on patches originally written by Torez Smith. This patch adds additional definitions for BookE Debug Registers to the reg_booke.h header file. Signed-off-by:
Dave Kleikamp <shaggy@linux.vnet.ibm.com> Acked-by:
David Gibson <dwg@au1.ibm.com> Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Sergio Durigan Junior <sergiodj@br.ibm.com> Cc: Thiago Jung Bauermann <bauerman@br.ibm.com> Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Dave Kleikamp authored
powerpc: Extended ptrace interface From: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Based on patches originally written by Torez Smith. Add a new extended ptrace interface so that user-space has a single interface for powerpc, without having to know the specific layout of the debug registers. Implement: PPC_PTRACE_GETHWDEBUGINFO, PPC_PTRACE_SETHWDEBUG, PPC_PTRACE_DELHWDEBUG. Signed-off-by:
Dave Kleikamp <shaggy@linux.vnet.ibm.com> Acked-by:
David Gibson <dwg@au1.ibm.com> Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Sergio Durigan Junior <sergiodj@br.ibm.com> Cc: Thiago Jung Bauermann <bauerman@br.ibm.com> Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
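A hypothetical user-space sketch of the new interface; the struct and field names follow the powerpc uapi headers, but treat the details as illustrative:

#include <sys/ptrace.h>
#include <asm/ptrace.h>		/* struct ppc_debug_info on powerpc */

static int query_hwdebug(pid_t child)
{
	struct ppc_debug_info info;

	/* single entry point: no need to know the debug register layout */
	if (ptrace(PPC_PTRACE_GETHWDEBUGINFO, child, 0, &info) < 0)
		return -1;

	/* e.g. info.num_instruction_bps, info.num_data_bps, info.features */
	return 0;
}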
-
Anton Blanchard authored
Nick Piggin discovered that lwsync barriers around locks were faster than isync on 970. That was a long time ago and I completely dropped the ball in testing his patches across other ppc64 processors. Turns out the idea helps on other chips. Using a microbenchmark that uses a lot of threads to contend on a global pthread mutex (and therefore a global futex), POWER6 improves 8% and POWER7 improves 2%. I checked POWER5 and while I couldn't measure an improvement, there was no regression. This patch uses the lwsync patching code to replace the isyncs with lwsyncs on CPUs that support the instruction. We were marking POWER3 and RS64 as lwsync capable but in reality they treat it as a full sync (ie slow). Remove the CPU_FTR_LWSYNC bit from these CPUs so they continue to use the faster isync method. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
For performance reasons we are about to change ISYNC_ON_SMP to sometimes be lwsync. Now that the macro name doesn't make sense, change it and LWSYNC_ON_SMP to better explain what the barriers are doing. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
This patch implements the lwarx/ldarx hint bit for bit locks. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
Recent versions of the PowerPC architecture added a hint bit to the larx instructions to differentiate between an atomic operation and a lock operation:

> 0 Other programs might attempt to modify the word in storage addressed by EA
>   even if the subsequent Store Conditional succeeds.
>
> 1 Other programs will not attempt to modify the word in storage addressed by
>   EA until the program that has acquired the lock performs a subsequent store
>   releasing the lock.

To avoid a binutils dependency this patch creates macros for the extended lwarx format and uses them in the spinlock code. To test this change I used a simple test case that acquires and releases a global pthread mutex:

pthread_mutex_lock(&mutex);
pthread_mutex_unlock(&mutex);

On a 32 core POWER6, running 32 test threads we spend almost all our time in the futex spinlock code:

94.37%  perf  [kernel]  [k] ._raw_spin_lock
        |
        |--99.95%-- ._raw_spin_lock
        |           |
        |           |--63.29%-- .futex_wake
        |           |
        |           |--36.64%-- .futex_wait_setup

which is a good test for this patch. The results (in lock/unlock operations per second) are:

before: 1538203 ops/sec
after:  2189219 ops/sec

An improvement of 42%. A 32 core POWER7 improves even more:

before: 1279529 ops/sec
after:  2282076 ops/sec

An improvement of 78%. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
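The macros plausibly encode the instruction by hand so older assemblers still build; a sketch modeled on the kernel's opcode helpers (stringify_in_c() comes from asm/asm-compat.h):

#define PPC_INST_LWARX		0x7c000028
#define ___PPC_RT(t)		(((t) & 0x1f) << 21)
#define ___PPC_RA(a)		(((a) & 0x1f) << 16)
#define ___PPC_RB(b)		(((b) & 0x1f) << 11)
#define __PPC_EH(eh)		(((eh) & 0x1) << 0)
#define PPC_LWARX(t, a, b, eh)	stringify_in_c(.long PPC_INST_LWARX | \
					___PPC_RT(t) | ___PPC_RA(a) | \
					___PPC_RB(b) | __PPC_EH(eh))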
-
Anton Blanchard authored
I often get asked if BAD interrupts are really bad. On some boxes (eg IBM machines running a hypervisor) there are valid cases where we are presented with an interrupt that is not for us. These cases are common enough to show up as thousands of BAD interrupts a day. Tone them down by calling them spurious. Since they can be a significant cause of OS jitter, we may as well log them per cpu so we know where they are occurring. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
With NO_HZ it is useful to know how often the decrementer is going off. The patch below adds an entry for it and also adds it into the /proc/stat summaries. While here, I added performance monitoring and machine check exceptions. I found it useful to keep an eye on the PMU exception rate when using the perf tool. Since it's possible to take a completely handled machine check on a System p box it also sounds like a good idea to keep a machine check summary. The event naming matches x86 to keep gratuitous differences to a minimum. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
PowerPC is currently using asm-generic/hardirq.h which statically allocates an NR_CPUS irq_stat array. Switch to an arch specific implementation which uses per cpu data. On a kernel with NR_CPUS=1024, this saves quite a lot of memory:

   text    data     bss      dec    hex filename
8767938 2944132 1636796 13348866 cbb002 vmlinux.baseline
8767779 2944260 1505724 13217763 c9afe3 vmlinux.irq_cpustat

A saving of around 128kB. Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
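A sketch of the arch-specific replacement: per-cpu state instead of a static NR_CPUS-sized array (the counter fields are elided):

typedef struct {
	unsigned int __softirq_pending;
	/* ... per-cpu event counters ... */
} ____cacheline_aligned irq_cpustat_t;

DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);

#define local_softirq_pending()	__get_cpu_var(irq_stat).__softirq_pending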
-
Breno Leitao authored
During an EEH recovery, the pci_dev structure can be null, mainly if an EEH event is detected during a PCI config operation. In this case the pci_dev will not be known (and will be null) and the kernel will crash with the following message:

Unable to handle kernel paging request for data at address 0x000000a0
Faulting instruction address: 0xc00000000006b8b4
Oops: Kernel access of bad area, sig: 11 [#1]
NIP [c00000000006b8b4] .eeh_event_handler+0x10c/0x1a0
LR [c00000000006b8a8] .eeh_event_handler+0x100/0x1a0
Call Trace:
[c0000003a80dff00] [c00000000006b8a8] .eeh_event_handler+0x100/0x1a0
[c0000003a80dff90] [c000000000031f1c] .kernel_thread+0x54/0x70

The bug occurs because pci_name() tries to access a null pointer. This patch just guarantees that pci_name() is not called on null pointers. Signed-off-by:
Breno Leitao <leitao@linux.vnet.ibm.com> Signed-off-by:
Linas Vepstas <linasvepstas@gmail.com> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
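The defensive pattern, sketched (the helper name is an assumption; the real call site sits in the EEH event handler):

static const char *eeh_pcid_name(struct pci_dev *dev)
{
	/* never hand a null pci_dev to pci_name() */
	return dev ? pci_name(dev) : "";
}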
-
- Feb 16, 2010
-
-
Anatolij Gustschin authored
Add a reset module register representation and a machine restart callback for the MPC5121 platform. Signed-off-by:
Piotr Ziecik <kosmo@semihalf.com> Signed-off-by:
Wolfgang Denk <wd@denx.de> Signed-off-by:
Anatolij Gustschin <agust@denx.de> Reviewed-by:
Wolfram Sang <w.sang@pengutronix.de> Signed-off-by:
Grant Likely <grant.likely@secretlab.ca>
-
- Feb 14, 2010
-
-
Grant Likely authored
Neither request_OF_resource nor release_OF_resource is defined anywhere. Remove the declarations. Signed-off-by:
Grant Likely <grant.likely@secretlab.ca> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by:
Michal Simek <monstr@monstr.eu>
-
Grant Likely authored
Rather than defining of_chosen in each arch, it can be defined for all in drivers/of/base.c. Signed-off-by:
Grant Likely <grant.likely@secretlab.ca> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by:
Michal Simek <monstr@monstr.eu>
-
Grant Likely authored
Most architectures don't need to change these. Put them into common code to eliminate some duplication. Signed-off-by:
Grant Likely <grant.likely@secretlab.ca> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by:
Michal Simek <monstr@monstr.eu>
-
- Feb 09, 2010
-
-
Grant Likely authored
Merge common code between PowerPC and Microblaze. Signed-off-by:
Grant Likely <grant.likely@secretlab.ca> Tested-by:
Wolfram Sang <w.sang@pengutronix.de> Acked-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
Clean up SD_NODE_INIT so we can easily compare it to x86. Similar to the work in 47734f89 (sched: Clean up topology.h). Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- Feb 03, 2010
-
-
Anton Blanchard authored
The cputime code has a few places that do per_cpu(, smp_processor_id()). Replace them with __get_cpu_var(). Signed-off-by:
Anton Blanchard <anton@samba.org> Signed-off-by:
Benjamin Herrenschmidt <benh@kernel.crashing.org>
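The replacement pattern in miniature (the variable name is illustrative):

DEFINE_PER_CPU(unsigned long, cputime_last_delta);

static void example(void)
{
	/* before: explicit per-cpu lookup via smp_processor_id() */
	per_cpu(cputime_last_delta, smp_processor_id()) = 0;

	/* after: same effect, cheaper and clearer */
	__get_cpu_var(cputime_last_delta) = 0;
}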
-