- May 26, 2015
-
Paolo Bonzini authored
Prepare for the case of multiple address spaces.

Reviewed-by: Radim Krcmar <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Architecture-specific helpers are not supposed to muck with struct kvm_userspace_memory_region contents. Add const to enforce this. In order to eliminate the only write in __kvm_set_memory_region, the cleaning of deleted slots is pulled up from update_memslots to __kvm_set_memory_region.

Reviewed-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Reviewed-by: Radim Krcmar <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
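For reference, a hedged sketch of what a const-qualified arch hook looks like after a change like this (signature inferred from the commit text, not verified against the tree):

    /* Arch code may inspect, but not modify, the userspace-supplied region. */
    int kvm_arch_prepare_memory_region(struct kvm *kvm,
                                       struct kvm_memory_slot *memslot,
                                       const struct kvm_userspace_memory_region *mem,
                                       enum kvm_mr_change change);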
-
- Apr 30, 2015
-
Michael Ellerman authored
This reverts commit feba4036. Although the principle of this change is good, the implementation has a few issues. Firstly, we can sometimes fail to abort a syscall because r12 may have been clobbered by C code if we went down the virtual CPU accounting path, or if syscall tracing was enabled. Secondly, we have decided that it is safer to abort the syscall even earlier in the syscall entry path, so that we avoid the syscall tracing path when we are transactional. So that we have time to thoroughly test those changes, we have decided to revert this for this merge window and will merge the fixed version in the next window. NB. Rather than reverting the selftest, we just drop tm-syscall from TEST_PROGS so that it's not run by default.

Fixes: feba4036 ("powerpc/tm: Abort syscalls in active transactions")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- Apr 21, 2015
-
Paul Mackerras authored
This replaces the assembler code for kvmhv_commence_exit() with C code in book3s_hv_builtin.c. It also moves the IPI sending code that was in book3s_hv_rm_xics.c into a new kvmhv_rm_send_ipi() function so it can be used by kvmhv_commence_exit() as well as icp_rm_set_vcpu_irq().

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Paul Mackerras authored
Currently, the entry_exit_count field in the kvmppc_vcore struct contains two 8-bit counts, one of the threads that have started entering the guest, and one of the threads that have started exiting the guest. This changes it to an entry_exit_map field which contains two bitmaps of 8 bits each. The advantage of doing this is that it gives us a bitmap of which threads need to be signalled when exiting the guest. That means that we no longer need to use the trick of setting the HDEC to 0 to pull the other threads out of the guest, which led in some cases to a spurious HDEC interrupt on the next guest entry.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
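A minimal sketch of how a packed entry/exit bitmap yields the set of threads still to be signalled; the field layout and names here are illustrative, not the kernel's exact definitions:

    #include <stdint.h>

    /* Low byte: threads that have started entering the guest.
     * High byte: threads that have started exiting it.
     * Threads that entered but have not begun exiting still need a signal. */
    static uint8_t threads_to_signal(uint16_t entry_exit_map)
    {
        uint8_t entered = entry_exit_map & 0xff;
        uint8_t exiting = (entry_exit_map >> 8) & 0xff;
        return entered & ~exiting;
    }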
-
Paul Mackerras authored
We can tell when a secondary thread has finished running a guest by the fact that it clears its kvm_hstate.kvm_vcpu pointer, so there is no real need for the nap_count field in the kvmppc_vcore struct. This changes kvmppc_wait_for_nap to poll the kvm_hstate.kvm_vcpu pointers of the secondary threads rather than polling vc->nap_count. Besides reducing the size of the kvmppc_vcore struct by 8 bytes, this also means that we can tell which secondary threads have got stuck and thus print a more informative error message.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Paul Mackerras authored
Rather than calling cond_resched() in kvmppc_run_core() before doing the post-processing for the vcpus that we have just run (that is, calling kvmppc_handle_exit_hv(), kvmppc_set_timer(), etc.), we now do that post-processing before calling cond_resched(), and that post-processing is moved out into its own function, post_guest_process(). The reschedule point is now in kvmppc_run_vcpu() and we define a new vcore state, VCORE_PREEMPT, to indicate that the vcore's runner task is runnable but not running. (Doing the reschedule with the vcore in VCORE_INACTIVE state would be bad because there are potentially other vcpus waiting for the runner in kvmppc_wait_for_exec() which then wouldn't get woken up.) Also, we make use of the handy cond_resched_lock() function, which unlocks and relocks vc->lock for us around the reschedule.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
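cond_resched_lock() is a stock kernel helper; a hedged sketch of the pattern (the state check is illustrative, not the patch's exact code):

    spin_lock(&vc->lock);
    /* ... vcore bookkeeping under the lock ... */
    if (vc->vcore_state == VCORE_PREEMPT)   /* illustrative condition */
        cond_resched_lock(&vc->lock);       /* unlock, resched if needed, relock */
    /* ... continue with vc->lock held ... */
    spin_unlock(&vc->lock);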
-
Paul Mackerras authored
* Remove unused kvmppc_vcore::n_busy field.
* Remove setting of RMOR, since it was only used on PPC970 and the PPC970 KVM support has been removed.
* Don't use r1 or r2 in setting the runlatch since they are conventionally reserved for other things; use r0 instead.
* Streamline the code a little and remove the ext_interrupt_to_host label.
* Add some comments about register usage.
* hcall_try_real_mode doesn't need to be global, and can't be called from C code anyway.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Paul Mackerras authored
Previously, if kvmppc_run_core() was running a VCPU that needed a VPA update (i.e. one of its 3 virtual processor areas needed to be pinned in memory so the host real mode code can update it on guest entry and exit), we would drop the vcore lock and do the update there and then. Future changes will make it inconvenient to drop the lock, so instead we now remove it from the list of runnable VCPUs and wake up its VCPU task. This will have the effect that the VCPU task will exit kvmppc_run_vcpu(), go around the do loop in kvmppc_vcpu_run_hv(), and re-enter kvmppc_run_vcpu(), whereupon it will do the necessary call to kvmppc_update_vpas() and then rejoin the vcore.

The one complication is that the runner VCPU (whose VCPU task is the current task) might be one of the ones that gets removed from the runnable list. In that case we just return from kvmppc_run_core() and let the code in kvmppc_run_vcpu() wake up another VCPU task to be the runner if necessary. This all means that the VCORE_STARTING state is no longer used, so we remove it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Paul Mackerras authored
This reads the timebase at various points in the real-mode guest entry/exit code and uses that to accumulate total, minimum and maximum time spent in those parts of the code. Currently these times are accumulated per vcpu in 5 parts of the code:

* rm_entry - time taken from the start of kvmppc_hv_entry() until just before entering the guest.
* rm_intr - time from when we take a hypervisor interrupt in the guest until we either re-enter the guest or decide to exit to the host. This includes time spent handling hcalls in real mode.
* rm_exit - time from when we decide to exit the guest until the return from kvmppc_hv_entry().
* guest - time spent in the guest.
* cede - time spent napping in real mode due to an H_CEDE hcall while other threads in the same vcore are active.

These times are exposed in debugfs in a directory per vcpu that contains a file called "timings". This file contains one line for each of the 5 timings above, with the name followed by a colon and 4 numbers, which are the count (number of times the code has been executed), the total time, the minimum time, and the maximum time, all in nanoseconds. The overhead of the extra code amounts to about 30ns for an hcall that is handled in real mode (e.g. H_SET_DABR), which is about 25%. Since production environments may not wish to incur this overhead, the new code is conditional on a new config symbol, CONFIG_KVM_BOOK3S_HV_EXIT_TIMING.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
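A hedged sketch of the count/total/min/max accumulation described above (names and layout are illustrative):

    #include <stdint.h>

    struct vcpu_timing {
        uint64_t count, total_ns, min_ns, max_ns;
    };

    /* Fold one measured interval into the running statistics. */
    static void timing_account(struct vcpu_timing *t, uint64_t ns)
    {
        t->count++;
        t->total_ns += ns;
        if (t->count == 1 || ns < t->min_ns)
            t->min_ns = ns;
        if (ns > t->max_ns)
            t->max_ns = ns;
    }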
-
Paul Mackerras authored
This creates a debugfs directory for each HV guest (assuming debugfs is enabled in the kernel config), and within that directory, a file by which the contents of the guest's HPT (hashed page table) can be read. The directory is named vmnnnn, where nnnn is the PID of the process that created the guest. The file is named "htab". This is intended to help in debugging problems in the host's management of guest memory. The contents of the file consist of a series of lines like this:

3f48 4000d032bf003505 0000000bd7ff1196 00000003b5c71196

The first field is the index of the entry in the HPT, the second and third are the HPT entry, so the third entry contains the real page number that is mapped by the entry if the entry's valid bit is set. The fourth field is the guest's view of the second doubleword of the entry, so it contains the guest physical address. (The format of the second through fourth fields is described in the Power ISA and also in arch/powerpc/include/asm/mmu-hash64.h.)

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Aneesh Kumar K.V authored
This adds helper routines for locking and unlocking HPTEs, and uses them in the rest of the code. We don't change any locking rules in this patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
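For flavour, a self-contained sketch of the general technique: locking a 64-bit HPTE word by setting a software lock bit with compare-and-swap. The bit position and names are assumptions, not the kernel's actual helpers:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdatomic.h>

    #define HPTE_LOCK_BIT (1ULL << 62)   /* hypothetical software-lock bit */

    /* Try to take the per-HPTE lock by atomically setting the lock bit. */
    static bool try_lock_hpte(_Atomic uint64_t *hpte_v)
    {
        uint64_t old = atomic_load(hpte_v);
        if (old & HPTE_LOCK_BIT)
            return false;                 /* already held */
        return atomic_compare_exchange_strong(hpte_v, &old,
                                              old | HPTE_LOCK_BIT);
    }

    /* Release the lock by clearing the bit. */
    static void unlock_hpte(_Atomic uint64_t *hpte_v)
    {
        atomic_fetch_and(hpte_v, ~HPTE_LOCK_BIT);
    }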
-
Aneesh Kumar K.V authored
We don't support real-mode areas now that 970 support is removed. Remove the remaining details of rma from the code. Also rename rma_setup_done to hpte_setup_done to better reflect the changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
-
Michael Ellerman authored
Some PowerNV systems include a hardware random-number generator. This HWRNG is present on POWER7+ and POWER8 chips and is capable of generating one 64-bit random number every microsecond. The random numbers are produced by sampling a set of 64 unstable high-frequency oscillators and are almost completely entropic.

PAPR defines an H_RANDOM hypercall which guests can use to obtain one 64-bit random sample from the HWRNG. This adds a real-mode implementation of the H_RANDOM hypercall. This hypercall was implemented in real mode because the latency of reading the HWRNG is generally small compared to the latency of a guest exit and entry for all the threads in the same virtual core.

Userspace can detect the presence of the HWRNG and the H_RANDOM implementation by querying the KVM_CAP_PPC_HWRNG capability. The H_RANDOM hypercall implementation will only be invoked when the guest does an H_RANDOM hypercall if userspace first enables the in-kernel H_RANDOM implementation using the KVM_CAP_PPC_ENABLE_HCALL capability.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
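A hedged userspace sketch of the detect-then-enable flow described above, following the KVM API conventions for KVM_CAP_PPC_ENABLE_HCALL (the hcall number is passed in by the caller, since H_RANDOM's value lives in kernel headers):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static void enable_h_random(int kvm_fd, int vm_fd, unsigned long h_random_nr)
    {
        /* Probe for the in-kernel HWRNG-backed H_RANDOM implementation. */
        if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_HWRNG) <= 0)
            return;

        struct kvm_enable_cap cap = {
            .cap  = KVM_CAP_PPC_ENABLE_HCALL,
            .args = { h_random_nr, 1 },   /* hcall number, 1 = enable */
        };
        ioctl(vm_fd, KVM_ENABLE_CAP, &cap);   /* VM-wide opt-in */
    }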
-
David Gibson authored
On POWER, storage caching is usually configured via the MMU - attributes such as cache-inhibited are stored in the TLB and the hashed page table. This makes correctly performing cache-inhibited IO accesses awkward when the MMU is turned off (real mode). Some CPU models provide special registers to control the cache attributes of real mode loads and stores, but this is not at all consistent. This is a problem in particular for SLOF, the firmware used on KVM guests, which runs entirely in real mode, but which needs to do IO to load the kernel.

To simplify this, qemu implements two special hypercalls, H_LOGICAL_CI_LOAD and H_LOGICAL_CI_STORE, which simulate a cache-inhibited load or store to a logical address (aka guest physical address). SLOF uses these for IO. However, because these are implemented within qemu, not the host kernel, they bypass any IO devices emulated within KVM itself. The simplest way to see this problem is to attempt to boot a KVM guest from a virtio-blk device with iothread / dataplane enabled. The iothread code relies on an in-kernel implementation of the virtio queue notification, which is not triggered by the IO hcalls, and so the guest will stall in SLOF, unable to load the guest OS.

This patch addresses this by providing in-kernel implementations of the 2 hypercalls, which correctly scan the KVM IO bus. Any access to an address not handled by the KVM IO bus will cause a VM exit, hitting the qemu implementation as before. Note that a userspace change is also required, in order to enable these new hcall implementations with KVM_CAP_PPC_ENABLE_HCALL.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix compilation]
Signed-off-by: Alexander Graf <agraf@suse.de>
-
- Apr 17, 2015
-
Kees Cook authored
Switch to using the newly created asm-generic/seccomp.h for the seccomp strict mode syscall definitions. The obsolete sigreturn in COMPAT mode is retained as an override. Remaining definitions are identical, though they incorrectly appeared in uapi, which has been corrected.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Aneesh Kumar K.V authored
For THP that is marked trans splitting, we return the pte. This requires the callers to handle the pmd_trans_splitting scenario, if they care. All the current callers are either looking at pfn or write_ok, hence we don't need to update them.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
We can disable a THP split or a hugepage collapse by disabling irq. We do send IPI to all the cpus in the early part of split/collapse, and disabling local irq ensures we don't make progress with split/collapse. If the THP is getting split we return NULL from find_linux_pte_or_hugepte(). For all the current callers it should be ok. We need to be careful if we want to use the returned pte_t pointer outside the irq disabled region.

W.r.t. THP split, the pfn remains the same, but then a hugepage collapse will result in a pfn change. There are a few steps we can take to avoid a hugepage collapse. One way is to take a page reference inside the irq disable region. Another option is to take mmap_sem so that a parallel collapse will not happen. We can also disable collapse by taking pmd_lock. Another method used by the kvm subsystem is to check whether we had a mmu_notifier update in between using mmu_notifier_retry().

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
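A hedged sketch of the caller pattern the commit describes, with IRQs disabled around both the walk and any use of the returned pte (the helper's signature is taken from the commit text and may differ; READ_ONCE stands in for whatever annotation the code actually uses):

    unsigned long flags;
    unsigned int shift;
    pte_t *ptep, pte;

    local_irq_save(flags);                  /* holds off the split/collapse IPIs */
    ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift);
    if (ptep) {
        pte = READ_ONCE(*ptep);             /* snapshot while still protected */
        /* ... look at pfn / write permission here, inside the region ... */
    }
    local_irq_restore(flags);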
-
Aneesh Kumar K.V authored
This patch removes helpers which we had used only once in the code. Limiting page table walk variants helps in ensuring that we won't end up with code walking the page table with wrong assumptions.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
pte can get updated from other CPUs as part of multiple activities like THP split, huge page collapse, unmap. We need to make sure we don't reload the pte value again and again for different checks.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
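In other words: snapshot the pte once and test the snapshot. A minimal sketch (illustrative, not the patch's exact code):

    pte_t pte = READ_ONCE(*ptep);   /* read the pte exactly once */

    /* Every subsequent check sees one consistent value, even if another
     * CPU rewrites *ptep (THP split, collapse, unmap) in the meantime. */
    if (pte_present(pte) && pte_write(pte))
        pfn = pte_pfn(pte);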
-
- Apr 14, 2015
-
Kees Cook authored
The arch_randomize_brk() function is used on several architectures, even those that don't support ET_DYN ASLR. To avoid bulky extern/#define tricks, consolidate the support under CONFIG_ARCH_HAS_ELF_RANDOMIZE for the architectures that support it, while still handling CONFIG_COMPAT_BRK.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Russell King <linux@arm.linux.org.uk>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: "David A. Long" <dave.long@linaro.org>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Arun Chandran <achandran@mvista.com>
Cc: Yann Droneaud <ydroneaud@opteya.com>
Cc: Min-Hua Chen <orca.chen@gmail.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Alex Smith <alex@alex-smith.me.uk>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Vineeth Vijayan <vvijayan@mvista.com>
Cc: Jeff Bailey <jeffbailey@google.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Behan Webster <behanw@converseincode.com>
Cc: Ismael Ripoll <iripoll@upv.es>
Cc: Jan-Simon Möller <dl9pf@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Apr 12, 2015
-
Richard Weinberger authored
Signed-off-by: Richard Weinberger <richard@nod.at>
-
- Apr 11, 2015
-
Anton Blanchard authored
The hard lockup detector uses a PMU event as a periodic NMI to detect if we are stuck (where stuck means no timer interrupts have occurred). Ben's rework of the ppc64 soft disable code has made ppc64 PMU exceptions a partial NMI. They can get disabled if an external interrupt comes in, but otherwise PMU interrupts will fire in interrupt disabled regions.

We disable the hard lockup detector by default for a few reasons:

- It breaks userspace event based branches on POWER8.
- It is likely to produce false positives on KVM guests.
- Since PMCs can only count to 2^31, counting cycles means we might take multiple PMU exceptions per second per hardware thread even if our hard lockup timeout is 10 seconds.

It can be enabled via a boot option, or via procfs.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
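For illustration, the generic lockup-detector knobs this most likely refers to (assumed, since the commit doesn't name them): booting with nmi_watchdog=1 on the kernel command line, or writing 1 to /proc/sys/kernel/nmi_watchdog at runtime.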
-
Cyril Bur authored
This change adds the OPAL interface definitions to allow Linux to read, write and erase from system flash devices. We register platform devices for the flash devices exported by firmware. We clash with the existing opal_flash_init function, which is really for the FSP flash update functionality, so we rename that initcall to opal_flash_update_init(). A future change will add an mtd driver that uses this interface. Changes from Joel Stanley and Jeremy Kerr.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Joel Stanley <joel@jms.id.au>
Acked-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Sam Bobroff authored
This patch changes the syscall handler to doom (tabort) active transactions when a syscall is made and return immediately without performing the syscall. Currently, the system call instruction automatically suspends an active transaction which causes side effects to persist when an active transaction fails. This does change the kernel's behaviour, but in a way that was documented as unsupported. It doesn't reduce functionality because syscalls will still be performed after tsuspend. It also provides a consistent interface and makes the behaviour of user code substantially the same across powerpc and platforms that do not support suspended transactions (e.g. x86 and s390). Performance measurements using http://ozlabs.org/~anton/junkcode/null_syscall.c indicate the cost of a system call increases by about 0.5%.

Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Remove shims, patch callsites to use pci_controller_ops versions instead. Also move back the probe mode defines, as explained in the patch for pci_probe_mode.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
If a pci_controller_ops struct is provided to iommu_init_early_dart, populate that with the DMA setup ops, rather than ppc_md. If NULL is provided, populate ppc_md as before. This also patches the call sites for Maple and Power Mac to pass NULL, so existing behaviour is preserved. The benefit of making this optional is that it means we don't have to change dart, Maple and Power Mac over to the controller_ops system in one fell swoop.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Add pci_controller_ops.reset_secondary_bus, shadowing ppc_md.pcibios_reset_secondary_bus. Add a shim, and change the callsites to use the shim. Use pcibios_reset_secondary_bus_shim, as both pcibios_reset_secondary_bus and pci_reset_secondary_bus are already taken.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Add pci_controller_ops.window_alignment, shadowing ppc_md.pcibios_window_alignment. Add a shim, and change the callsites to use the shim. Here, we use pci_window_alignment, as pcibios_window_alignment is already taken.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Add pci_controller_ops.enable_device_hook, shadowing ppc_md.pcibios_enable_device_hook. Add a shim, and change the callsites to use the shim.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Add pci_controller_ops.probe_mode, shadowing ppc_md.pci_probe_mode. Add a shim, and change the callsites to use the shim. We also need to move the probe mode defines to pci-bridge.h from pci.h. They are required by the shim in order to return a sensible default. Previously, they were defined in pci.h, but pci.h includes pci-bridge.h before the relevant #defines. This means the definitions are absent if pci.h is included before pci-bridge.h. This occurs in some drivers. So, move the definitions now, and move them back when we remove the shim. Anything that wants the defines would have had to include pci.h, and since pci.h includes pci-bridge.h, nothing will lose access to the defines.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Add pci_controller_ops.dma_bus_setup, shadowing ppc_md.pci_dma_bus_setup. Add a shim, and change the callsites to use the shim.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
Introduces the pci_controller_ops structure. Add pci_controller_ops.dma_dev_setup, shadowing ppc_md.pci_dma_dev_setup. Add a shim, and change the callsites to use the shim.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
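A hedged sketch of the shim pattern this series introduces: prefer the per-controller op, fall back to the legacy global ppc_md hook. Names follow the commit text; exact signatures are assumptions:

    static inline void pci_dma_dev_setup(struct pci_dev *dev)
    {
        struct pci_controller *phb = pci_bus_to_host(dev->bus);

        if (phb->controller_ops.dma_dev_setup)
            phb->controller_ops.dma_dev_setup(dev);   /* new per-PHB op */
        else if (ppc_md.pci_dma_dev_setup)
            ppc_md.pci_dma_dev_setup(dev);            /* legacy global hook */
    }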
-
Daniel Axtens authored
pcibios_enable_device_hook returned an int. Every implementation returned either -EINVAL or 0. The return value wasn't propagated by the caller: any non-zero return value caused pcibios_enable_device to return -EINVAL itself. Therefore, make the hook return a bool.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
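A hedged sketch of what the caller might look like after this change (the fallthrough to the generic enable path is an assumption):

    /* Hook signature after the change: */
    bool (*pcibios_enable_device_hook)(struct pci_dev *dev);

    /* The caller maps a false return to -EINVAL, as the int version did. */
    int pcibios_enable_device(struct pci_dev *dev, int mask)
    {
        if (ppc_md.pcibios_enable_device_hook &&
            !ppc_md.pcibios_enable_device_hook(dev))
            return -EINVAL;

        return pci_enable_resources(dev, mask);
    }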
-
Daniel Axtens authored
Previously, find_and_init_phbs() was used in both PowerNV and pSeries setup. However, since RTAS support has been dropped from PowerNV, we can move it into a platform-specific file.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- Apr 10, 2015
-
Michael Ellerman authored
smp_ops->probe() is currently supposed to return the number of cpus in the system. The last actual usage of the value was removed in May 2007 in e147ec8f "[POWERPC] Simplify smp_space_timers". We still passed the value around until June 2010 when even that was finally removed in c1aa687d "powerpc: Clean up obsolete code relating to decrementer and timebase". So drop that requirement, probe() now returns void, and update all implementations.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
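The signature change, sketched (struct and member per the commit text; surrounding fields omitted):

    struct smp_ops_t {
        void (*probe)(void);   /* was: int (*probe)(void); count no longer used */
        /* ... other ops ... */
    };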
-
Michael Ellerman authored
We have a powerpc specific global called mem_init_done which is "set on boot once kmalloc can be called". But that's not *quite* true. We set it at the bottom of mem_init(), and rely on the fact that mm_init() calls kmem_cache_init() immediately after that, and nothing is running in parallel. So replace it with the generic and 100% correct slab_is_available().

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
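An illustrative caller of the substitution; the memblock fallback is an assumption about what such early-boot code typically does, not the patch's code:

    void *early_or_slab_alloc(size_t size)
    {
        if (slab_is_available())              /* generic, always correct */
            return kmalloc(size, GFP_KERNEL);

        /* before slab is up, fall back to the boot-time allocator */
        return memblock_virt_alloc(size, 0);
    }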
-
Michael Ellerman authored
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Fix the 32-bit code also]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- Apr 07, 2015
-
Michael Ellerman authored
The celleb code has seen no actual development for ~7 years. We (maintainers) have no access to test hardware, and it is highly likely the code has bit-rotted. As far as we're aware the hardware was never widely available, and is certainly no longer available, and no one on the list has shown any interest in it over the years. So remove it. If anyone has one and cares please speak up.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
-
- Apr 01, 2015
-
Kevin Hao authored
The cache line size of all current book3e 64-bit SoCs is 64 bytes, so we should use this size to align the members of paca_struct. This only changes the paca_struct members which are private to book3e CPUs, and should not have any effect on book3s ones. With this, we save 192 bytes. Also change it to __aligned(size), since that is preferred over __attribute__((aligned(size))).

Before:
  /* size: 1920, cachelines: 30, members: 46 */
  /* sum members: 1667, holes: 6, sum holes: 141 */
  /* padding: 112 */

After:
  /* size: 1728, cachelines: 27, members: 46 */
  /* sum members: 1667, holes: 4, sum holes: 13 */
  /* padding: 48 */

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
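A hedged sketch of the spelling change; the struct and field names are illustrative, not paca_struct's actual members:

    #define BOOK3E_CACHE_LINE 64   /* per the commit text */

    struct paca_sketch {
        /* preferred kernel spelling: */
        unsigned long hot[4] __aligned(BOOK3E_CACHE_LINE);
        /* rather than: __attribute__((aligned(BOOK3E_CACHE_LINE))) */
    };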
-