- Dec 16, 2013
-
-
Lorenzo Pieralisi authored
When CPU idle is enabled, the architectural idle call should go through the CPU idle subsystem so that CPUs can enter the idle states defined by the platform CPU idle back-end operations. This patch, mirroring other architectures' behaviour, adds the CPU idle call to the architectural arch_cpu_idle implementation for arm64. Acked-by:
Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
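A minimal sketch of what such a hook-up looks like, assuming the cpuidle_idle_call()/cpu_do_idle() helpers of that kernel generation (illustrative, not a verbatim copy of the patch):

    void arch_cpu_idle(void)
    {
            /*
             * Try the CPU idle subsystem first so platform back-end idle
             * states can be entered; fall back to the architectural WFI.
             */
            if (cpuidle_idle_call())
                    cpu_do_idle();
            local_irq_enable();
    }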
-
Lorenzo Pieralisi authored
On platforms with power management capabilities, timers that are shut down when a CPU enters deep C-states must be emulated using an always-on timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP system. This patch enables the generic clockevents broadcast infrastructure for arm64, by providing the required Kconfig entries and adding the timer IPI infrastructure. Acked-by:
Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
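A rough sketch of the relay path, assuming the generic tick_broadcast()/tick_receive_broadcast() hooks; the handle_timer_ipi() wrapper name is an illustration, not the patch's own:

    #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
    /* Called by the clockevents core to relay a timer event to @mask. */
    void tick_broadcast(const struct cpumask *mask)
    {
            smp_cross_call(mask, IPI_TIMER);
    }
    #endif

    /* On the receiving CPU, the IPI handler hands the event back to the core. */
    static void handle_timer_ipi(void)
    {
            irq_enter();
            tick_receive_broadcast();
            irq_exit();
    }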
-
Lorenzo Pieralisi authored
When a CPU is shut down through either CPU idle or suspend to RAM, the contents of the HW breakpoint registers must be reset or restored to proper values when the CPU resumes from the low power state. This patch adds debug register restore operations to the HW breakpoint control function and implements a CPU PM notifier that restores the contents of the HW breakpoint registers, allowing proper suspend/resume operations. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
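A hedged sketch of the notifier pattern described above; hw_breakpoint_reset() stands in for whatever per-CPU restore routine the port provides:

    static int hw_breakpoint_cpu_pm_notify(struct notifier_block *self,
                                           unsigned long action, void *v)
    {
            /* On return from a low power state, reprogram the debug registers. */
            if (action == CPU_PM_EXIT)
                    hw_breakpoint_reset(NULL);
            return NOTIFY_OK;
    }

    static struct notifier_block hw_breakpoint_cpu_pm_nb = {
            .notifier_call = hw_breakpoint_cpu_pm_notify,
    };

    /* registered once at boot with cpu_pm_register_notifier(&hw_breakpoint_cpu_pm_nb) */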
-
Lorenzo Pieralisi authored
Most of the code executed to install and uninstall breakpoints is common and can be factored out into a single function that selects the requested behaviour through a runtime operations type. This patch creates a common function that can be used to install/uninstall breakpoints and defines the set of operations that can be carried out through it. Reviewed-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
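The factoring can be pictured roughly like this (the enum and function names are assumptions used for illustration):

    enum hw_breakpoint_ops {
            HW_BREAKPOINT_INSTALL,
            HW_BREAKPOINT_UNINSTALL,
    };

    static int hw_breakpoint_control(struct perf_event *bp,
                                     enum hw_breakpoint_ops ops)
    {
            switch (ops) {
            case HW_BREAKPOINT_INSTALL:
                    /* claim a debug register slot, write value/control registers */
                    break;
            case HW_BREAKPOINT_UNINSTALL:
                    /* release the slot and clear the control register */
                    break;
            }
            return 0;
    }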
-
Lorenzo Pieralisi authored
When a CPU enters a low power state, its FP register content is lost. This patch adds a notifier to save the FP context on CPU shutdown and restore it on CPU resume. The context is saved and restored only if the suspending thread is not a kernel thread, mirroring the current context switch behaviour. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
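A sketch of the described notifier, assuming the existing fpsimd_save_state()/fpsimd_load_state() helpers and using current->mm as the "not a kernel thread" test, as the context switch code does:

    static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
                                      unsigned long cmd, void *v)
    {
            switch (cmd) {
            case CPU_PM_ENTER:
                    if (current->mm)        /* skip kernel threads */
                            fpsimd_save_state(&current->thread.fpsimd_state);
                    break;
            case CPU_PM_EXIT:
                    if (current->mm)
                            fpsimd_load_state(&current->thread.fpsimd_state);
                    break;
            }
            return NOTIFY_OK;
    }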
-
Lorenzo Pieralisi authored
Kernel subsystems like CPU idle and suspend to RAM require a generic mechanism to suspend a processor, save its context and put it into a quiescent state. The cpu_{suspend}/{resume} implementation provides such a framework through a kernel interface that allows saving/restoring registers, flushing the context to DRAM and suspending/resuming to/from low-power states where processor context may be lost. The CPU suspend implementation relies on the suspend protocol registered in CPU operations to carry out a suspend request after context is saved and flushed to DRAM. The cpu_suspend interface:

    int cpu_suspend(unsigned long arg);

allows passing an opaque parameter that is handed over to the suspend CPU operations back-end so that it can take action according to the semantics attached to it. The arg parameter allows suspend to RAM and CPU idle drivers to communicate with suspend protocol back-ends; it requires standardization so that the interface can be reused seamlessly across systems, paving the way for generic drivers. Context memory is allocated on the stack, whose address is stashed in a per-cpu variable to keep track of it and passed to core functions that save/restore the registers required by the architecture. Even though, upon successful execution, the cpu_suspend function shuts down the suspending processor, the warm boot resume mechanism, based on the cpu_resume function, makes the resume path operate as a cpu_suspend function return, so that cpu_suspend can be treated as a C function by the caller, which simplifies coding the PM drivers that rely on the cpu_suspend API. Upon context save, the minimal amount of memory is flushed to DRAM so that it can be retrieved when the MMU is off and caches are not searched. The suspend CPU operation, depending on the required operations (e.g. CPU vs cluster shutdown), is in charge of flushing the cache hierarchy either implicitly (by calling firmware implementations like PSCI) or explicitly by executing the required cache maintenance functions. Debug exceptions are disabled during cpu_{suspend}/{resume} operations so that debug registers can be saved and restored properly, preventing pre-emption by debug agents enabled in the kernel. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
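From a caller's point of view the interface behaves like an ordinary C function; a hedged usage sketch for a CPU idle driver (driver and state-index names are purely illustrative):

    static int my_enter_deep_idle(struct cpuidle_device *dev,
                                  struct cpuidle_driver *drv, int idx)
    {
            /*
             * The opaque argument is handed to the suspend back-end; here it
             * simply carries the requested idle state index.
             */
            int ret = cpu_suspend(idx);

            /* Execution resumes here via cpu_resume(), as if cpu_suspend returned. */
            return ret ? -1 : idx;
    }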
-
Lorenzo Pieralisi authored
Power management software requires the kernel to save and restore CPU registers while going through suspend and resume operations triggered by kernel subsystems like CPU idle and suspend to RAM. This patch implements code that provides a save and restore mechanism for the ARMv8 implementation. Memory for the context is passed as a parameter to both the cpu_do_suspend and cpu_do_resume functions, allowing the callers to implement context allocation as they deem fit. The registers that are saved and restored correspond to the register set actually required for the kernel to be up and running, which represents a subset of the v8 ISA. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
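The calling convention can be sketched as follows; the structure layout and helper prototypes are assumptions shown only to illustrate that the caller owns the context memory:

    struct cpu_suspend_ctx {
            u64 ctx_regs[NR_CTX_REGS];      /* system registers to preserve */
            u64 sp;                         /* stack pointer at suspend time */
    } __aligned(sizeof(u64));

    struct cpu_suspend_ctx ctx;             /* allocated wherever the caller wants */
    cpu_do_suspend(&ctx);                   /* save the required register subset */
    /* ... cpu_do_resume() later restores it on the warm boot path ... */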
-
Lorenzo Pieralisi authored
On ARM64 SMP systems, cores are identified by their MPIDR_EL1 register. The MPIDR_EL1 guidelines in the ARM ARM do not strictly enforce an MPIDR_EL1 layout; they only provide recommendations that, if followed, split the MPIDR_EL1 on ARM 64-bit platforms into four affinity levels. In multi-cluster systems like big.LITTLE, if the affinity guidelines are followed, the MPIDR_EL1 cannot be considered a linear index. This means that the association between a logical CPU in the kernel and the HW CPU identifier becomes somewhat more complicated, requiring methods like hashing to associate a given MPIDR_EL1 with a CPU logical index so that the look-up can be carried out in an efficient and scalable way. This patch provides a function in the kernel that, starting from the cpu_logical_map, implements collision-free hashing of MPIDR_EL1 values by checking all significant bits of the MPIDR_EL1 affinity level bitfields. The hashing can then be carried out through bit shifting and ORing; the resulting hash algorithm is a collision-free though not minimal hash that can be executed with few assembly instructions. The mpidr_el1 is filtered through an mpidr mask that is built by checking all bits that toggle in the set of MPIDR_EL1s corresponding to possible CPUs. Bits that do not toggle do not carry information, so they do not contribute to the resulting hash.

Pseudo code:

    /* check all bits that toggle, so they are required */
    for (i = 1, mpidr_el1_mask = 0; i < num_possible_cpus(); i++)
            mpidr_el1_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));

    /*
     * Build shifts to be applied to aff0, aff1, aff2, aff3 values to hash
     * the mpidr_el1
     * fls() returns the last bit set in a word, 0 if none
     * ffs() returns the first bit set in a word, 0 if none
     */
    fs0 = mpidr_el1_mask[7:0] ? ffs(mpidr_el1_mask[7:0]) - 1 : 0;
    fs1 = mpidr_el1_mask[15:8] ? ffs(mpidr_el1_mask[15:8]) - 1 : 0;
    fs2 = mpidr_el1_mask[23:16] ? ffs(mpidr_el1_mask[23:16]) - 1 : 0;
    fs3 = mpidr_el1_mask[39:32] ? ffs(mpidr_el1_mask[39:32]) - 1 : 0;
    ls0 = fls(mpidr_el1_mask[7:0]);
    ls1 = fls(mpidr_el1_mask[15:8]);
    ls2 = fls(mpidr_el1_mask[23:16]);
    ls3 = fls(mpidr_el1_mask[39:32]);
    bits0 = ls0 - fs0;
    bits1 = ls1 - fs1;
    bits2 = ls2 - fs2;
    bits3 = ls3 - fs3;
    aff0_shift = fs0;
    aff1_shift = 8 + fs1 - bits0;
    aff2_shift = 16 + fs2 - (bits0 + bits1);
    aff3_shift = 32 + fs3 - (bits0 + bits1 + bits2);

    u32 hash(u64 mpidr_el1) {
            u32 l[4];
            u64 mpidr_el1_masked = mpidr_el1 & mpidr_el1_mask;
            l[0] = mpidr_el1_masked & 0xff;
            l[1] = mpidr_el1_masked & 0xff00;
            l[2] = mpidr_el1_masked & 0xff0000;
            l[3] = mpidr_el1_masked & 0xff00000000;
            return (l[0] >> aff0_shift | l[1] >> aff1_shift |
                    l[2] >> aff2_shift | l[3] >> aff3_shift);
    }

The hashing algorithm relies on the inherent properties set in the ARM ARM recommendations for the MPIDR_EL1. Exotic configurations, where for instance the MPIDR_EL1 values at a given affinity level have large holes, can end up requiring big hash tables, since the compression of values that can be achieved through shifting is somewhat crippled when holes are present. The kernel warns if the number of buckets of the resulting hash table exceeds the number of possible CPUs by a factor of 4, which is a symptom of a very sparse HW MPIDR_EL1 configuration. The hash algorithm is quite simple and can easily be implemented in assembly code, to be used in code paths where the kernel virtual address space is not set up (i.e. cpu_resume) and instruction and data fetches are strongly ordered, so code must be compact and must carry out few data accesses. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Lorenzo Pieralisi authored
In order to simplify access to different affinity levels within the MPIDR_EL1 register value, this patch implements some preprocessor macros that allow retrieving the MPIDR_EL1 affinity level value corresponding to the level passed as an input parameter. Reviewed-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
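A minimal sketch of such accessors; treat the exact names and encodings as illustrative. Note that affinity level 3 lives in MPIDR_EL1 bits [39:32], so the per-level shift is not simply level * 8:

    #define MPIDR_LEVEL_BITS        8
    #define MPIDR_LEVEL_MASK        ((1 << MPIDR_LEVEL_BITS) - 1)

    /* Aff0/1/2 sit at bits 0/8/16, Aff3 at bit 32. */
    #define MPIDR_LEVEL_SHIFT(level)        (((1 << (level)) >> 1) << 3)

    #define MPIDR_AFFINITY_LEVEL(mpidr, level) \
            (((mpidr) >> MPIDR_LEVEL_SHIFT(level)) & MPIDR_LEVEL_MASK)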
-
- Dec 06, 2013
-
-
Steve Capper authored
Modify the value of PMD_SECT_PROT_NONE to match that of PTE_PROT_NONE. This should have been in commit 3676f9ef (Move PTE_PROT_NONE higher up). Signed-off-by:
Steve Capper <steve.capper@linaro.org> Cc: <stable@vger.kernel.org> # 3.11+: 3676f9ef: arm64: Move PTE_PROT_NONE higher up Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
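In other words, the section-level flag has to reuse the same software bit as the pte-level one; a hedged sketch (the bit position is the one used by that era's headers and is shown only for illustration):

    #define PTE_PROT_NONE           (_AT(pteval_t, 1) << 58)  /* only when !PTE_VALID */
    #define PMD_SECT_PROT_NONE      (_AT(pmdval_t, 1) << 58)  /* must mirror PTE_PROT_NONE */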
-
Catalin Marinas authored
Write-combine and cacheable mappings use Normal memory on arm64. On SMP systems, the pte needs the shareability bit which is set in pgprot_default. Use this for defining PROT_DEFAULT used by ioremap_wc and ioremap_cache (Device memory is shareable by default, does not need additional attributes). Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Lorenzo Pieralisi authored
The refactoring of el2_setup split the code setting up EL2 and the code detecting the CPU boot mode into separate chunks. This allows the code that sets up EL2 to run in an endian-independent way, i.e. before the endianness is set up in the respective sctlr registers. This patch brings secondary_entry up to date so that CPUs entering the kernel through this code path set up EL2 and the CPU boot mode properly. Signed-off-by:
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Rob Herring authored
Rather than continue to add per platform defaults, make the default a likely common core count. 8 is also the default for x86. Signed-off-by:
Rob Herring <rob.herring@calxeda.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Currently there is no dsb between the tlbi in __cpu_setup and the write to SCTLR_EL1 which enables the MMU in __turn_mmu_on. This means that the TLB invalidation is not guaranteed to have completed at the point address translation is enabled, leading to a number of possible issues including incorrect translations and TLB conflict faults. This patch moves the tlbi in __cpu_setup above an existing dsb used to synchronise I-cache invalidation, ensuring that the TLBs have been invalidated at the point the MMU is enabled. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 29, 2013
-
-
Catalin Marinas authored
PTE_PROT_NONE means that a pte is present but does not have any read/write attributes. However, setting the memory type like pgprot_writecombine() is allowed and such bits overlap with PTE_PROT_NONE. This causes mmap/munmap issues in drivers that change the vma->vm_page_prot on PROT_NONE mappings. This patch reverts the PTE_FILE/PTE_PROT_NONE shift in commit 59911ca4 (ARM64: mm: Move PTE_PROT_NONE bit) and moves PTE_PROT_NONE together with the other software bits. Signed-off-by:
Steve Capper <steve.capper@linaro.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Tested-by:
Steve Capper <steve.capper@linaro.org> Cc: <stable@vger.kernel.org> # 3.11+
-
Catalin Marinas authored
Mapping write-combine memory as Normal Non-cacheable provides better performance compared to Device GRE and also allows unaligned accesses. Such memory is intended to be used with standard RAM (e.g. framebuffers) and not I/O. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 28, 2013
-
-
Matthew Leach authored
The current breakpoint instruction checking code for A32 is not endian clean. Fix this with appropriate byte-swapping when retrieving instructions. Signed-off-by:
Matthew Leach <matthew.leach@arm.com> Reviewed-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
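The essence of the fix is to convert the fetched opcode from its little-endian in-memory layout to native order before comparing it; a sketch (error handling elided, types simplified):

    u32 arm_instr;
    u16 thumb_instr;

    /* A32/T32 instructions are stored little-endian irrespective of data endianness. */
    get_user(arm_instr, (u32 __user *)pc);
    arm_instr = le32_to_cpu(arm_instr);

    get_user(thumb_instr, (u16 __user *)pc);
    thumb_instr = le16_to_cpu(thumb_instr);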
-
Matthew Leach authored
On a BE system the wrong half of the X registers is retrieved/written when attempting to get/set the value of aarch32 registers through ptrace. Ensure that types are the correct width so that the relevant casting occurs. Signed-off-by:
Matthew Leach <matthew.leach@arm.com> Reviewed-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 25, 2013
-
-
Catalin Marinas authored
Asynchronous aborts are generally fatal for the kernel but they can be masked via the pstate A bit. If a system error happens while in kernel mode, it won't be visible until returning to user space. This patch unmasks this kind of abort early to help identify the cause. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
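Unmasking boils down to clearing the A bit in PSTATE.DAIF; a sketch of the kind of helper this relies on (the daifclr immediate targets bit 2, the SError/abort mask):

    /* Clear PSTATE.A so asynchronous (SError) aborts are taken immediately. */
    #define local_async_enable()    asm volatile("msr daifclr, #4" : : : "memory")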
-
Catalin Marinas authored
With the spin-table SMP booting method, secondary CPUs poll a location passed in the DT. The foundation-v8.dts file doesn't have this memory reserved and there is a risk of Linux using it before secondary CPUs are started. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Commit f27dde8d (sched: Add NEED_RESCHED to the preempt_count) introduced the use of bit 31 in preempt_count for obscure scheduling purposes. This causes interrupts taken from EL0 to hit the (open coded) BUG when this flag is flipped while handling the interrupt (we compare the values before and after, and kill the kernel if they are different). The fix is to stop messing with the preempt count entirely, as this is already being dealt with in the generic code (irq_enter/irq_exit). Tested on a dual A53 FPGA running cyclictest. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 15, 2013
-
-
Christoph Hellwig authored
We've switched over every architecture that supports SMP to it, so remove the now-useless config variable. Signed-off-by:
Christoph Hellwig <hch@lst.de> Cc: Jan Kara <jack@suse.cz> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Kirill A. Shutemov authored
Signed-off-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Nov 13, 2013
-
-
Thomas Gleixner authored
No point in having this bit defined by architecture. Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de
-
Jianguo Wu authored
Use more appropriate NUMA_NO_NODE instead of -1 in all archs' module_alloc() Signed-off-by:
Jianguo Wu <wujianguo@huawei.com> Acked-by:
David Rientjes <rientjes@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Nov 09, 2013
-
-
Al Viro authored
Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Chen Gang authored
In the current kernel-wide source code, apart from other architectures' own code, only the s390 SCSI drivers use atomic_clear_mask(), and arm/arm64 need not support s390 drivers, so remove atomic_clear_mask() from "arm[64]/include/asm/atomic.h". Signed-off-by:
Chen Gang <gang.chen@asianux.com> Signed-off-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Russell King <rmk+kernel@arm.linux.org.uk>
-
- Nov 08, 2013
-
-
Stefano Stabellini authored
Signed-off-by:
Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by:
Olof Johansson <olof@lixom.net> Signed-off-by:
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- Nov 07, 2013
-
-
Marc Zyngier authored
When booting a vcpu using PSCI, make sure we start it with the endianness of the caller. Otherwise, secondaries can be pretty unhappy to execute a BE kernel in LE mode... This conforms to PSCI spec Rev B, 5.13.3. Acked-by:
Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
Do the necessary byteswap when host and guest have different views of the universe. Actually, the only case we need to take care of is when the guest is BE. All the other cases are naturally handled. Also be careful about endianness when the data is being memcopy-ed from/to the run buffer. Acked-by:
Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Sudeep Holla authored
The non-IPI interrupts are displayed only for the online cpus from show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts(). As a result, the column headers and the IPI count don't match if any CPU is offline. This patch fixes show_ipi_list to display IPIs for online CPUs only. Signed-off-by:
Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
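The gist of the fix is to iterate over online rather than present CPUs when printing each IPI row; a sketch with helper names as in the arm64 IRQ statistics code (treat them as assumptions):

    for (i = 0; i < NR_IPI; i++) {
            seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i, prec >= 4 ? " " : "");
            for_each_online_cpu(cpu)        /* was: for_each_present_cpu(cpu) */
                    seq_printf(p, "%10u ", __get_irq_stat(cpu, ipi_irqs[i]));
            seq_printf(p, "      %s\n", ipi_types[i]);
    }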
-
- Nov 06, 2013
-
-
Catalin Marinas authored
Commit 52ea2a56 (arm64: locks: introduce ticket-based spinlock implementation) introduces the arch_spin_is_contended() function making CONFIG_GENERIC_LOCKBREAK unnecessary. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Will Deacon <will.deacon@arm.com>
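With ticket locks, contention can be read straight off the lock word, which is why the generic lock-break machinery becomes unnecessary; a sketch (field names assumed to follow the owner/next ticket layout):

    static inline int arch_spin_is_contended(arch_spinlock_t *lock)
    {
            arch_spinlock_t lockval = ACCESS_ONCE(*lock);

            /* more than one ticket outstanding means at least one waiter */
            return (lockval.next - lockval.owner) > 1;
    }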
-
Marc Zyngier authored
Ensure that accesses to the GICH_* registers are byteswapped when the kernel is compiled as big-endian. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Force SCTLR_EL2.EE to 1 if the kernel is compiled as BE. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 05, 2013
-
-
T.J. Purtell authored
The ARM architecture reference specifies that the IT state bits in the PSR must be all zeros in ARM mode or behavior is unspecified. If an ARM function is registered as a signal handler, and that signal is delivered inside a block of instructions following an IT instruction, some of the instructions at the beginning of the signal handler may be skipped if the IT state bits of the Program Status Register are not cleared by the kernel. Signed-off-by:
T.J. Purtell <tj@mobisocial.us> [catalin.marinas@arm.com: code comment and commit log updated] Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
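Conceptually the fix clears the Thumb IT bits from the saved PSR when the signal frame is set up; a hedged sketch (the mask name and helper are assumptions for illustration):

    static u32 sanitise_compat_spsr(u32 spsr)
    {
            /* IT[7:0] must be zero when the handler starts executing. */
            return spsr & ~COMPAT_PSR_IT_MASK;
    }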
-
Catalin Marinas authored
This patch expands the VA_BITS to 42 when the 64K page configuration is enabled allowing 2TB kernel linear mapping. Linux still uses 2 levels of page tables in this configuration with pgd now being a full page. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Will Deacon <will.deacon@arm.com> Acked-by:
Marc Zyngier <marc.zyngier@arm.com>
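In header terms this amounts to a configuration-dependent define along these lines (a sketch; the 4K-page value is shown only for contrast):

    #ifdef CONFIG_ARM64_64K_PAGES
    #define VA_BITS         (42)    /* 64K pages: 2 page table levels, pgd is a full page */
    #else
    #define VA_BITS         (39)
    #endif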
-
Will Deacon authored
Relocations that require an instruction immediate to be re-encoded must ensure that the instruction pattern is represented in a little-endian format for the manipulation code to work correctly. This patch converts the loaded instruction into native endianness prior to encoding and then converts back to little-endian byte order before updating memory. Signed-off-by:
Will Deacon <will.deacon@arm.com> Tested-by:
Matthew Leach <matthew.leach@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
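The pattern described is: load the instruction, byte-swap it to native order, patch the immediate, and swap back before storing; roughly (a sketch, with the immediate-encoding helper treated as hypothetical):

    __le32 *place = loc;                      /* location being relocated */
    u32 insn = le32_to_cpu(*place);           /* to native order for manipulation */

    insn = encode_insn_immediate(insn, imm);  /* hypothetical helper */

    *place = cpu_to_le32(insn);               /* back to the little-endian in-memory form */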
-
Catalin Marinas authored
This way we can spot early bugs when just testing with the default config. Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
preempt_count is defined as an int. Oddly enough, we access it as a 64-bit value. Things become interesting when running a BE kernel and looking at the current CPU number, which is stored as an int next to preempt_count. Like in a per-cpu interrupt handler, for example... Using a 32-bit access fixes the issue for good. Cc: Matthew Leach <matthew.leach@arm.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- Nov 04, 2013
-
-
Marc Zyngier authored
Commit 53ae3acd (arm64: Only enable local interrupts after the CPU is marked online) moved the enabling of the GIC after the CPUs are marked online. This has some interesting effects:

    [...]
    [<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
    [<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
    [<ffffffc0000c8728>] resched_task+0x84/0xc0
    [<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
    [<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
    [<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
    [<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
    [<ffffffc0000cae6c>] default_wake_function+0x10/0x18
    [<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
    [<ffffffc0000c7784>] complete+0x48/0x64
    [<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
    [...]

Here, we end up calling gic_raise_softirq without having initialized the interrupt controller for this CPU. While this goes unnoticed with GICv2 (the distributor is always accessible), it explodes with GICv3. The fix is to move the call to notify_cpu_starting before we set the secondary CPU online. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
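The ordering fix in the secondary boot path is essentially (a sketch of the relevant sequence only):

    /*
     * Run CPU_STARTING notifiers (which initialise the GIC CPU interface)
     * before anyone can observe this CPU online and IPI it.
     */
    notify_cpu_starting(cpu);

    set_cpu_online(cpu, true);      /* only now may other CPUs send us IPIs */
    local_irq_enable();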
-