- Sep 28, 2016
-
-
Aleksey Makarov authored
SBBR mentions SPCR as a mandatory ACPI table, so enable it for ARM64. Earlycon should be set up as early as possible. ACPI boot tables are mapped in arch/arm64/kernel/acpi.c:acpi_boot_table_init(), which is called from setup_arch(), and that's where we parse SPCR. So it has to be opted-in per-arch. When ACPI_SPCR_TABLE is defined, initialization of the DT earlycon is deferred until the DT/ACPI decision is done. Initialize the DT earlycon if ACPI is disabled. Acked-by:
Will Deacon <will.deacon@arm.com> Acked-by:
Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by:
Aleksey Makarov <aleksey.makarov@linaro.org> Tested-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Tested-by:
Christopher Covington <cov@codeaurora.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
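A rough sketch of the decision point described above, at the end of acpi_boot_table_init(); the guard structure and the way the deferral flag is passed are approximations, not a quote of the patch:

```c
/* arch/arm64/kernel/acpi.c (sketch) */
void __init acpi_boot_table_init(void)
{
        /* ... discover and map the ACPI tables, decide acpi_disabled ... */

        if (acpi_disabled) {
                /*
                 * ACPI is not used: fall back to the DT stdout-path for
                 * earlycon, whose setup was deferred until this decision.
                 */
                if (IS_ENABLED(CONFIG_ACPI_SPCR_TABLE))
                        early_init_dt_scan_chosen_stdout();
        } else {
                /* ACPI is used: let SPCR parsing set up earlycon instead. */
                parse_spcr(true);
        }
}
```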
-
Mark Rutland authored
As with dsb() and isb(), add a __tlbi() helper so that we can avoid distracting asm boilerplate every time we want a TLBI. As some TLBI operations take an argument while others do not, a little preprocessor logic is used to handle these two cases with different assembly blocks. The existing tlbflush.h code is moved over to use the helper. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> [ rename helper to __tlbi, update comment and commit log ] Signed-off-by:
Punit Agrawal <punit.agrawal@arm.com> Reviewed-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
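The helper boils down to a small amount of preprocessor dispatch on the argument count; a sketch along the lines of the arm64 tlbflush.h code:

```c
/* Dispatch on whether an address argument was supplied. */
#define __TLBI_0(op, arg)               asm ("tlbi " #op)
#define __TLBI_1(op, arg)               asm ("tlbi " #op ", %0" : : "r" (arg))
#define __TLBI_N(op, arg, n, ...)       __TLBI_##n(op, arg)

#define __tlbi(op, ...)                 __TLBI_N(op, ##__VA_ARGS__, 1, 0)

/*
 * Usage:
 *      __tlbi(vmalle1is);              whole-VA-space invalidate
 *      __tlbi(vae1is, addr);           invalidate by VA
 */
```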
-
Kefeng Wang authored
arm64 has forced CONFIG_SMP=y since commit 4b3dc967, so there is no need to add an SMP dependency for NUMA. Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- Sep 26, 2016
-
-
Kefeng Wang authored
Move the OF_NUMA select under the NUMA config option, and select ACPI_NUMA when ACPI is enabled. Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
In some places, dump_backtrace() is called with a NULL tsk parameter, e.g. in bug_handler() in arch/arm64, or indirectly via show_stack() in core code. The expectation is that this is treated as if current were passed instead of NULL. Similar is true of unwind_frame(). Commit a80a0eb7 ("arm64: make irq_stack_ptr more robust") didn't take this into account. In dump_backtrace() it compares tsk against current *before* we check if tsk is NULL, and in unwind_frame() we never set tsk if it is NULL. Due to this, we won't initialise irq_stack_ptr in either function. In dump_backtrace() this results in calling dump_mem() for memory immediately above the IRQ stack range, rather than for the relevant range on the task stack. In unwind_frame we'll reject unwinding frames on the IRQ stack. In either case this results in incomplete or misleading backtrace information, but is not otherwise problematic. The initial percpu areas (including the IRQ stacks) are allocated in the linear map, and dump_mem uses __get_user(), so we shouldn't access anything with side-effects, and will handle holes safely. This patch fixes the issue by having both functions handle the NULL tsk case before doing anything else with tsk. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Fixes: a80a0eb7 ("arm64: make irq_stack_ptr more robust") Acked-by:
James Morse <james.morse@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yang Shi <yang.shi@linaro.org> Signed-off-by:
Will Deacon <will.deacon@arm.com>
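The shape of the fix is simply to normalise tsk before it is used for anything else; a minimal sketch (the surrounding unwind and IRQ-stack logic is elided, and is an assumption of this sketch rather than a quote of the patch):

```c
static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
{
        struct stackframe frame;
        unsigned long irq_stack_ptr;

        /* Treat a NULL tsk as current *before* comparing against current. */
        if (!tsk)
                tsk = current;

        /* Only trust the IRQ stack pointer when unwinding the current task. */
        if (tsk == current && !preemptible())
                irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
        else
                irq_stack_ptr = 0;

        /* ... set up frame and walk the stack ... */
}
```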
-
- Sep 23, 2016
-
-
Scott Wood authored
Instead of comparing the name to a magic string, use archdata to explicitly communicate whether the arch timer is suitable for direct vdso access. Acked-by:
Will Deacon <will.deacon@arm.com> Acked-by:
Russell King <rmk+kernel@armlinux.org.uk> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Scott Wood <oss@buserror.net> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Scott Wood authored
Erratum A-008585 says that the ARM generic timer counter "has the potential to contain an erroneous value for a small number of core clock cycles every time the timer value changes". Accesses to TVAL (both read and write) are also affected due to the implicit counter read. Accesses to CVAL are not affected. The workaround is to reread the TVAL and count registers until successive reads return the same value, and when writing TVAL to retry until counter reads before and after the write return the same value. The workaround is enabled if the fsl,erratum-a008585 property is found in the timer node in the device tree. This can be overridden with the clocksource.arm_arch_timer.fsl-a008585 boot parameter, which allows KVM users to enable the workaround until a mechanism is implemented to automatically communicate this information. This erratum can be found on LS1043A and LS2080A. Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Scott Wood <oss@buserror.net> [will: renamed read macro to reflect that it's not usually unstable] Signed-off-by:
Will Deacon <will.deacon@arm.com>
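The read side of the workaround is a retry loop that rereads the register until two successive reads agree; a sketch, assuming the macro name used in the driver and a bounded retry budget:

```c
/*
 * Reread the register until two successive reads return the same value.
 * The erroneous value only persists for a few clock cycles, so a small
 * retry budget is plenty; warn if it is ever exhausted.
 */
#define __fsl_a008585_read_reg(reg) ({                          \
        u64 _old, _new;                                         \
        int _retries = 200;                                     \
                                                                \
        do {                                                    \
                _old = read_sysreg(reg);                        \
                _new = read_sysreg(reg);                        \
                _retries--;                                     \
        } while (unlikely(_old != _new) && _retries);           \
                                                                \
        WARN_ON_ONCE(!_retries);                                \
        _new;                                                   \
})
```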
-
AKASHI Takahiro authored
Handle read-only cases when CONFIG_DEBUG_RODATA (4.0) or CONFIG_DEBUG_SET_MODULE_RONX (3.18) are enabled by using aarch64_insn_write() instead of probe_kernel_write() as introduced by commit 2f896d58 ("arm64: use fixmap for text patching") in 4.0. Fixes: 11d91a77 ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support") Signed-off-by:
AKASHI Takahiro <takahiro.akashi@linaro.org> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Jason Wessel <jason.wessel@windriver.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
David Daney authored
The wq_numa_init() function makes a private CPU to node map by calling cpu_to_node() early in the boot process, before the non-boot CPUs are brought online. Since the default implementation of cpu_to_node() returns zero for CPUs that have never been brought online, the workqueue system's view is that *all* CPUs are on node zero. When the unbound workqueue for a non-zero node is created, the tsk_cpus_allowed() for the worker threads is the empty set because there are, in the view of the workqueue system, no CPUs on non-zero nodes. The code in try_to_wake_up() using this empty cpumask ends up using the cpumask empty set value of NR_CPUS as an index into the per-CPU area pointer array, and gets garbage as it is one past the end of the array. This results in: [ 0.881970] Unable to handle kernel paging request at virtual address fffffb1008b926a4 [ 1.970095] pgd = fffffc00094b0000 [ 1.973530] [fffffb1008b926a4] *pgd=0000000000000000, *pud=0000000000000000, *pmd=0000000000000000 [ 1.982610] Internal error: Oops: 96000004 [#1] SMP [ 1.987541] Modules linked in: [ 1.990631] CPU: 48 PID: 295 Comm: cpuhp/48 Tainted: G W 4.8.0-rc6-preempt-vol+ #9 [ 1.999435] Hardware name: Cavium ThunderX CN88XX board (DT) [ 2.005159] task: fffffe0fe89cc300 task.stack: fffffe0fe8b8c000 [ 2.011158] PC is at try_to_wake_up+0x194/0x34c [ 2.015737] LR is at try_to_wake_up+0x150/0x34c [ 2.020318] pc : [<fffffc00080e7468>] lr : [<fffffc00080e7424>] pstate: 600000c5 [ 2.027803] sp : fffffe0fe8b8fb10 [ 2.031149] x29: fffffe0fe8b8fb10 x28: 0000000000000000 [ 2.036522] x27: fffffc0008c63bc8 x26: 0000000000001000 [ 2.041896] x25: fffffc0008c63c80 x24: fffffc0008bfb200 [ 2.047270] x23: 00000000000000c0 x22: 0000000000000004 [ 2.052642] x21: fffffe0fe89d25bc x20: 0000000000001000 [ 2.058014] x19: fffffe0fe89d1d00 x18: 0000000000000000 [ 2.063386] x17: 0000000000000000 x16: 0000000000000000 [ 2.068760] x15: 0000000000000018 x14: 0000000000000000 [ 2.074133] x13: 0000000000000000 x12: 0000000000000000 [ 2.079505] x11: 0000000000000000 x10: 0000000000000000 [ 2.084879] x9 : 0000000000000000 x8 : 0000000000000000 [ 2.090251] x7 : 0000000000000040 x6 : 0000000000000000 [ 2.095621] x5 : ffffffffffffffff x4 : 0000000000000000 [ 2.100991] x3 : 0000000000000000 x2 : 0000000000000000 [ 2.106364] x1 : fffffc0008be4c24 x0 : ffffff0ffffada80 [ 2.111737] [ 2.113236] Process cpuhp/48 (pid: 295, stack limit = 0xfffffe0fe8b8c020) [ 2.120102] Stack: (0xfffffe0fe8b8fb10 to 0xfffffe0fe8b90000) [ 2.125914] fb00: fffffe0fe8b8fb80 fffffc00080e7648 . . . 
[ 2.442859] Call trace: [ 2.445327] Exception stack(0xfffffe0fe8b8f940 to 0xfffffe0fe8b8fa70) [ 2.451843] f940: fffffe0fe89d1d00 0000040000000000 fffffe0fe8b8fb10 fffffc00080e7468 [ 2.459767] f960: fffffe0fe8b8f980 fffffc00080e4958 ffffff0ff91ab200 fffffc00080e4b64 [ 2.467690] f980: fffffe0fe8b8f9d0 fffffc00080e515c fffffe0fe8b8fa80 0000000000000000 [ 2.475614] f9a0: fffffe0fe8b8f9d0 fffffc00080e58e4 fffffe0fe8b8fa80 0000000000000000 [ 2.483540] f9c0: fffffe0fe8d10000 0000000000000040 fffffe0fe8b8fa50 fffffc00080e5ac4 [ 2.491465] f9e0: ffffff0ffffada80 fffffc0008be4c24 0000000000000000 0000000000000000 [ 2.499387] fa00: 0000000000000000 ffffffffffffffff 0000000000000000 0000000000000040 [ 2.507309] fa20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 [ 2.515233] fa40: 0000000000000000 0000000000000000 0000000000000000 0000000000000018 [ 2.523156] fa60: 0000000000000000 0000000000000000 [ 2.528089] [<fffffc00080e7468>] try_to_wake_up+0x194/0x34c [ 2.533723] [<fffffc00080e7648>] wake_up_process+0x28/0x34 [ 2.539275] [<fffffc00080d3764>] create_worker+0x110/0x19c [ 2.544824] [<fffffc00080d69dc>] alloc_unbound_pwq+0x3cc/0x4b0 [ 2.550724] [<fffffc00080d6bcc>] wq_update_unbound_numa+0x10c/0x1e4 [ 2.557066] [<fffffc00080d7d78>] workqueue_online_cpu+0x220/0x28c [ 2.563234] [<fffffc00080bd288>] cpuhp_invoke_callback+0x6c/0x168 [ 2.569398] [<fffffc00080bdf74>] cpuhp_up_callbacks+0x44/0xe4 [ 2.575210] [<fffffc00080be194>] cpuhp_thread_fun+0x13c/0x148 [ 2.581027] [<fffffc00080dfbac>] smpboot_thread_fn+0x19c/0x1a8 [ 2.586929] [<fffffc00080dbd64>] kthread+0xdc/0xf0 [ 2.591776] [<fffffc0008083380>] ret_from_fork+0x10/0x50 [ 2.597147] Code: b00057e1 91304021 91005021 b8626822 (b8606821) [ 2.603464] ---[ end trace 58c0cd36b88802bc ]--- [ 2.608138] Kernel panic - not syncing: Fatal exception Fix by moving call to numa_store_cpu_info() for all CPUs into smp_prepare_cpus(), which happens before wq_numa_init(). Since smp_store_cpu_info() now contains only a single function call, simplify by removing the function and out-lining its contents. Suggested-by:
Robert Richter <rric@kernel.org> Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.") Cc: <stable@vger.kernel.org> # 4.7.x- Signed-off-by:
David Daney <david.daney@cavium.com> Reviewed-by:
Robert Richter <rrichter@cavium.com> Tested-by:
Yisheng Xie <xieyisheng1@huawei.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
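The fix amounts to recording NUMA information for every possible CPU in smp_prepare_cpus(), which runs before wq_numa_init(); a minimal sketch of the relevant loop (surrounding per-CPU boot preparation elided):

```c
/* arch/arm64/kernel/smp.c (sketch) */
void __init smp_prepare_cpus(unsigned int max_cpus)
{
        int cpu;

        /* ... */

        for_each_possible_cpu(cpu) {
                /*
                 * Record the CPU-to-node mapping now, so that cpu_to_node()
                 * returns real data by the time wq_numa_init() builds its
                 * per-node masks.
                 */
                numa_store_cpu_info(cpu);

                if (cpu == smp_processor_id())
                        continue;

                /* ... per-CPU boot preparation ... */
        }
}
```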
-
- Sep 22, 2016
-
-
Vladimir Murzin authored
This patch allows vgic-v3 to be built and used in 32-bit mode. Unfortunately, it cannot be split into several steps without extra stubs to keep patches independent and bisectable. For instance, virt/kvm/arm/vgic/vgic-v3.c uses functions from vgic-v3-sr.c, and handling access to the GICv3 cpu interface from the guest requires vgic_v3.vgic_sre to be already defined. This is how support has been done: * handle SGI requests from the guest * report configured SRE on access to GICv3 cpu interface from the guest * required vgic-v3 macros are provided via uapi.h * static keys are used to select GIC backend * to make vgic-v3 build KVM_ARM_VGIC_V3 guard is removed along with the static inlines Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Reviewed-by:
Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@linaro.org>
-
Vladimir Murzin authored
By now, the ITS code is guarded by the KVM_ARM_VGIC_V3 config option, which was introduced to hide everything specific to vgic-v3 from the 32-bit world. We are going to support vgic-v3 in the 32-bit world and KVM_ARM_VGIC_V3 will be gone, but we don't have support for ITS there yet and we need to continue keeping ITS away. Introduce a new config option to prevent the ITS code from being built in 32-bit mode when support for vgic-v3 is done. Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@linaro.org>
-
Vladimir Murzin authored
So we can reuse the code under arch/arm. Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@linaro.org>
-
Vladimir Murzin authored
Since we are going to share the vgic-v3 save/restore code with ARM, keep the arch-specific accessors separate. Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Acked-by:
Christoffer Dall <christoffer.dall@linaro.org> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@linaro.org>
-
Vladimir Murzin authored
Currently the GIC backend is selected via the alternative framework, and this is fine. We are going to introduce vgic-v3 to the 32-bit world, and there we don't have a patching framework in hand, so we can either check support for GICv3 every time we need to choose which backend to use, or try to optimise it by using static keys. The latter looks quite promising because we can share the logic involved in selecting the GIC backend between architectures if both use static keys. This patch moves arm64 from the alternative framework to static keys for selecting the GIC backend. For that we embed a static key into vgic_global and enable the key during vgic initialisation based on what has already been exposed by the host GIC driver. Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by:
Christoffer Dall <christoffer.dall@linaro.org>
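A sketch of the static-key approach; the vgic_global field and its initialisation follow the description in the commit message, while the callers shown (save-state dispatch and the init-time enable helper) are illustrative names, not necessarily the exact functions touched by the patch:

```c
struct vgic_global {
        /* ... */
        /* Flipped on during vgic init if the host GIC driver exposed GICv3. */
        struct static_key_false gicv3_cpuif;
};

struct vgic_global kvm_vgic_global_state = {
        .gicv3_cpuif = STATIC_KEY_FALSE_INIT,
};

/* Backend selection on a hot path, with no runtime code patching required. */
static void __vgic_save_state(struct kvm_vcpu *vcpu)
{
        if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif))
                __vgic_v3_save_state(vcpu);
        else
                __vgic_v2_save_state(vcpu);
}

/* During initialisation, based on what the host GIC driver probed: */
static void vgic_enable_gicv3_key(void)
{
        static_branch_enable(&kvm_vgic_global_state.gicv3_cpuif);
}
```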
-
Laura Abbott authored
virt_addr_valid is supposed to return true if and only if virt_to_page returns a valid page structure. The current macro does math on whatever address is given and passes that to pfn_valid to verify. vmalloc and module addresses can happen to generate a pfn that 'happens' to be valid. Fix this by only performing the pfn_valid check on addresses that have the potential to be valid. Acked-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Laura Abbott <labbott@redhat.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
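The shape of the fix is to gate the pfn_valid() check on the address actually lying in the linear map; a sketch in the style of the arm64 memory.h macros (helper names approximate):

```c
/* Only linear-map addresses can be converted to a meaningful pfn. */
#define _virt_addr_is_linear(kaddr)     (((u64)(kaddr)) >= PAGE_OFFSET)

#define _virt_addr_valid(kaddr)         pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

/*
 * vmalloc/module addresses may still happen to yield a "valid" pfn, so
 * reject anything outside the linear map before consulting pfn_valid().
 */
#define virt_addr_valid(kaddr)          (_virt_addr_is_linear(kaddr) && \
                                         _virt_addr_valid(kaddr))
```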
-
- Sep 20, 2016
-
-
Paul Gortmaker authored
These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by:
Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- Sep 19, 2016
-
-
Sebastian Andrzej Siewior authored
Install the callbacks via the state machine. Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by:
Will Deacon <will.deacon@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: rt@linutronix.de Cc: linux-arm-kernel@lists.infradead.org Link: http://lkml.kernel.org/r/20160906170457.32393-2-bigeasy@linutronix.de Signed-off-by:
Thomas Gleixner <tglx@linutronix.de>
-
- Sep 16, 2016
-
-
Jeremy Linton authored
Move the PMU name into a common header file so it may be referenced by other users. Signed-off-by:
Jeremy Linton <jeremy.linton@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Jeremy Linton authored
ARMv8 machines can identify the micro/arch defined counters that are available on a machine. Add all these counters to the default armv8 perf map. At run-time disable the counters which are not available on the given PMU. Signed-off-by:
Jeremy Linton <jeremy.linton@arm.com> Acked-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Mark Salter authored
In preparation for ACPI support, add a pmu_probe_info table to the arm_pmu_device_probe() call. This table gets used when probing in the absence of a devicetree node for PMU. Signed-off-by:
Mark Salter <msalter@redhat.com> Signed-off-by:
Jeremy Linton <jeremy.linton@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
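A sketch of what such a fallback table looks like, using the pmu_probe_info/PMU_PROBE interface from linux/perf/arm_pmu.h; a catch-all entry (cpuid and mask both zero) matches any MIDR when no devicetree node identifies the PMU:

```c
/* Match any CPU and fall back to the generic PMUv3 init. */
static const struct pmu_probe_info armv8_pmu_probe_table[] = {
        PMU_PROBE(0, 0, armv8_pmuv3_init),
        { /* sentinel */ }
};

static int armv8_pmu_device_probe(struct platform_device *pdev)
{
        /* The probe table is only consulted when there is no DT match. */
        return arm_pmu_device_probe(pdev, armv8_pmu_of_device_ids,
                                    armv8_pmu_probe_table);
}
```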
-
- Sep 15, 2016
-
-
David A. Long authored
Kprobes searches backwards a finite number of instructions to determine if there is an attempt to probe a load/store exclusive sequence. It stops when it hits the maximum number of instructions or a load or store exclusive. However this means it can run up past the beginning of the function and start looking at literal constants. This has been shown to cause a false positive and blocks insertion of the probe. To fix this, further limit the backwards search to stop if it hits a symbol address from kallsyms. The presumption is that this is the entry point to this code (particularly for the common case of placing probes at the beginning of functions). This also improves efficiency by not searching code that is not part of the function. There may be some possibility that the label might not denote the entry path to the probed instruction but the likelihood seems low and this is just another example of how the kprobes user really needs to be careful about what they are doing. Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
David A. Long <dave.long@linaro.org> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- Sep 14, 2016
-
-
Marc Zyngier authored
The ARM architected timer specification mandates that the interrupt associated with each timer is level triggered (which corresponds to the "counter >= comparator" condition). A number of DTs are being remarkably creative, declaring the interrupt to be edge triggered. A quick look at the TRM for the corresponding ARM CPUs clearly shows that this is wrong, and I've corrected those. For non-ARM designs (and in the absence of a publicly available TRM), I've made them active low as well, which can't be completely wrong as the GIC cannot distinguish between level low and level high. The respective maintainers are of course welcome to prove me wrong. While I was at it, I took the liberty to fix a couple of related issues, such as some spurious affinity bits on ThunderX, and their complete absence on ls1043a (both of which seem to be related to copy-pasting from other DTs). Acked-by:
Duc Dang <dhdang@apm.com> Acked-by:
Carlo Caione <carlo@endlessm.com> Acked-by:
Michal Simek <michal.simek@xilinx.com> Acked-by:
Krzysztof Kozlowski <k.kozlowski@samsung.com> Acked-by:
Dinh Nguyen <dinguyen@opensource.altera.com> Acked-by:
Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
- Sep 13, 2016
-
-
Ard Biesheuvel authored
The AES-CTR glue code avoids calling into the blkcipher API for the tail portion of the walk, by comparing the remainder of walk.nbytes modulo AES_BLOCK_SIZE with the residual nbytes, and jumping straight into the tail processing block if they are equal. This tail processing block checks whether nbytes != 0, and does nothing otherwise. However, in case of an allocation failure in the blkcipher layer, we may enter this code with walk.nbytes == 0, while nbytes > 0. In this case, we should not dereference the source and destination pointers, since they may be NULL. So instead of checking for nbytes != 0, check for (walk.nbytes % AES_BLOCK_SIZE) != 0, which implies the former in non-error conditions. Fixes: 49788fe2 ("arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions") Cc: stable@vger.kernel.org Reported-by:
xiakaixu <xiakaixu@huawei.com> Signed-off-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Sep 12, 2016
-
-
Daniel Thompson authored
Currently, when running on FVP, CPU 0 boots up with its BPR changed from the reset value. This renders it impossible to (preemptively) prioritize interrupts on CPU 0. This is harmless on normal systems since Linux typically does not support preemptive interrupts. It does however cause problems in systems with additional changes (such as patches for NMI simulation). Many thanks to Andrew Thoelke for suggesting the BPR as having the potential to harm preemption. Suggested-by:
Andrew Thoelke <andrew.thoelke@arm.com> Signed-off-by:
Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com>
-
Mark Rutland authored
Make use of the new alternative_if and alternative_else_nop_endif and get rid of our open-coded NOP sleds, making the code simpler to read. Note that for __kvm_call_hyp the branch to __vhe_hyp_call has been moved out of the alternative sequence, and in the default case there will be four additional NOPs executed. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: kvmarm@lists.cs.columbia.edu Acked-by:
Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Make use of the new alternative_if and alternative_else_nop_endif and get rid of our homebrew NOP sleds, making the code simpler to read. Note that for cpu_do_switch_mm the ret has been moved out of the alternative sequence, and in the default case there will be three additional NOPs executed. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
In some cases, one side of an alternative sequence is simply a number of NOPs used to balance the other side. Keeping track of this manually is tedious, and the presence of large chains of NOPs makes the code more painful to read than necessary. To ameliorate matters, this patch adds a new alternative_else_nop_endif, which automatically balances an alternative sequence with a trivial NOP sled. In many cases, we would like a NOP sled in the default case, with instructions patched in in the presence of a feature. To enable the NOPs to be generated automatically for this case, this patch also adds a new alternative_if, and updates alternative_else and alternative_endif to work with either alternative_if or alternative_if_not. Cc: Andre Przywara <andre.przywara@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: James Morse <james.morse@arm.com> Signed-off-by:
Mark Rutland <mark.rutland@arm.com> [will: use new nops macro to generate nop sequences] Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- Sep 09, 2016
-
-
Will Deacon authored
The LSE atomics are implemented using alternative code sequences of different lengths, and explicit NOP padding is used to ensure the patching works correctly. This patch converts the bulk of the LSE code over to using the __nops macro, which makes it slightly clearer as to what is going on and also consolidates all of the padding at the end of the various sequences. Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
NOP sequences tend to get used for padding out alternative sections and uarch-specific pipeline flushes in errata workarounds. This patch adds macros for generating these sequences both as inline asm blocks and as strings suitable for embedding in other asm blocks directly. Signed-off-by:
Will Deacon <will.deacon@arm.com>
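The macros are tiny; a sketch of the two forms described above (a string usable inside larger asm sequences, and a standalone inline-asm wrapper):

```c
/* As a string, for embedding in larger asm sequences or alternatives. */
#define __nops(n)       ".rept  " #n "\nnop\n.endr\n"

/* As a standalone inline asm block. */
#define nops(n)         asm volatile(__nops(n))
```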
-
Will Deacon authored
Similar to our {read,write}_sysreg accessors for architected, named system registers, this patch introduces {read,write}_sysreg_s variants that can take arbitrary sys_reg output and therefore access IMPDEF registers or registers not supported by binutils. Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
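A sketch of the _s variants; they lean on mrs_s/msr_s assembler macros (defined elsewhere in sysreg.h) that emit the raw encoding for registers binutils does not know about, so treat the details as approximate:

```c
#define read_sysreg_s(r) ({                                             \
        u64 __val;                                                      \
        asm volatile("mrs_s %0, " __stringify(r) : "=r" (__val));       \
        __val;                                                          \
})

#define write_sysreg_s(v, r) do {                                       \
        u64 __val = (u64)(v);                                           \
        asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val)); \
} while (0)
```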
-
Ian Campbell authored
The ../../../arm... style cross-references added by commit 9d56c22a ("ARM: bcm2835: Add devicetree for the Raspberry Pi 3.") do not work in the context of the split device-tree repository[0] (where the directory structure differs). As with commit 8ee57b81 ("ARM64: dts: vexpress: Use a symlink to vexpress-v2m-rs1.dtsi from arch=arm") use symlinks instead. [0] https://git.kernel.org/cgit/linux/kernel/git/devicetree/devicetree-rebasing.git/ Signed-off-by:
Ian Campbell <ijc@hellion.org.uk> Acked-by:
Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Rob Herring <robh+dt@kernel.org> Cc: Frank Rowand <frowand.list@gmail.com> Cc: Eric Anholt <eric@anholt.net> Cc: Stephen Warren <swarren@wwwdotorg.org> Cc: Lee Jones <lee@kernel.org> Cc: Gerd Hoffmann <kraxel@redhat.com> Cc: devicetree@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-rpi-kernel@lists.infradead.org Cc: arm@kernel.org Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
Robin Murphy authored
We've grown our own versions of bug.h, ftrace.h, pci.h and topology.h, so generating the generic ones as well is unnecessary and a potential source of build hiccups. At the very least, having them present has confused my source-indexing tool, and that simply will not do. Signed-off-by:
Robin Murphy <robin.murphy@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Systems with differing CPU i-cache/d-cache line sizes can cause problems with the cache management by software when the execution is migrated from one to another. Usually, the application reads the cache size on a CPU and then uses that length to perform cache operations. However, if it gets migrated to another CPU with a smaller cache line size, things could go completely wrong. To prevent such cases, always use the smallest cache line size among the CPUs. The kernel CPU feature infrastructure already keeps track of the safe value for all CPUID registers including CTR. This patch works around the problem as follows: for the kernel, dynamically patch it to read the cache size from the system wide copy of CTR_EL0; for applications, trap read accesses to CTR_EL0 (by clearing SCTLR.UCT) and emulate the mrs instruction to return the system wide safe value of CTR_EL0. For faster access (i.e., avoiding a lookup of the system wide value of CTR_EL0 via read_system_reg), we keep track of the pointer to the table entry for CTR_EL0 in the CPU feature infrastructure. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Right now we trap some of the user space data cache operations based on a few Errata (ARM 819472, 826319, 827319 and 824069). We need to trap userspace access to CTR_EL0, if we detect mismatched cache line size. Since both these traps share the EC, refactor the handler a little bit to make it a bit more reader friendly. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
On systems with mismatched i/d cache min line sizes, we need to use the smallest size possible across all CPUs. This will be done by fetching the system wide safe value from the CPU feature infrastructure. However, some special users (e.g. kexec, hibernate) need the line size of the current CPU (rather than the system wide value), either when the system wide value may not be accessible or when the caller is guaranteed to execute without migration. Provide another helper which will fetch the cache line size on the current CPU. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by:
James Morse <james.morse@arm.com> Reviewed-by:
Geoff Levand <geoff@infradead.org> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
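The per-CPU value is decoded straight from this CPU's CTR_EL0, whose DminLine field is a log2 word count. The helpers added by the patch are assembler macros; the computation is equivalent to this C-flavoured sketch (names illustrative):

```c
/* C-flavoured sketch; the patch itself provides asm macros for .S files. */
static inline u32 raw_dcache_line_size(void)
{
        u32 ctr = read_cpuid_cachetype();       /* this CPU's CTR_EL0 */
        u32 dminline = (ctr >> 16) & 0xf;       /* log2(line size in words) */

        return 4U << dminline;                  /* line size in bytes */
}
```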
-
Suzuki K Poulose authored
adrp uses a PC-relative address offset to a page (of 4K size) of a symbol. If it appears in alternative code that is patched in, we should adjust the offset to reflect the address where it will be run from. This patch adds support for fixing up the offset for adrp instructions. Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Adds helpers for decoding/encoding the PC relative addresses for adrp. This will be used for handling dynamic patching of 'adrp' instructions in alternative code patching. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
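The helpers wrap the existing insn immediate encode/decode routines, shifting by 12 because adrp works in units of 4K pages; roughly (signedness details hedged):

```c
s64 aarch64_insn_adrp_get_offset(u32 insn)
{
        u64 imm;

        BUG_ON(!aarch64_insn_is_adrp(insn));
        imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_ADR, insn);
        /* 21-bit signed page offset, in units of 4K. */
        return sign_extend64(imm << 12, 32);
}

u32 aarch64_insn_adrp_set_offset(u32 insn, s32 offset)
{
        BUG_ON(!aarch64_insn_is_adrp(insn));
        return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_ADR, insn,
                                             offset >> 12);
}
```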
-
Suzuki K Poulose authored
The alternative code patching doesn't check if the replaced instruction uses a pc relative literal. This could cause silent corruption in the instruction stream as the instruction will be executed from a different address than what it was compiled for. Catch all such cases. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Suggested-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
Right now we run through the work around checks on a CPU from __cpuinfo_store_cpu. There are some problems with that: 1) We initialise the system wide CPU feature registers only after the boot CPU updates its cpuinfo. Now, if a work around depends on the variance of a CPU ID feature (e.g. a check for cache line size mismatch), we have no way of performing it cleanly for the boot CPU. 2) It is out of place, invoked from __cpuinfo_store_cpu() in cpuinfo.c. It is not an obvious place for that. This patch rearranges the CPU specific capability (a.k.a. work around) checks. 1) At the moment we use verify_local_cpu_capabilities() to check if a new CPU has all the system advertised features. Use this for the secondary CPUs to perform the work around check. For that we rename verify_local_cpu_capabilities() => check_local_cpu_capabilities(), which: if the system wide capabilities haven't been initialised (i.e. the CPU is activated at boot), updates the system wide detected work arounds; otherwise (i.e. a CPU hotplugged in later), verifies that this CPU conforms to the system wide capabilities. 2) The boot CPU updates the work arounds from smp_prepare_boot_cpu() after we have initialised the system wide CPU feature values. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Andre Przywara <andre.przywara@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
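A sketch of the resulting split, following the description above; the sys_caps_initialised flag name is borrowed from the existing cpufeature code and the exact structure should be treated as approximate:

```c
/*
 * Called on every CPU as it comes up; for the boot CPU this happens from
 * smp_prepare_boot_cpu(), after the system wide feature register values
 * have been initialised.
 */
void check_local_cpu_capabilities(void)
{
        /*
         * Before the system wide capabilities are finalised, this CPU still
         * gets to contribute its errata work arounds. Afterwards, a late
         * (hotplugged) CPU must simply conform to what was advertised.
         */
        if (!sys_caps_initialised)
                update_cpu_errata_workarounds();
        else
                verify_local_cpu_capabilities();
}
```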
-
Suzuki K Poulose authored
This is a cosmetic change to rename the functions dealing with the errata work arounds to be more consistent with their naming. 1) check_local_cpu_errata() => update_cpu_errata_workarounds() check_local_cpu_errata() actually updates the system's errata work arounds. So rename it to reflect the same. 2) verify_local_cpu_errata() => verify_local_cpu_errata_workarounds() Use errata_workarounds instead of _errata. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Andre Przywara <andre.przywara@arm.com> Signed-off-by:
Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-