- Apr 15, 2016
- Ganapatrao Kulkarni authored

Enable NUMA balancing for arm64 platforms. Add pte, pmd protnone helpers for use by automatic NUMA balancing.

Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
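For context, the protnone helpers this commit describes boil down to treating a PROT_NONE entry as one that carries the software PROT_NONE bit but is not valid. A minimal sketch, assuming the usual arm64 bit names (PTE_VALID, PTE_PROT_NONE); this is not a verbatim copy of the patch:

```c
/*
 * Sketch of the helpers added for automatic NUMA balancing; the bit names
 * below are assumptions based on the arm64 pgtable headers.
 */
static inline int pte_protnone(pte_t pte)
{
	/* PROT_NONE mappings carry the software bit but are not valid */
	return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
}

static inline int pmd_protnone(pmd_t pmd)
{
	/* reuse the pte helper on the pmd's pte view */
	return pte_protnone(pmd_pte(pmd));
}
```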
- Ganapatrao Kulkarni authored

Attempt to get the memory and CPU NUMA node information via of_numa. If that fails, fall back to the dummy NUMA node and map all memory and CPUs to node 0.

Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
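The fallback described above can be pictured as a two-step boot decision. The sketch below is illustrative only: of_numa_init() is the device-tree parser this series relies on, while numa_dummy_init() is a hypothetical stand-in for the node-0 fallback.

```c
/*
 * Illustrative sketch of the boot-time decision described above, not the
 * literal patch. of_numa_init() parses NUMA info from the device tree;
 * numa_dummy_init() is a hypothetical name for the node-0 fallback.
 */
static void __init arm64_numa_setup(void)
{
	if (of_numa_init() == 0)
		return;			/* DT provided memory/CPU node info */

	/* no (or broken) DT NUMA info: one node covering everything */
	pr_info("NUMA: falling back to a single node\n");
	numa_dummy_init();		/* maps all memory and CPUs to node 0 */
}
```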
- David Daney authored

In order to extract NUMA information from the device tree, we need to have the tree in its unflattened form. Move the call to bootmem_init() in the tail of paging_init() into setup_arch, and adjust header files so that its declaration is visible. Move the unflatten_device_tree() call between the calls to paging_init() and bootmem_init(). Follow-on patches add NUMA handling to bootmem_init().

Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
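The resulting ordering in setup_arch() looks roughly like the sketch below, taken from the description above; all other setup_arch() work is elided.

```c
/* Boot ordering implied by the commit above (other setup_arch() work elided) */
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	paging_init();			/* build the kernel page tables */
	/* ... */
	unflatten_device_tree();	/* NUMA parsing needs the unflattened DT */
	bootmem_init();			/* moved here from the tail of paging_init() */
	/* ... */
}
```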
- Suzuki K Poulose authored

With a VHE-capable CPU, the kernel can run at EL2, and this is decided at early boot. If some of the CPUs did not start in EL2 or do not have VHE, we could have CPUs running at different exception levels, all in the same kernel! This patch adds an early check for the secondary CPUs to detect such situations. For each non-boot CPU, add a sanity check to make sure we don't have different run levels w.r.t. the boot CPU. We save whether the boot CPU is running in hyp mode and ensure the remaining CPUs match it.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
[will: made boot_cpu_hyp_mode static]
Signed-off-by: Will Deacon <will.deacon@arm.com>
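The check amounts to recording the boot CPU's exception level and comparing every secondary against it. A minimal sketch using the existing is_hyp_mode_available() helper; the surrounding function names are illustrative, not the patch verbatim:

```c
/*
 * Sketch of the EL mismatch check described above; helper names other than
 * is_hyp_mode_available() are illustrative.
 */
static bool boot_cpu_hyp_mode;

static void __init save_boot_cpu_run_el(void)
{
	boot_cpu_hyp_mode = is_hyp_mode_available();
}

static bool secondary_run_el_matches_boot_cpu(void)
{
	/* a secondary that booted at a different EL than the boot CPU is fatal */
	return is_hyp_mode_available() == boot_cpu_hyp_mode;
}
```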
- Suzuki K Poulose authored

During the activation of a secondary CPU, we could report serious configuration issues and hence request to crash the kernel. We do this for the CPU ASID bit check now. We will also need it for handling mismatched exception levels for CPUs with VHE. Hence, add a helper to do the same, for reusability.

Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Apr 14, 2016
- James Morse authored

With CONFIG_PROVE_LOCKING, CONFIG_DEBUG_LOCKDEP and CONFIG_TRACE_IRQFLAGS enabled, lockdep will compare current->hardirqs_enabled with the flags from local_irq_save(). When a debug exception occurs, interrupts are disabled in entry.S, but lockdep isn't told, resulting in:

  DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
  ------------[ cut here ]------------
  WARNING: at ../kernel/locking/lockdep.c:3523
  Modules linked in:
  CPU: 3 PID: 1752 Comm: perf Not tainted 4.5.0-rc4+ #2204
  Hardware name: ARM Juno development board (r1) (DT)
  task: ffffffc974868000 ti: ffffffc975f40000 task.ti: ffffffc975f40000
  PC is at check_flags.part.35+0x17c/0x184
  LR is at check_flags.part.35+0x17c/0x184
  pc : [<ffffff80080fc93c>] lr : [<ffffff80080fc93c>] pstate: 600003c5
  [...]
  ---[ end trace 74631f9305ef5020 ]---
  Call trace:
  [<ffffff80080fc93c>] check_flags.part.35+0x17c/0x184
  [<ffffff80080ffe30>] lock_acquire+0xa8/0xc4
  [<ffffff8008093038>] breakpoint_handler+0x118/0x288
  [<ffffff8008082434>] do_debug_exception+0x3c/0xa8
  [<ffffff80080854b4>] el1_dbg+0x18/0x6c
  [<ffffff80081e82f4>] do_filp_open+0x64/0xdc
  [<ffffff80081d6e60>] do_sys_open+0x140/0x204
  [<ffffff80081d6f58>] SyS_openat+0x10/0x18
  [<ffffff8008085d30>] el0_svc_naked+0x24/0x28
  possible reason: unannotated irqs-off.
  irq event stamp: 65857
  hardirqs last enabled at (65857): [<ffffff80081fb1c0>] lookup_mnt+0xf4/0x1b4
  hardirqs last disabled at (65856): [<ffffff80081fb188>] lookup_mnt+0xbc/0x1b4
  softirqs last enabled at (65790): [<ffffff80080bdca4>] __do_softirq+0x1f8/0x290
  softirqs last disabled at (65757): [<ffffff80080be038>] irq_exit+0x9c/0xd0

This patch adds the annotations to do_debug_exception(), while trying not to call trace_hardirqs_off() if el1_dbg() interrupted a task that already had irqs disabled.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
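The annotation the patch describes is essentially a conditional trace_hardirqs_off() at the top of the handler. A sketch of the idea, assuming arm64's interrupts_enabled(regs) helper and eliding the rest of the handler:

```c
/*
 * Sketch of the lockdep annotation described above; only the irq-tracing
 * logic is shown, the actual exception dispatch is elided.
 */
asmlinkage int __exception do_debug_exception(unsigned long addr,
					      unsigned int esr,
					      struct pt_regs *regs)
{
	/*
	 * entry.S masked interrupts before we got here; only tell lockdep
	 * about it if the interrupted context actually had them enabled.
	 */
	if (interrupts_enabled(regs))
		trace_hardirqs_off();

	/* ... dispatch to the registered debug fault handler ... */

	/* a matching trace_hardirqs_on() happens on the return path if needed */
	return 0;
}
```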
- Anna-Maria Gleixner authored

Since commit 1cf4f629 ("cpu/hotplug: Move online calls to hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this SMP function calls are no longer required. Replace smp_call_function_single() with a direct call of hw_breakpoint_reset(). To keep the calling convention, interrupts are explicitly disabled around the call.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
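The replacement pattern looks roughly like this inside the hotplug notifier. A sketch: the notifier plumbing is elided, and hw_breakpoint_reset()'s NULL argument is an assumption carried over from the old smp_call_function_single() usage.

```c
/* Sketch of the hotplug callback after the change described above. */
static int hw_breakpoint_reset_notify(struct notifier_block *self,
				      unsigned long action, void *hcpu)
{
	if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE) {
		local_irq_disable();
		/* runs on the hotplugged CPU itself, so no IPI is needed */
		hw_breakpoint_reset(NULL);
		local_irq_enable();
	}
	return NOTIFY_OK;
}
```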
- Anna-Maria Gleixner authored

Since commit 1cf4f629 ("cpu/hotplug: Move online calls to hotplugged cpu") it is ensured that callbacks of CPU_ONLINE and CPU_DOWN_PREPARE are processed on the hotplugged CPU. Due to this SMP function calls are no longer required. Replace smp_call_function_single() with a direct call to clear_os_lock(). The function writes the OSLAR register to clear OS locking. This does not need to be called with interrupts disabled, therefore the smp_call_function_single() calling convention is not preserved.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

The mapping of the kernel consists of four segments, each of which is mapped with different permission attributes and/or lifetimes. To optimize the TLB and translation table footprint, we define various opaque constants in the linker script that resolve to different alignment values depending on the page size and whether CONFIG_DEBUG_ALIGN_RODATA is set.

Considering that:

- a 4 KB granule kernel benefits from a 64 KB segment alignment (due to the fact that it allows the use of the contiguous bit),
- the minimum alignment of the .data segment is THREAD_SIZE already, not PAGE_SIZE (i.e., we already have padding between _data and the start of the .data payload in many cases),
- 2 MB is a suitable alignment value on all granule sizes, either for mapping directly (level 2 on 4 KB), or via the contiguous bit (level 3 on 16 KB and 64 KB),
- anything beyond 2 MB exceeds the minimum alignment mandated by the boot protocol, and can only be mapped efficiently if the physical alignment happens to be the same,

we can simplify this by standardizing on 64 KB (or 2 MB) explicitly, i.e., regardless of granule size, all segments are aligned either to 64 KB, or to 2 MB if CONFIG_DEBUG_ALIGN_RODATA=y. This also means we can drop the Kconfig dependency of CONFIG_DEBUG_ALIGN_RODATA on CONFIG_ARM64_4K_PAGES.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
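In the linker script this boils down to a single alignment constant. A sketch of what the commit describes; the macro name SEGMENT_ALIGN is an assumption, and the values follow the text above:

```c
/*
 * Sketch of the alignment constant described above; SEGMENT_ALIGN is an
 * assumed name, and SZ_64K/SZ_2M are the usual kernel size constants.
 */
#ifdef CONFIG_DEBUG_ALIGN_RODATA
#define SEGMENT_ALIGN		SZ_2M	/* map segments with 2 MB blocks / contiguous PTEs */
#else
#define SEGMENT_ALIGN		SZ_64K	/* enough for the contiguous bit on 4 KB granules */
#endif
```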
- Ard Biesheuvel authored

Keeping .head.text out of the .text mapping buys us very little: its actual payload is only 4 KB, most of which is padding, but the page alignment may add up to 2 MB (in case of CONFIG_DEBUG_ALIGN_RODATA=y) of additional padding to the uncompressed kernel Image. Also, on 4 KB granule kernels, the 4 KB misalignment of .text forces us to map the adjacent 56 KB of code without the PTE_CONT attribute, and since this region contains things like the vector table and the GIC interrupt handling entry point, this region is likely to benefit from the reduced TLB pressure that results from PTE_CONT mappings. So remove the alignment between the .head.text and .text sections, and use the [_text, _etext) rather than the [_stext, _etext) interval for mapping the .text segment.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

Apart from the arm64/linux and EFI header data structures, there is nothing in the .head.text section that must reside at the beginning of the Image. So let's move it to the .init section where it belongs. Note that this involves some minor tweaking of the EFI header, primarily because the address of 'stext' no longer coincides with the start of the .text section. It also requires a couple of relocated symbol references to be slightly rewritten or their definition moved to the linker script.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

Replace the poorly defined term chunk with segment, which is a term that is already used by the ELF spec to describe contiguous mappings with the same permission attributes of statically allocated ranges of an executable.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

Now that the vmemmap region has been redefined to cover the linear region rather than the entire physical address space, we no longer need to perform a virtual-to-physical translation in the implementation of virt_to_page(). This restricts virt_to_page() translations to the linear region, so redefine virt_addr_valid() as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

This moves the vmemmap region right below PAGE_OFFSET, aka the start of the linear region, and redefines its size to be a power of two. Due to the placement of PAGE_OFFSET in the middle of the address space, whose size is a power of two as well, this guarantees that virt to page conversions and vice versa can be implemented efficiently, by masking and shifting rather than ordinary arithmetic.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
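To make the "masking and shifting" point concrete, the conversion enabled by this layout looks roughly like the sketch below. The helper names are illustrative, not the kernel's actual macros (which live in arch/arm64/include/asm/memory.h):

```c
/*
 * Illustrative only: with PAGE_OFFSET at a power-of-two boundary and the
 * vmemmap array starting at VMEMMAP_START, a linear address converts to a
 * struct page pointer with a mask and a shift -- no subtraction of a
 * runtime-variable base is needed.
 */
static inline struct page *linear_virt_to_page(unsigned long vaddr)
{
	unsigned long pgoff = (vaddr & ~PAGE_OFFSET) >> PAGE_SHIFT;

	return (struct page *)VMEMMAP_START + pgoff;
}

static inline unsigned long page_to_linear_virt(const struct page *page)
{
	unsigned long pgoff = page - (struct page *)VMEMMAP_START;

	return PAGE_OFFSET | (pgoff << PAGE_SHIFT);
}
```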
- Ard Biesheuvel authored

Before restricting virt_to_page() to the linear mapping, ensure that the text patching code does not use it to resolve references into the core kernel text, which is mapped in the vmalloc area.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

The zero page is statically allocated, so grab its struct page pointer without using virt_to_page(), which will be restricted to the linear mapping later.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

The implementation of free_initmem_default() expects __init_begin and __init_end to be covered by the linear mapping, which is no longer the case. So open code it instead, using addresses that are explicitly translated from kernel virtual to linear virtual.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
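The open-coded variant amounts to translating the image symbols to linear addresses before handing them to the generic helper. A sketch, assuming the generic free_reserved_area() signature:

```c
/*
 * Sketch of the open-coded free_initmem() described above: __pa() maps the
 * kernel-image symbol to its physical address, __va() maps that back into
 * the linear region, which is what free_reserved_area() expects.
 */
void free_initmem(void)
{
	free_reserved_area(__va(__pa(__init_begin)),
			   __va(__pa(__init_end)),
			   0, "unused kernel");
}
```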
- Ard Biesheuvel authored

The translation performed by virt_to_page() is only valid for linear addresses, and kernel symbols are no longer in the linear mapping. So perform the __pa() translation explicitly, which does the right thing in either case, and only then translate to a struct page offset.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
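In other words, instead of calling virt_to_page() on a kernel-image symbol, go through the physical address first. A minimal sketch of the pattern; the symbol name is just a placeholder:

```c
/*
 * Pattern described above, sketched for an arbitrary kernel-image object.
 * __pa() handles both linear and kernel-image addresses, so translating to
 * a physical address first and then to a struct page is always safe.
 */
extern char some_kernel_object[];	/* placeholder symbol */

static struct page *kernel_symbol_to_page(void)
{
	return pfn_to_page(__pa(some_kernel_object) >> PAGE_SHIFT);
	/* equivalently: phys_to_page(__pa(some_kernel_object)) on arm64 */
}
```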
- Ard Biesheuvel authored

This removes the relocate_initrd() implementation and invocation, which are no longer needed now that the placement of the initrd is guaranteed to be covered by the linear mapping.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

Instead of going out of our way to relocate the initrd if it turns out to occupy memory that is not covered by the linear mapping, just add the initrd to the linear mapping. This puts the burden on the bootloader to pass initrd= and mem= options that are mutually consistent. Note that, since the placement of the linear region in the PA space is also dependent on the placement of the kernel Image, which may reside anywhere in memory, we may still end up with a situation where the initrd and the kernel Image are simply too far apart to be covered by the linear region. Since we now leave it up to the bootloader to pass the initrd in memory that is guaranteed to be accessible by the kernel, add a mention of this to the arm64 boot protocol specification as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

This reverts commit 36e5cd6b, since the section alignment is now guaranteed by construction when choosing the value of memstart_addr.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

This redefines ARM64_MEMSTART_ALIGN in terms of the minimal alignment required by sparsemem vmemmap. This comes down to using 1 GB for all translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ard Biesheuvel authored

After choosing memstart_addr to be the highest multiple of ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory address, we clip the memblocks to the maximum size of the linear region. Since the kernel may be high up in memory, we take care not to clip the kernel itself, which means we have to clip some memory from the bottom if this occurs, to ensure that the distance between the first and the last usable physical memory address can be covered by the linear region. However, we fail to update memstart_addr if this clipping from the bottom occurs, which means that we may still end up with virtual addresses that wrap into the userland range. So increment memstart_addr as appropriate to prevent this from happening.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Apr 13, 2016
- Jisheng Zhang authored

Currently, we check two pointers on every idle state entry: cpu_ops and cpu_suspend. These pointer checks can be avoided: if cpu_ops has not been registered, arm_cpuidle_init() will return -EOPNOTSUPP, so arm_cpuidle_suspend() will never get a chance to run. In other words, the cpu_ops check can be avoided. Similarly, the cpu_suspend check can be avoided in this hot path by moving it into arm_cpuidle_init(). I measured 4096 * the time from the arm_cpuidle_suspend entry point to the cpu_psci_cpu_suspend entry point. The HW platform is a Marvell BG4CT STB board.

1. Only one shell, no other process, secondary CPUs hot-unplugged, executing the following command:

     while true
     do
         sleep 0.2
     done

   Before the patch: 1581220ns. After the patch: 1579630ns. Reduced by 0.1%.

2. Only one shell, no other process, secondary CPUs hot-unplugged, executing the following command:

     while true
     do
         md5sum /tmp/testfile
         sleep 0.2
     done

   NOTE: the testfile size should be larger than the L1+L2 cache size.

   Before the patch: 1961960ns. After the patch: 1912500ns. Reduced by 2.5%.

So the more complex the system load, the bigger the improvement.

Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
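The resulting shape of the two functions is roughly as below. This is a sketch based on the description above: the cpu_ops hooks are the real arm64 ones, but the exact body is not copied from the patch.

```c
/*
 * Sketch of the split described above: all pointer checks happen once in
 * arm_cpuidle_init(), so the hot path in arm_cpuidle_suspend() is a plain
 * indirect call.
 */
int arm_cpuidle_init(unsigned int cpu)
{
	int ret = -EOPNOTSUPP;

	/* validate everything the suspend hot path will rely on */
	if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_suspend &&
	    cpu_ops[cpu]->cpu_init_idle)
		ret = cpu_ops[cpu]->cpu_init_idle(cpu);

	return ret;
}

int arm_cpuidle_suspend(int index)
{
	int cpu = smp_processor_id();

	/* no NULL checks here any more: arm_cpuidle_init() vouched for them */
	return cpu_ops[cpu]->cpu_suspend(index);
}
```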
- Kefeng Wang authored

Some new CPU features can be identified by id_aa64mmfr2; this patch adds all of its fields.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Apr 08, 2016
- Helge Deller authored

Update the comment to reflect the changes of commit 0de79858 (parisc: Use generic extable search and sort routines).

Signed-off-by: Helge Deller <deller@gmx.de>
- Helge Deller authored

Handling exceptions from modules never worked on parisc. It was just masked by the fact that exceptions from modules don't happen during normal use. When a module triggers an exception in get_user() we need to load the main kernel dp value before accessing the exception_data structure, and afterwards restore the original dp value of the module on exit.

Noticed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
- Helge Deller authored

The kernel module testcase (lib/test_user_copy.c) exhibited a kernel crash on parisc if the parameters for copy_from_user were reversed ("illegal reversed copy_to_user" testcase). Fix this potential crash by checking in the fault handler whether the faulting address is in the exception table.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Cc: Kees Cook <keescook@chromium.org>
- Helge Deller authored

We want to avoid the kernel module loader creating function pointers for the kernel fixup routines of get_user() and put_user(). Changing the external reference from function type to int type fixes this. This unbreaks exception handling for get_user() and put_user() when called from a kernel module.

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
- Helge Deller authored

Commit 0de79858 (parisc: Use generic extable search and sort routines) changed the exception tables to use 32bit relative offsets. This patch now adds support to the kernel module loader to handle such R_PARISC_PCREL32 relocations for 32- and 64-bit modules.

Signed-off-by: Helge Deller <deller@gmx.de>
- Apr 07, 2016
- Nicolas Pitre authored

It was reported that a kernel with CONFIG_ARM_PATCH_IDIV=y stopped booting when compiled with the upcoming gcc 6. Turns out that turning a function address into a writable array is undefined, and gcc 6 decided it was OK to omit the store to the first word of the function while still preserving the store to the second word. Even though gcc 6 is now fixed to behave more coherently, it is a mystery that gcc 4 and gcc 5 actually produce the wanted code in the kernel. And in fact the reduced test case to illustrate the issue does indeed break with gcc < 6 as well. In any case, let's guard the kernel against undefined compiler behavior by hiding the nature of the array location, as suggested by gcc developers.

Reference: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70128
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Marcin Juszkiewicz <mjuszkiewicz@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: stable@vger.kernel.org # v4.5
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
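The "hiding the nature of the array location" trick is an empty asm that launders the pointer so the compiler can no longer reason about what it points to. A sketch of the pattern; the function and variable names here are illustrative, not the exact patch:

```c
/*
 * Sketch of the workaround described above: passing the address through an
 * empty asm with an output constraint makes its origin opaque to the
 * compiler, so it cannot apply "writing over a function is undefined"
 * reasoning and delete the stores.
 */
static void patch_two_words(void *fn, unsigned int insn0, unsigned int insn1)
{
	unsigned long addr = (unsigned long)fn;

	asm ("" : "+r" (addr));		/* hide where 'addr' came from */

	((unsigned int *)addr)[0] = insn0;
	((unsigned int *)addr)[1] = insn1;
}
```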
- Russell King authored

Wire up the preadv2 and pwritev2 syscalls for ARM.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
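Wiring up a syscall on 32-bit ARM means adding entries to the call table and matching __NR_* numbers. A sketch of the shape of those additions; the syscall numbers are deliberately left as NNN because they must match the next free slots in the real table:

```c
/*
 * Sketch only: arch/arm syscall wiring has roughly this shape. NNN stands
 * for the next free syscall number and is intentionally not filled in.
 */

/* arch/arm/include/uapi/asm/unistd.h */
#define __NR_preadv2		(__NR_SYSCALL_BASE + NNN)
#define __NR_pwritev2		(__NR_SYSCALL_BASE + NNN + 1)

/* arch/arm/kernel/calls.S gains the corresponding table entries:
 *		CALL(sys_preadv2)
 *		CALL(sys_pwritev2)
 */
```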
- Len Brown authored

Some processors use the Interrupt Response Time Limit (IRTL) MSR value to describe the maximum IRQ response time latency for deep package C-states. (Others have the register but do not use it.) Let's print it out to give insight into the cases where it is used. IRTL began in SNB, with PC3/PC6/PC7, and HSW added PC8/PC9/PC10.

Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
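For reference, an IRTL MSR packs a 10-bit time value with a 3-bit unit field and a valid bit. The decode below is a hedged sketch following the layout used by the kernel's idle code; the MSR index and bit positions are stated from memory and should be checked against the SDM:

```c
/*
 * Hedged sketch of decoding one package-C-state IRTL MSR as described above.
 * Bit layout assumed: [9:0] time value, [12:10] time unit, [15] valid.
 */
#define MSR_PKGC6_IRTL	0x0000060b	/* assumption: verify against the SDM / msr-index.h */

static const unsigned long long irtl_ns_units[] = {
	1, 32, 1024, 32768, 1048576, 33554432, 0, 0
};

static unsigned long long irtl_to_ns(unsigned long long irtl)
{
	if (!(irtl & (1ULL << 15)))	/* not valid / not used on this part */
		return 0;

	return (irtl & 0x3ff) * irtl_ns_units[(irtl >> 10) & 0x7];
}
```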
- Apr 06, 2016
- Linus Torvalds authored

Let's see if anybody even notices. I doubt anybody uses this, and it does expose addresses that should be randomized, so let's just remove the code. It's old and traditional, and it used to be cute, but we should have removed this long ago. If it turns out anybody notices and this breaks something, we'll have to revert this, and maybe we'll end up using other approaches instead (using %pK or similar). But removing unnecessary code is always the preferred option.

Noted-by: Emrah Demir <ed@abdsec.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- Apr 05, 2016
- Luiz Capitulino authored

When a vCPU runs on a nohz_full core, the hrtimer used by the lapic emulation code can be migrated to another core. When this happens, it's possible to observe millisecond latency when delivering timer IRQs to KVM guests. The huge latency is mainly due to the fact that apic_timer_fn() expects to run during a kvm exit. It sets KVM_REQ_PENDING_TIMER and lets it be handled on kvm entry. However, if the timer fires on a different core, we have to wait until the next kvm exit for the guest to see KVM_REQ_PENDING_TIMER set. This problem became visible after commit 9642d18e. This commit changed the timer migration code to always attempt to migrate timers away from nohz_full cores. While it's debatable whether this is correct/desirable (I don't think it is), it's clear that the lapic emulation code has a requirement on firing the hrtimer in the same core where it was started. This is achieved by making the hrtimer pinned. Lastly, note that KVM has code to migrate timers when a vCPU is scheduled to run on a different core. However, this forced migration may fail. When this happens, we can have the same problem. If we want 100% correctness, we'll have to modify apic_timer_fn() to cause a kvm exit when it runs on a different core than the vCPU. Not sure if this is possible.

Here's a reproducer for the issue being fixed:

1. Set all cores but core0 to be nohz_full cores
2. Start a guest with a single vCPU
3. Trace apic_timer_fn() and kvm_inject_apic_timer_irqs()

You'll see that apic_timer_fn() will run in core0 while kvm_inject_apic_timer_irqs() runs in a different core. If you get both on core0, try running a program that takes 100% of the CPU and pin it to core0 to force the vCPU out.

Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
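The essence of the fix is arming the lapic emulation hrtimer in pinned mode so the timer core will not migrate it off the CPU that started it. A sketch of the pattern; the surrounding lapic code is elided, and the field names follow KVM's lapic timer structures as described above:

```c
/*
 * Sketch of the pinned-hrtimer pattern described above. HRTIMER_MODE_ABS_PINNED
 * tells the timer core never to migrate this timer to another CPU. 'apic' and
 * 'expire' come from the surrounding KVM lapic code and are not defined here.
 */
static void start_lapic_hrtimer_pinned(struct kvm_lapic *apic, ktime_t expire)
{
	hrtimer_start(&apic->lapic_timer.timer, expire,
		      HRTIMER_MODE_ABS_PINNED);
}
```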
- Christian Borntraeger authored

Commit 1e133ab2 ("s390/mm: split arch/s390/mm/pgtable.c") dropped some changes from commit a3a92c31 ("KVM: s390: fix mismatch between user and in-kernel guest limit") - this breaks KVM for some memory sizes (kvm-s390: failed to commit memory region), like exactly 2GB.

Cc: Dominik Dingel <dingel@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
- Apr 04, 2016
- Kirill A. Shutemov authored

Mostly direct substitution with occasional adjustment or removing outdated comments.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- Kirill A. Shutemov authored

PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized. And unlikely will. We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE. And it's a constant source of confusion on whether the PAGE_CACHE_* or PAGE_* constant should be used in a particular case, especially on the border between fs and mm. Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable. Let's stop pretending that pages in the page cache are special. They are not.

The changes are pretty straightforward:

- <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
- PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
- page_cache_get() -> get_page();
- page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later. There are a few places in the code where coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation will also be addressed with a separate patch.

  virtual patch

  @@
  expression E;
  @@
  - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
  + E

  @@
  expression E;
  @@
  - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
  + E

  @@
  @@
  - PAGE_CACHE_SHIFT
  + PAGE_SHIFT

  @@
  @@
  - PAGE_CACHE_SIZE
  + PAGE_SIZE

  @@
  @@
  - PAGE_CACHE_MASK
  + PAGE_MASK

  @@
  expression E;
  @@
  - PAGE_CACHE_ALIGN(E)
  + PAGE_ALIGN(E)

  @@
  expression E;
  @@
  - page_cache_get(E)
  + get_page(E)

  @@
  expression E;
  @@
  - page_cache_release(E)
  + put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- Maciej W. Rozycki authored

Make sure it's the microMIPS rather than MIPS16 ISA before emulating microMIPS RDHWR. Mostly needed as an optimisation for configurations where `cpu_has_mmips' is hardcoded to 0, and also a good measure in case we add further microMIPS instructions to emulate in the future, as the corresponding MIPS16 encoding is ADDIUSP, which is not supposed to trap.

Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12282/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
- Florian Fainelli authored

The SUN GISB arbiter was added with the wrong compatible string, leading to using the wrong register layout; use the correct compatible string for this chip: brcm,bcm7435-gisb-arb.

Fixes: 8394968be4c7 ("MIPS: BMIPS: Add BCM7435 dtsi")
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Cc: blogic@openwrt.org
Cc: cernekee@gmail.com
Cc: jogo@openwrt.org
Cc: jaedon.shin@gmail.com
Cc: pgynther@google.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12285/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>