- Aug 24, 2020
-
-
Lukas Bulwahn authored
As long as there are only a few maintainer entry profiles, i.e., three in v5.8, continue to maintain a complete list of entries in the maintainer handbook. Complete the list by adding the RISC-V ARCHITECTURE maintainer entry profile found in MAINTAINERS. Signed-off-by:
Lukas Bulwahn <lukas.bulwahn@gmail.com> Link: https://lore.kernel.org/r/20200815115728.15128-1-lukas.bulwahn@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Puranjay Mohan authored
Replace :c:func: with func() as the previous usage is deprecated. Signed-off-by:
Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20200812180224.24810-1-puranjay12@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Puranjay Mohan authored
Replace :c:func: with func() as the previous usage is deprecated. Signed-off-by:
Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20200812174611.18580-1-puranjay12@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Marta Rybczynska authored
Fix issues with local_locks documentation: - fix function names, local_lock.h has local_unlock_irqrestore(), not local_lock_irqrestore() - fix mapping table, local_unlock_irqrestore() maps to local_irq_restore(), not _save() Signed-off-by:
Marta Rybczynska <rybczynska@gmail.com> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/CAApg2=SKxQ3Sqwj6TZnV-0x0cKLXFKDaPvXT4N15MPDMKq724g@mail.gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
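For readers unfamiliar with the API being documented, here is a minimal kernel-side sketch of the pairing this fix describes; the per-CPU lock and counter names are illustrative, not taken from the patch. On non-PREEMPT_RT kernels, local_lock_irqsave()/local_unlock_irqrestore() map to local_irq_save()/local_irq_restore(), which is the mapping the patch corrects.

  /* Illustrative sketch (not from this patch): a per-CPU counter
   * protected by a local_lock. On non-PREEMPT_RT kernels the pair
   * below maps to local_irq_save()/local_irq_restore(). */
  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  static DEFINE_PER_CPU(local_lock_t, stats_lock) = INIT_LOCAL_LOCK(stats_lock);
  static DEFINE_PER_CPU(unsigned long, stats_count);

  static void stats_inc(void)
  {
          unsigned long flags;

          local_lock_irqsave(&stats_lock, flags);
          this_cpu_inc(stats_count);
          local_unlock_irqrestore(&stats_lock, flags);
  }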
-
- Aug 14, 2020
-
-
Rob Herring authored
Another whack-a-mole pass of killing off unnecessary 'allOf + $ref' usage. json-schema versions draft7 and earlier have a weird behavior in that any keywords combined with a '$ref' are silently ignored. The correct form was to put a '$ref' under an 'allOf'. This behavior is now changed in the 2019-09 json-schema spec and '$ref' can be mixed with other keywords. The json-schema library doesn't yet support this, but the tooling now does a fixup for this and either way works. This has been a constant source of review comments, so let's change this treewide so everyone copies the simpler syntax. Signed-off-by:
Rob Herring <robh@kernel.org>
-
Rob Herring authored
Clean-up incorrect indentation, extra spaces, long lines, and missing EOF newline in schema files. Most of the clean-ups are for list indentation which should always be 2 spaces more than the preceding keyword. Found with yamllint (which I plan to integrate into the checks). Cc: linux-arm-kernel@lists.infradead.org Cc: linux-clk@vger.kernel.org Cc: dri-devel@lists.freedesktop.org Cc: linux-spi@vger.kernel.org Cc: linux-gpio@vger.kernel.org Cc: linux-remoteproc@vger.kernel.org Cc: linux-hwmon@vger.kernel.org Cc: linux-i2c@vger.kernel.org Cc: linux-fbdev@vger.kernel.org Cc: linux-iio@vger.kernel.org Cc: linux-input@vger.kernel.org Cc: linux-pm@vger.kernel.org Cc: linux-media@vger.kernel.org Cc: alsa-devel@alsa-project.org Cc: linux-mmc@vger.kernel.org Cc: linux-mtd@lists.infradead.org Cc: netdev@vger.kernel.org Cc: linux-rtc@vger.kernel.org Cc: linux-serial@vger.kernel.org Cc: linux-usb@vger.kernel.org Acked-by:
Sam Ravnborg <sam@ravnborg.org> Signed-off-by:
Rob Herring <robh@kernel.org>
-
- Aug 13, 2020
-
-
Huang Shijie authored
We have three categories of locks, not two. Signed-off-by:
Huang Shijie <sjhuang@iluvatar.ai> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200813060220.18199-1-sjhuang@iluvatar.ai Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Huang Shijie authored
We have three categories of locks, not two. Signed-off-by:
Huang Shijie <sjhuang@iluvatar.ai> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20200813060220.18199-1-sjhuang@iluvatar.ai
-
Alexander A. Klimov authored
Rationale: Reduces attack surface on kernel devs opening the links for MITM as HTTPS traffic is much harder to manipulate. Deterministic algorithm: For each file: If not .svg: For each line: If doesn't contain `\bxmlns\b`: For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`: If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`: If both the HTTP and HTTPS versions return 200 OK and serve the same content: Replace HTTP with HTTPS. Signed-off-by:
Alexander A. Klimov <grandmaster@al2klimov.de> Acked-by:
Rob Herring <robh@kernel.org> Signed-off-by:
Lee Jones <lee.jones@linaro.org>
-
Fabio Estevam authored
Remove the I2C unit name to fix the following build warning with 'make dt_binding_check': Warning (unit_address_vs_reg): /example-0/i2c@0: node has a unit name, but no reg or ranges property Signed-off-by:
Fabio Estevam <festevam@gmail.com> Acked-by:
Rob Herring <robh@kernel.org> Signed-off-by:
Lee Jones <lee.jones@linaro.org>
-
Roger Quadros authored
Add DT binding schema for J721e system controller. Signed-off-by:
Roger Quadros <rogerq@ti.com> Reviewed-by:
Rob Herring <robh@kernel.org> Signed-off-by:
Lee Jones <lee.jones@linaro.org>
-
Michael Walle authored
This MFD driver has no user. The keypad driver of this device never made it into the kernel. Therefore, this driver is useless. Remove it. Signed-off-by:
Michael Walle <michael@walle.cc> Cc: Sourav Poddar <sourav.poddar@ti.com> Signed-off-by:
Lee Jones <lee.jones@linaro.org>
-
- Aug 12, 2020
-
-
Lepton Wu authored
The document says "%e" should be the "executable filename", while in fact it can be changed by things like prctl(PR_SET_NAME). People who use "%e" in core_pattern are surprised when they find out they get the thread name instead of the executable filename. This is either a bug in the documentation or a bug in the code. Since the behavior of "%e" has been there for a long time, it could bring another surprise for users if we "fix" the code, so we just "fix" the document. Moreover, for users who really need the "executable filename" in core_pattern, we introduce a new "%f" specifier for the real executable filename. We already have "%E" for the executable path in the kernel, so just reuse most of its code for the newly added "%f" format. Signed-off-by:
Lepton Wu <ytht.net@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200701031432.2978761-1-ytht.net@gmail.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
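As a quick illustration of the behavior described above (this program is not part of the patch): after renaming the thread via prctl(PR_SET_NAME), a core_pattern containing %e expands to the new thread name, while the new %f expands to the executable filename the process was started from.

  /* Userspace demo (not part of the patch): compare %e and %f
   * expansions in the core file name produced by this crash. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/prctl.h>

  int main(void)
  {
          if (prctl(PR_SET_NAME, "renamed-thread", 0, 0, 0) != 0)
                  perror("prctl");

          /* Force a core dump (assuming ulimit -c allows one). */
          abort();
  }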
-
Anshuman Khandual authored
Add the following new vmstat events, which will help in validating THP migration without splits; statistics reported through these new VM events will help in performance debugging: 1. THP_MIGRATION_SUCCESS 2. THP_MIGRATION_FAILURE 3. THP_MIGRATION_SPLIT In addition, these new events also update the normal page migration statistics appropriately via PGMIGRATE_SUCCESS and PGMIGRATE_FAILURE. While here, this updates the current trace event 'mm_migrate_pages' to accommodate the now-available THP statistics. [akpm@linux-foundation.org: s/hpage_nr_pages/thp_nr_pages/] [ziy@nvidia.com: v2] Link: http://lkml.kernel.org/r/C5E3C65C-8253-4638-9D3C-71A61858BB8B@nvidia.com [anshuman.khandual@arm.com: s/thp_nr_pages/hpage_nr_pages/] Link: http://lkml.kernel.org/r/1594287583-16568-1-git-send-email-anshuman.khandual@arm.com Signed-off-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Zi Yan <ziy@nvidia.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Daniel Jordan <daniel.m.jordan@oracle.com> Cc: Hugh Dickins <hughd@google.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Zi Yan <ziy@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Link: http://lkml.kernel.org/r/1594080415-27924-1-git-send-email-anshuman.khandual@arm.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
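A small userspace sketch for observing the counters these events feed; it assumes the /proc/vmstat keys follow the usual lower-cased event names (e.g. a thp_migration_ prefix), which is not spelled out in this log.

  /* Userspace sketch (not from the patch): dump the page/THP migration
   * counters from /proc/vmstat; key names are assumed, see above. */
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          char line[256];
          FILE *fp = fopen("/proc/vmstat", "r");

          if (!fp) {
                  perror("fopen");
                  return 1;
          }
          while (fgets(line, sizeof(line), fp)) {
                  if (!strncmp(line, "pgmigrate_", 10) ||
                      !strncmp(line, "thp_migration_", 14))
                          fputs(line, stdout);
          }
          fclose(fp);
          return 0;
  }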
-
Michal Hocko authored
The exported value includes oom_score_adj, so the range is not [0, 1000] as described in the previous section, but rather [0, 2000]. Mention that fact explicitly. Signed-off-by:
Michal Hocko <mhocko@suse.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: David Rientjes <rientjes@google.com> Cc: Yafang Shao <laoar.shao@gmail.com> Link: http://lkml.kernel.org/r/20200709062603.18480-2-mhocko@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Michal Hocko authored
There are at least two stale notes in the oom section. The 3% discount for root processes has been gone since d46078b2 ("mm, oom: remove 3% bonus for CAP_SYS_ADMIN processes"). Likewise, children of the selected oom victim are no longer sacrificed since bbbe4802 ("mm, oom: remove 'prefer children over parent' heuristic"). Drop both notes. Signed-off-by:
Michal Hocko <mhocko@suse.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: David Rientjes <rientjes@google.com> Cc: Yafang Shao <laoar.shao@gmail.com> Link: http://lkml.kernel.org/r/20200709062603.18480-1-mhocko@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Nitin Gupta authored
For some applications, we need to allocate almost all memory as hugepages. However, on a running system, higher-order allocations can fail if the memory is fragmented. The Linux kernel currently does on-demand compaction as we request more hugepages, but this style of compaction incurs very high latency. Experiments with one-time full memory compaction (followed by hugepage allocations) show that the kernel is able to restore a highly fragmented memory state to a fairly compacted memory state within <1 sec for a 32G system. Such data suggests that a more proactive compaction can help us allocate a large fraction of memory as hugepages while keeping allocation latencies low. For a more proactive compaction, the approach taken here is to define a new sysctl called 'vm.compaction_proactiveness' which dictates bounds for external fragmentation which kcompactd tries to maintain. The tunable takes a value in the range [0, 100], with a default of 20. Note that a previous version of this patch [1] was found to introduce too many tunables (per-order extfrag{low, high}), but this one reduces them to just one sysctl. Also, the new tunable is an opaque value instead of asking for specific bounds of "external fragmentation", which would have been difficult to estimate. The internal interpretation of this opaque value allows for future fine-tuning. Currently, we use a simple translation from this tunable to [low, high] "fragmentation score" thresholds (low=100-proactiveness, high=low+10%). The score for a node is defined as the weighted mean of per-zone external fragmentation; a zone's present_pages determines its weight. To periodically check the per-node score, we reuse the per-node kcompactd threads, which are woken up every 500 milliseconds to check the same. If a node's score exceeds its high threshold (as derived from the user-provided proactiveness value), proactive compaction is started until its score reaches its low threshold value. By default, proactiveness is set to 20, which implies threshold values of low=80 and high=90. This patch is largely based on ideas from Michal Hocko [2]. See also the LWN article [3].

Performance data
================
System: x86_64, 1T RAM, 80 CPU threads. Kernel: 5.6.0-rc3 + this patch.

  echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
  echo madvise | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

Before starting the driver, the system was fragmented from a userspace program that allocates all memory and then, for each 2M aligned section, frees 3/4 of the base pages using munmap. The workload is mainly anonymous userspace pages, which are easy to move around. I intentionally avoided unmovable pages in this test to see how much latency we incur when hugepage allocations hit direct compaction.

1. Kernel hugepage allocation latencies

With the system in such a fragmented state, a kernel driver then allocates as many hugepages as possible and measures allocation latency (all latency values are in microseconds).

- With vanilla 5.6.0-rc3:

  percentile   latency
  ----------   -------
           5      7894
          10      9496
          25     12561
          30     15295
          40     18244
          50     21229
          60     27556
          75     30147
          80     31047
          90     32859
          95     33799

  Total 2M hugepages allocated = 383859 (749G worth of hugepages out of 762G total free => 98% of free memory could be allocated as hugepages).

- With 5.6.0-rc3 + this patch, with proactiveness=20 (sysctl -w vm.compaction_proactiveness=20):

  percentile   latency
  ----------   -------
           5         2
          10         2
          25         3
          30         3
          40         3
          50         4
          60         4
          75         4
          80         4
          90         5
          95       429

  Total 2M hugepages allocated = 384105 (750G worth of hugepages out of 762G total free => 98% of free memory could be allocated as hugepages).

2. JAVA heap allocation

In this test, we first fragment memory using the same method as for (1). Then, we start a Java process with a heap size set to 700G and request the heap to be allocated with THP hugepages. We also set THP to madvise to allow hugepage backing of this heap:

  /usr/bin/time java -Xms700G -Xmx700G -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch

The above command allocates 700G of Java heap using hugepages.

- With vanilla 5.6.0-rc3: 17.39user 1666.48system 27:37.89elapsed
- With 5.6.0-rc3 + this patch, with proactiveness=20: 8.35user 194.58system 3:19.62elapsed

Elapsed time remains around 3:15 as proactiveness is further increased. Note that proactive compaction happens throughout the runtime of these workloads. The situation of a one-time compaction, sufficient to supply hugepages for the following allocation stream, can probably happen for more extreme proactiveness values, like 80 or 90. In the above Java workload, proactiveness is set to 20. The test starts with a node's score of 80 or higher, depending on the delay between the fragmentation step and starting the benchmark, which gives more or less time for the initial round of compaction. As the benchmark consumes hugepages, the node's score quickly rises above the high threshold (90) and proactive compaction starts again, which brings the score back down to the low threshold level (80). Repeat. bpftrace also confirms proactive compaction running 20+ times during the runtime of this Java benchmark. kcompactd threads consume 100% of one of the CPUs while they try to bring a node's score within the thresholds.

Backoff behavior
================
The above workloads produce a memory state which is easy to compact. However, if memory is filled with unmovable pages, proactive compaction should essentially back off. To test this aspect:
- Created a kernel driver that allocates almost all memory as hugepages followed by freeing the first 3/4 of each hugepage.
- Set proactiveness=40.
- Note that proactive_compact_node() is deferred the maximum number of times, with HPAGE_FRAG_CHECK_INTERVAL_MSEC of wait between each check (=> ~30 seconds between retries).

[1] https://patchwork.kernel.org/patch/11098289/
[2] https://lore.kernel.org/linux-mm/20161230131412.GI13301@dhcp22.suse.cz/
[3] https://lwn.net/Articles/817905/
Signed-off-by:
Nitin Gupta <nigupta@nvidia.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Tested-by:
Oleksandr Natalenko <oleksandr@redhat.com> Reviewed-by:
Vlastimil Babka <vbabka@suse.cz> Reviewed-by:
Khalid Aziz <khalid.aziz@oracle.com> Reviewed-by:
Oleksandr Natalenko <oleksandr@redhat.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Khalid Aziz <khalid.aziz@oracle.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Matthew Wilcox <willy@infradead.org> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Cc: Nitin Gupta <ngupta@nitingupta.dev> Cc: Oleksandr Natalenko <oleksandr@redhat.com> Link: http://lkml.kernel.org/r/20200616204527.19185-1-nigupta@nvidia.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
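A worked sketch of the proactiveness-to-threshold translation described above; the helper name is made up for illustration and is not the kernel's internal function.

  /* Illustrative translation (not the kernel's internal code) of
   * vm.compaction_proactiveness into the [low, high] fragmentation
   * score thresholds described above. */
  #include <stdio.h>

  static void frag_score_thresholds(unsigned int proactiveness,
                                    unsigned int *low, unsigned int *high)
  {
          *low = 100 - proactiveness;     /* default 20 -> low = 80  */
          *high = *low + 10;              /* default 20 -> high = 90 */
  }

  int main(void)
  {
          unsigned int low, high;

          frag_score_thresholds(20, &low, &high);
          printf("low=%u high=%u\n", low, high);  /* low=80 high=90 */
          return 0;
  }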
-
Roman Gushchin authored
Percpu memory can represent a noticeable chunk of the total memory consumption, especially on big machines with many CPUs. Let's track percpu memory usage for each memcg and display it in memory.stat. A percpu allocation is usually scattered over multiple pages (and nodes), and can be significantly smaller than a page. So let's add a byte-sized counter on the memcg level: MEMCG_PERCPU_B. The byte-sized vmstat infrastructure created for slabs can be reused as-is for the percpu case. [guro@fb.com: v3] Link: http://lkml.kernel.org/r/20200623184515.4132564-4-guro@fb.com Signed-off-by:
Roman Gushchin <guro@fb.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Shakeel Butt <shakeelb@google.com> Acked-by:
Dennis Zhou <dennis@kernel.org> Acked-by:
Johannes Weiner <hannes@cmpxchg.org> Cc: Christoph Lameter <cl@linux.com> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Michal Hocko <mhocko@kernel.org> Cc: Pekka Enberg <penberg@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Tobin C. Harding <tobin@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Waiman Long <longman@redhat.com> Cc: Bixuan Cui <cuibixuan@huawei.com> Cc: Michal Koutný <mkoutny@suse.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Link: http://lkml.kernel.org/r/20200608230819.832349-4-guro@fb.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Liam Beguin authored
The PCA2129 is the automotive-grade version of the PCF2129. Add it to the list of compatibles. Signed-off-by:
Liam Beguin <lvb@xiphos.com> Signed-off-by:
Alexandre Belloni <alexandre.belloni@bootlin.com> Reviewed-by:
Bruno Thomsen <bruno.thomsen@gmail.com> Link: https://lore.kernel.org/r/20200630024211.12782-2-liambeguin@gmail.com
-
- Aug 11, 2020
-
-
Lukas Bulwahn authored
Documentation generation warns: Documentation/translations/zh_CN/admin-guide/index.rst:3: WARNING: undefined label: documentation/admin-guide/index.rst Use doc reference for .rst files to resolve the warning. Fixes: 37a607cf ("doc/zh_CN: add admin-guide index") Signed-off-by:
Lukas Bulwahn <lukas.bulwahn@gmail.com> Link: https://lore.kernel.org/r/20200802161956.18268-1-lukas.bulwahn@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Lukas Bulwahn authored
Documentation generation warns: Documentation/translations/zh_CN/admin-guide/cpu-load.rst:1: WARNING: Title overline too short. Extend title heading markup by one. It was just off by one. Fixes: e210c66d ("doc/zh_CN: add cpu-load Chinese version") Signed-off-by:
Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by:
Tao Zhou <ouwen210@hotmail.com> Link: https://lore.kernel.org/r/20200802162101.18875-1-lukas.bulwahn@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Stephen Kitt authored
All the drivers have long since been upgraded, and all the important information here is also included in the "Implementing I2C device drivers" guide. Signed-off-by:
Stephen Kitt <steve@sk2.org> Reviewed-by:
Wolfram Sang <wsa@kernel.org> Link: https://lore.kernel.org/r/20200806161456.8680-1-steve@sk2.org Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Billy Wilson authored
A table lists the 5.2 stable release date as September 15, but it was released on July 7. This may confuse a reader who is trying to understand the stable update release cycle. Signed-off-by:
Billy Wilson <billy_wilson@byu.edu> Link: https://lore.kernel.org/r/20200806231754.7735-1-billy_wilson@byu.edu Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Remi Andruccioli authored
"The capability fags" should be "The capability flags". In rst markup, a incorrect markup expression is causing bad rendering in Sphinx output. Replace the erroneous single quote by a backquote. Signed-off-by:
Remi Andruccioli <remi.andruccioli@gmail.com> Link: https://lore.kernel.org/r/20200808163123.17643-1-remi.andruccioli@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Randy Dunlap authored
Documentation/admin-guide/kernel-parameters.rst includes a legend telling us what configurations or hardware platforms are relevant for certain boot options. For X86, it is spelled "X86" and for x86_64, it is spelled "X86-64", so make corrections for those. Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: linux-doc@vger.kernel.org Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: x86@kernel.org Link: https://lore.kernel.org/r/20200810024941.30231-1-rdunlap@infradead.org Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Tobias Klauser authored
Support for these was added by the following commits: f2c9699f ("riscv: Add STACKPROTECTOR supported"), 3c469798 ("riscv: Enable LOCKDEP_SUPPORT & fixup TRACE_IRQFLAGS_SUPPORT"), ed48b297 ("riscv: Enable context tracking"), and cbb3d91d ("riscv: Add kmemleak support"). Signed-off-by:
Tobias Klauser <tklauser@distanz.ch> Link: https://lore.kernel.org/r/20200810095000.32092-1-tklauser@distanz.ch Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Sumera Priyadarsini authored
Modify coccinelle documentation to further clarify the usage of the makefile C variable by coccicheck. Signed-off-by:
Sumera Priyadarsini <sylphrenadin@gmail.com> Acked-by:
Julia Lawall <julia.lawall@inria.fr> Link: https://lore.kernel.org/r/20200811002350.5553-1-sylphrenadin@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Puranjay Mohan authored
Replace :c:func: with func() as the previous usage is deprecated. Signed-off-by:
Puranjay Mohan <puranjay12@gmail.com> Reviewed-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Link: https://lore.kernel.org/r/20200810183019.22170-1-puranjay12@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Puranjay Mohan authored
Replace :c:func: with func() as the previous usage is deprecated. Signed-off-by:
Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20200810183613.25643-1-puranjay12@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Puranjay Mohan authored
Replace :c:func: with func() as the previous usage is deprecated. Signed-off-by:
Puranjay Mohan <puranjay12@gmail.com> Link: https://lore.kernel.org/r/20200810184828.29297-1-puranjay12@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Bryan Brattlof authored
emumerated -> enumerated Signed-off-by:
Bryan Brattlof <hello@bryanbrattlof.com> Link: https://lore.kernel.org/r/87lfili2d8.fsf@bryanbrattlof.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Rafael J. Wysocki authored
Allow intel_pstate to work in the passive mode with HWP enabled and make it set the HWP minimum performance limit (HWP floor) to the P-state value given by the target frequency supplied by the cpufreq governor, so as to prevent the HWP algorithm and the CPU scheduler from working against each other, at least when the schedutil governor is in use, and update the intel_pstate documentation accordingly. Among other things, this allows utilization clamps to be taken into account, at least to a certain extent, when intel_pstate is in use and makes it more likely that sufficient capacity for deadline tasks will be provided. After this change, the resulting behavior of an HWP system with intel_pstate in the passive mode should be close to the behavior of the analogous non-HWP system with intel_pstate in the passive mode, except that the HWP algorithm is generally allowed to make the CPU run at a frequency above the floor P-state set by intel_pstate in the entire available range of P-states, while without HWP a CPU can run in a P-state above the requested one if the latter falls into the range of turbo P-states (referred to as the turbo range) or if the P-states of all CPUs in one package are coordinated with each other at the hardware level. [Note that in principle the HWP floor may not be taken into account by the processor if it falls into the turbo range, in which case the processor has a license to choose any P-state, either below or above the HWP floor, just like a non-HWP processor in the case when the target P-state falls into the turbo range.] With this change applied, intel_pstate in the passive mode assumes complete control over the HWP request MSR and concurrent changes of that MSR (eg. via the direct MSR access interface) are overridden by it. Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by:
Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Reviewed-by:
Francisco Jerez <currojerez@riseup.net>
-
Johannes Thumshirn authored
Update the zonefs documentation to reflect the difference between a zone's size and its capacity. The maximum file size in zonefs is the zone's capacity; for ZBC and ZAC based devices, which do not have a separate zone capacity, the zone capacity is equal to the zone size. Signed-off-by:
Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Damien Le Moal <damien.lemoal@wdc.com>
-
- Aug 09, 2020
-
-
Masahiro Yamada authored
To build host programs, you need to add the program names to 'hostprogs' to use the necessary build rule, but it is not enough to build them because there is no dependency. There are two types of host programs: built as the prerequisite of another (e.g. gen_crc32table in lib/Makefile), or always built when Kbuild visits the Makefile (e.g. genksyms in scripts/genksyms/Makefile). The latter is typical in Makefiles under scripts/, which contains host programs globally used during the kernel build. To build them, you need to add them to both 'hostprogs' and 'always-y'. This commit adds hostprogs-always-y as a shorthand. The same applies to user programs. net/bpfilter/Makefile builds bpfilter_umh on demand, hence always-y is unneeded. In contrast, programs under samples/ are added to both 'userprogs' and 'always-y' so they are always built when Kbuild visits the Makefiles. userprogs-always-y works as a shorthand. Signed-off-by:
Masahiro Yamada <masahiroy@kernel.org> Acked-by:
Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
-
Alexander A. Klimov authored
Rationale: Reduces attack surface on kernel devs opening the links for MITM as HTTPS traffic is much harder to manipulate. Deterministic algorithm: For each file: If not .svg: For each line: If doesn't contain `\bxmlns\b`: For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`: If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`: If both the HTTP and HTTPS versions return 200 OK and serve the same content: Replace HTTP with HTTPS. Signed-off-by:
Alexander A. Klimov <grandmaster@al2klimov.de> Signed-off-by:
Masahiro Yamada <masahiroy@kernel.org>
-
Masahiro Yamada authored
CFLAGS_REMOVE_<file>.o filters out flags when compiling a particular object, but there is no convenient way to do that for every object in a directory. Add ccflags-remove-y and asflags-remove-y to make it easy. Use ccflags-remove-y to clean up some Makefiles. The add/remove order works as follows: [1] KBUILD_CFLAGS specifies compiler flags used globally [2] ccflags-y adds compiler flags for all objects in the current Makefile [3] ccflags-remove-y removes compiler flags for all objects in the current Makefile (New feature) [4] CFLAGS_<file> adds compiler flags per file. [5] CFLAGS_REMOVE_<file> removes compiler flags per file. Having [3] before [4] allows us to remove flags from most (but not all) objects in the current Makefile. For example, kernel/trace/Makefile removes $(CC_FLAGS_FTRACE) from all objects in the directory, then adds it back to trace_selftest_dynamic.o and CFLAGS_trace_kprobe_selftest.o. The same applies to lib/livepatch/Makefile. Please note ccflags-remove-y has no effect on the sub-directories. In contrast, the previous notation also removed compiler flags from all the sub-directories. The following are not affected because they have no sub-directories: arch/arm/boot/compressed/ arch/powerpc/xmon/ arch/sh/ kernel/trace/ However, lib/ has several sub-directories. To keep the behavior, I added ccflags-remove-y to all Makefiles in subdirectories of lib/, except the following: lib/vdso/Makefile - Kbuild does not descend into this Makefile lib/raid6/test/Makefile - This is not used for the kernel build I think commit 2464a609 ("ftrace: do not trace library functions") excluded too much. In the next commit, I will remove ccflags-remove-y from the sub-directories of lib/. Suggested-by:
Sami Tolvanen <samitolvanen@google.com> Signed-off-by:
Masahiro Yamada <masahiroy@kernel.org> Acked-by:
Steven Rostedt (VMware) <rostedt@goodmis.org> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Acked-by: Brendan Higgins <brendanhiggins@google.com> (KUnit) Tested-by:
Anders Roxell <anders.roxell@linaro.org>
-
- Aug 07, 2020
-
-
Walter Wu authored
Generic KASAN will support recording the last two call_rcu() call stacks and printing them in the KASAN report, so the documentation needs to be updated. Signed-off-by:
Walter Wu <walter-zh.wu@mediatek.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Tested-by:
Dmitry Vyukov <dvyukov@google.com> Reviewed-by:
Dmitry Vyukov <dvyukov@google.com> Reviewed-by:
Andrey Konovalov <andreyknvl@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Alexander Potapenko <glider@google.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Matthias Brugger <matthias.bgg@gmail.com> Cc: "Paul E . McKenney" <paulmck@kernel.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Joel Fernandes <joel@joelfernandes.org> Link: http://lkml.kernel.org/r/20200601051111.1359-1-walter-zh.wu@mediatek.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Marco Elver authored
Updates the recently changed compiler requirements for KASAN. In particular, we require GCC >= 8.3.0, and add a note that Clang 11 supports OOB detection of globals. Fixes: 7b861a53 ("kasan: Bump required compiler version") Fixes: acf7b0bf ("kasan: Fix required compiler version") Signed-off-by:
Marco Elver <elver@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Andrey Konovalov <andreyknvl@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Walter Wu <walter-zh.wu@mediatek.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Daniel Axtens <dja@axtens.net> Link: http://lkml.kernel.org/r/20200629104157.3242503-2-elver@google.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Rapoport authored
After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent functions that call memory_present() for each region in memblock.memory: sparse_memory_present_with_active_regions() and memblocks_present(). Moreover, all architectures have a call to either of these functions preceding the call to sparse_init(), and in most cases they are called one after the other. Mark the regions from memblock.memory as present during sparse_init() by making sparse_init() call memblocks_present(), make the memblocks_present() and memory_present() functions static, and remove the redundant sparse_memory_present_with_active_regions() function. Also remove the no longer required HAVE_MEMORY_PRESENT configuration option. Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Anshuman Khandual authored
There are many instances where vmemmap allocation is switched between regular memory and device memory just based on whether an altmap is available or not. vmemmap_alloc_block_buf() is used on various platforms to allocate vmemmap mappings. Let's also enable it to handle altmap based device memory allocation along with the existing regular memory allocations. This will help in avoiding the altmap based allocation switch in many places. To summarize, there are two different ways to call vmemmap_alloc_block_buf(): vmemmap_alloc_block_buf(size, node, NULL) /* Allocate from system RAM */ vmemmap_alloc_block_buf(size, node, altmap) /* Allocate from altmap */ This converts altmap_alloc_block_buf() into a static function, drops its entry from the header, and updates Documentation/vm/memory-model.rst. Suggested-by:
Robin Murphy <robin.murphy@arm.com> Signed-off-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Tested-by:
Jia He <justin.he@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Will Deacon <will@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Hsin-Yi Wang <hsinyi@chromium.org> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@linux.ibm.com> Cc: Palmer Dabbelt <palmer@dabbelt.com> Cc: Paul Walmsley <paul.walmsley@sifive.com> Cc: Pavel Tatashin <pasha.tatashin@soleen.com> Cc: Steve Capper <steve.capper@arm.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Yu Zhao <yuzhao@google.com> Link: http://lkml.kernel.org/r/1594004178-8861-3-git-send-email-anshuman.khandual@arm.com Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
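A minimal kernel-side sketch of the unified call described above; the wrapper is hypothetical, but the three-argument form and the NULL/altmap distinction are taken from the log.

  /* Hypothetical kernel-side wrapper (not from the patch) showing the
   * unified call: the third argument selects the backing memory. */
  #include <linux/mm.h>
  #include <linux/memremap.h>

  static void *alloc_vmemmap_page_buf(int node, struct vmem_altmap *altmap)
  {
          /* altmap == NULL: allocate from system RAM;
           * altmap != NULL: allocate from the device-provided altmap. */
          return vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
  }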
-