  1. Jun 13, 2020
    • treewide: replace '---help---' in Kconfig files with 'help' · a7f7f624
      Masahiro Yamada authored
      
      
      Since commit 84af7a61 ("checkpatch: kconfig: prefer 'help' over
      '---help---'"), the number of '---help---' has been gradually
      decreasing, but there are still more than 2400 instances.
      
      This commit finishes the conversion. While touching these lines,
      I also fixed the indentation.

      A variety of indentation styles were found:
      
        a) 4 spaces + '---help---'
        b) 7 spaces + '---help---'
        c) 8 spaces + '---help---'
        d) 1 space + 1 tab + '---help---'
        e) 1 tab + '---help---'    (correct indentation)
        f) 1 tab + 1 space + '---help---'
        g) 1 tab + 2 spaces + '---help---'
      
      In order to convert all of them to 1 tab + 'help', I ran the
      following command:
      
        $ find . -name 'Kconfig*' | xargs sed -i 's/^[[:space:]]*---help---/\thelp/'
      
      Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
  2. Jun 12, 2020
    • lib/lzo: fix ambiguous encoding bug in lzo-rle · b5265c81
      Dave Rodgman authored
      
      
      In some rare cases, for input data over 32 KB, lzo-rle could encode two
      different inputs to the same compressed representation, so that
      decompression is then ambiguous (i.e.  data may be corrupted - although
      zram is not affected because it operates over 4 KB pages).
      
      This modifies the compressor without changing the decompressor or the
      bitstream format, such that:
      
       - there is no change to how data produced by the old compressor is
         decompressed
      
       - an old decompressor will correctly decode data from the updated
         compressor
      
       - performance and compression ratio are not affected
      
       - we avoid introducing a new bitstream format
      
      In testing over 12.8M real-world files totalling 903 GB, three files
      were affected by this bug.  I also constructed 37M semi-random 64 KB
      files totalling 2.27 TB, and saw no affected files.  Finally I tested
      over files constructed to contain each of the ~1024 possible bad input
      sequences; for all of these cases, updated lzo-rle worked correctly.
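
      Such testing amounts to round-trip checks: compress with lzo-rle,
      decompress, and compare against the original input.  A minimal
      in-kernel sketch of that check is below; the helper name and buffer
      handling are illustrative only, and it assumes the
      lzorle1x_1_compress() and lzo1x_decompress_safe() entry points
      declared in <linux/lzo.h>.

        #include <linux/lzo.h>
        #include <linux/slab.h>
        #include <linux/string.h>
        #include <linux/errno.h>

        /* Sketch only: compress a buffer with lzo-rle and check that
         * decompression reproduces the original bytes. */
        static int lzorle_roundtrip_check(const u8 *src, size_t src_len)
        {
                size_t dst_len = lzo1x_worst_compress(src_len);
                size_t out_len = src_len;
                void *wrkmem = kmalloc(LZO1X_1_MEM_COMPRESS, GFP_KERNEL);
                u8 *dst = kmalloc(dst_len, GFP_KERNEL);
                u8 *out = kmalloc(src_len, GFP_KERNEL);
                int ret = -ENOMEM;

                if (!wrkmem || !dst || !out)
                        goto free;

                ret = lzorle1x_1_compress(src, src_len, dst, &dst_len, wrkmem);
                if (ret == LZO_E_OK)
                        ret = lzo1x_decompress_safe(dst, dst_len, out, &out_len);
                if (ret == LZO_E_OK &&
                    (out_len != src_len || memcmp(src, out, src_len)))
                        ret = -EINVAL;  /* round trip did not reproduce the input */
        free:
                kfree(out);
                kfree(dst);
                kfree(wrkmem);
                return ret;
        }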
      
      There is no significant impact to performance or compression ratio.
      
      Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave Rodgman <dave.rodgman@arm.com>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Chao Yu <yuchao0@huawei.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200507100203.29785-1-dave.rodgman@arm.com
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. Jun 06, 2020
    • rhashtable: Drop raw RCU deref in nested_table_free · 4a3084aa
      Herbert Xu authored
      
      
      This patch replaces some unnecessary uses of rcu_dereference_raw
      in the rhashtable code with rcu_dereference_protected.
      
      The top-level nested table entry is only marked as RCU because it
      shares the same type as the tree entries underneath it.  So it
      doesn't need any RCU protection.
      
      We also don't need RCU protection when we're freeing a nested RCU
      table, because by this stage we are long past the memory barrier
      that followed the last point at which anyone could change it.
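
      In a minimal sketch of that pattern (the node type and helper below
      are hypothetical, standing in for union nested_table in
      lib/rhashtable.c; only the choice of RCU accessor mirrors the patch):

        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        /* Hypothetical nested-node type used only for this sketch. */
        struct nt_node {
                struct nt_node __rcu *child;
        };

        /* Free a chain of nodes that is already unreachable.  Nothing can
         * still be updating it here, so rcu_dereference_protected() with a
         * true condition documents that guarantee, instead of
         * rcu_dereference_raw(), which silently skips lockdep checking. */
        static void nt_node_free(struct nt_node *node)
        {
                struct nt_node *child = rcu_dereference_protected(node->child, 1);

                if (child)
                        nt_node_free(child);
                kfree(node);
        }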
      
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. Jun 04, 2020
    • kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE · adb72ae1
      Daniel Axtens authored
      Patch series "Fix some incompatibilites between KASAN and FORTIFY_SOURCE", v4.
      
      3 KASAN self-tests fail on a kernel with both KASAN and FORTIFY_SOURCE:
      memchr, memcmp and strlen.
      
      When FORTIFY_SOURCE is on, a number of functions are replaced with
      fortified versions, which attempt to check the sizes of the operands.
      However, these functions often directly invoke __builtin_foo() once they
      have performed the fortify check.  The compiler can detect that the
      results of these functions are not used, and knows that they have no other
      side effects, and so can eliminate them as dead code.
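
      To make that concrete, a simplified sketch of a fortified memchr() is
      shown below (the real wrapper in include/linux/string.h is more
      involved; fortified_memchr is an illustrative name):

        #include <stddef.h>

        extern void fortify_panic(const char *name) __attribute__((noreturn));

        /* Simplified FORTIFY_SOURCE-style wrapper, for illustration only. */
        static inline __attribute__((always_inline))
        void *fortified_memchr(const void *p, int c, size_t size)
        {
                size_t p_size = __builtin_object_size(p, 0);

                if (p_size < size)
                        fortify_panic(__func__);
                /* Once the size check is folded away, the wrapper reduces to
                 * __builtin_memchr(); if the result is unused, the compiler
                 * may drop the whole call as dead code, so the out-of-bounds
                 * access that KASAN should catch never happens. */
                return __builtin_memchr(p, c, size);
        }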
      
      Why are only memchr, memcmp and strlen affected?
      ================================================
      
      Of string and string-like functions, kasan_test tests:
      
       * strchr  ->  not affected, no fortified version
       * strrchr ->  likewise
       * strcmp  ->  likewise
       * strncmp ->  likewise
      
       * strnlen ->  not affected, the fortify source implementation calls the
                     underlying strnlen implementation which is instrumented, not
                     a builtin
      
       * strlen  ->  affected, the fortify source implementation calls a __builtin
                     version which the compiler can determine is dead.
      
       * memchr  ->  likewise
       * memcmp  ->  likewise
      
       * memset ->   not affected, the compiler knows that memset writes to its
      	       first argument and therefore is not dead.
      
      Why does this not affect the functions normally?
      ================================================
      
      In string.h, these functions are not marked as __pure, so the compiler
      cannot know that they do not have side effects.  If relevant functions are
      marked as __pure in string.h, we see the following warnings and the
      functions are elided:
      
      lib/test_kasan.c: In function `kasan_memchr':
      lib/test_kasan.c:606:2: warning: statement with no effect [-Wunused-value]
        memchr(ptr, '1', size + 1);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~
      lib/test_kasan.c: In function `kasan_memcmp':
      lib/test_kasan.c:622:2: warning: statement with no effect [-Wunused-value]
        memcmp(ptr, arr, size+1);
        ^~~~~~~~~~~~~~~~~~~~~~~~
      lib/test_kasan.c: In function `kasan_strings':
      lib/test_kasan.c:645:2: warning: statement with no effect [-Wunused-value]
        strchr(ptr, '1');
        ^~~~~~~~~~~~~~~~
      ...
      
      This annotation would make sense to add and could be added at any
      point, so test_kasan.c should not rely on its absence.
      
      The fix
      =======
      
      Make the tests write the results of these pure function calls to a
      global, which keeps the calls live.  The strlen and memchr tests now
      pass.
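
      A minimal sketch of that pattern (the sink names and the helper are
      illustrative, not necessarily what lib/test_kasan.c ended up using):

        #include <linux/slab.h>
        #include <linux/string.h>

        /* Global (and volatile) sinks force the calls that feed them to
         * stay live even when FORTIFY_SOURCE reduces them to builtins.
         * kasan_int_result would play the same role for memcmp()/strlen(). */
        volatile void *kasan_ptr_result;
        volatile int kasan_int_result;

        static void kasan_memchr_demo(void)
        {
                size_t size = 24;
                char *ptr = kzalloc(size, GFP_KERNEL);

                if (!ptr)
                        return;
                /* Out-of-bounds read that KASAN should report; assigning
                 * the result keeps the call from being eliminated. */
                kasan_ptr_result = memchr(ptr, '1', size + 1);
                kfree(ptr);
        }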
      
      The memcmp test still fails to trigger, which is addressed in the next
      patch.
      
      [dja@axtens.net: drop patch 3]
        Link: http://lkml.kernel.org/r/20200424145521.8203-2-dja@axtens.net
      
      
      Fixes: 0c96350a ("lib/test_kasan.c: add tests for several string/memory API functions")
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: David Gow <davidgow@google.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Daniel Micay <danielmicay@gmail.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Link: http://lkml.kernel.org/r/20200423154503.5103-1-dja@axtens.net
      Link: http://lkml.kernel.org/r/20200423154503.5103-2-dja@axtens.net
      
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. Jun 03, 2020
    • lib/vdso: Force inlining of __cvdso_clock_gettime_common() · b91c8c42
      Christophe Leroy authored
      
      
      When adding gettime64() to a 32-bit architecture (namely powerpc/32),
      it was noticed that GCC no longer inlines
      __cvdso_clock_gettime_common(), because it is called twice
      (once by __cvdso_clock_gettime() and once by
      __cvdso_clock_gettime32()).

      This seriously degrades performance:
      
      Before the implementation of gettime64(), gettime() runs in:
      
        clock-gettime-monotonic-raw:	    vdso: 1003 nsec/call
        clock-gettime-monotonic-coarse:   vdso:  592 nsec/call
        clock-gettime-monotonic:          vdso:  942 nsec/call
      
      When adding a gettime64() entry point, the standard gettime()
      performance is degraded by 30% to 50%:
      
        clock-gettime-monotonic-raw:      vdso: 1300 nsec/call
        clock-gettime-monotonic-coarse:   vdso:  900 nsec/call
        clock-gettime-monotonic:          vdso: 1232 nsec/call
      
      Adding __always_inline() to __cvdso_clock_gettime_common() regains the
      original performance.
      
      In terms of code size, the inlining increases the code size by only 176
      bytes. This is in the noise for a kernel image.
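
      The change itself is just the annotation; a hedged sketch of the
      shape of the code (parameter lists simplified relative to
      lib/vdso/gettimeofday.c) is:

        /* Kernel context assumed; types come from <linux/types.h> and
         * <linux/time32.h>, the attributes from <linux/compiler.h>. */

        /* Force the shared helper into both callers so neither entry
         * point pays for an out-of-line call. */
        static __always_inline int
        __cvdso_clock_gettime_common(clockid_t clock, struct __kernel_timespec *ts)
        {
                /* ... common clock id handling and timestamp read ... */
                return 0;
        }

        static __maybe_unused int
        __cvdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts)
        {
                return __cvdso_clock_gettime_common(clock, ts);
        }

        static __maybe_unused int
        __cvdso_clock_gettime32(clockid_t clock, struct old_timespec32 *res)
        {
                struct __kernel_timespec ts;
                int ret = __cvdso_clock_gettime_common(clock, &ts);

                if (!ret) {
                        res->tv_sec = ts.tv_sec;
                        res->tv_nsec = ts.tv_nsec;
                }
                return ret;
        }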
      
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/1ab6a62c356c3bec35d1623563ef9c636205bcda.1588079622.git.christophe.leroy@c-s.fr