- Apr 20, 2018
-
Stephan Mueller authored
During freeing of the internal buffers used by the DRBG, set the pointer to NULL. It is possible that the context with the freed buffers is reused. In case of an error during initialization where the pointers do not yet point to allocated memory, the NULL value prevents a double free.
Cc: stable@vger.kernel.org
Fixes: 3cfc3b97 ("crypto: drbg - use aligned buffers")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Reported-by: syzbot+75397ee3df5c70164154@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
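Freeing a buffer and then clearing the pointer is the classic guard against this class of bug. A minimal userspace sketch of the same pattern (struct and names are illustrative, not the kernel's drbg code):

```c
#include <stdlib.h>
#include <string.h>

struct drbg_like_ctx {
    unsigned char *scratch;   /* internal working buffer */
};

/* Safe teardown: free() on NULL is a no-op, so clearing the
 * pointer makes a second trip through this path harmless. */
static void ctx_free_buffers(struct drbg_like_ctx *ctx)
{
    free(ctx->scratch);
    ctx->scratch = NULL;      /* prevents a double free on reuse */
}

static int ctx_init(struct drbg_like_ctx *ctx)
{
    memset(ctx, 0, sizeof(*ctx));     /* pointers start out NULL */
    ctx->scratch = malloc(4096);
    if (!ctx->scratch) {
        ctx_free_buffers(ctx);        /* safe even on partial init */
        return -1;
    }
    return 0;
}
```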
-
Eric Biggers authored
Commit eb02c38f ("crypto: api - Keep failed instances alive") sometimes causes crypto transform allocation to fail with ELIBBAD when multiple processes try to access encrypted files with fscrypt for the first time since boot. The problem is that the "request larval" for the algorithm is mistaken for an algorithm that failed its tests. Fix it by only returning ELIBBAD for "non-larval" algorithms. Also don't leak a reference to the algorithm.
Fixes: eb02c38f ("crypto: api - Keep failed instances alive")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- Apr 08, 2018
-
Eric Dumazet authored
syzbot reported: BUG: KMSAN: uninit-value in alg_bind+0xe3/0xd90 crypto/af_alg.c:162. We need to check addr_len before dereferencing sa (or uaddr).
Fixes: bb30b884 ("crypto: af_alg - whitelist mask and type")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: Stephan Mueller <smueller@chronox.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
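The fix is a length check before any field of the user-supplied address is read. A minimal sketch of that guard, assuming a bind-style handler; the struct is a simplified stand-in for sockaddr_alg, not the kernel's definition:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for struct sockaddr_alg. */
struct sockaddr_alg_like {
    uint16_t salg_family;
    uint8_t  salg_type[14];
    uint32_t salg_feat;
    uint32_t salg_mask;
    uint8_t  salg_name[64];
};

static int bind_handler(const void *uaddr, size_t addr_len)
{
    const struct sockaddr_alg_like *sa = uaddr;

    /* Validate the caller-supplied length before reading any
     * field; otherwise the dereferences below may read past the
     * buffer the caller actually provided. */
    if (addr_len < sizeof(*sa))
        return -EINVAL;

    if (sa->salg_mask != 0)   /* now safe to dereference */
        return -EINVAL;
    return 0;
}
```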
-
- Apr 07, 2018
-
Masahiro Yamada authored
Our convention is to distinguish file types by suffixes with a period as a separator. *-asn1.[ch] is a different pattern from other generated sources such as *.lex.c, *.tab.[ch], *.dtb.S, etc. More confusingly, files matching '-asn1.[ch]' are generated files, but '_asn1.[ch]' files are checked in:
net/netfilter/nf_conntrack_h323_asn1.c
include/linux/netfilter/nf_conntrack_h323_asn1.h
include/linux/sunrpc/gss_asn1.h
Rename generated files to *.asn1.[ch] for consistency.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
Clean up these patterns from the top Makefile to omit 'clean-files' in each Makefile.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Masahiro Yamada authored
These are common patterns where source files are parsed by the asn1_compiler.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- Mar 30, 2018
-
Herbert Xu authored
When we have an unaligned SG list entry where there is no leftover aligned data, the hash walk code will incorrectly return zero as if the entire SG list has been processed. This patch fixes it by moving onto the next page instead.
Reported-by: Eli Cooper <elicooper@gmx.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
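The failure mode is a sentinel collision: 0 means both "this step yielded no aligned bytes" and "the walk is finished". A toy model of the fixed loop (plain C, not the kernel's crypto_hash_walk code; all names are illustrative, and align must be a power of two):

```c
#include <stddef.h>

struct walk_entry { const unsigned char *data; size_t len; };

/* Returns the number of bytes available in the current step, or 0
 * only when the whole list is consumed.  The bug class: if an
 * entry's aligned remainder is 0 bytes, returning that 0 directly
 * looks like end-of-list to the caller.  The fix: advance to the
 * next entry and retry instead of returning early. */
static size_t walk_next(struct walk_entry **pos, struct walk_entry *end,
                        size_t align)
{
    while (*pos < end) {
        size_t usable = (*pos)->len & ~(align - 1); /* aligned part */
        if (usable)
            return usable;
        (*pos)++;   /* nothing aligned here: move on, don't return 0 */
    }
    return 0;       /* genuinely finished */
}
```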
-
Herbert Xu authored
The buffer rctx->ext contains potentially sensitive data and should be freed with kzfree.
Cc: <stable@vger.kernel.org>
Fixes: 700cb3f5 ("crypto: lrw - Convert to skcipher")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
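kzfree() zeroes the allocation before handing it back to the allocator, so key-derived material does not linger in freed heap memory. A userspace analogue of the pattern (a sketch, not the kernel helper):

```c
#include <stdlib.h>
#include <string.h>

/* Calling memset through a volatile function pointer stops the
 * compiler from proving the write is dead and optimizing the wipe
 * away; explicit_bzero() is the cleaner choice where available. */
static void *(*const volatile wipe)(void *, int, size_t) = memset;

static void zfree(void *buf, size_t len)
{
    if (!buf)
        return;
    wipe(buf, 0, len);   /* scrub sensitive contents first */
    free(buf);
}
```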
-
Andy Shevchenko authored
Deduplicate le32_to_cpu_array() and cpu_to_le32_array() by moving them to the generic header. No functional change implied.
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
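These helpers convert an array of 32-bit words between little-endian wire order and CPU order in place (a no-op on little-endian machines). A userspace sketch of the same idea, using glibc's <endian.h> rather than the kernel's byteorder headers:

```c
#include <endian.h>   /* le32toh / htole32 (glibc, musl) */
#include <stdint.h>

/* Little-endian buffer -> CPU order, word by word, in place. */
static void le32_to_cpu_array(uint32_t *buf, unsigned int words)
{
    while (words--) {
        *buf = le32toh(*buf);
        buf++;
    }
}

/* CPU order -> little-endian buffer, the inverse direction. */
static void cpu_to_le32_array(uint32_t *buf, unsigned int words)
{
    while (words--) {
        *buf = htole32(*buf);
        buf++;
    }
}
```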
-
Herbert Xu authored
This patch reverts commit 9c521a20 ("crypto: api - remove instance when test failed") and fixes the underlying problem in a different way. To recap, prior to the reverted commit, an instance that fails a self-test is kept around. However, it would satisfy any new lookups against its name and therefore the system may accumulate an unbounded number of failed instances for the same algorithm name. The reverted commit fixed it by unregistering the instance. However, this still does not prevent the creation of the same failed instance over and over again each time the name is looked up. This patch fixes it by keeping the failed instance around, just as we would if it were a normal algorithm. However, the lookup code has been updated so that we do not attempt to create another instance as long as this failed one is still registered. Of course, you could still force a new creation by deleting the instance from user-space. A new error (ELIBBAD) has been commandeered for this purpose and will be returned when all registered algorithms of a given name have failed the self-test.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
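A toy model of the resulting lookup policy, folding in the later Apr 20 fix that exempts "request larvals" from the ELIBBAD path. This is illustrative C, not the kernel's crypto api code; the struct, list, and names are all stand-ins:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct alg {
    const char *name;
    bool larval;      /* placeholder while a real alg is built */
    bool test_failed; /* self-test verdict for real algorithms */
    struct alg *next;
};

static int lookup(const struct alg *head, const char *name)
{
    bool failed_seen = false;

    for (const struct alg *a = head; a; a = a->next) {
        if (strcmp(a->name, name) != 0)
            continue;
        if (a->larval)
            continue;           /* a larval is not a failed alg */
        if (a->test_failed) {
            failed_seen = true; /* keep looking for a good one */
            continue;
        }
        return 0;               /* usable algorithm found */
    }
    /* ELIBBAD only when every registered alg of this name failed
     * its self-test; ENOENT lets the caller instantiate a new one. */
    return failed_seen ? -ELIBBAD : -ENOENT;
}
```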
-
Herbert Xu authored
The function crypto_alg_lookup is only used within the crypto API and should not be exported to modules. This patch marks it as a static function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The lookup function in crypto_type was only used for the implicit IV generators, which have been completely removed from the crypto API. This patch removes the lookup function as it is now useless.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- Mar 16, 2018
-
Ard Biesheuvel authored
In order to be able to test yield support under preempt, add a test vector for CRC-T10DIF that is long enough to take multiple iterations (and thus possible preemption between them) of the primary loop of the accelerated x86 and arm64 implementations.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
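The interesting property is purely the input length: it must exceed what one pass of the accelerated inner loop consumes, so the driver iterates and the preemption point between iterations gets exercised. A sketch of the vector's shape, using a simplified stand-in for testmgr's hash_testvec and a placeholder digest (both illustrative, not the real crypto/testmgr.h entry):

```c
#include <stddef.h>

/* Minimal stand-in for testmgr's hash_testvec. */
struct hash_testvec_like {
    const char *plaintext;
    size_t psize;
    const char *digest;   /* crc-t10dif digests are 2 bytes */
};

/* Zero-initialized at file scope; only the size matters here. */
static const char long_input[2048];

static const struct hash_testvec_like crct10dif_long_tv = {
    .plaintext = long_input,
    .psize     = sizeof(long_input),   /* spans several loop passes */
    .digest    = "\x00\x00",           /* placeholder, not a real CRC */
};
```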
-
Kees Cook authored
On the quest to remove all VLAs from the kernel[1], this switches to a pair of kmalloc regions instead of using the stack. This also moves the get_random_bytes() after all allocations (and drops the needless "nbytes" variable).
[1] https://lkml.org/lkml/2018/3/7/621
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
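The general VLA-removal pattern, sketched in plain C (not the patched kernel function; malloc stands in for kmalloc):

```c
#include <stdlib.h>
#include <string.h>

static int process(size_t len)
{
    /* Before (VLA): stack usage depended on the runtime 'len':
     *
     *     unsigned char buf[len];
     *
     * After: a heap allocation of the same size, checked and freed. */
    unsigned char *buf = malloc(len);
    if (!buf)
        return -1;

    /* Fill the buffer only after every allocation has succeeded,
     * mirroring the patch's reordering of get_random_bytes(). */
    memset(buf, 0xA5, len);

    /* ... work with buf ... */

    free(buf);
    return 0;
}
```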
-
Gilad Ben-Yossef authored
Add testmgr tests for the newly introduced SM4 ECB symmetric cipher.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Introduce the SM4 cipher algorithms (OSCCA GB/T 32907-2016). SM4 (GBT.32907-2016) is a cryptographic standard issued by the Organization of State Commercial Administration of China (OSCCA) as an authorized cryptographic algorithm for use within China. SMS4 was originally created for use in protecting wireless networks, and is mandated in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure) (GB.15629.11-2003).
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- Mar 09, 2018
-
David Howells authored
Remove the MN10300 arch as the hardware is defunct.
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: linux-am33-list@redhat.com
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
James Bottomley authored
Apparently the ecdh use case was in bluetooth, which always has single-element scatterlists, so the ecdh module was hard-coded to expect them. Now that we're using this in TPM, we need multi-element scatterlists, so remove this limitation.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
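One common way to drop a single-entry assumption is to linearize however many entries the caller supplied instead of reading the first entry's mapping directly. A kernel-flavored sketch of that idea using the stock scatterlist helpers (not the ecdh patch itself; the wrapper is hypothetical):

```c
#include <linux/errno.h>
#include <linux/scatterlist.h>

/* Copy 'len' bytes out of an SG list of any shape into a linear
 * buffer, rather than assuming sg_virt() of the first entry covers
 * the whole input. */
static int copy_in(struct scatterlist *sg, void *dst, unsigned int len)
{
	if (sg_copy_to_buffer(sg, sg_nents(sg), dst, len) != len)
		return -EINVAL;
	return 0;
}
```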
-
James Bottomley authored
TPM security routines require encryption and decryption with AES in CFB mode, so add it to the Linux Crypto schemes. CFB is basically a one-time pad where the pad is generated initially from the encrypted IV and then subsequently from the encrypted previous block of ciphertext. The pad is XOR'd into the plain text to get the final ciphertext. https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#CFB
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
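In symbols: pad_0 = E(IV), pad_i = E(C_{i-1}), C_i = P_i XOR pad_i, which is why decryption also uses the cipher's encrypt direction. A full-block sketch (block_encrypt is a hypothetical stand-in for AES with an already-scheduled key; no partial-block handling):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK 16  /* AES block size */

/* Hypothetical single-block encrypt primitive. */
void block_encrypt(uint8_t out[BLOCK], const uint8_t in[BLOCK]);

/* In-place CFB encryption over whole blocks. */
static void cfb_encrypt(uint8_t *buf, size_t nblocks,
                        const uint8_t iv[BLOCK])
{
    uint8_t pad[BLOCK];
    uint8_t prev[BLOCK];

    memcpy(prev, iv, BLOCK);
    for (size_t i = 0; i < nblocks; i++, buf += BLOCK) {
        block_encrypt(pad, prev);     /* pad from IV, then prev C */
        for (int j = 0; j < BLOCK; j++)
            buf[j] ^= pad[j];         /* C = P XOR pad */
        memcpy(prev, buf, BLOCK);     /* feed ciphertext back in */
    }
}
```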
-
- Mar 02, 2018
-
Eric Biggers authored
All users of ablk_helper have been converted over to crypto_simd, so remove ablk_helper.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all users of lrw_crypt() have been removed in favor of the LRW template wrapping an ECB mode algorithm, remove lrw_crypt(). Also remove crypto/lrw.h as that is no longer needed either; and fold 'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into setkey(), and lrw_free_table() into exit_tfm().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Now that all users of xts_crypt() have been removed in favor of the XTS template wrapping an ECB mode algorithm, remove xts_crypt().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AESNI AVX and AESNI AVX2 implementations of Camellia from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the x86 asm implementation of Camellia from the (deprecated) blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-camellia-asm algorithm which did this. Users who request xts(camellia) and previously would have gotten xts-camellia-asm will now get xts(ecb-camellia-asm) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
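Callers are unaffected because they ask for the generic name and let the crypto API resolve it; this and the following removals only change which concrete implementation the template composes over. A kernel-flavored sketch of such a request (the wrapper function is hypothetical; crypto_alloc_skcipher is the real API):

```c
#include <crypto/skcipher.h>

/* Requesting "xts(camellia)" used to bind to the monolithic
 * xts-camellia-asm driver; after this series the XTS template
 * composes over the fastest registered ECB implementation,
 * e.g. xts(ecb-camellia-asm), with no caller-visible change. */
static struct crypto_skcipher *get_xts_camellia(void)
{
	return crypto_alloc_skcipher("xts(camellia)", 0, 0);
}
```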
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-asm algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-asm will now get lrw(ecb-camellia-asm) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni-avx2 algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni-avx2 will now get lrw(ecb-camellia-aesni-avx2) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-camellia-aesni algorithm which did this. Users who request lrw(camellia) and previously would have gotten lrw-camellia-aesni will now get lrw(ecb-camellia-aesni) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the x86 asm implementation of Triple DES from the (deprecated) blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the x86 asm implementation of Blowfish from the (deprecated) blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-cast6-avx algorithm which did this. Users who request lrw(cast6) and previously would have gotten lrw-cast6-avx will now get lrw(ecb-cast6-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX implementation of CAST5 from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX implementation of Twofish from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-avx algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-avx will now get lrw(ecb-twofish-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the 3-way implementation of Twofish from the (deprecated) blkcipher interface over to the skcipher interface.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The XTS template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic XTS code themselves via xts_crypt(). Remove the xts-twofish-3way algorithm which did this. Users who request xts(twofish) and previously would have gotten xts-twofish-3way will now get xts(ecb-twofish-3way) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-twofish-3way algorithm which did this. Users who request lrw(twofish) and previously would have gotten lrw-twofish-3way will now get lrw(ecb-twofish-3way) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Convert the AVX and AVX2 implementations of Serpent from the (deprecated) ablkcipher and blkcipher interfaces over to the skcipher interface. Note that this includes replacing the use of ablk_helper with crypto_simd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The LRW template now wraps an ECB mode algorithm rather than the block cipher directly. Therefore it is now redundant for crypto modules to wrap their ECB code with generic LRW code themselves via lrw_crypt(). Remove the lrw-serpent-avx algorithm which did this. Users who request lrw(serpent) and previously would have gotten lrw-serpent-avx will now get lrw(ecb-serpent-avx) instead, which is just as fast.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-