- Nov 12, 2010
Steffen Klassert authored
kobject_put is called from padata_free for the padata kobject. The kobject's release function frees the padata instance, so don't call kobject_put for the padata kobject from pcrypt. Reported-and-tested-by:
Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
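A minimal sketch of the ownership rule this fix establishes (helper names approximate; the real code lives in kernel/padata.c and crypto/pcrypt.c):

    /* The kobject release handler performs the final free of the instance. */
    static void padata_sysfs_release(struct kobject *kobj)
    {
            struct padata_instance *pinst =
                    container_of(kobj, struct padata_instance, kobj);

            __padata_free(pinst);
    }

    /* Callers therefore drop their reference exactly once: */
    padata_free(pinst);     /* drops the last kobject reference */
    /* ...and must NOT also call kobject_put(&pinst->kobj) themselves,
     * as pcrypt previously did. */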
-
- Oct 26, 2010
Peter Zijlstra authored
Ensure kmap_atomic() usage is strictly nested Signed-off-by:
Peter Zijlstra <a.p.zijlstra@chello.nl> Reviewed-by:
Rik van Riel <riel@redhat.com> Acked-by:
Chris Metcalf <cmetcalf@tilera.com> Cc: David Howells <dhowells@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Miller <davem@davemloft.net> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
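A sketch of what "strictly nested" means in practice (note that the kmap_atomic() signature has varied across kernel versions; older kernels also took a KM_* slot argument):

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void copy_between_pages(struct page *dst, struct page *src,
                                   size_t len)
    {
            void *s = kmap_atomic(src);
            void *d = kmap_atomic(dst);

            memcpy(d, s, len);

            kunmap_atomic(d);       /* unmap in reverse order of mapping */
            kunmap_atomic(s);       /* unmapping s first would break nesting */
    }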
-
- Oct 07, 2010
Dan Williams authored
The prompt for "Self test for hardware accelerated raid6 recovery" does not belong in the top level configuration menu. All the options in crypto/async_tx/Kconfig are selected and do not depend on CRYPTO. Kconfig.debug seems like a reasonable fit. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by:
Dan Williams <dan.j.williams@intel.com>
-
David Howells authored
Rename the PC2() symbol in the generic DES crypto module to be prefixed with DES_ to avoid collision with arch code (Blackfin in this case). Signed-off-by:
David Howells <dhowells@redhat.com> Signed-off-by:
Mike Frysinger <vapier@gentoo.org>
-
- Sep 20, 2010
Adrian Hoban authored
This patch adds AEAD support into the cryptd framework. Having AEAD support in cryptd enables crypto drivers that use the AEAD interface type (such as the patch for AEAD based RFC4106 AES-GCM implementation using Intel New Instructions) to leverage cryptd for asynchronous processing. Signed-off-by:
Adrian Hoban <adrian.hoban@intel.com> Signed-off-by:
Tadeusz Struk <tadeusz.struk@intel.com> Signed-off-by:
Gabriele Paoloni <gabriele.paoloni@intel.com> Signed-off-by:
Aidan O'Mahony <aidan.o.mahony@intel.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
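A hedged sketch of how a driver might use the new interface, based on the cryptd AEAD helpers this patch introduces (the algorithm name is illustrative, taken from the aesni-intel use case; exact signatures may differ by kernel version):

    #include <crypto/cryptd.h>

    static int example_setup(void)      /* hypothetical caller */
    {
            struct cryptd_aead *ctx;

            /* Wrap an underlying AEAD implementation so its requests can
             * be processed asynchronously in process context via cryptd. */
            ctx = cryptd_alloc_aead("__driver-gcm-aes-aesni", 0, 0);
            if (IS_ERR(ctx))
                    return PTR_ERR(ctx);

            /* ... submit AEAD requests against the cryptd-wrapped tfm ... */

            cryptd_free_aead(ctx);
            return 0;
    }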
-
- Sep 12, 2010
Justin P. Mattock authored
Below is a patch to update the broken web addresses in crypto/* that I could locate. Some are simple typos that needed fixing, and some had changed location altogether. Let me know if any of them need further changes. Signed-off-by:
Justin P. Mattock <justinmattock@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Sep 03, 2010
Chuck Ebbert authored
Signed-off-by:
Chuck Ebbert <cebbert@redhat.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Aug 06, 2010
Herbert Xu authored
On Thu, Aug 05, 2010 at 07:01:21PM -0700, Linus Torvalds wrote:
> On Thu, Aug 5, 2010 at 6:40 PM, Herbert Xu <herbert@gondor.hengli.com.au> wrote:
> >
> > -config CRYPTO_MANAGER_TESTS
> > -  bool "Run algolithms' self-tests"
> > -  default y
> > -  depends on CRYPTO_MANAGER2
> > +config CRYPTO_MANAGER_DISABLE_TESTS
> > +  bool "Disable run-time self tests"
> > +  depends on CRYPTO_MANAGER2 && EMBEDDED
>
> Why do you still want to force-enable those tests? I was going to
> complain about the "default y" anyway, now I'm _really_ complaining,
> because you've now made it impossible to disable those tests. Why?

As requested, this patch sets the default to y and removes the EMBEDDED dependency. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch fixes a serious bug in the test-disabling patch, where it could cause a spurious load of the cryptomgr module even when it is compiled in. It also negates the test-disabling option so that its absence causes tests to be enabled. The Kconfig option is also now behind EMBEDDED. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Szilveszter Ördög authored
If a scatterwalk chain contains an entry with an unaligned offset then hash_walk_next() will cut off the next step at the next alignment point. However, if the entry ends before the next alignment point then we end up in an endless loop, which leads to a kernel oops. Fix this by checking whether the next alignment point is before the end of the current entry. Signed-off-by:
Szilveszter Ördög <slipszi@gmail.com> Acked-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
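The essence of the fix, sketched (simplified from crypto/ahash.c; variable names approximate):

    unsigned int nbytes = min(walk->entrylen,
                              ((unsigned int)PAGE_SIZE) - offset);

    if (offset & alignmask) {
            unsigned int unaligned = alignmask + 1 - (offset & alignmask);

            /* Only cut the step at the alignment point if the current
             * entry actually extends past it; otherwise the walk would
             * make no progress and loop forever. */
            if (nbytes > unaligned)
                    nbytes = unaligned;
    }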
-
- Jul 31, 2010
Steffen Klassert authored
The padata cpumask change notifier passes a padata_cpumask to the notifier chain. So we use this cpumask instead of asking padata for the cpumask. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Klassert authored
In the crypto layer an instance usually refers to a crypto instance. The struct pcrypt_instance is not related to a crypto instance; rather, it contains the padata information, so we rename it to padata_pcrypt. The functions that handle this struct are renamed accordingly. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Steffen Klassert authored
We rename padata_alloc to padata_alloc_possible because this function allocates a padata_instance and uses the cpu_possible mask for parallel and serial workers. We also rename __padata_alloc to padata_alloc to avoid exporting underscore-prefixed functions, which are considered private to padata. Users are updated accordingly. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Jul 26, 2010
Steffen Klassert authored
If the callback cpumask is empty, we crash with a division by zero when we try to calculate a callback cpu. So we don't update the callback cpu in pcrypt_do_parallel if the callback cpumask is empty. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
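A sketch of the guard (simplified; the real logic lives in pcrypt_do_parallel):

    /* An empty callback cpumask would make the modulo below divide by
     * zero, so keep the previously chosen callback CPU instead. */
    if (!cpumask_weight(cb_cpumask))
            goto out;

    cpu_index = seq_nr % cpumask_weight(cb_cpumask);    /* safe: weight > 0 */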
-
- Jul 19, 2010
Dan Kruchinin authored
Added a sysfs interface to pcrypt. The pcrypt subsystem now creates two sysfs directories with corresponding padata sysfs objects: /sys/kernel/pcrypt/[pencrypt|pdecrypt] Signed-off-by:
Dan Kruchinin <dkruchinin@acm.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Dan Kruchinin authored
The aim of this patch is to provide two separate cpumasks for padata parallel and serial workers respectively. This allows users to build finer-grained and more sophisticated padata configurations; for example, a user may bind parallel and serial workers to non-intersecting CPU groups to gain better performance. Each padata instance now also has a notifier chain for its cpumasks: if the parallel mask, the serial mask, or both are changed, all interested subsystems are notified. This is especially useful if a padata user selects the callback CPU according to the serial cpumask. Signed-off-by:
Dan Kruchinin <dkruchinin@acm.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Jul 14, 2010
Steffen Klassert authored
Returning -EINPROGRESS on success in padata_do_parallel was considered odd. This patch changes it to return zero on success. pcrypt, the only user of padata, is adapted to convert a return of zero to -EINPROGRESS within the crypto layer. This also removes the pcrypt fallback for the case where padata_do_parallel was called on a padata instance that was not running, as we can no longer handle it. This fallback was unused, so it is safe to remove. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
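How the pcrypt side converts the new return convention back into what the crypto layer expects, sketched:

    err = padata_do_parallel(pinst, padata, cb_cpu);
    if (!err)
            err = -EINPROGRESS;     /* crypto-layer idiom: queued and in flight */

    return err;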
-
Steffen Klassert authored
This patch introduces the PADATA_INVALID flag, which is checked on padata start. It will be used to mark a padata instance as invalid if the padata cpumask does not intersect with the active cpumask. We change padata_start to return an error if PADATA_INVALID is set, and we adapt the only padata user, pcrypt, to this change. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
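The check in padata_start, sketched (locking elided):

    if (pinst->flags & PADATA_INVALID)
            return -EINVAL;         /* cpumask does not intersect active CPUs */

    pinst->flags |= PADATA_INIT;
    return 0;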
-
- Jun 23, 2010
Jiri Slaby authored
Stanse found a potential NULL dereference in ablkcipher_next_slow: if kmalloc fails, its return value is still dereferenced later. Return from the function earlier in that case. Signed-off-by:
Jiri Slaby <jslaby@suse.cz> Acked-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
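The shape of the fix, sketched:

    p = kmalloc(n, GFP_ATOMIC);
    if (!p)
            return ablkcipher_walk_done(req, walk, -ENOMEM);  /* bail out early */

    /* only dereference p past this point */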
-
- Jun 03, 2010
Joachim Fritschi authored
This fixes the broken autoloading of the corresponding twofish assembler ciphers on x86 and x86_64 when they are available. The module name of the generic implementation conflicted with the alias in the assembler modules. The generic twofish C implementation is renamed to twofish_generic, in line with the other algorithms that have assembler implementations, and a module alias is added for 'twofish'. You can now load 'twofish' to get the best implementation by priority, 'twofish-generic' to get the C implementation, or 'twofish-asm' to get the assembler version of the cipher. Signed-off-by:
Joachim Fritschi <jfritschi@freenet.de> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
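The mechanism, sketched: the generic module keeps answering requests for "twofish" via an alias, while its own module name no longer collides with the assembler modules' alias:

    /* in twofish_generic.c (sketch) */
    MODULE_ALIAS("twofish");

    /* modprobe twofish          -> best implementation by priority */
    /* modprobe twofish-generic  -> the C implementation            */
    /* modprobe twofish-asm      -> the assembler implementation    */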
-
Alexander Shishkin authored
By default, CONFIG_CRYPTO_MANAGER_TESTS will be enabled and thus self-tests will still run, but it is now possible to disable them to gain some time during bootup. Signed-off-by:
Alexander Shishkin <virtuoso@slind.org> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The PCOMP Kconfig entry currently allows the following combination, which is illegal:

    ZLIB=y
    PCOMP=y
    ALGAPI=m
    ALGAPI2=y
    MANAGER=m
    MANAGER2=m

This patch fixes this by adding PCOMP2 so that PCOMP can select ALGAPI to propagate the setting to MANAGER2. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- May 26, 2010
Julia Lawall authored
Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes it clearer what the purpose of the operation is, which otherwise looks like a no-op. The semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/ ):

// <smpl>
@@
type T;
T x;
identifier f;
@@

T f (...) { <+...
- ERR_PTR(PTR_ERR(x))
+ x
...+> }

@@
expression x;
@@

- ERR_PTR(PTR_ERR(x))
+ ERR_CAST(x)
// </smpl>

Signed-off-by:
Julia Lawall <julia@diku.dk> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
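What the transformation looks like at a call site (the surrounding code is hypothetical):

    struct crypto_alg *alg = some_lookup();     /* hypothetical helper */

    if (IS_ERR(alg))
            return ERR_CAST(alg);   /* was: return ERR_PTR(PTR_ERR(alg)); */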
-
- May 20, 2010
Shikhar Khattar authored
This patch (applied against 2.6.34) fixes the calculation of the length of the ABLKCIPHER decrypt request ("cryptlen") after an asynchronous hash request has been completed in the AUTHENC interface. Signed-off-by:
Shikhar Khattar <shikhark@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- May 19, 2010
David S. Miller authored
These are akin to the blkcipher_walk helpers. The main differences in the async variant are: 1) Only physical walking is supported; we can't hold on to kmap mappings across the async operation to support virtual ablkcipher_walk operations anyway. 2) Bounce buffers used in async mode need to be persistent and freed at a later point in time, when the async op completes. Therefore we maintain a list of writeback buffers and require that the ablkcipher_walk user call the 'complete' operation so we can copy the bounce buffers out to the real buffers and free up the bounce buffer chunks. These interfaces will be used by the new Niagara2 crypto driver. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
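A hedged sketch of the intended calling pattern for the new helpers (names as added by this patch; error handling simplified):

    struct ablkcipher_walk walk;
    int err;

    ablkcipher_walk_init(&walk, req->dst, req->src, req->nbytes);
    err = ablkcipher_walk_phys(req, &walk);     /* physical walking only */

    while (walk.nbytes) {
            /* ... queue this chunk to the async hardware ... */
            err = ablkcipher_walk_done(req, &walk, 0);
    }

    /* Later, from the async completion path: copy any bounce buffers
     * back to the real buffers and free them. */
    ablkcipher_walk_complete(&walk, err);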
-
David S. Miller authored
Extend testmgr such that it tests async hash algorithms, and that for both sync and async hashes it tests both ->digest() and ->update()/->final() sequences. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
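The two paths now exercised for every hash, sketched (for async tfms each call may return -EINPROGRESS and the test must wait for the completion callback; that handling is elided here):

    static int hash_both_ways(struct ahash_request *req)
    {
            int err;

            /* one-shot path */
            err = crypto_ahash_digest(req);
            if (err)
                    return err;

            /* incremental path; must produce the same digest */
            err = crypto_ahash_init(req);
            if (err)
                    return err;
            err = crypto_ahash_update(req);     /* repeated once per chunk */
            if (err)
                    return err;
            return crypto_ahash_final(req);
    }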
-
David S. Miller authored
These are invoked in the 'mode' range of 400 to 499. The cost of async vs. sync for the software algorithm implementations varies. It can be as low as 16 cycles but as much as a couple hundred. Here are two runs of md5 testing, async then sync:

testing speed of async md5
test  0 (  16 byte blocks,   16 bytes per update,   1 updates):   2448 cycles/operation,  153 cycles/byte
test  1 (  64 byte blocks,   16 bytes per update,   4 updates):   4992 cycles/operation,   78 cycles/byte
test  2 (  64 byte blocks,   64 bytes per update,   1 updates):   3808 cycles/operation,   59 cycles/byte
test  3 ( 256 byte blocks,   16 bytes per update,  16 updates):  14000 cycles/operation,   54 cycles/byte
test  4 ( 256 byte blocks,   64 bytes per update,   4 updates):   8480 cycles/operation,   33 cycles/byte
test  5 ( 256 byte blocks,  256 bytes per update,   1 updates):   7280 cycles/operation,   28 cycles/byte
test  6 (1024 byte blocks,   16 bytes per update,  64 updates):  50016 cycles/operation,   48 cycles/byte
test  7 (1024 byte blocks,  256 bytes per update,   4 updates):  22496 cycles/operation,   21 cycles/byte
test  8 (1024 byte blocks, 1024 bytes per update,   1 updates):  21232 cycles/operation,   20 cycles/byte
test  9 (2048 byte blocks,   16 bytes per update, 128 updates): 117184 cycles/operation,   57 cycles/byte
test 10 (2048 byte blocks,  256 bytes per update,   8 updates):  43008 cycles/operation,   21 cycles/byte
test 11 (2048 byte blocks, 1024 bytes per update,   2 updates):  40176 cycles/operation,   19 cycles/byte
test 12 (2048 byte blocks, 2048 bytes per update,   1 updates):  39888 cycles/operation,   19 cycles/byte
test 13 (4096 byte blocks,   16 bytes per update, 256 updates): 194176 cycles/operation,   47 cycles/byte
test 14 (4096 byte blocks,  256 bytes per update,  16 updates):  84096 cycles/operation,   20 cycles/byte
test 15 (4096 byte blocks, 1024 bytes per update,   4 updates):  78336 cycles/operation,   19 cycles/byte
test 16 (4096 byte blocks, 4096 bytes per update,   1 updates):  77120 cycles/operation,   18 cycles/byte
test 17 (8192 byte blocks,   16 bytes per update, 512 updates): 403056 cycles/operation,   49 cycles/byte
test 18 (8192 byte blocks,  256 bytes per update,  32 updates): 166112 cycles/operation,   20 cycles/byte
test 19 (8192 byte blocks, 1024 bytes per update,   8 updates): 154768 cycles/operation,   18 cycles/byte
test 20 (8192 byte blocks, 4096 bytes per update,   2 updates): 151904 cycles/operation,   18 cycles/byte
test 21 (8192 byte blocks, 8192 bytes per update,   1 updates): 155456 cycles/operation,   18 cycles/byte

testing speed of md5
test  0 (  16 byte blocks,   16 bytes per update,   1 updates):   2208 cycles/operation,  138 cycles/byte
test  1 (  64 byte blocks,   16 bytes per update,   4 updates):   5008 cycles/operation,   78 cycles/byte
test  2 (  64 byte blocks,   64 bytes per update,   1 updates):   3600 cycles/operation,   56 cycles/byte
test  3 ( 256 byte blocks,   16 bytes per update,  16 updates):  14080 cycles/operation,   55 cycles/byte
test  4 ( 256 byte blocks,   64 bytes per update,   4 updates):   8560 cycles/operation,   33 cycles/byte
test  5 ( 256 byte blocks,  256 bytes per update,   1 updates):   7040 cycles/operation,   27 cycles/byte
test  6 (1024 byte blocks,   16 bytes per update,  64 updates):  50592 cycles/operation,   49 cycles/byte
test  7 (1024 byte blocks,  256 bytes per update,   4 updates):  22736 cycles/operation,   22 cycles/byte
test  8 (1024 byte blocks, 1024 bytes per update,   1 updates):  24960 cycles/operation,   24 cycles/byte
test  9 (2048 byte blocks,   16 bytes per update, 128 updates):  99312 cycles/operation,   48 cycles/byte
test 10 (2048 byte blocks,  256 bytes per update,   8 updates):  43520 cycles/operation,   21 cycles/byte
test 11 (2048 byte blocks, 1024 bytes per update,   2 updates):  40704 cycles/operation,   19 cycles/byte
test 12 (2048 byte blocks, 2048 bytes per update,   1 updates):  39552 cycles/operation,   19 cycles/byte
test 13 (4096 byte blocks,   16 bytes per update, 256 updates): 196720 cycles/operation,   48 cycles/byte
test 14 (4096 byte blocks,  256 bytes per update,  16 updates):  85152 cycles/operation,   20 cycles/byte
test 15 (4096 byte blocks, 1024 bytes per update,   4 updates):  79408 cycles/operation,   19 cycles/byte
test 16 (4096 byte blocks, 4096 bytes per update,   1 updates):  76816 cycles/operation,   18 cycles/byte
test 17 (8192 byte blocks,   16 bytes per update, 512 updates): 391520 cycles/operation,   47 cycles/byte
test 18 (8192 byte blocks,  256 bytes per update,  32 updates): 168464 cycles/operation,   20 cycles/byte
test 19 (8192 byte blocks, 1024 bytes per update,   8 updates): 156912 cycles/operation,   19 cycles/byte
test 20 (8192 byte blocks, 4096 bytes per update,   2 updates): 154016 cycles/operation,   18 cycles/byte
test 21 (8192 byte blocks, 8192 bytes per update,   1 updates): 153856 cycles/operation,   18 cycles/byte

We can ditch the sync hash code at some point if we feel that makes sense. For now I've left it there. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
David S. Miller authored
We are done with the scattergather entry when the walk offset goes past sg->offset + sg->length, not when it crosses a page boundary. There is a similarly queer test in the second half of scatterwalk_pagedone() that probably needs some scrutiny. Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
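The corrected condition, sketched:

    /* Done with this scatterlist entry only when the walk offset passes
     * its end -- not merely when it crosses a page boundary. */
    if (walk->offset >= sg->offset + sg->length)
            /* advance to the next scatterlist entry */;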
-
Herbert Xu authored
The macro CRYPTO_MINALIGN is not meant to be used directly. This patch replaces it with crypto_tfm_ctx_alignment. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- May 17, 2010
Dan Williams authored
Saves 24 bytes per descriptor (64-bit) when the channel-switching capabilities of async_tx are not required. Signed-off-by:
Dan Williams <dan.j.williams@intel.com>
-
- May 05, 2010
Dan Williams authored
The raid6 recovery code should immediately drop back to the optimized synchronous path when a p+q dma resource is not available. Otherwise we run the non-optimized/multi-pass async code in sync mode. Verified with raid6test (NDISKS=255) Applies to kernels >= 2.6.32. Cc: <stable@kernel.org> Acked-by:
NeilBrown <neilb@suse.de> Reported-by:
H. Peter Anvin <hpa@zytor.com> Signed-off-by:
Dan Williams <dan.j.williams@intel.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- May 03, 2010
Dan Carpenter authored
We don't check "frontend" consistently in crypto_init_spawn2(). We check it at the start of the function, but then dereference it unconditionally in the parameter list when we call crypto_init_spawn(). I looked at the places that call crypto_init_spawn2(), and "frontend" is always a valid pointer, so I removed the check for NULL. Signed-off-by:
Dan Carpenter <error27@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Apr 26, 2010
Herbert Xu authored
When Steffen originally wrote the authenc async hash patch, he correctly had EINPROGRESS checks in place so that we did not invoke the original completion handler with it. Unfortunately I told him to remove them before the patch was applied. As only MAY_BACKLOG request completion handlers are required to handle EINPROGRESS completions, those checks are really needed. This patch restores them. Reported-by:
Sebastian Andrzej Siewior <sebastian@breakpoint.cc> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
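The restored pattern, sketched: a completion callback must treat EINPROGRESS as a notification, not as the final completion, or the original request gets completed twice:

    static void authenc_chain_done(struct crypto_async_request *areq, int err)
    {
            /* -EINPROGRESS only signals that a backlogged request has
             * started processing; the real completion comes later. */
            if (err == -EINPROGRESS)
                    return;

            /* ... clean up and complete the original request ... */
    }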
-
- Mar 30, 2010
Tejun Heo authored
include cleanup: Update gfp.h and slab.h includes to prepare for breaking the implicit slab.h inclusion from percpu.h.

percpu.h is included by sched.h and module.h, and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to place the new include so that its order conforms to its surroundings. It is put in the include block that contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, reverse Christmas tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to an implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on the arch to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch. Signed-off-by:
Tejun Heo <tj@kernel.org> Guess-its-ok-by:
Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
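The net effect on individual source files, sketched: users of the slab and gfp facilities include the headers directly instead of inheriting them through percpu.h:

    #include <linux/slab.h>     /* kmalloc()/kfree() -- no longer implied */
    #include <linux/gfp.h>      /* GFP_KERNEL and friends */

    static int example_alloc(void)      /* hypothetical user */
    {
            char *buf = kmalloc(64, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;
            kfree(buf);
            return 0;
    }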
-
- Mar 29, 2010
Gilles Espinasse authored
Signed-off-by:
Gilles Espinasse <g.esp@free.fr> Signed-off-by:
Jiri Kosina <jkosina@suse.cz>
-
- Mar 24, 2010
Dan Carpenter authored
I was concerned about the error handling for crypto_get_attr_type() in pcrypt_alloc_aead(). Steffen Klassert pointed out that we could simply avoid calling crypto_get_attr_type() if we passed the type and mask in as parameters. Signed-off-by:
Dan Carpenter <error27@gmail.com> Acked-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Mar 18, 2010
Shane Wang authored
This patch fixes the vmac algorithm, adds more test cases for vmac, and fixes the test failure on some big-endian systems such as s390. Signed-off-by:
Shane Wang <shane.wang@intel.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Mar 10, 2010
Huang Ying authored
Because ghash needs setkey, setkey and keysize template support is added to test_hash_speed. Signed-off-by:
Huang Ying <ying.huang@intel.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Richard Hartmann authored
Signed-off-by:
Richard Hartmann <richih.mailinglist@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- Mar 03, 2010
Steffen Klassert authored
In crypto_authenc_encrypt() we save the IV behind the ablkcipher request. To save space on the request, we overwrite the ablkcipher request with an ahash request after encryption, so the IV may be overwritten by the ahash request. This patch fixes this by placing the IV in front of the ablkcipher/ahash request. Signed-off-by:
Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
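The layout change, sketched as a comment diagram (field sizes approximate):

    /*
     * Before: the IV was stored behind (after) the ablkcipher request.
     * The ahash request reuses the same memory and may be larger, so it
     * could run into the IV and clobber it:
     *
     *   [ ablkcipher req ][ IV ]
     *   [ ahash req ............ ]   <-- overwrites the IV
     *
     * After: the IV sits in front of the request area and survives the
     * in-place reuse:
     *
     *   [ IV ][ ablkcipher req / ahash req ]
     */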
-