- Dec 17, 2009
Dan Williams authored
Add explicit 11- and 12-disk cases to exercise the 0 < src_cnt % 8 < 3 corner case in the ioatdma driver.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
- Nov 23, 2009
Jaswinder Singh Rajput authored
fips_cprng_get_random and fips_cprng_reset are used only by CONFIG_CRYPTO_FIPS. This also fixes the compilation warnings:
crypto/ansi_cprng.c:360: warning: ‘fips_cprng_get_random’ defined but not used
crypto/ansi_cprng.c:393: warning: ‘fips_cprng_reset’ defined but not used
Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Youquan, Song authored
Add a ghash algorithm test before providing it to users.
Signed-off-by: Youquan, Song <youquan.song@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Nov 20, 2009
Dan Williams authored
ioat3.2 does not support asynchronous error notifications, which makes the driver experience latencies when non-zero pq validate results are expected. Provide a mechanism for turning off async_xor_val and async_syndrome_val via Kconfig. This approach is generally useful for any driver that specifies ASYNC_TX_DISABLE_CHANNEL_SWITCH and would like to force the async_tx api to fall back to the synchronous path for certain operations.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
- Nov 18, 2009
Eric W. Biederman authored
For consistency, drop the & in front of every proc_handler. Explicitly taking the address is unnecessary, and it prevents optimizations like stubbing the proc_handlers to NULL.
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Joe Perches <joe@perches.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
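For illustration, a minimal sketch of what such a table entry looks like after the change (the table and variable names here are hypothetical, not taken from the patch):

#include <linux/sysctl.h>

static int example_value;

static struct ctl_table example_table[] = {
	{
		.procname	= "example",
		.data		= &example_value,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec,	/* was: &proc_dointvec */
	},
	{ }
};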
- Nov 16, 2009
Huang Ying authored
The flow of the complete function (xxx_done) in gcm.c is as follows:

void complete(struct crypto_async_request *areq, int err)
{
	struct aead_request *req = areq->data;

	if (!err) {
		err = async_next_step();
		if (err == -EINPROGRESS || err == -EBUSY)
			return;
	}
	complete_for_next_step(areq, err);
}

But *areq may be destroyed in async_next_step(), so complete_for_next_step() cannot work properly. To fix this, one of the following methods is used for each complete function:
- Add a __complete() for each complete() that accepts a struct aead_request *req instead of areq, so areq is not used after it is destroyed.
- Expand complete_for_next_step().
The fixing method is based on an idea of Herbert Xu.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
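A minimal sketch of the first method (hypothetical names; the real gcm.c helpers differ):

#include <linux/errno.h>
#include <crypto/internal/aead.h>

/* Hypothetical next step; stands in for e.g. the hash-then-decrypt call. */
static int example_next_step(struct aead_request *req);

static void example_done_helper(struct aead_request *req, int err)
{
	if (!err) {
		err = example_next_step(req);	/* may destroy the original areq */
		if (err == -EINPROGRESS || err == -EBUSY)
			return;
	}
	aead_request_complete(req, err);	/* uses req, not the stale areq */
}

static void example_done(struct crypto_async_request *areq, int err)
{
	/* Read areq exactly once, before anything can destroy it. */
	example_done_helper(areq->data, err);
}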
- Nov 12, 2009
Eric W. Biederman authored
Now that sys_sysctl is a generic wrapper around /proc/sys, the .ctl_name and .strategy members of sysctl tables are dead code. Remove them.
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
- Oct 27, 2009
Huang Ying authored
CLMUL-NI accelerated GHASH should be turned off on non-x86_64 machines.
Reported-by: Dave Young <hidave.darkstar@gmail.com>
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Felipe Contreras authored
Fix a compiler warning:
crypto/testmgr.c: In function ‘test_cprng’:
crypto/testmgr.c:1204: warning: ‘err’ may be used uninitialized in this function
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Roel Kluin authored
size_t nbytes cannot be less than 0, so the test was redundant.
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Oct 20, 2009
Dan Williams authored
The raid6 recovery code currently requires special handling of the 4-disk and 5-disk recovery scenarios for the native layout. Quoting from commit 0a82a623:

  In these situations the default N-disk algorithm will present 0-source or 1-source operations to dma devices. To cover for dma devices where the minimum source count is 2 we implement 4-disk and 5-disk handling in the recovery code.

The ddf layout presents disks=6 and disks=7 to the recovery code in these situations. Instead of looking at the number of disks, count the number of non-zero sources in the list and call the special-case code when the number of non-failed sources is 0 or 1.
[neilb@suse.de: replace 'ddf' flag with counting good sources]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
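A rough sketch of the counting idea (hypothetical helper; the real code folds this into the recovery entry point):

#include <linux/mm_types.h>

static int example_count_good_sources(struct page **blocks, int disks,
				      int faila, int failb)
{
	int i, good = 0;

	/* Data disks only; P and Q occupy the last two slots. */
	for (i = 0; i < disks - 2; i++) {
		if (i == faila || i == failb)
			continue;
		if (blocks[i])		/* NULL entries are layout 'blanks' */
			good++;
	}
	return good;		/* 0 or 1 selects the special-case paths */
}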
-
Dan Williams authored
The global scribble page is used as a temporary destination buffer when disabling the P or Q result is requested. The local scribble buffer contains memory for performing address conversions. Rename the global variable to avoid confusion.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
- update the kernel doc for async_syndrome to indicate what NULL in the source list means
- whitespace fixups
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
- Oct 19, 2009
Benjamin Gilbert authored
Remove special handling of old-style digest algorithms from the procfs show handler.
Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Benjamin Gilbert authored
Commit 6941c3a0 disabled compilation of the legacy digest code but didn't actually remove it. Rectify this. Also, remove the crypto_hash_type extern declaration from algapi.h now that the struct is gone.
Signed-off-by: Benjamin Gilbert <bgilbert@cs.cmu.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Neil Horman authored
Add the fips(ansi_cprng) algorithm, which is ansi_cprng plus a continuous test.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Jarod Wilson <jarod@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Huang Ying authored
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH, carry-less multiplication. More information about PCLMULQDQ can be found at:
http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/
Because PCLMULQDQ changes XMM state, its usage must be enclosed within kernel_fpu_begin/end, which can be used only in process context, so the acceleration is implemented as a crypto_ahash; a request made in soft IRQ context is deferred to the cryptd kernel thread.
Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
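The dispatch pattern this describes, condensed into a sketch (names prefixed example_ are illustrative, not the actual driver code):

#include <linux/string.h>
#include <crypto/internal/hash.h>
#include <asm/fpu/api.h>	/* irq_fpu_usable(), kernel_fpu_begin/end() */

/* Hypothetical synchronous helper that does the PCLMULQDQ work. */
static int example_clmul_ghash_update(struct ahash_request *req);

static int example_ghash_async_update(struct ahash_request *req)
{
	int err;

	if (!irq_fpu_usable()) {
		/* e.g. soft IRQ: defer to the cryptd-backed child ahash */
		struct ahash_request *cryptd_req = ahash_request_ctx(req);

		memcpy(cryptd_req, req, sizeof(*req));
		/* ...ahash_request_set_tfm(cryptd_req, cryptd child)... */
		return crypto_ahash_update(cryptd_req);
	}

	kernel_fpu_begin();
	err = example_clmul_ghash_update(req);	/* carry-less multiplication */
	kernel_fpu_end();
	return err;
}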
- Oct 16, 2009
NeilBrown authored
async_syndrome_val checks the P and Q blocks used for RAID6 calculations. With DDF raid6, some of the data blocks might be NULL, so this needs to be handled in the same way that async_gen_syndrome handles it. As async_syndrome_val calls async_xor, also enhance async_xor to detect and skip NULL blocks in the list.
Signed-off-by: NeilBrown <neilb@suse.de>
-
NeilBrown authored
md/raid6 passes a list of 'struct page *' to the async_tx routines, which then either DMA map them for offload or take the page_address for CPU-based calculations. For RAID6 we sometimes leave 'blanks' in the list of pages. For CPU-based calculations, we want to treat these as a page of zeros. For offloaded calculations, we simply don't pass a page to the hardware. Currently the 'blanks' are encoded as a pointer to raid6_empty_zero_page. This is a 4096-byte memory region, not a 'struct page'. This is mostly handled correctly but is rather ugly. So change the code to pass and expect a NULL pointer for the blanks. When taking the page_address of a page, we need to check for NULL and in that case use raid6_empty_zero_page.
Signed-off-by: NeilBrown <neilb@suse.de>
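A minimal sketch of the convention (the helper name is hypothetical):

#include <linux/mm.h>
#include <linux/raid/pq.h>	/* raid6_empty_zero_page */

/* NULL entries in the page list are blanks; only the CPU path substitutes
 * the shared zero page for them. */
static void *example_blank_safe_address(struct page *pg)
{
	return pg ? page_address(pg) : (void *)raid6_empty_zero_page;
}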
- Oct 11, 2009
Alexey Dobriyan authored
Now that m68k's task_thread_info() doesn't refer to current, it's possible to remove sched.h from interrupt.h and not break m68k! Many thanks to Heiko Carstens for allowing this.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
- Oct 03, 2009
Christoph Lameter authored
Just a slight optimization that removes one array lookup. The processor number is needed for other things as well, so the get/put_cpu cannot be removed.
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
- Sep 21, 2009
Dan Williams authored
If we are unable to offload async_mult() or async_sum_product(), then unmap the buffers before falling through to the synchronous path.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
- Sep 17, 2009
Dan Williams authored
Testing on x86_64 with NDISKS=255 yields:
do_IRQ: modprobe near stack overflow (cur:ffff88007d19c000,sp:ffff88007d19c128)
...and eventually:
general protection fault: 0000 [#1]
Moving the scribble buffers off the stack allows the test to complete successfully.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
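The shape of the change, as a sketch (sizes and names are illustrative, not the actual test module):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/mm_types.h>

#define EXAMPLE_NDISKS 255

/* Allocated once at init instead of living on the kernel stack. */
static struct page **example_dataptrs;

static int example_alloc_scribble(void)
{
	example_dataptrs = kcalloc(EXAMPLE_NDISKS, sizeof(*example_dataptrs),
				   GFP_KERNEL);
	return example_dataptrs ? 0 : -ENOMEM;
}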
- Sep 09, 2009
Dan Williams authored
Some engines have transfer size and address alignment restrictions. Add a per-operation alignment property to struct dma_device that the async routines and dmatest can use to check alignment capabilities.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
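A sketch of the kind of check this enables (the helper below is illustrative; the per-operation shift is assumed to come from the new struct dma_device field):

#include <linux/types.h>

/* align_shift of 0 means no restriction. */
static bool example_dma_op_aligned(u8 align_shift, unsigned int off1,
				   unsigned int off2, size_t len)
{
	size_t mask = (1u << align_shift) - 1;

	return ((off1 | off2 | len) & mask) == 0;
}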
-
Dan Williams authored
Channel switching is problematic for some dmaengine drivers as the architecture precludes separating the ->prep from ->submit. In these cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify the async_tx allocator to only return channels that support all of the required asynchronous operations.

For example, MD_RAID456=y selects support for asynchronous xor, xor validate, pq, pq validate, and memcpy. When ASYNC_TX_DISABLE_CHANNEL_SWITCH=y, any channel with all these capabilities is marked DMA_ASYNC_TX, allowing async_tx_find_channel() to quickly locate compatible channels with the guarantee that dependency chains will remain on one channel. When ASYNC_TX_DISABLE_CHANNEL_SWITCH=n, async_tx_find_channel() may select channels that lead to operation chains that need to cross channel boundaries using the async_tx channel switch capability.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
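A sketch of the capability test implied here (hypothetical helper; the capability names are the existing dmaengine ones):

#include <linux/dmaengine.h>

/* A channel may carry a whole raid456 dependency chain only if its device
 * supports every operation that chain can require. */
static bool example_chan_has_all_caps(struct dma_device *dev)
{
	return dma_has_cap(DMA_XOR, dev->cap_mask) &&
	       dma_has_cap(DMA_XOR_VAL, dev->cap_mask) &&
	       dma_has_cap(DMA_PQ, dev->cap_mask) &&
	       dma_has_cap(DMA_PQ_VAL, dev->cap_mask) &&
	       dma_has_cap(DMA_MEMCPY, dev->cap_mask);
}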
-
Dan Williams authored
Some engines optimize operation by reading ahead in the descriptor chain such that descriptor2 may start execution before descriptor1 completes. If descriptor2 depends on the result from descriptor1, then a fence is required (on descriptor2) to disable this optimization. The async_tx api could implicitly identify dependencies via the 'depend_tx' parameter, but that would constrain cases where the dependency chain only specifies a completion order rather than a data dependency. So, provide an ASYNC_TX_FENCE flag to explicitly identify data dependencies.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
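A sketch of how the flag is meant to propagate (simplified; the helper name is illustrative):

#include <linux/async_tx.h>
#include <linux/dmaengine.h>

/* A data dependency marked with ASYNC_TX_FENCE becomes DMA_PREP_FENCE on the
 * descriptor, so the engine will not read ahead past it. */
static enum dma_ctrl_flags example_to_dma_flags(struct async_submit_ctl *submit)
{
	enum dma_ctrl_flags dma_flags = 0;

	if (submit->flags & ASYNC_TX_FENCE)
		dma_flags |= DMA_PREP_FENCE;
	return dma_flags;
}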
- Sep 02, 2009
Shane Wang authored
This patch adds VMAC (a fast MAC) support to the crypto framework.
Signed-off-by: Shane Wang <shane.wang@intel.com>
Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Aug 31, 2009
Herbert Xu authored
We have a mechanism where newly registered algorithms of a higher priority can displace existing instances that use a different implementation of the same algorithm with a lower priority. Unfortunately the same mechanism can cause a newly registered algorithm to displace itself if it depends on an existing version of the same algorithm. This patch fixes this by keeping all algorithms that the newly registered algorithm depends on, thus protecting them from being removed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Aug 30, 2009
Dan Williams authored
Port drivers/md/raid6test/test.c to use the async raid6 recovery routines. This is meant as a unit test for raid6 acceleration drivers. In addition to the 16-drive test case this implements tests for the 4-disk and 5-disk special cases (dma devices cannot generically handle less than 2 sources), and adds a test for the D+Q case.
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
async_raid6_2data_recov() recovers two data disk failures.
async_raid6_datap_recov() recovers a data disk and the P disk.
These routines are a port of the synchronous versions found in drivers/md/raid6recov.c. The primary difference is breaking out the xor operations into separate calls to async_xor. Two helper routines are introduced to perform scalar multiplication where needed: async_sum_product() multiplies two sources by scalar coefficients and then sums (xor) the result, and async_mult() simply multiplies a single source by a scalar.

This implementation also includes, in contrast to the original synchronous-only code, special case handling for the 4-disk and 5-disk array cases. In these situations the default N-disk algorithm will present 0-source or 1-source operations to dma devices. To cover for dma devices where the minimum source count is 2 we implement 4-disk and 5-disk handling in the recovery code.

[ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]
Cc: Yuri Tikhonov <yur@emcraft.com>
Cc: Ilya Yanok <yanok@emcraft.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
[ Based on an original patch by Yuri Tikhonov ]

This adds support for doing asynchronous GF multiplication by adding two additional functions to the async_tx API: async_gen_syndrome() does simultaneous XOR and Galois field multiplication of sources, and async_syndrome_val() validates the given source buffers against known P and Q values.

When a request is made to run async_pq against more than the hardware maximum number of supported sources, we need to reuse the previously generated P and Q values as sources into the next operation. Care must be taken to remove Q from P' and P from Q'. For example, to perform a 5-source pq op with hardware that only supports 4 sources at a time the following approach is taken:

p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))

p' = p + q + q + src4 = p + src4
q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4

Note: 4 is the minimum acceptable maxpq, otherwise we punt to the synchronous-software path. The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as sources (in the above manner) and fill the remaining slots up to maxpq with the new sources/coefficients.

Note1: Some devices have native support for P+Q continuation and can skip this extra work. Devices with this capability can advertise it with dma_set_maxpq. It is up to each driver how to handle the DMA_PREP_CONTINUE flag.

Note2: The api supports disabling the generation of P when generating Q; this is ignored by the synchronous path but is implemented by some dma devices to save unnecessary writes. In this case the continuation algorithm is simplified to only reuse Q as a source.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Ilya Yanok <yanok@emcraft.com>
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
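A minimal usage sketch for the generation side (assumed call shape for this era of the API; blocks[0..disks-3] are data sources, blocks[disks-2] is P and blocks[disks-1] is Q):

#include <linux/async_tx.h>

static void example_gen_pq(struct page **blocks, int disks, size_t len)
{
	struct async_submit_ctl submit;
	struct dma_async_tx_descriptor *tx;

	init_async_submit(&submit, ASYNC_TX_ACK, NULL, NULL, NULL, NULL);
	tx = async_gen_syndrome(blocks, 0, disks, len, &submit);
	async_tx_quiesce(&tx);	/* wait for completion in this simple example */
}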
-
Dan Williams authored
We currently walk the parent chain when waiting for a given tx to complete; however, this walk may race with the driver cleanup routine. The routines in async_raid6_recov.c may fall back to the synchronous path at any point, so we need to be prepared to call async_tx_quiesce() (which calls dma_wait_for_async_tx). To remove the ->parent walk we guarantee that every time a dependency is attached ->issue_pending() is invoked, then we can simply poll the initial descriptor until completion. This also allows for a lighter-weight 'issue pending' implementation as there is no longer a requirement to iterate through all the channels' ->issue_pending() routines as long as operations have been submitted in an ordered chain. async_tx_issue_pending() is added for this case.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
If module_init and module_exit are nops then neither needs to be defined.
[ Impact: pure cleanup ]
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Dan Williams authored
Replace the flat zero_sum_result with a collection of flags to contain the P (xor) zero-sum result and the soon-to-be-utilized Q (raid6 reed solomon syndrome) zero-sum result. Use the SUM_CHECK_ namespace instead of DMA_ since these flags will be used on non-dma-zero-sum-enabled platforms.
Reviewed-by: Andre Noll <maan@systemlinux.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
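A sketch of consuming the new result word (the SUM_CHECK_ flag names are the ones introduced here; the handler itself is illustrative):

#include <linux/kernel.h>
#include <linux/dmaengine.h>

static void example_handle_result(enum sum_check_flags result)
{
	if (result & SUM_CHECK_P_RESULT)
		pr_debug("P (xor) zero-sum check failed\n");
	if (result & SUM_CHECK_Q_RESULT)
		pr_debug("Q (raid6 syndrome) zero-sum check failed\n");
}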
- Aug 29, 2009
Herbert Xu authored
As struct skcipher_givcrypt_request includes struct crypto_request at a non-zero offset, testing for NULL after converting the pointer returned by crypto_dequeue_request does not work. This can result in IPsec crashes when the queue is depleted. This patch fixes it by doing the pointer conversion only when the return value is non-NULL. In particular, we create a new function __crypto_dequeue_request that does the pointer conversion.
Reported-by: Brad Bosch <bradbosch@comcast.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
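The essence of the fix, as a sketch (the wrapper name is hypothetical; offset is the offset of the embedded crypto_async_request within the enclosing request):

#include <crypto/algapi.h>

static void *example_dequeue_enclosing(struct crypto_queue *queue,
				       unsigned int offset)
{
	struct crypto_async_request *base = crypto_dequeue_request(queue);

	/* Convert to the enclosing request only when something was dequeued;
	 * otherwise container-style arithmetic would turn NULL into garbage. */
	return base ? (void *)((char *)base - offset) : NULL;
}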
-
Steffen Klassert authored
Return the value we got from crypto_register_alg() instead of returning 0 in any case.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Aug 20, 2009
Steffen Klassert authored
The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask and not alg->cra_alignmask + 1 as it should. This led to frequent crashes during the selftest of xcbc(aes-asm) on x86_64 machines. This patch fixes this. Also, xcbc_create now uses the alignmask of xcbc itself, and not the alignmask of the underlying algorithm, for the alignment calculation.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
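The corrected rounding, as a sketch (the context-size helper is hypothetical):

#include <linux/kernel.h>	/* ALIGN() */

/* An alignmask of N means "align to N + 1 bytes", so the round-up must use
 * alignmask + 1, not the mask itself. */
static unsigned int example_aligned_ctx_size(unsigned int ctx_size,
					     unsigned long alignmask)
{
	return ALIGN(ctx_size, alignmask + 1);
}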
-
Neil Horman authored
What about something like this? It defaults the CPRNG to m and makes FIPS dependent on the CPRNG. That way you get a module build by default, but you can change it to y manually during config and still satisfy the dependency, and if you select N it disables FIPS as well. I rather like that better than making FIPS a tristate. I just tested it out here and it seems to work well. Let me know what you think.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Aug 14, 2009
Herbert Xu authored
Recently we switched to using eseqiv on SMP machines in preference over chainiv. However, eseqiv does not support stream ciphers, so they should still default to chainiv. This patch applies the same check as done by eseqiv to weed out the stream ciphers. In particular, all algorithms where the IV size is not equal to the block size will now default to chainiv.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
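The selection rule stated above, as a sketch (hypothetical helper; the sizes are assumed to come from the algorithm's ivsize and cra_blocksize):

static const char *example_default_geniv(unsigned int ivsize,
					 unsigned int blocksize)
{
	/* Stream-cipher-like algorithms (IV size != block size) keep chainiv;
	 * everything else now defaults to eseqiv on SMP. */
	return ivsize != blocksize ? "chainiv" : "eseqiv";
}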
- Aug 13, 2009
Herbert Xu authored
Raw counter mode only works with chainiv, which is no longer the default IV generator on SMP machines. This broke raw counter mode as it can no longer instantiate as a givcipher. This patch fixes it by always picking chainiv for raw counter mode. This is based on the diagnosis and a patch by Huang Ying.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>