- Dec 10, 2009
-
-
Milan Broz authored
This patch separates the construction of the IV from its initialisation. (For ESSIV, initialisation is a hash calculation based on the volume key.) The constructor now preallocates the hash tfm and the salt array and saves them in a private IV structure. The next patch requires this in order to reinitialise the wiped IV without reallocating memory when resuming a suspended device.
Cc: stable@kernel.org
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
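A minimal sketch of the kind of private IV structure this describes; the field names are illustrative assumptions, not quoted from the patch:

	/* Per-instance ESSIV state: the hash tfm and salt are allocated once
	 * in the constructor so the IV can later be reinitialised without
	 * allocating memory. */
	struct iv_essiv_private {
		struct crypto_hash *hash_tfm;	/* hash used over the volume key */
		u8 *salt;			/* digest-sized salt buffer */
	};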
-
Milan Broz authored
Use kzfree for the salt deallocation because the salt is derived from the volume key. Use a common error path in the ESSIV constructor. Required by a later patch which fixes the way key material is wiped from memory.
Cc: stable@kernel.org
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Define private structures for the IV so that further attributes can easily be added in a following patch which fixes the way key material is wiped from memory. Also move the ESSIV destructor and remove an unnecessary 'status' operation. There are no functional changes in this patch.
Cc: stable@kernel.org
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
The "wipe key" message is used to wipe a volume key from memory temporarily, for example when suspending to RAM. There are two instances of the key in memory (inside crypto tfm) but only one got wiped. This patch wipes them both. Cc: stable@kernel.org Signed-off-by:
Milan Broz <mbroz@redhat.com> Signed-off-by:
Alasdair G Kergon <agk@redhat.com>
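A hedged sketch of the fix described above: zero the copy of the key kept in the dm-crypt context, then set that zeroed key on the cipher tfm so the copy cached inside the transform is overwritten as well (cc->key, cc->tfm and cc->key_size are assumed field names, not quoted from the patch):

	static int crypt_wipe_key(struct crypt_config *cc)
	{
		/* copy 1: the key held in the dm-crypt configuration */
		memset(cc->key, 0, cc->key_size * sizeof(u8));

		/* copy 2: the key cached inside the crypto tfm; re-setting
		 * the now-zeroed key overwrites it */
		return crypto_ablkcipher_setkey(cc->tfm, cc->key, cc->key_size);
	}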
-
- Nov 09, 2009
-
-
Dirk Hohndel authored
Fix comments where "something-bility" is misspelled as "something-blity", so that a grep for 'blit' no longer finds these lines. This is so trivial that I didn't split it by subsystem or copy additional maintainers; all changes are to comments. The only purpose is to get fewer false positives when grepping around the kernel sources.
Signed-off-by: Dirk Hohndel <hohndel@infradead.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- Jul 23, 2009
-
-
Mike Snitzer authored
Incorrect device area lengths are being passed to device_area_is_valid(). The regression appeared in 2.6.31-rc1 through commit 754c5fc7. With the dm-stripe target, the size of the whole target (ti->len) was used instead of the stripe_width (ti->len / #stripes), so each underlying device was required to be as large as the entire striped target. An example of a resulting incorrect error message is:
device-mapper: table: 254:0: sdb too small for target
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
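For illustration, the length that should be validated per underlying device is the target length divided by the number of stripes; a sketch of that computation using the kernel's sector_div() helper (variable names are assumptions, not the literal patch):

	sector_t stripe_width = ti->len;	/* whole target length, in sectors */

	/* sector_div() divides in place and returns the remainder; a non-zero
	 * remainder means the target length is not evenly divisible. */
	if (sector_div(stripe_width, stripes))
		return -EINVAL;

	/* stripe_width, not ti->len, is what each underlying device must hold. */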
-
- Jul 10, 2009
-
-
Jens Axboe authored
Commit 1faa16d2 accidentally broke the bdi congestion wait queue logic, causing us to wait on congestion for WRITE (== 1) when we really wanted BLK_RW_ASYNC (== 0) instead.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
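For context, a sketch of the constants involved and the corrected call (the call site shown here is generic, not a specific driver):

	/* From the block layer: BLK_RW_ASYNC == 0, BLK_RW_SYNC == 1, while the
	 * page-I/O constant WRITE == 1, so passing WRITE waited on the wrong
	 * (sync) congestion queue. */
	congestion_wait(BLK_RW_ASYNC, HZ / 50);	/* wait for async congestion */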
-
- Jun 22, 2009
-
-
Mike Snitzer authored
Add .iterate_devices to 'struct target_type' to allow a function to be called for all devices in a DM target. It is implemented for all targets except those in dm-snap.c (origin and snapshot). (The raid1 version number jumps to 1.12 because we originally reserved 1.1 to 1.11 for 'block_on_error' but ended up using 'handle_errors' instead.)
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Cc: martin.petersen@oracle.com
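A sketch of the shape of such a hook in <linux/device-mapper.h>; treat the exact parameter list as an approximation rather than a quotation of the interface added by this patch:

	/* Called once for each underlying device of a target. */
	typedef int (*iterate_devices_callout_fn)(struct dm_target *ti,
						  struct dm_dev *dev,
						  sector_t start, sector_t len,
						  void *data);

	struct target_type {
		/* ... existing target operations ... */

		/* Walk every device the target uses, calling fn for each one. */
		int (*iterate_devices)(struct dm_target *ti,
				       iterate_devices_callout_fn fn,
				       void *data);
	};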
-
Mikulas Patocka authored
Flush support for the dm-crypt target.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- Apr 02, 2009
-
-
Johannes Weiner authored
Use kzfree() instead of memset() + kfree().
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
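For context, kzfree() zeroes an allocation before freeing it, replacing the open-coded two-step pattern; a minimal illustration (the buffer here is hypothetical, not a specific dm-crypt field):

	/* before: clear the secret, then free it */
	memset(secret, 0, secret_len);
	kfree(secret);

	/* after: kzfree() clears the whole allocation and frees it */
	kzfree(secret);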
-
- Mar 16, 2009
-
-
Milan Broz authored
The following oops has been reported when dm-crypt runs over a loop device.
...
[ 70.381058] Process loop0 (pid: 4268, ti=cf3b2000 task=cf1cc1f0 task.ti=cf3b2000)
...
[ 70.381058] Call Trace:
[ 70.381058] [<d0d76601>] ? crypt_dec_pending+0x5e/0x62 [dm_crypt]
[ 70.381058] [<d0d767b8>] ? crypt_endio+0xa2/0xaa [dm_crypt]
[ 70.381058] [<d0d76716>] ? crypt_endio+0x0/0xaa [dm_crypt]
[ 70.381058] [<c01a2f24>] ? bio_endio+0x2b/0x2e
[ 70.381058] [<d0806530>] ? dec_pending+0x224/0x23b [dm_mod]
[ 70.381058] [<d08066e4>] ? clone_endio+0x79/0xa4 [dm_mod]
[ 70.381058] [<d080666b>] ? clone_endio+0x0/0xa4 [dm_mod]
[ 70.381058] [<c01a2f24>] ? bio_endio+0x2b/0x2e
[ 70.381058] [<c02bad86>] ? loop_thread+0x380/0x3b7
[ 70.381058] [<c02ba8a1>] ? do_lo_send_aops+0x0/0x165
[ 70.381058] [<c013754f>] ? autoremove_wake_function+0x0/0x33
[ 70.381058] [<c02baa06>] ? loop_thread+0x0/0x3b7
When a table is being replaced, it waits for I/O to complete before destroying the mempool, but the endio function doesn't call mempool_free() until after completing the bio. Fix it by swapping the order of those two operations. The same problem occurs in dm.c, where md is referenced after dec_pending. Again, we swap the order.
Cc: stable@kernel.org
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
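A sketch of the corrected ordering in the dm-crypt completion path (close to the code of that era, but shown as an illustration rather than the literal diff):

	static void crypt_dec_pending(struct dm_crypt_io *io)
	{
		struct crypt_config *cc = io->target->private;

		if (!atomic_dec_and_test(&io->pending))
			return;

		/* Free the io back to the mempool *before* completing the bio:
		 * once bio_endio() runs, the table (and its mempool) may be
		 * destroyed while io is still allocated from it. */
		mempool_free(io, cc->io_pool);
		bio_endio(io->base_bio, io->error);
	}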
-
Huang Ying authored
In the async encryption-completion function (kcryptd_async_done), the crypto_async_request passed in may be different from the one passed to crypto_ablkcipher_encrypt/decrypt. Only crypto_async_request->data is guaranteed to be the same as the one passed in. The current kcryptd_async_done uses the passed-in crypto_async_request directly, which may cause the AES-NI-based AES algorithm implementation to panic. This patch fixes the bug by using only crypto_async_request->data, which points to the dm_crypt_request; the original convert_context is then retrieved from the dm_crypt_request. [mbroz@redhat.com: reworked]
Cc: stable@kernel.org
Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
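The shape of the fix, sketched from the description above (the dmreq->ctx back-pointer is an assumption about the dm-crypt structures, not a quotation of the patch):

	static void kcryptd_async_done(struct crypto_async_request *async_req,
				       int error)
	{
		/* Only ->data is guaranteed to match the request we submitted;
		 * async_req itself may be a different request object. */
		struct dm_crypt_request *dmreq = async_req->data;
		struct convert_context *ctx = dmreq->ctx;

		/* ... finish the block using ctx, never the container of
		 * async_req ... */
	}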
-
- Jan 06, 2009
-
-
Mikulas Patocka authored
Change dm_unregister_target to return void and use BUG() for error reporting. dm_unregister_target can only fail because of a programming bug in the target driver; it can't fail because of the user's behaviour or disk errors. This patch changes dm_unregister_target to return void and call BUG() if someone tries to unregister a target that is not registered or that is still in use. This removes code duplication (testing of error codes in all dm targets) and reports bugs in just one place, in dm_unregister_target. In some target drivers, these return codes were ignored, which could lead to bugs being missed.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
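Sketched shape of the new behaviour in DM core (the internal helper and lock names are assumptions, not quoted from the patch):

	void dm_unregister_target(struct target_type *tt)
	{
		down_write(&_lock);
		if (!__find_target_type(tt->name)) {
			DMCRIT("Unregistering unrecognised target: %s", tt->name);
			BUG();
		}

		list_del(&tt->list);
		up_write(&_lock);
	}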
-
- Dec 29, 2008
-
-
Jens Axboe authored
Instead of having a global bio slab cache, add a reference to one in each bio_set that is created. This allows for personalized slabs in each bio_set, so that they can have bios of different sizes. This means we can personalize the bios we return. File systems may want to embed the bio inside another structure, to avoid allocating more items (and stuffing them in ->bi_private) after they get a bio. Or we may want to embed a number of bio_vecs directly at the end of a bio, to avoid doing two allocations to return a bio. This is now possible.
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- Oct 21, 2008
-
-
Milan Broz authored
Remove a waitqueue that is no longer needed with the async crypto interface.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
When writing io, dm-crypt has to allocate a new cloned bio and encrypt the data into newly-allocated pages attached to this bio. In rare cases, because of hardware restrictions (e.g. a physical segment limit) or memory pressure, more than one cloned bio has to be used, each processing a different fragment of the original. Currently there is one waitqueue which waits for one fragment to finish before continuing with the next fragment. But when using asynchronous crypto this doesn't work, because several fragments may be processed asynchronously or in parallel and there is only one crypt context, which cannot be shared between the bio fragments. The result may be corruption of the data contained in the encrypted bio. The patch fixes this by allocating new dm_crypt_io structs (with new crypto contexts) and running them independently. Each fragment contains a pointer to the base dm_crypt_io struct to handle reference counting, so the base one is properly deallocated after all the fragments have finished. In a low memory situation, this uses only one additional object from the mempool. If the mempool is empty, the next allocation simply waits for previous fragments to complete.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
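A hedged sketch of the back-reference described above (field names are assumptions about struct dm_crypt_io, not quoted from the patch):

	struct dm_crypt_io {
		struct bio *base_bio;		/* the original bio */
		atomic_t pending;		/* outstanding work on this io */
		int error;

		/* For a fragment: the io covering the whole original bio. The
		 * last fragment to complete drops the base reference, so the
		 * base io is freed only after all fragments have finished. */
		struct dm_crypt_io *base_io;
	};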
-
Milan Broz authored
Prepare a local sector variable (offset) for a later patch. Do not update io->sector for still-running I/O. No functional change.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Mikulas Patocka authored
Change #include "dm.h" to #include <linux/device-mapper.h> in all targets. Targets should not need direct access to internal DM structures.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- Oct 10, 2008
-
-
Milan Broz authored
Don't wait between submitting crypt requests for a bio unless we are short of memory. There are two situations when we must split an encrypted bio: 1) there are no free pages; 2) the new bio would violate underlying device restrictions (e.g. max hw segments). In case (2) we do not need to wait. Add an output variable to crypt_alloc_buffer() to distinguish between these cases.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Move the initialisation of ctx->pending into one place, at the start of crypt_convert(). Introduce crypt_finished to indicate whether or not the encryption is finished, for use in a later patch. No functional change.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
The pending reference count must be incremented *before* the async work is queued to another thread, not after. Otherwise there's a race if the work completes and decrements the reference count before it gets incremented.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
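A generic illustration of the race and the fix (not the literal dm-crypt hunk; io->pending and cc->crypt_queue are assumed names):

	/* Buggy ordering: the queued work may run, complete and drop the
	 * reference before this thread has taken it. */
	queue_work(cc->crypt_queue, &io->work);
	atomic_inc(&io->pending);

	/* Fixed ordering: take the reference first, then hand the io over. */
	atomic_inc(&io->pending);
	queue_work(cc->crypt_queue, &io->work);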
-
Milan Broz authored
Make kcryptd_crypt_write_io_submit() responsible for decrementing the pending count after an error. This also fixes a bug in the async path that forgot to decrement it.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Alasdair G Kergon authored
Make the caller responsible for incrementing the pending count before calling kcryptd_crypt_write_io_submit() in the non-async case, to bring it into line with the async case.
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Move kcryptd_crypt_write_convert_loop inside kcryptd_crypt_write_convert. This change is needed for a later patch. No functional change.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Factor out the crypt io allocation code. Later patches will call it from another place. No functional change.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Move the io pending counting to one place. No functional change; useful to simplify debugging.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- Jul 21, 2008
-
-
Milan Broz authored
This patch implements a biovec merge function for the crypt target. If the underlying device has a merge function defined, call it. If not, keep the precomputed value.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
- Jul 02, 2008
-
-
Milan Broz authored
Add cond_resched() to prevent monopolising the CPU when processing large bios. dm-crypt processes encryption of bios in sector units. If the bio request is big, it can spend a long time in the encryption call.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Tested-by: Yan Li <elliot.li.tech@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
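An illustrative, simplified version of the per-sector loop with the added scheduling point (the helpers here are hypothetical, not the actual crypt_convert() code):

	/* Encrypt a large bio one sector at a time, yielding the CPU between
	 * sectors so other runnable tasks are not starved. */
	while (sectors_remaining(ctx)) {
		encrypt_one_sector(cc, ctx);	/* hypothetical helper */
		cond_resched();
	}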
-
- Mar 28, 2008
-
-
Milan Broz authored
Fix a regression in dm-crypt introduced in commit 3a7f6c99 ("dm crypt: use async crypto"). If write requests need to be split into pieces, the code must not process them in parallel because the crypto context cannot be shared. So there can be parallel crypto operations on one part of the write, but only one write bio can be processed at a time. This is not optimal and the workqueue code needs to be optimised for parallel processing, but for now it solves the problem without affecting the performance of synchronous crypto operation (most current dm-crypt users).
http://bugzilla.kernel.org/show_bug.cgi?id=10242
http://bugzilla.kernel.org/show_bug.cgi?id=10207
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Feb 08, 2008
-
-
Milan Broz authored
dm-crypt: Use crypto ablkcipher interface
Move the encrypt/decrypt core to the async crypto call.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
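For context, submitting a request through the ablkcipher interface means handling asynchronous return codes; a simplified sketch of that pattern (not the exact crypt_convert() logic):

	int r = crypto_ablkcipher_encrypt(req);

	if (r == -EINPROGRESS || r == -EBUSY) {
		/* Running asynchronously (or backlogged): the completion
		 * callback supplied with the request finishes the work. */
	} else if (r == 0) {
		/* Completed synchronously on this CPU. */
	} else {
		/* Error: fail the block. */
	}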
-
Milan Broz authored
dm-crypt: Use crypto ablkcipher interface
Prepare the callback function for the async crypto operation.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
dm-crypt: Use crypto ablkcipher interface
Prepare a completion for the async crypto request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
dm-crypt: Use crypto ablkcipher interface
Introduce a mempool for async crypto requests. cc->req is used mainly during synchronous operations (to prevent repeated allocation and deallocation of the same object).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
dm-crypt: Use crypto ablkcipher interface
Move scatterlists to a separate dm_crypt_struct and pick the block processing out of crypt_convert.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Make io reference counting more obvious.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Introduce crypt_write_io_loop().
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Process the write request in a separate function and queue the final bio through the io workqueue.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Add the sector into dm_crypt_io instead of using a local variable.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Alasdair G Kergon authored
Reorder kcryptd functions for clarity.
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-
Milan Broz authored
Rename functions to follow the calling convention. Prepare the skeleton of the write io error processing function.
Signed-off-by: Milan Broz <mbroz@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
-