- Sep 02, 2009
David Howells authored
Add a config option (CONFIG_DEBUG_CREDENTIALS) to turn on some debug checking for credential management. The additional code keeps track of the number of pointers from task_structs to any given cred struct, and checks to see that this number never exceeds the usage count of the cred struct (which includes all references, not just those from task_structs).

Furthermore, if SELinux is enabled, the code also checks that the security pointer in the cred struct is never seen to be invalid. This attempts to catch the bug whereby inode_has_perm() faults in an nfsd kernel thread on seeing cred->security be a NULL pointer (it appears that the credential struct has been previously released): http://www.kerneloops.org/oops.php?number=252883

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
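A minimal sketch of the invariant such checking can enforce, assuming a task-pointer counter kept alongside the usage count (the field and helper names here are illustrative assumptions, not the actual API):

```c
/* Hedged sketch: every task_struct pointer to a cred is also counted in
 * ->usage, so the task-pointer count may never exceed it. */
static void sketch_validate_creds(const struct cred *cred)
{
	int usage = atomic_read(&cred->usage);
	int subscribers = atomic_read(&cred->subscribers);	/* assumed field */

	BUG_ON(subscribers > usage);	/* more task pointers than references */
}
```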
-
- Aug 29, 2009
Joe Perches authored
Signed-off-by: Joe Perches <joe@perches.com>
Tested-by: Jens Rosenboom <jens@mcbone.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Aug 27, 2009
Benjamin Herrenschmidt authored
We call lmb_end_of_DRAM() to test whether a DMA mask is ok on a machine without IOMMU, but this function is marked as __init. I don't think there's a clean way to get the top of RAM (max_pfn doesn't appear to include highmem, or I missed something, or we have a bug :-), so for now, let's just avoid having a broken 2.6.31 by making this function non-__init and we can revisit later.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Rientjes authored
It's problematic to allow signed element_nr's or total's to be passed as part of the flex array API. flex_array_alloc() allows total_nr_elements to be set to a negative quantity, which is obviously erroneous. flex_array_get() and flex_array_put() allow negative array indices in dereferencing an array part, which could address memory mapped before struct flex_array.

The fix is to convert all existing element_nr formals to be qualified as unsigned. Existing checks that compare them to total_nr_elements or the max array size based on element_size need not be changed.

Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
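A hedged sketch of what the unsigned conversion buys: with element_nr unsigned, the single existing upper-bound check also rejects everything that used to arrive as a negative index (the signature approximates the API described above):

```c
int flex_array_put(struct flex_array *fa, unsigned int element_nr,
		   void *src, gfp_t flags)
{
	/* a formerly negative index now shows up as a huge unsigned
	 * value and fails this same comparison */
	if (element_nr >= fa->total_nr_elements)
		return -ENOSPC;
	/* ... locate the part and copy src in ... */
	return 0;
}
```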
-
David Rientjes authored
flex_array_free_parts() does not take `src' or `element_nr' formals, so remove their respective comments.

Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Rientjes authored
If all array elements fit into the base structure and data is copied using flex_array_put() starting at a non-zero index, flex_array_get() will fail to return the data. This fixes the bug by only checking for NULL parts when all elements do not fit in the base structure when flex_array_get() is used. Otherwise, fa_element_to_part_nr() will always be 0 since there are no parts structures needed and such an element may never have been put. Thus, it will remain NULL due to the kzalloc() of the base.

Additionally, flex_array_put() now only checks for a NULL part when all elements do not fit in the base structure. This is otherwise unnecessary since the base structure is guaranteed to exist (or we would have already hit a NULL pointer).

Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
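A sketch of the corrected lookup, assuming helpers along the lines of elements_fit_in_base(), fa_element_to_part_nr() and index_inside_part() (treat the exact names as assumptions):

```c
void *flex_array_get(struct flex_array *fa, unsigned int element_nr)
{
	struct flex_array_part *part;

	if (element_nr >= fa->total_nr_elements)
		return NULL;
	if (elements_fit_in_base(fa)) {
		/* everything lives in the base, so there is no parts
		 * pointer to NULL-check; use the base storage directly */
		part = (struct flex_array_part *)&fa->parts[0];
	} else {
		part = fa->parts[fa_element_to_part_nr(fa, element_nr)];
		if (!part)
			return NULL;	/* never put; part never allocated */
	}
	return &part->elements[index_inside_part(fa, element_nr)];
}
```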
-
- Aug 23, 2009
Paul E. McKenney authored
Now that CONFIG_TREE_PREEMPT_RCU is in place, there is no further need for CONFIG_PREEMPT_RCU. Remove it, along with whatever subtle bugs it may (or may not) contain.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: akpm@linux-foundation.org
Cc: mathieu.desnoyers@polymtl.ca
Cc: josht@linux.vnet.ibm.com
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
LKML-Reference: <125097461396-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Paul E. McKenney authored
Create a kernel/rcutree_plugin.h file that contains definitions for preemptable RCU (or, under the #else branch of the #ifdef, empty definitions for the classic non-preemptable semantics). These definitions fit into plugins defined in kernel/rcutree.c for this purpose.

This variant of preemptable RCU uses a new algorithm whose read-side expense is roughly that of classic hierarchical RCU under CONFIG_PREEMPT. This new algorithm's update-side expense is similar to that of classic hierarchical RCU, and, in absence of read-side preemption or blocking, is exactly that of classic hierarchical RCU. Perhaps more important, this new algorithm has a much simpler implementation, saving well over 1,000 lines of code compared to mainline's implementation of preemptable RCU, which will hopefully be retired in favor of this new algorithm.

The simplifications are obtained by maintaining per-task nesting state for running tasks, and using a simple lock-protected algorithm to handle accounting when tasks block within RCU read-side critical sections, making use of lessons learned while creating numerous user-level RCU implementations over the past 18 months.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: laijs@cn.fujitsu.com
Cc: dipankar@in.ibm.com
Cc: akpm@linux-foundation.org
Cc: mathieu.desnoyers@polymtl.ca
Cc: josht@linux.vnet.ibm.com
Cc: dvhltc@us.ibm.com
Cc: niv@us.ibm.com
Cc: peterz@infradead.org
Cc: rostedt@goodmis.org
LKML-Reference: <12509746134003-git-send-email->
Signed-off-by: Ingo Molnar <mingo@elte.hu>
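A minimal sketch of the per-task nesting state mentioned above (simplified: the real outermost-unlock path also handles work deferred by preemption inside the critical section):

```c
/* Hedged sketch of the read-side fast path. */
void sketch_rcu_read_lock(void)
{
	current->rcu_read_lock_nesting++;
	barrier();	/* critical section stays after the increment */
}

void sketch_rcu_read_unlock(void)
{
	barrier();	/* critical section stays before the decrement */
	if (--current->rcu_read_lock_nesting == 0) {
		/* outermost unlock: if this task blocked or was preempted
		 * inside the section, lock-protected accounting runs here */
	}
}
```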
-
- Aug 21, 2009
Linus Torvalds authored
When 'and'ing two bitmasks (where 'andnot' is a variation on it), some cases want to know whether the result is the empty set or not. In particular, the TLB IPI sending code wants to do cpumask operations and determine if there are any CPUs left in the final set. So this just makes the bitmask (and cpumask) functions return a boolean for whether the result has any bits set.

Cc: stable@kernel.org (2.6.30, needed by TLB shootdown fix)
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
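A hedged sketch of the new contract, assuming bits beyond nbits in the source words are kept clear:

```c
#include <linux/bitmap.h>

/* AND two bitmaps and report whether any bit survived, enabling
 * callers like: if (bitmap_and(dst, a, b, nbits)) send_the_ipis(dst); */
static inline int sketch_bitmap_and(unsigned long *dst,
				    const unsigned long *src1,
				    const unsigned long *src2,
				    unsigned int nbits)
{
	unsigned long seen = 0;
	unsigned int k;

	for (k = 0; k < BITS_TO_LONGS(nbits); k++)
		seen |= (dst[k] = src1[k] & src2[k]);
	return seen != 0;
}
```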
-
Casey Dahlin authored
swiotlb_full() in lib/swiotlb.c throws one of two panic messages based on whether the direction of transfer is from the device or to the device. The logic around this is somewhat weird in the case of bidirectional transfers. It appears to want to throw both in succession, but since it's a panic only the first makes it. This patch adds a third, separate error for DMA_BIDIRECTIONAL to make things a bit clearer.

Signed-off-by: Casey Dahlin <cdahlin@redhat.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Becky Bruce <beckyb@kernel.crashing.org>
[ further fixed the error message ]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <200908202327.n7KNRuqK001504@imap1.linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
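The shape of the resulting logic, sketched (the panic strings are illustrative, not quoted from the patch):

```c
switch (dir) {
case DMA_TO_DEVICE:
	panic("DMA: Memory would be corrupted\n");
case DMA_FROM_DEVICE:
	panic("DMA: Random memory would be DMAed\n");
case DMA_BIDIRECTIONAL:
	/* the new, separate third case; previously bidirectional
	 * transfers hit one message and never reached the other */
	panic("DMA: Random memory could be DMA accessed\n");
}
```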
-
Kyle McMartin authored
While it's debatable whether or not a NULL device argument to the DMA API functions is valid... since it certainly isn't valid on devices with an IOMMU... dma-debug really shouldn't be dereferencing NULL pointers either. Guard against that in err_printk and the driver_filter functions. A Fedora rawhide user was seeing this in one of the dvb drivers, resulting in an oops on boot.

[ A patch has been sent for testing to the driver, but I feel the dma debugging support should be fixed as well. There's still a pile of legacy garbage in the kernel passing null pointers to dma_{alloc,free}_*. :( ]

Signed-off-by: Kyle McMartin <kyle@redhat.com>
Cc: mchehab@infradead.org
Cc: Joerg Roedel <joerg.roedel@amd.com>
LKML-Reference: <20090820011708.GP25206@bombadil.infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
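A sketch of such a guard, using dev_driver_string() and dev_name() as the kernel does elsewhere (the surrounding function is illustrative):

```c
static void sketch_err_printk(struct device *dev, const char *msg)
{
	/* tolerate dev == NULL instead of dereferencing it */
	pr_err("DMA-API: device driver %s [%s]: %s\n",
	       dev ? dev_driver_string(dev) : "NULL",
	       dev ? dev_name(dev) : "NULL",
	       msg);
}
```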
-
- Aug 07, 2009
Albin Tonnerre authored
These includes were added by 079effb6 ("kmemtrace, kbuild: fix slab.h dependency problem in lib/decompress_inflate.c") to fix the build when using kmemtrace. However, they are not necessary when creating a compressed kernel, and actually create issues (they bring in a lot of things unavailable in the decompression environment), so don't include them if STATIC is defined.

Signed-off-by: Albin Tonnerre <albin.tonnerre@free-electrons.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
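The resulting guard presumably has this shape (a sketch, not the exact hunk):

```c
#ifndef STATIC	/* only the in-kernel (non-pre-boot) build needs these */
#include <linux/slab.h>		/* pulled in for the kmemtrace-enabled build */
#endif /* STATIC */
```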
-
Phillip Lougher authored
decompress_bunzip2 and decompress_unlzma have a nasty hack that subtracts 4 from the input length if being called in the pre-boot environment. This is a nasty hack because it relies on the fact that flush = NULL only when called from the pre-boot environment (i.e. arch/x86/boot/compressed/misc.c); initramfs.c and do_mounts_rd.c pass in a flush buffer (flush != NULL). This hack prevents the decompressors from being used with flush = NULL by other callers unless knowledge of the hack is propagated to them.

This patch removes the hack by making decompress (called only from the pre-boot environment) a wrapper function that subtracts 4 from the input length before calling the decompressor.

Signed-off-by: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
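A sketch of such a wrapper for the lzma case (PREBOOT, the internal unlzma() entry point, and the exact signature are assumptions based on the decompressor API described elsewhere in this log):

```c
#ifdef PREBOOT
/* Pre-boot entry point: hide the "len - 4" quirk here so unlzma()
 * itself no longer needs to know who called it. */
STATIC int INIT decompress(unsigned char *buf, int in_len,
			   int (*fill)(void *, unsigned int),
			   int (*flush)(void *, unsigned int),
			   unsigned char *output, int *posp,
			   void (*error)(char *x))
{
	return unlzma(buf, in_len - 4, fill, flush, output, posp, error);
}
#endif
```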
-
Phillip Lougher authored
Fix and improve comments in decompress/generic.h that describe the decompressor API. Also remove an unused definition, and rename INBUF_LEN in lib/decompress_inflate.c to conform to bzip2/lzma naming.

Signed-off-by: Phillip Lougher <phillip@lougher.demon.co.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Aug 04, 2009
Jonathan Corbet authored
flex_array_get() calculates an index value, then drops it on the floor; simply remove it.

Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Acked-by: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Jul 31, 2009
Sebastian Andrzej Siewior authored
sg_miter_start() is currently unaware of the direction of the copy process (to or from the scatter list). It is important to know the direction because the page has to be flushed in case the data written is seen on a different mapping in user land on cache-incoherent architectures.

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Pierre Ossman <pierre@ossman.eu>
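Usage then presumably looks like this sketch, with the direction passed as a flag (treat the flag names as assumptions):

```c
#include <linux/scatterlist.h>
#include <linux/string.h>

/* Copy all data out of a scatterlist; we only read it, so the
 * iterator knows no flush is needed on cache-incoherent machines. */
static void sketch_copy_from_sg(struct scatterlist *sgl, unsigned int nents,
				char *dst)
{
	struct sg_mapping_iter miter;

	sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG);
	while (sg_miter_next(&miter)) {
		memcpy(dst, miter.addr, miter.length);
		dst += miter.length;
	}
	sg_miter_stop(&miter);	/* would flush if we had written (TO_SG) */
}
```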
-
- Jul 30, 2009
Dave Hansen authored
Once a structure goes over PAGE_SIZE*2, we see occasional allocation failures. Some people have chosen to switch over to things like vmalloc() that will let them keep array-like access to such large structures. But vmalloc() has plenty of downsides.

Here's an alternative. I think it's what Andrew was suggesting here: http://lkml.org/lkml/2009/7/2/518

I call it a flexible array. It does all of its work in PAGE_SIZE bits, so never does an order>0 allocation. The base level has PAGE_SIZE-2*sizeof(int) bytes of storage for pointers to the second level. So, with a 32-bit arch, you get about 4MB (4183112 bytes) of total storage when the objects pack nicely into a page. It is half that on 64-bit because the pointers are twice the size. There's a table detailing this in the code.

There are kerneldocs for the functions, but here's an overview:

- flex_array_alloc() - dynamically allocate a base structure
- flex_array_free() - free the array and all of the second-level pages
- flex_array_free_parts() - free the second-level pages, but not the base (for static bases)
- flex_array_put() - copy into the array at the given index
- flex_array_get() - copy out of the array at the given index
- flex_array_prealloc() - preallocate the second-level pages between the given indexes to guarantee no allocs will occur at put() time

We could also potentially just pass the "element_size" into each of the API functions instead of storing it internally. That would get us one more base pointer on 32-bit.

I've been testing this by running it in userspace. The header and patch that I've been using are here, as well as the little script I'm using to generate the size table which goes in the kerneldocs: http://sr71.net/~dave/linux/flexarray/

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
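A short usage sketch pieced together from the overview above (signatures approximate the API as described):

```c
#include <linux/flex_array.h>

static int sketch_flex_array_demo(void)
{
	struct flex_array *fa;
	int v = 42;
	int *p;

	/* room for 1000 ints, built entirely from order-0 allocations */
	fa = flex_array_alloc(sizeof(int), 1000, GFP_KERNEL);
	if (!fa)
		return -ENOMEM;
	if (flex_array_put(fa, 13, &v, GFP_KERNEL)) {	/* copy v in at 13 */
		flex_array_free(fa);
		return -ENOMEM;
	}
	p = flex_array_get(fa, 13);	/* element 13 of the array */
	pr_info("element 13 = %d\n", *p);
	flex_array_free(fa);		/* second-level pages and the base */
	return 0;
}
```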
-
Roland Dreier authored
The generic atomic64_t implementation in lib/ did not export the functions it defined, which means that modules that use atomic64_t would not link on platforms that use it (such as 32-bit powerpc). For example, trying to build a kernel with CONFIG_NET_RDS on such a platform would fail with:

	ERROR: "atomic64_read" [net/rds/rds.ko] undefined!
	ERROR: "atomic64_set" [net/rds/rds.ko] undefined!

Fix this by exporting the atomic64_t functions to modules. (I export the entire API even if it's not all currently used by in-tree modules to avoid having to continue fixing this in dribs and drabs.)

Signed-off-by: Roland Dreier <rolandd@cisco.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
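The fix presumably amounts to export lines in lib/atomic64.c, along these lines (a representative sketch, not the complete list):

```c
#include <linux/module.h>

EXPORT_SYMBOL(atomic64_read);
EXPORT_SYMBOL(atomic64_set);
EXPORT_SYMBOL(atomic64_add);
EXPORT_SYMBOL(atomic64_add_return);
EXPORT_SYMBOL(atomic64_sub);
EXPORT_SYMBOL(atomic64_cmpxchg);
EXPORT_SYMBOL(atomic64_xchg);
/* ... and so on for the rest of the generic atomic64_t API */
```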
-
- Jul 28, 2009
Roel Kluin authored
The member was intended, not the local variable.

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Cc: Jason Baron <jbaron@redhat.com>
Cc: Greg Banks <gnb@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
FUJITA Tomonori authored
This converts swiotlb to use phys_to_dma and dma_to_phys instead of swiotlb_phys_to_bus() and swiotlb_bus_to_phys(). swiotlb_phys_to_bus() and swiotlb_bus_to_phys() are no longer necessary, so this patch also removes them.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
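On machines where bus addresses equal physical addresses, the replacement helpers are presumably trivial; a sketch of that common case:

```c
/* Hedged sketch: identity mapping between physical and DMA (bus)
 * addresses, as on IOMMU-less x86; other platforms apply an offset. */
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	return paddr;
}

static inline phys_addr_t dma_to_phys(struct device *dev, dma_addr_t daddr)
{
	return daddr;
}
```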
-
FUJITA Tomonori authored
This converts swiotlb to use dma_capable() instead of swiotlb_arch_address_needs_mapping() and is_buffer_dma_capable().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
-
FUJITA Tomonori authored
swiotlb_bus_to_virt is unnecessary; we can use swiotlb_bus_to_phys and phys_to_virt instead.

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
-
FUJITA Tomonori authored
Nobody uses swiotlb_arch_range_needs_mapping().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
-
FUJITA Tomonori authored
Nobody uses swiotlb_alloc().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
-
FUJITA Tomonori authored
Nobody uses swiotlb_alloc_boot().

Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
-
- Jul 16, 2009
Oleg Nesterov authored
is_current_single_threaded() can safely miss a freshly forked CLONE_VM task, but in this case it must not miss its parent. That is why we take mm->mmap_sem for writing: to make sure a thread/task with the same ->mm can't pass exit_mm() and disappear.

However, we can avoid ->mmap_sem and rely on rcu/barriers:

- if we do not see the exiting parent on the thread/process list we see the result of list_del_rcu(); in this case we must also see the result of list_add_rcu() which does wmb().

- if we do see the parent but its ->mm == NULL, we need rmb() to make sure we can't miss the child.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
-
Oleg Nesterov authored
- is_single_threaded(task) is not safe unless task == current: we can't use task->signal or task->mm.

- it doesn't make sense unless task == current: the task can fork right after the check.

Rename it to current_is_single_threaded() and kill the argument.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <jmorris@namei.org>
-
Oleg Nesterov authored
- Fix the comment: is_single_threaded(p) actually means that nobody shares ->mm with p. I think this helper should be renamed, and it should not have arguments. With or without this patch it must not be used unless p == current, otherwise we can't safely use p->signal or p->mm.

- "if (atomic_read(&p->signal->count) != 1)" is not right when we have a zombie group leader; use signal->live instead.

- Add a PF_KTHREAD check to skip kernel threads which may borrow p->mm, otherwise we can return the wrong "false".

- Use for_each_process() instead of do_each_thread(): all threads must use the same ->mm.

- Use down_write(mm->mmap_sem) + rcu_read_lock() instead of tasklist_lock to iterate over the process list. If there is another CLONE_VM process it can't pass exit_mm(), which takes the same mm->mmap_sem. We can miss a freshly forked CLONE_VM task, but this doesn't matter because we must see its parent and return false.

Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Roland McGrath <roland@redhat.com>
Cc: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>
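Putting those bullets together, the reworked helper plausibly looks like this simplified sketch (the real walk also copes with zombie group leaders whose ->mm is already NULL):

```c
bool current_is_single_threaded(void)
{
	struct task_struct *task = current;
	struct mm_struct *mm = task->mm;
	struct task_struct *p;
	bool ret = false;

	if (atomic_read(&task->signal->live) != 1)
		return false;	/* we ourselves have sibling threads */
	if (atomic_read(&mm->mm_users) == 1)
		return true;	/* nobody else can hold our mm */

	down_write(&mm->mmap_sem);	/* an exiting CLONE_VM task is held in exit_mm() */
	rcu_read_lock();
	for_each_process(p) {
		if (p->flags & PF_KTHREAD)
			continue;	/* kernel threads only borrow ->mm */
		if (p != task && p->mm == mm)
			goto found;	/* another process shares our mm */
	}
	ret = true;
found:
	rcu_read_unlock();
	up_write(&mm->mmap_sem);
	return ret;
}
```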
-
- Jul 10, 2009
Ingo Molnar authored
Linus noticed how unclean and buggy the overlap() function is:

- It uses convoluted (and bug-causing) positive checks for range overlap, instead of using a more natural negative check.

- Even the positive checks are buggy: a positive intersection check has four natural cases while we checked only for three, missing the (addr < start && addr2 == end) case for example.

- The variables are mis-named, making it non-obvious how the check was done.

- It needlessly uses u64 instead of unsigned long. Since these are kernel memory pointers and we explicitly exclude highmem ranges anyway we cannot ever overflow 32 bits, even if we could. (and on 64-bit it doesn't matter anyway)

All in one, this function needs a total revamp. I used Linus's suggestions minus the paranoid checks (we cannot overflow really because if we get totally bad DMA ranges passed far more things break in the systems than just DMA debugging). I also fixed a few other small details I noticed.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
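The natural negative check reads like this sketch (names chosen for clarity; treat the exact signature as an assumption):

```c
/* Ranges [a1, b1) and [a2, b2) overlap unless one ends before
 * the other begins. */
static bool overlap(void *addr, unsigned long len,
		    void *start, unsigned long size)
{
	unsigned long a1 = (unsigned long)addr;
	unsigned long b1 = a1 + len;
	unsigned long a2 = (unsigned long)start;
	unsigned long b2 = a2 + size;

	return !(b1 <= a2 || b2 <= a1);
}
```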
-
- Jun 25, 2009
Catalin Marinas authored
(feature suggested by Sergey Senozhatsky)

Kmemleak needs to track all the memory allocations, but some of these happen before kmemleak is initialised. These are stored in an internal buffer which may be exceeded in some kernel configurations. This patch adds a configuration option with a default value of 400 and also removes the stack dump when the early log buffer is exceeded.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@mail.by>
-
- Jun 23, 2009
Catalin Marinas authored
Selecting DEBUG_SLAB or SLUB_DEBUG by the KMEMLEAK menu entry may cause issues with other dependencies (KMEMCHECK). These configuration options aren't strictly needed by kmemleak, but they may increase the chances of finding leaks. This patch also updates the KMEMLEAK config entry help text.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
- Jun 21, 2009
Peter Zijlstra authored
x86 stack traces are a piece of crap without frame pointers, and it's not like the 'performance gain' of not having frame pointers matters when you selected lockdep.

Reported-by: Andrew Morton <akpm@linux-foundation.org>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- Jun 19, 2009
Arnd Bergmann authored
The new generic checksum code has a small dependency on endianness and worked only on big-endian systems. I could not find a nice efficient way to express this, so I added an #ifdef. Using 'result += le16_to_cpu(*buff);' would have worked as well, but would be slightly less efficient on big-endian systems and IMHO would not be clearer.

Also fix a bug that prevents this from working on 64-bit machines. If you have a 64-bit CPU and want to use the generic checksum code, you should probably do some more optimizations anyway, but at least the code should not break.

Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
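The trailing-odd-byte fold in do_csum() is the classic place such an #ifdef lands; a hedged sketch of the idea:

```c
#include <asm/byteorder.h>

/* Fold a final odd byte into a 16-bit one's-complement sum: the byte
 * occupies a different half of the 16-bit word depending on byte order. */
static unsigned int sketch_fold_trailing_byte(unsigned int result,
					      const unsigned char *tail)
{
#ifdef __LITTLE_ENDIAN
	result += *tail;		/* low half of the next 16-bit word */
#else
	result += (*tail << 8);		/* high half of the next 16-bit word */
#endif
	return result;
}
```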
-
- Jun 18, 2009
Florian Fainelli authored
This patch adds lib/gcd.c, which contains a greatest common divisor implementation taken from sound/core/pcm_timer.c. Several usages of this new library function will be sent to subsystem maintainers.

[akpm@linux-foundation.org: use swap() (pointed out by Joe)]
[akpm@linux-foundation.org: just add gcd.o to obj-y, remove Kconfig changes]
Signed-off-by: Florian Fainelli <florian@openwrt.org>
Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com>
Cc: Takashi Iwai <tiwai@suse.de>
Cc: Simon Horman <horms@verge.net.au>
Cc: Julius Volz <juliusv@google.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
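The implementation is the classic Euclidean algorithm; a sketch consistent with the description (the b == 0 guard is my addition, not necessarily in the original):

```c
#include <linux/kernel.h>	/* swap() */

unsigned long gcd(unsigned long a, unsigned long b)
{
	unsigned long r;

	if (a < b)
		swap(a, b);	/* ensure a >= b */
	if (!b)
		return a;	/* gcd(a, 0) == a; guard added for the sketch */
	while ((r = a % b) != 0) {	/* Euclid: reduce until remainder 0 */
		a = b;
		b = r;
	}
	return b;
}
```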
-
- Jun 17, 2009
Ingo Molnar authored
Alan Cox reported that lockdep runs out of its stack-trace entries with certain configs:

	BUG: MAX_STACK_TRACE_ENTRIES too low

This happens because there are 1024 hash buckets, each with a separate lock. Lockdep puts each lock into a separate lock class and tracks them independently. But in reality we never take more than one of the buckets, so they really belong into a single lock-class. Annotate the hash bucket lock init accordingly.

[ Impact: reduce the lockdep footprint of dma-debug ]

Reported-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
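One way to express that single-class annotation, sketched with a shared lock_class_key (HASH_SIZE and dma_entry_hash are assumed names from dma-debug):

```c
/* All bucket locks share one class, so lockdep tracks them as one lock
 * instead of 1024 and stops burning stack-trace entries per bucket. */
static struct lock_class_key dma_debug_hash_class;

static void sketch_init_hash_buckets(void)
{
	int i;

	for (i = 0; i < HASH_SIZE; i++) {
		spin_lock_init(&dma_entry_hash[i].lock);
		lockdep_set_class(&dma_entry_hash[i].lock,
				  &dma_debug_hash_class);
	}
}
```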
-
Wolfram Strepp authored
Furthermore, notice that the initial checks:

	if (!node->rb_left)
		child = node->rb_right;
	else if (!node->rb_right)
		child = node->rb_left;
	else {
		...
	}

guarantee that old->rb_right is set in the final else branch, therefore we can omit checking that again.

Signed-off-by: Wolfram Strepp <wstrepp@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Wolfram Strepp authored
There are two cases when a node having two children is erased:

'normal case': the successor is not the right-hand child of the node to be erased
'special case': the successor is the right-hand child of the node to be erased

Here some ascii-art, with the following symbols (referring to the code):

	O: node to be deleted
	N: the successor of O
	P: parent of N
	C: child of N
	L: some other node

normal case:

	      O                     N
	     / \                   / \
	    /   \                 /   \
	   L     \               L     \
	  / \     P    ---->    / \     P
	         / \                   / \
	        /                     /
	       N                     C
	        \
	         C

special case:

	     O|P                    N
	     / \                   / \
	    /   \                 /   \
	   L     \               L     \
	  / \     N    ---->    / \     C
	           \
	            C

Notice that for the special case we don't have to reconnect C to N.

Signed-off-by: Wolfram Strepp <wstrepp@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Wolfram Strepp authored
First, move some code around in order to make the next change more obvious.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wolfram Strepp <wstrepp@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Zygo Blaxell authored
There is a call to write_lock() in gen_pool_destroy which is not balanced by any corresponding write_unlock(). This causes problems with preemption, because the preemption-disable counter is incremented in the write_lock() call but never decremented by any call to write_unlock(). The bug is in gen_pool_destroy, and at least one of its callers is in non-x86 arch-specific code.

Signed-off-by: Zygo Blaxell <zygo.blaxell@xandros.com>
Cc: Jiri Kosina <trivial@kernel.org>
Cc: Steve Wise <swise@opengridcomputing.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
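A fix consistent with that description either adds the missing write_unlock() or drops the lock entirely; sketched here as dropping it, on the assumption that a pool being destroyed must not be in use anyway:

```c
void gen_pool_destroy(struct gen_pool *pool)
{
	struct list_head *_chunk, *_next_chunk;
	struct gen_pool_chunk *chunk;

	/* no write_lock(): callers must guarantee exclusive access, so
	 * taking (and never releasing) the lock only broke preemption */
	list_for_each_safe(_chunk, _next_chunk, &pool->chunks) {
		chunk = list_entry(_chunk, struct gen_pool_chunk, next_chunk);
		list_del(&chunk->next_chunk);
		kfree(chunk);
	}
	kfree(pool);
}
```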
-
Li Zefan authored
For example:

	hex_dump_to_buffer("AB", 2, 16, 1, buf, 100, 0);
	pr_info("[%s]\n", buf);

I'd expect the output to be "[41 42]", but actually it's "[41 42 ]".

This patch also makes the required buf minimal. To print the hex format of "AB", a buf of size 6 should be sufficient, but hex_dump_to_buffer() required at least 8.

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
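With the fix, the buffer can be sized to the exact need; a small usage sketch (two bytes dumped as hex need 2*2 digits + 1 space + NUL = 6 bytes):

```c
char buf[6];	/* "41 42" plus the terminating NUL */

hex_dump_to_buffer("AB", 2, 16, 1, buf, sizeof(buf), 0);
pr_info("[%s]\n", buf);	/* prints "[41 42]" with no trailing padding */
```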
-