  1. Dec 14, 2009
  2. Nov 25, 2009
    • nfsd: simplify fh_verify access checks · 864f0f61
      J. Bruce Fields authored
      
      
      All nfsd security depends on the security checks in fh_verify, and
      especially on nfsd_setuser().
      
      It therefore bothers me that the nfsd_setuser call may be made from
      three different places, depending on whether the filehandle has already
      been mapped to a dentry, and on whether subtreechecking is in force.
      
      Instead, make an unconditional call in fh_verify(), so it's trivial to
      verify that the call always occurs.
      
      That leaves us with a redundant nfsd_setuser() call in the subtreecheck
      case--it needs the correct user set earlier in order to check execute
      permissions on the path to this filehandle--but I'm willing to accept
      that minor inefficiency in the subtreecheck case in return for more
      straightforward permission checking.
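
      For illustration, the resulting structure looks roughly like this (a
      simplified sketch only; error-code conversion and the remaining checks
      in fh_verify() are elided):

          __be32 fh_verify(struct svc_rqst *rqstp, struct svc_fh *fhp,
                           int type, int access)
          {
                  __be32 error;

                  /* Map the filehandle to a dentry only if that hasn't
                   * already happened for this filehandle. */
                  if (!fhp->fh_dentry) {
                          error = nfsd_set_fh_dentry(rqstp, fhp);
                          if (error)
                                  goto out;
                  }

                  /* Unconditional credential switch: every path through
                   * fh_verify() now goes through nfsd_setuser(). */
                  if (nfsd_setuser(rqstp, fhp->fh_export)) {
                          error = nfserr_perm;
                          goto out;
                  }

                  error = 0;      /* ...type and permission checks follow... */
          out:
                  return error;
          }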
      
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      864f0f61
  3. Nov 18, 2009
  4. Nov 17, 2009
    • xfs: copy li_lsn before dropping AIL lock · 6c06f072
      Nathaniel W. Turner authored
      
      
      Access to log items on the AIL is generally protected by m_ail_lock;
      this is particularly needed when we're getting or setting the 64-bit
      li_lsn on a 32-bit platform.  This patch fixes a couple of places where we
      were accessing the log item after dropping the AIL lock on 32-bit
      machines.
      
      This can result in a partially-zeroed log->l_tail_lsn if
      xfs_trans_ail_delete is racing with xfs_trans_ail_update, and in at
      least some cases, this can leave the l_tail_lsn with a zero cycle
      number, which means xlog_space_left will think the log is full (unless
      CONFIG_XFS_DEBUG is set, in which case we'll trip an ASSERT), leading to
      processes stuck forever in xlog_grant_log_space.
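
      The pattern applied is roughly the following (illustrative; the variable
      names below are not taken from the actual patch, and the lock is named as
      in this description):

          xfs_lsn_t       tail_lsn = 0;

          spin_lock(&mp->m_ail_lock);
          /* ...delete or reposition the log item on the AIL... */
          if (update_tail)
                  tail_lsn = lip->li_lsn;         /* 64-bit read, still locked */
          spin_unlock(&mp->m_ail_lock);

          /* only the private copy is used after the unlock, so a concurrent
           * update cannot hand us a half-written li_lsn */
          if (tail_lsn)
                  xfs_log_move_tail(mp, tail_lsn);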
      
      Thanks to Adrian VanderSpek for first spotting the race potential and to
      Dave Chinner for debug assistance.
      
      Signed-off-by: Nathaniel W. Turner <nate@houseofnate.net>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      6c06f072
    • XFS bug in log recover with quota (bugzilla id 855) · 8ec6dba2
      Jan Rekorajski authored
      Hi,
      I was hit by a bug in linux 2.6.31 where XFS is not able to recover the
      log after a crash if the fs was mounted with quotas. Gory details in the
      XFS bugzilla: http://oss.sgi.com/bugzilla/show_bug.cgi?id=855
      
      It looks like the wrong struct is used in the buffer length check, and the
      following patch should fix the problem.
      
      xfs_dqblk_t has a size of 104+32 bytes, while xfs_disk_dquot_t is 104 bytes
      long, and this is exactly what I see in the system logs - "XFS: dquot too
      small (104) in xlog_recover_do_dquot_trans."
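
      In other words, the length check in xlog_recover_do_dquot_trans() should
      compare against the size of the bare on-disk dquot; roughly (a sketch of
      my reading of the change):

          /* the logged region is a bare xfs_disk_dquot_t (104 bytes), not a
           * full xfs_dqblk_t (104+32 bytes), so check against the former */
          if (item->ri_buf[1].i_len < sizeof(xfs_disk_dquot_t)) {
                  cmn_err(CE_ALERT,
                          "XFS: dquot too small (%d) in xlog_recover_do_dquot_trans.",
                          item->ri_buf[1].i_len);
                  return XFS_ERROR(EIO);
          }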
      
      Signed-off-by: Jan Rekorajski <baggins@sith.mimuw.edu.pl>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      8ec6dba2
  5. Nov 16, 2009
  6. Nov 15, 2009
  7. Nov 14, 2009
  8. Nov 13, 2009
    • nfsd: make fs/nfsd/vfs.h for common includes · 0a3adade
      J. Bruce Fields authored
      
      
      None of this stuff is used outside nfsd, so move it out of the common
      linux include directory.
      
      Actually, probably none of the stuff in include/linux/nfsd/nfsd.h really
      belongs there, so later we may remove that file entirely.
      
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
      0a3adade
    • nilfs2: fix lock order reversal in chcp operation · c1ea985c
      Ryusuke Konishi authored
      
      
      This fixes the following lock order reversal that lockdep detected:
      
      =======================================================
      [ INFO: possible circular locking dependency detected ]
      2.6.32-rc6 #7
      -------------------------------------------------------
      chcp/30157 is trying to acquire lock:
       (&nilfs->ns_mount_mutex){+.+.+.}, at: [<fed7cfcc>] nilfs_cpfile_change_cpmode+0x46/0x752 [nilfs2]
      
      but task is already holding lock:
       (&nilfs->ns_segctor_sem){++++.+}, at: [<fed7ca32>] nilfs_transaction_begin+0xba/0x110 [nilfs2]
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #2 (&nilfs->ns_segctor_sem){++++.+}:
             [<c105799c>] __lock_acquire+0x109c/0x139d
             [<c1057d26>] lock_acquire+0x89/0xa0
             [<c14151e2>] down_read+0x31/0x45
             [<fed6d77b>] nilfs_attach_checkpoint+0x8f/0x16b [nilfs2]
             [<fed6e393>] nilfs_get_sb+0x3e7/0x653 [nilfs2]
             [<c10c0ccb>] vfs_kern_mount+0x8b/0x124
             [<c10c0db2>] do_kern_mount+0x37/0xc3
             [<c10d7517>] do_mount+0x64d/0x69d
             [<c10d75cd>] sys_mount+0x66/0x95
             [<c1002a14>] sysenter_do_call+0x12/0x32
      
      -> #1 (&type->s_umount_key#31/1){+.+.+.}:
             [<c105799c>] __lock_acquire+0x109c/0x139d
             [<c1057d26>] lock_acquire+0x89/0xa0
             [<c104c0f3>] down_write_nested+0x34/0x52
             [<c10c08fe>] sget+0x22e/0x389
             [<fed6e133>] nilfs_get_sb+0x187/0x653 [nilfs2]
             [<c10c0ccb>] vfs_kern_mount+0x8b/0x124
             [<c10c0db2>] do_kern_mount+0x37/0xc3
             [<c10d7517>] do_mount+0x64d/0x69d
             [<c10d75cd>] sys_mount+0x66/0x95
             [<c1002a14>] sysenter_do_call+0x12/0x32
      
      -> #0 (&nilfs->ns_mount_mutex){+.+.+.}:
             [<c1057727>] __lock_acquire+0xe27/0x139d
             [<c1057d26>] lock_acquire+0x89/0xa0
             [<c1414d63>] mutex_lock_nested+0x41/0x23e
             [<fed7cfcc>] nilfs_cpfile_change_cpmode+0x46/0x752 [nilfs2]
             [<fed801b2>] nilfs_ioctl+0x11a/0x7da [nilfs2]
             [<c10cca12>] vfs_ioctl+0x27/0x6e
             [<c10ccf93>] do_vfs_ioctl+0x491/0x4db
             [<c10cd022>] sys_ioctl+0x45/0x5f
             [<c1002a14>] sysenter_do_call+0x12/0x32
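
      The usual way to break such a cycle is to make the ioctl path take
      ns_mount_mutex before it enters the transaction (and hence before
      ns_segctor_sem), so both paths acquire the locks in the same order.
      A rough, illustrative sketch of that shape for the chcp path:

          struct nilfs_transaction_info ti;

          mutex_lock(&nilfs->ns_mount_mutex);        /* first: mount mutex   */
          nilfs_transaction_begin(sb, &ti, 0);       /* then: ns_segctor_sem */
          ret = nilfs_cpfile_change_cpmode(cpfile, cno, mode);
          if (unlikely(ret < 0))
                  nilfs_transaction_abort(sb);
          else
                  nilfs_transaction_commit(sb);
          mutex_unlock(&nilfs->ns_mount_mutex);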
      
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      c1ea985c
  9. Nov 12, 2009
  10. Nov 11, 2009
    • Btrfs: fix panic when trying to destroy a newly allocated · a6dbd429
      Josef Bacik authored
      There is a problem where iget5_locked will look for an inode, not find it, and
      then subsequently try to allocate it.  Another CPU will have raced in and
      allocated the inode instead, so when iget5_locked gets the inode spin lock again
      and does a search, it finds the new inode.  So it goes ahead and calls
      destroy_inode on the inode it just allocated.  The problem is we don't set
      BTRFS_I(inode)->root until the new inode is completely initialized.  This patch
      makes us set root to NULL when alloc'ing a new inode, so when we get to
      btrfs_destroy_inode and we see that root is NULL we can just free up the memory
      and continue on.  This fixes the panic
      
      http://www.kerneloops.org/submitresult.php?number=812690
      
      
      
      Thanks,
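
      A sketch of the idea (simplified; the real btrfs_destroy_inode() tear-down
      does much more than this):

          /* in btrfs_alloc_inode(): leave root unset until the inode is
           * completely initialized */
          ei->root = NULL;

          /* in btrfs_destroy_inode(): */
          if (!BTRFS_I(inode)->root) {
                  /* we lost the iget5_locked() race; nothing was set up yet,
                   * so just give the memory back */
                  kmem_cache_free(btrfs_inode_cachep, BTRFS_I(inode));
                  return;
          }
          /* ...normal tear-down of ordered extents etc... */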
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      a6dbd429
    • Btrfs: allow more metadata chunk preallocation · 33b25808
      Chris Mason authored
      
      
      On an FS where all of the space has not yet been allocated into chunks,
      the ENOSPC checks can return ENOSPC just because the existing metadata
      chunks are full.
      
      We get around this by allowing more metadata chunks to be allocated up
      to a certain limit, and finding the right limit is a little fuzzy.  The
      problem is the reservations for delalloc would preallocate way too much
      of the FS as metadata.  We need to start saying no and just force some
      IO to happen.
      
      But we also need to let a reasonable amount of the FS become metadata.
      This bumps the hard limit up, later releases will have a better system.
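
      Very roughly, the chunk allocator grows a check of this shape (illustrative
      only; the variable names and the fraction below are made up, not the values
      in the patch):

          if (flags & BTRFS_BLOCK_GROUP_METADATA) {
                  /* let metadata grow past the currently allocated chunks, but
                   * refuse once it reaches a fixed share of the disk so that
                   * delalloc reservations can't turn most of the FS into
                   * metadata -- at that point say no and force IO instead */
                  if (meta_chunk_bytes > total_fs_bytes / 3)
                          return -ENOSPC;
          }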
      
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      33b25808
    • Btrfs: fallback on uncompressed io if compressed io fails · f5a84ee3
      Josef Bacik authored
      
      
      Currently compressed IO does not cope with being unable to allocate its
      entire extent in one contiguous chunk.  So if we have enough free space
      for the extent, but it's not contiguous, the write fails spectacularly.
      This patch fixes that by falling back on uncompressed IO, which lets us
      spread the delalloc extent across multiple extents.  I tested this by
      making the code randomly think the reservation had failed, so that it fell
      back on the uncompressed IO path, and it seemed to work fine.  Thanks,
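
      The fallback is easiest to picture as follows (a sketch; the two helpers
      named here are stand-ins for the real reservation and cow paths in
      fs/btrfs/inode.c):

          /* try to reserve one contiguous extent for the compressed data */
          ret = reserve_compressed_extent(inode, start, end);      /* stand-in */
          if (ret == -ENOSPC) {
                  /* plenty of free space, just not contiguous: redo the range
                   * as ordinary uncompressed delalloc, which may be satisfied
                   * by several smaller extents */
                  ret = cow_range_uncompressed(inode, start, end); /* stand-in */
          }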
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      f5a84ee3
    • Btrfs: find ideal block group for caching · ccf0e725
      Josef Bacik authored
      
      
      This patch changes a few things.  Hopefully the comments are helpful, but
      I'll try to be just as verbose here.
      
      Problem:
      
      My fedora box was taking 1 minute and 21 seconds to boot with btrfs as root.
      Part of this problem was that we pick the first block group we can find and
      start caching it, even if it may not have enough free space.  The other
      problem is that we only search for cached block groups the first time
      around, but we won't find any cached block groups because this is a newly
      mounted fs, so we end up caching several block groups during bootup, which
      with a lot of fragmentation takes around 30-45 seconds to complete and bogs
      down the system.
      
      Solution:
      
      1) Don't cache block groups willy-nilly at first.  Instead try to figure
      out which block group has the most free space, and will therefore take the
      least amount of time to cache.
      
      2) Don't be so picky about cached block groups.  Previously, once we'd
      filled up a cluster, if the block group wasn't finished caching by the next
      time we tried to do an allocation, we would completely ignore the cluster
      and start searching from the beginning of the space, which made us cache
      more block groups and slowed us down even more.  So instead of skipping
      block groups that are not finished caching when we have a hint, only skip
      the block group if it hasn't started caching yet.
      
      There is one other tweak in here.  Before if we allocated a chunk and still
      couldn't find new space, we'd end up switching the space info to force another
      chunk allocation.  This could make us end up with way too many chunks, so keep
      track of this particular case.
      
      With this patch and my previous cluster fixes my fedora box now boots in 43
      seconds, and according to the bootchart is not held up by our block group
      caching at all.
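
      Sketched out, the two changes amount to something like this (the helper
      names in the first part are made up; the caching states are the btrfs ones):

          /* 1) cache the block group with the most free space first, since it
           *    should take the least time to fill in */
          ideal_bg = block_group_with_most_free(space_info);       /* stand-in */
          start_caching(ideal_bg);                                 /* stand-in */

          /* 2) when allocating with a cluster hint, only reject a block group
           *    that has not even started caching; a partially cached group can
           *    still satisfy the cluster */
          if (block_group->cached == BTRFS_CACHE_NO)
                  goto skip_this_group;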
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      ccf0e725
    • Btrfs: avoid null deref in unpin_extent_cache() · 4eb3991c
      Dan Carpenter authored
      
      
      I re-ordered the checks to avoid dereferencing "em" when it is NULL.
      
      Found by smatch static checker.
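
      The shape of the fix is simply (sketch):

          struct extent_map *em = lookup_extent_mapping(tree, start, len);

          if (!em)                        /* test for NULL first...           */
                  goto out;
          WARN_ON(em->start != start);    /* ...and only then dereference em  */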
      
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      4eb3991c
    • Btrfs: skip btrfs_release_path in btrfs_update_root and btrfs_del_root · df66916e
      Li Dongyang authored
      
      
      We don't need to call btrfs_release_path because btrfs_free_path will do
      that for us.
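
      For reference, btrfs_free_path() at this point already does roughly:

          void btrfs_free_path(struct btrfs_path *p)
          {
                  btrfs_release_path(NULL, p);    /* drops the held extent buffers */
                  kmem_cache_free(btrfs_path_cachep, p);
          }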
      
      Signed-off-by: Li Dongyang <Jerry87905@gmail.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      df66916e
    • Btrfs: fix some metadata enospc issues · 5df6a9f6
      Josef Bacik authored
      
      
      We weren't reserving metadata space for rename, rmdir and unlink, which could
      cause problems.
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      5df6a9f6
    • Btrfs: fix how we set max_size for free space clusters · 01dea1ef
      Josef Bacik authored
      
      
      This patch fixes a problem where max_size can be set to 0 even though we
      filled the cluster properly.  We set max_size to 0 if we restart the cluster
      window, but if the new start entry is big enough to be our new cluster then we
      could return with a max_size set to 0, which will mean the next time we try to
      allocate from this cluster it will fail.  So set max_extent to the entry's
      size.  Tested this on my box and now we actually allocate from the cluster
      after we fill it.  Thanks,
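
      Roughly, where the cluster window is restarted around a new entry (the
      local variable names here are assumptions):

          /* restart the window from this entry */
          window_start = entry->offset;
          window_free  = entry->bytes;
          last         = entry;
          max_extent   = entry->bytes;    /* was left at 0, breaking the next
                                           * allocation from this cluster */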
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      01dea1ef
    • Btrfs: cleanup transaction starting and fix journal_info usage · 249ac1e5
      Josef Bacik authored
      
      
      We use journal_info to tell if we're in a nested transaction, to make sure
      we don't commit the transaction within a nested transaction.  We use
      another method to see if there are any outstanding ioctl trans handles, so
      if we're starting one, do not set current->journal_info, since that would
      interfere with other filesystems.  This patch also cleans up the
      transaction-starting code so there aren't any magic numbers.
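
      A sketch of the resulting journal_info handling ("for_ioctl" is a stand-in
      for however the patch distinguishes ioctl-started handles):

          /* when starting a transaction: remember the handle so nesting can be
           * detected, except for handles opened on behalf of an ioctl */
          if (!for_ioctl)
                  current->journal_info = h;

          /* when committing: a commit issued from inside a nested transaction
           * must not actually commit */
          if (current->journal_info == trans)
                  nested = 1;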
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      249ac1e5
    • Btrfs: fix data allocation hint start · 6346c939
      Josef Bacik authored
      
      
      Sometimes our start allocation hint when we cow a file can be either
      EXTENT_HOLE or some other such placeholder, which is not optimal.  So if
      we find that our em->block_start is one of these special values, check to
      see where the first block of the inode is stored, and use that as a hint.
      If that block is also a special value, just fall back on a hint of 0 and
      let the allocator figure out a good place to put the data.
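
      The hint selection then looks roughly like this (a sketch;
      first_block_hint() is a made-up stand-in for looking up where the first
      block of the inode lives):

          u64 hint = em->block_start;

          if (hint == EXTENT_MAP_HOLE || hint == EXTENT_MAP_INLINE ||
              hint == EXTENT_MAP_DELALLOC) {
                  /* a placeholder, not a real disk byte: try to stay near the
                   * first allocated block of the file instead */
                  hint = first_block_hint(inode);          /* stand-in */
                  if (hint == EXTENT_MAP_HOLE || hint == EXTENT_MAP_INLINE ||
                      hint == EXTENT_MAP_DELALLOC)
                          hint = 0;        /* let the allocator pick */
          }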
      
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      6346c939
    • JBD/JBD2: free j_wbuf if journal init fails. · 7b02bec0
      Tao Ma authored
      
      
      If journal init fails, we need to free j_wbuf.
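
      That is, on the journal_init_*() failure paths that already free the
      journal, also free the write buffer (sketch):

          if (err) {
                  kfree(journal->j_wbuf);   /* was leaked before this fix */
                  kfree(journal);
                  return NULL;
          }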
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Tao Ma <tao.ma@oracle.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      7b02bec0
    • ext3: Wait for proper transaction commit on fsync · fe8bc91c
      Jan Kara authored
      
      
      We cannot rely on buffer dirty bits during fsync because pdflush can come
      before fsync is called and clear dirty bits without forcing a transaction
      commit.  Instead, we track which transaction last changed the inode and
      which transaction last changed the allocation, and force that transaction
      to disk on fsync.
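
      A sketch of the idea (the inode field name below is an assumption; the
      mechanism is just "remember the tid of the last relevant transaction and
      commit exactly that one"):

          /* whenever a handle modifies the inode or its allocation: */
          EXT3_I(inode)->i_sync_tid = handle->h_transaction->t_tid;

          /* in ext3_sync_file(): */
          journal_t *journal = EXT3_SB(inode->i_sb)->s_journal;
          tid_t tid = EXT3_I(inode)->i_sync_tid;

          if (log_start_commit(journal, tid))     /* kick the commit if needed */
                  log_wait_commit(journal, tid);  /* and wait for it to reach disk */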
      
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      fe8bc91c
    • ext3: retry failed direct IO allocations · ea0174a7
      Eric Sandeen authored
      
      
      On a 256M 4k block filesystem, doing this in a loop:
      
          dd if=/dev/zero of=test oflag=direct bs=1M count=64
          rm -f test
      
      eventually leads to spurious ENOSPC:
      
          dd: writing `test': No space left on device
      
      As with other block allocation callers, it looks like we need to
      potentially retry the allocations on the initial ENOSPC.
      
      A similar patch went into ext4 (commit
      fbbf6945)
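
      The retry mirrors what the other block allocation callers already do,
      roughly:

          int retries = 0;
      retry:
          ret = blockdev_direct_IO(rw, iocb, inode, inode->i_sb->s_bdev, iov,
                                   offset, nr_segs, ext3_get_block, NULL);
          if (ret == -ENOSPC && ext3_should_retry_alloc(inode->i_sb, &retries))
                  goto retry;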
      
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      ea0174a7
    • NFSv4: Fix a cache validation bug which causes getcwd() to return ENOENT · 96d25e53
      Trond Myklebust authored
      Changeset a65318bf (NFSv4: Simplify some
      cache consistency post-op GETATTRs) incorrectly changed the getattr
      bitmap for readdir().
      This causes the readdir() function to fail to return a
      fileid/inode number, which in turn exposed a bug in the NFS readdir code
      that causes spurious ENOENT errors to appear in applications (see
      http://bugzilla.kernel.org/show_bug.cgi?id=14541).
      
      The immediate band aid is to revert the incorrect bitmap change, but more
      long term, we should change the NFS readdir code to cope with the
      fact that NFSv4 servers are not required to support fileids/inode numbers.
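
      Concretely, the band aid amounts to going back to the full per-server
      attribute bitmask (which includes FATTR4_WORD0_FILEID) when encoding the
      readdir GETATTR; something like (sketch):

          args.bitmask = NFS_SERVER(dentry->d_inode)->attr_bitmask;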
      
      Reported-by: Daniel J Blueman <daniel.blueman@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      96d25e53
  11. Nov 08, 2009
    • ext4: partial revert to fix double brelse WARNING() · 1e424a34
      Theodore Ts'o authored
      
      
      This is a partial revert of commit 6487a9d3 (only the changes made to
      fs/ext4/namei.c), since it is causing the following brelse()
      double-free warning when running fsstress on a file system with 1k
      blocksize and we run into a block allocation failure while converting
      a single-block directory to a multi-block hash-tree indexed directory.
      
      WARNING: at fs/buffer.c:1197 __brelse+0x2e/0x33()
      Hardware name: 
      VFS: brelse: Trying to free free buffer
      Modules linked in:
      Pid: 2226, comm: jbd2/sdd-8 Not tainted 2.6.32-rc6-00577-g0003f55 #101
      Call Trace:
       [<c01587fb>] warn_slowpath_common+0x65/0x95
       [<c0158869>] warn_slowpath_fmt+0x29/0x2c
       [<c021168e>] __brelse+0x2e/0x33
       [<c0288a9f>] jbd2_journal_refile_buffer+0x67/0x6c
       [<c028a9ed>] jbd2_journal_commit_transaction+0x319/0x14d8
       [<c0164d73>] ? try_to_del_timer_sync+0x58/0x60
       [<c0175bcc>] ? sched_clock_cpu+0x12a/0x13e
       [<c017f6b4>] ? trace_hardirqs_off+0xb/0xd
       [<c0175c1f>] ? cpu_clock+0x3f/0x5b
       [<c017f6ec>] ? lock_release_holdtime+0x36/0x137
       [<c0664ad0>] ? _spin_unlock_irqrestore+0x44/0x51
       [<c0180af3>] ? trace_hardirqs_on_caller+0x103/0x124
       [<c0180b1f>] ? trace_hardirqs_on+0xb/0xd
       [<c0164d73>] ? try_to_del_timer_sync+0x58/0x60
       [<c0290d1c>] kjournald2+0x11a/0x310
       [<c017118e>] ? autoremove_wake_function+0x0/0x38
       [<c0290c02>] ? kjournald2+0x0/0x310
       [<c0170ee6>] kthread+0x66/0x6b
       [<c0170e80>] ? kthread+0x0/0x6b
       [<c01251b3>] kernel_thread_helper+0x7/0x10
      ---[ end trace 5579351b86af61e3 ]---
      
      Commit 6487a9d3 was an attempt to fix some buffer head leaks in an ENOSPC
      error path, but in some cases it actually results in the buffer head being
      released one time too many, as shown above.  Fixing this properly means
      cleaning up who is responsible for releasing the buffer heads, moving that
      responsibility from the callee to the caller of add_dirent_to_buf().
      
      Since that's a relatively complex change, and we're late in the rcX
      development cycle, I'm reverting this now, and holding back a more
      complete fix until after 2.6.32 ships.  We've lived with this
      buffer_head leak on ENOSPC in ext3 and ext4 for a very long time; a
      few more months won't kill us.
      
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Curt Wohlgemuth <curtw@google.com>
      1e424a34
    • nilfs2: fix missing cleanup of gc cache on error cases · c083234f
      Ryusuke Konishi authored
      
      
      This fixes an -rc1 regression brought by the commit:
      1cf58fa8 ("nilfs2: shorten freeze
      period due to GC in write operation v3").
      
      Although that patch moved the call to nilfs_ioctl_move_blocks() out of
      nilfs_ioctl_prepare_clean_segments() and into
      nilfs_ioctl_clean_segments(), it didn't move the corresponding cleanup
      needed for the error case.
      
      This patch moves the missing cleanup to the destination function.
      
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Acked-by: Jiro SEKIBA <jir@unicus.jp>
      c083234f
    • nilfs2: fix kernel oops in error case of nilfs_ioctl_move_blocks · 5399dd1f
      Ryusuke Konishi authored
      
      
      This fixes a kernel oops reported by Markus Trippelsdorf in the email
      titled "[NILFS users] kernel Oops while running nilfs_cleanerd".
      
      The oops was caused by a bug of error path in
      nilfs_ioctl_move_blocks() function, which was inlined in
      nilfs_ioctl_clean_segments().
      
      nilfs_ioctl_move_blocks checks for duplication of blocks which will be
      moved in garbage collection.  But the check should have been done within
      nilfs_ioctl_move_inode_block() to prevent list corruption among the
      buffers storing the target blocks.
      
      To fix the kernel oops, this moves the duplication check forward, before
      the list insertion.
      
      I also tested this for stable trees [2.6.30, 2.6.31].
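
      The assumed shape of the fix in nilfs_ioctl_move_inode_block() is to
      reject a duplicate before the buffer is ever linked (sketch):

          if (unlikely(!list_empty(&bh->b_assoc_buffers))) {
                  /* this block was already collected for this GC pass; adding
                   * it again would corrupt the buffer list */
                  brelse(bh);
                  return -EEXIST;
          }
          list_add_tail(&bh->b_assoc_buffers, buffers);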
      
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Cc: stable <stable@kernel.org>
      5399dd1f
  12. Nov 06, 2009
    • cifs: don't use CIFSGetSrvInodeNumber in is_path_accessible · f475f677
      Jeff Layton authored
      
      
      Because it's lighter weight, CIFS tries to use CIFSGetSrvInodeNumber to
      verify the accessibility of the root inode and then falls back to doing a
      full QPathInfo if that fails with -EOPNOTSUPP.  I have at least one report
      of a server that returns NT_STATUS_INTERNAL_ERROR rather than something
      that translates to EOPNOTSUPP.
      
      Rather than trying to be clever with that call, just have
      is_path_accessible do a normal QPathInfo. That call is widely
      supported and it shouldn't increase the overhead significantly.
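
      So is_path_accessible() just issues the QPathInfo directly; roughly
      (sketch, with only the interesting arguments spelled out):

          FILE_ALL_INFO *info = kzalloc(sizeof(FILE_ALL_INFO), GFP_KERNEL);

          if (info == NULL)
                  return -ENOMEM;
          rc = CIFSSMBQPathInfo(xid, tcon, full_path, info,
                                0 /* not legacy */, cifs_sb->local_nls,
                                cifs_sb->mnt_cifs_flags &
                                        CIFS_MOUNT_MAP_SPECIAL_CHR);
          kfree(info);
          return rc;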
      
      Cc: Stable <stable@kernel.org>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      f475f677
    • cifs: clean up handling when server doesn't consistently support inode numbers · ec06aedd
      Jeff Layton authored
      
      
      It's possible that a server will return a valid FileID when we query the
      FILE_INTERNAL_INFO for the root inode, but then zeroed out inode numbers
      when we do a FindFile with an infolevel of
      SMB_FIND_FILE_ID_FULL_DIR_INFO.
      
      In this situation turn off querying for server inode numbers, generate a
      warning for the user and just generate an inode number using iunique.
      Once we generate any inode number with iunique we can no longer use any
      server inode numbers or we risk collisions, so ensure that we don't do
      that in cifs_get_inode_info either.
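
      Sketch of the resulting behaviour ("inum_is_zero" and "ino" are stand-ins;
      the warning text is illustrative):

          if (inum_is_zero && (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
                  /* the server handed out a FileID for the root but zeroes in
                   * FindFile: stop trusting server inode numbers on this mount */
                  cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
                  cERROR(1, ("inconsistent server inode numbers; disabling them"));
          }
          if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM))
                  ino = iunique(sb, 2);   /* local number; never mix the two */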
      
      Cc: Stable <stable@kernel.org>
      Reported-by: Timothy Normand Miller <theosib@gmail.com>
      Signed-off-by: Jeff Layton <jlayton@redhat.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
      ec06aedd
    • ext4: Fix return value of ext4_split_unwritten_extents() to fix direct I/O · ba230c3f
      Mingming authored
      
      
      To prepare for a direct I/O write, we need to split the unwritten
      extents before submitting the I/O.  When no extents needed to be
      split, ext4_split_unwritten_extents() was incorrectly returning 0
      instead of the size of the uninitialized extent.  This bug caused the
      wrong return value to be sent back to the VFS code when called from the
      async IO path, leading to an unnecessary fall back to buffered IO.
      
      This bug also hid the fact that the check to see whether or not a
      split would be necessary was incorrect; we can only skip splitting the
      extent if the write completely covers the uninitialized extent.
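
      Roughly, the corrected logic is (a sketch; iblock/max_blocks are the
      requested range and ex the uninitialized extent):

          ee_block  = le32_to_cpu(ex->ee_block);
          ee_len    = ext4_ext_get_actual_len(ex);
          allocated = ee_len - (iblock - ee_block);

          /* only skip the split when the write covers the whole uninitialized
           * extent, and then report how much is mapped instead of returning 0,
           * so the async IO caller doesn't needlessly fall back to buffered IO */
          if (iblock == ee_block && max_blocks >= ee_len)
                  return allocated;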
      
      Signed-off-by: Mingming Cao <cmm@us.ibm.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
      ba230c3f