  1. Sep 02, 2020
  2. Aug 31, 2020
    • affs: fix basic permission bits to actually work · d3a84a8d
      Max Staudt authored
      
      
      The basic permission bits (protection bits in AmigaOS) have been broken
      in Linux's AFFS: it would only ever set bits, never clear them.
      Also, contrary to the documentation, the Archived bit was not handled.
      
      Let's fix this for good, and set the bits such that Linux and classic
      AmigaOS can coexist in the most peaceful manner.
      
      Also, update the documentation to represent the current state of things.
      
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Staudt <max@enpas.org>
      Signed-off-by: David Sterba <dsterba@suse.com>
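The inverted protection-bit semantics described above invite exactly this bug: OR-ing bits in can never re-grant a permission once it has been denied. A minimal userspace sketch of the idea (bit positions follow the AmigaOS FIBF_* convention; the helper names and the write/delete coupling are our illustration, not the kernel's code):

```c
#include <stdint.h>

/* AmigaOS low protection bits are stored inverted: a SET bit DENIES
 * the operation.  Bit positions follow the FIBF_* layout. */
#define AMI_DELETE  (1u << 0)
#define AMI_EXEC    (1u << 1)
#define AMI_WRITE   (1u << 2)
#define AMI_READ    (1u << 3)
#define AMI_ARCHIVE (1u << 4)   /* set = archived; clear on any change */

/* Rebuild the RWED bits from a Linux rwx triplet.  The buggy code
 * only OR-ed bits in, so a once-denied permission stayed denied. */
static uint32_t mode_to_prot(uint32_t prot, unsigned int rwx)
{
    prot &= ~(AMI_READ | AMI_WRITE | AMI_EXEC | AMI_DELETE);
    if (!(rwx & 04))
        prot |= AMI_READ;
    if (!(rwx & 02))
        prot |= AMI_WRITE | AMI_DELETE;   /* coupling is illustrative */
    if (!(rwx & 01))
        prot |= AMI_EXEC;
    return prot;
}

/* Any modification should also clear the Archived bit so AmigaOS
 * backup tools notice the file changed. */
static uint32_t mark_modified(uint32_t prot)
{
    return prot & ~AMI_ARCHIVE;
}
```

Clearing before setting is the whole point: chmod to a more permissive mode and back again now round-trips correctly.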
  3. Aug 28, 2020
  4. Aug 27, 2020
    • io_uring: don't bounce block based -EAGAIN retry off task_work · fdee946d
      Jens Axboe authored
      
      
      These events happen inline from submission, so there's no need to
      bounce them through the original task. Just set them up for retry
      and issue retry directly instead of going over task_work.
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix IOPOLL -EAGAIN retries · eefdf30f
      Jens Axboe authored
      
      
      This normally isn't hit, as polling is mostly done on NVMe with deep
      queue depths. But if we do run into request starvation, we need to
      ensure that retries are properly serialized.
      
      Reported-by: Andres Freund <andres@anarazel.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • afs: Remove erroneous fallthough annotation · 210e799e
      Dan Carpenter authored
      
      
      The fall through annotation comes after a return statement so it's not
      reachable.
      
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
    • xfs: initialize the shortform attr header padding entry · 125eac24
      Darrick J. Wong authored
      
      
      Don't leak kernel memory contents into the shortform attr fork.
      
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • btrfs: tree-checker: fix the error message for transid error · f96d6960
      Qu Wenruo authored
      
      
      The error message for the inode transid is the same as the one for inode
      generation, so we cannot tell which of the two checks actually failed.
      
      Reported-by: Tyler Richmond <t.d.richmond@gmail.com>
      Fixes: 496245ca ("btrfs: tree-checker: Verify inode item")
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: set the lockdep class for log tree extent buffers · d3beaa25
      Josef Bacik authored
      
      
      These are special extent buffers that get rewound in order to lookup
      the state of the tree at a specific point in time.  As such they do not
      go through the normal initialization paths that set their lockdep class,
      so handle them appropriately when they are created and before they are
      locked.
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: set the correct lockdep class for new nodes · ad244665
      Josef Bacik authored
      
      
      When flipping over to the rw_semaphore I noticed I'd get a lockdep splat
      in replace_path(), which is weird because we're swapping the reloc root
      with the actual target root.  Turns out this is because we're using the
      root->root_key.objectid as the root id for the newly allocated tree
      block when setting the lockdep class, however we need to be using the
      actual owner of this new block, which is saved in owner.
      
      The affected path is through btrfs_copy_root, as all other callers of
      btrfs_alloc_tree_block (which calls init_new_buffer) have root_objectid
      == root->root_key.objectid.
      
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: allocate scrub workqueues outside of locks · e89c4a9c
      Josef Bacik authored
      
      
      I got the following lockdep splat while testing:
      
        ======================================================
        WARNING: possible circular locking dependency detected
        5.8.0-rc7-00172-g021118712e59 #932 Not tainted
        ------------------------------------------------------
        btrfs/229626 is trying to acquire lock:
        ffffffff828513f0 (cpu_hotplug_lock){++++}-{0:0}, at: alloc_workqueue+0x378/0x450
      
        but task is already holding lock:
        ffff889dd3889518 (&fs_info->scrub_lock){+.+.}-{3:3}, at: btrfs_scrub_dev+0x11c/0x630
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #7 (&fs_info->scrub_lock){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 btrfs_scrub_dev+0x11c/0x630
      	 btrfs_dev_replace_by_ioctl.cold.21+0x10a/0x1d4
      	 btrfs_ioctl+0x2799/0x30a0
      	 ksys_ioctl+0x83/0xc0
      	 __x64_sys_ioctl+0x16/0x20
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #6 (&fs_devs->device_list_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 btrfs_run_dev_stats+0x49/0x480
      	 commit_cowonly_roots+0xb5/0x2a0
      	 btrfs_commit_transaction+0x516/0xa60
      	 sync_filesystem+0x6b/0x90
      	 generic_shutdown_super+0x22/0x100
      	 kill_anon_super+0xe/0x30
      	 btrfs_kill_super+0x12/0x20
      	 deactivate_locked_super+0x29/0x60
      	 cleanup_mnt+0xb8/0x140
      	 task_work_run+0x6d/0xb0
      	 __prepare_exit_to_usermode+0x1cc/0x1e0
      	 do_syscall_64+0x5c/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #5 (&fs_info->tree_log_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 btrfs_commit_transaction+0x4bb/0xa60
      	 sync_filesystem+0x6b/0x90
      	 generic_shutdown_super+0x22/0x100
      	 kill_anon_super+0xe/0x30
      	 btrfs_kill_super+0x12/0x20
      	 deactivate_locked_super+0x29/0x60
      	 cleanup_mnt+0xb8/0x140
      	 task_work_run+0x6d/0xb0
      	 __prepare_exit_to_usermode+0x1cc/0x1e0
      	 do_syscall_64+0x5c/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #4 (&fs_info->reloc_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 btrfs_record_root_in_trans+0x43/0x70
      	 start_transaction+0xd1/0x5d0
      	 btrfs_dirty_inode+0x42/0xd0
      	 touch_atime+0xa1/0xd0
      	 btrfs_file_mmap+0x3f/0x60
      	 mmap_region+0x3a4/0x640
      	 do_mmap+0x376/0x580
      	 vm_mmap_pgoff+0xd5/0x120
      	 ksys_mmap_pgoff+0x193/0x230
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #3 (&mm->mmap_lock#2){++++}-{3:3}:
      	 __might_fault+0x68/0x90
      	 _copy_to_user+0x1e/0x80
      	 perf_read+0x141/0x2c0
      	 vfs_read+0xad/0x1b0
      	 ksys_read+0x5f/0xe0
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #2 (&cpuctx_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 perf_event_init_cpu+0x88/0x150
      	 perf_event_init+0x1db/0x20b
      	 start_kernel+0x3ae/0x53c
      	 secondary_startup_64+0xa4/0xb0
      
        -> #1 (pmus_lock){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 perf_event_init_cpu+0x4f/0x150
      	 cpuhp_invoke_callback+0xb1/0x900
      	 _cpu_up.constprop.26+0x9f/0x130
      	 cpu_up+0x7b/0xc0
      	 bringup_nonboot_cpus+0x4f/0x60
      	 smp_init+0x26/0x71
      	 kernel_init_freeable+0x110/0x258
      	 kernel_init+0xa/0x103
      	 ret_from_fork+0x1f/0x30
      
        -> #0 (cpu_hotplug_lock){++++}-{0:0}:
      	 __lock_acquire+0x1272/0x2310
      	 lock_acquire+0x9e/0x360
      	 cpus_read_lock+0x39/0xb0
      	 alloc_workqueue+0x378/0x450
      	 __btrfs_alloc_workqueue+0x15d/0x200
      	 btrfs_alloc_workqueue+0x51/0x160
      	 scrub_workers_get+0x5a/0x170
      	 btrfs_scrub_dev+0x18c/0x630
      	 btrfs_dev_replace_by_ioctl.cold.21+0x10a/0x1d4
      	 btrfs_ioctl+0x2799/0x30a0
      	 ksys_ioctl+0x83/0xc0
      	 __x64_sys_ioctl+0x16/0x20
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        other info that might help us debug this:
      
        Chain exists of:
          cpu_hotplug_lock --> &fs_devs->device_list_mutex --> &fs_info->scrub_lock
      
         Possible unsafe locking scenario:
      
      	 CPU0                    CPU1
      	 ----                    ----
          lock(&fs_info->scrub_lock);
      				 lock(&fs_devs->device_list_mutex);
      				 lock(&fs_info->scrub_lock);
          lock(cpu_hotplug_lock);
      
         *** DEADLOCK ***
      
        2 locks held by btrfs/229626:
         #0: ffff88bfe8bb86e0 (&fs_devs->device_list_mutex){+.+.}-{3:3}, at: btrfs_scrub_dev+0xbd/0x630
         #1: ffff889dd3889518 (&fs_info->scrub_lock){+.+.}-{3:3}, at: btrfs_scrub_dev+0x11c/0x630
      
        stack backtrace:
        CPU: 15 PID: 229626 Comm: btrfs Kdump: loaded Not tainted 5.8.0-rc7-00172-g021118712e59 #932
        Hardware name: Quanta Tioga Pass Single Side 01-0030993006/Tioga Pass Single Side, BIOS F08_3A18 12/20/2018
        Call Trace:
         dump_stack+0x78/0xa0
         check_noncircular+0x165/0x180
         __lock_acquire+0x1272/0x2310
         lock_acquire+0x9e/0x360
         ? alloc_workqueue+0x378/0x450
         cpus_read_lock+0x39/0xb0
         ? alloc_workqueue+0x378/0x450
         alloc_workqueue+0x378/0x450
         ? rcu_read_lock_sched_held+0x52/0x80
         __btrfs_alloc_workqueue+0x15d/0x200
         btrfs_alloc_workqueue+0x51/0x160
         scrub_workers_get+0x5a/0x170
         btrfs_scrub_dev+0x18c/0x630
         ? start_transaction+0xd1/0x5d0
         btrfs_dev_replace_by_ioctl.cold.21+0x10a/0x1d4
         btrfs_ioctl+0x2799/0x30a0
         ? do_sigaction+0x102/0x250
         ? lockdep_hardirqs_on_prepare+0xca/0x160
         ? _raw_spin_unlock_irq+0x24/0x30
         ? trace_hardirqs_on+0x1c/0xe0
         ? _raw_spin_unlock_irq+0x24/0x30
         ? do_sigaction+0x102/0x250
         ? ksys_ioctl+0x83/0xc0
         ksys_ioctl+0x83/0xc0
         __x64_sys_ioctl+0x16/0x20
         do_syscall_64+0x50/0x90
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      This happens because we're allocating the scrub workqueues under the
      scrub and device list mutex, which brings in a whole host of other
      dependencies.
      
      Because the workqueue allocation is done with GFP_KERNEL, it can
      trigger reclaim, which can lead to a transaction commit, which in turn
      needs the device_list_mutex, so it can deadlock. That is a separate
      problem for which this fix is also a solution.
      
      Fix this by moving the actual allocation outside of the
      scrub lock, and then only take the lock once we're ready to actually
      assign them to the fs_info.  We'll now have to cleanup the workqueues in
      a few more places, so I've added a helper to do the refcount dance to
      safely free the workqueues.
      
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
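The shape of the fix, allocate first and only publish under the lock, can be sketched in plain userspace C. The "lock" is modeled as a flag so the sketch stays self-contained, and all names are illustrative rather than the btrfs ones:

```c
#include <stdlib.h>
#include <stddef.h>

struct workers { int nthreads; };

static int scrub_lock;                   /* stand-in for the mutex */
static struct workers *scrub_workers;    /* protected by scrub_lock */

/* Allocate with no locks held, then take the lock only to assign. */
static int scrub_workers_get(void)
{
    struct workers *w = malloc(sizeof(*w));   /* may sleep / reclaim */
    if (!w)
        return -1;
    w->nthreads = 4;

    scrub_lock = 1;                      /* lock */
    if (!scrub_workers) {
        scrub_workers = w;               /* first caller publishes */
        w = NULL;
    }
    scrub_lock = 0;                      /* unlock */

    free(w);         /* a racing caller's surplus, freed with no locks */
    return 0;
}
```

Because the sleeping allocation now happens before any lock is taken, the cpu_hotplug_lock dependency never nests inside scrub_lock.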
    • btrfs: fix potential deadlock in the search ioctl · a48b73ec
      Josef Bacik authored
      
      
      With the conversion of the tree locks to rwsem I got the following
      lockdep splat:
      
        ======================================================
        WARNING: possible circular locking dependency detected
        5.8.0-rc7-00165-g04ec4da5f45f-dirty #922 Not tainted
        ------------------------------------------------------
        compsize/11122 is trying to acquire lock:
        ffff889fabca8768 (&mm->mmap_lock#2){++++}-{3:3}, at: __might_fault+0x3e/0x90
      
        but task is already holding lock:
        ffff889fe720fe40 (btrfs-fs-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x39/0x180
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #2 (btrfs-fs-00){++++}-{3:3}:
      	 down_write_nested+0x3b/0x70
      	 __btrfs_tree_lock+0x24/0x120
      	 btrfs_search_slot+0x756/0x990
      	 btrfs_lookup_inode+0x3a/0xb4
      	 __btrfs_update_delayed_inode+0x93/0x270
      	 btrfs_async_run_delayed_root+0x168/0x230
      	 btrfs_work_helper+0xd4/0x570
      	 process_one_work+0x2ad/0x5f0
      	 worker_thread+0x3a/0x3d0
      	 kthread+0x133/0x150
      	 ret_from_fork+0x1f/0x30
      
        -> #1 (&delayed_node->mutex){+.+.}-{3:3}:
      	 __mutex_lock+0x9f/0x930
      	 btrfs_delayed_update_inode+0x50/0x440
      	 btrfs_update_inode+0x8a/0xf0
      	 btrfs_dirty_inode+0x5b/0xd0
      	 touch_atime+0xa1/0xd0
      	 btrfs_file_mmap+0x3f/0x60
      	 mmap_region+0x3a4/0x640
      	 do_mmap+0x376/0x580
      	 vm_mmap_pgoff+0xd5/0x120
      	 ksys_mmap_pgoff+0x193/0x230
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #0 (&mm->mmap_lock#2){++++}-{3:3}:
      	 __lock_acquire+0x1272/0x2310
      	 lock_acquire+0x9e/0x360
      	 __might_fault+0x68/0x90
      	 _copy_to_user+0x1e/0x80
      	 copy_to_sk.isra.32+0x121/0x300
      	 search_ioctl+0x106/0x200
      	 btrfs_ioctl_tree_search_v2+0x7b/0xf0
      	 btrfs_ioctl+0x106f/0x30a0
      	 ksys_ioctl+0x83/0xc0
      	 __x64_sys_ioctl+0x16/0x20
      	 do_syscall_64+0x50/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        other info that might help us debug this:
      
        Chain exists of:
          &mm->mmap_lock#2 --> &delayed_node->mutex --> btrfs-fs-00
      
         Possible unsafe locking scenario:
      
      	 CPU0                    CPU1
      	 ----                    ----
          lock(btrfs-fs-00);
      				 lock(&delayed_node->mutex);
      				 lock(btrfs-fs-00);
          lock(&mm->mmap_lock#2);
      
         *** DEADLOCK ***
      
        1 lock held by compsize/11122:
         #0: ffff889fe720fe40 (btrfs-fs-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x39/0x180
      
        stack backtrace:
        CPU: 17 PID: 11122 Comm: compsize Kdump: loaded Not tainted 5.8.0-rc7-00165-g04ec4da5f45f-dirty #922
        Hardware name: Quanta Tioga Pass Single Side 01-0030993006/Tioga Pass Single Side, BIOS F08_3A18 12/20/2018
        Call Trace:
         dump_stack+0x78/0xa0
         check_noncircular+0x165/0x180
         __lock_acquire+0x1272/0x2310
         lock_acquire+0x9e/0x360
         ? __might_fault+0x3e/0x90
         ? find_held_lock+0x72/0x90
         __might_fault+0x68/0x90
         ? __might_fault+0x3e/0x90
         _copy_to_user+0x1e/0x80
         copy_to_sk.isra.32+0x121/0x300
         ? btrfs_search_forward+0x2a6/0x360
         search_ioctl+0x106/0x200
         btrfs_ioctl_tree_search_v2+0x7b/0xf0
         btrfs_ioctl+0x106f/0x30a0
         ? __do_sys_newfstat+0x5a/0x70
         ? ksys_ioctl+0x83/0xc0
         ksys_ioctl+0x83/0xc0
         __x64_sys_ioctl+0x16/0x20
         do_syscall_64+0x50/0x90
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      The problem is we're doing a copy_to_user() while holding tree locks,
      which can deadlock if we have to take a page fault during the
      copy_to_user(). This exists even without my locking changes, so it
      needs to be fixed. Rework the search ioctl to pre-fault the user pages
      and then use copy_to_user_nofault() for the copying.
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: drop path before adding new uuid tree entry · 9771a5cf
      Josef Bacik authored
      
      
      With the conversion of the tree locks to rwsem I got the following
      lockdep splat:
      
        ======================================================
        WARNING: possible circular locking dependency detected
        5.8.0-rc7-00167-g0d7ba0c5b375-dirty #925 Not tainted
        ------------------------------------------------------
        btrfs-uuid/7955 is trying to acquire lock:
        ffff88bfbafec0f8 (btrfs-root-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x39/0x180
      
        but task is already holding lock:
        ffff88bfbafef2a8 (btrfs-uuid-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x39/0x180
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #1 (btrfs-uuid-00){++++}-{3:3}:
      	 down_read_nested+0x3e/0x140
      	 __btrfs_tree_read_lock+0x39/0x180
      	 __btrfs_read_lock_root_node+0x3a/0x50
      	 btrfs_search_slot+0x4bd/0x990
      	 btrfs_uuid_tree_add+0x89/0x2d0
      	 btrfs_uuid_scan_kthread+0x330/0x390
      	 kthread+0x133/0x150
      	 ret_from_fork+0x1f/0x30
      
        -> #0 (btrfs-root-00){++++}-{3:3}:
      	 __lock_acquire+0x1272/0x2310
      	 lock_acquire+0x9e/0x360
      	 down_read_nested+0x3e/0x140
      	 __btrfs_tree_read_lock+0x39/0x180
      	 __btrfs_read_lock_root_node+0x3a/0x50
      	 btrfs_search_slot+0x4bd/0x990
      	 btrfs_find_root+0x45/0x1b0
      	 btrfs_read_tree_root+0x61/0x100
      	 btrfs_get_root_ref.part.50+0x143/0x630
      	 btrfs_uuid_tree_iterate+0x207/0x314
      	 btrfs_uuid_rescan_kthread+0x12/0x50
      	 kthread+0x133/0x150
      	 ret_from_fork+0x1f/0x30
      
        other info that might help us debug this:
      
         Possible unsafe locking scenario:
      
      	 CPU0                    CPU1
      	 ----                    ----
          lock(btrfs-uuid-00);
      				 lock(btrfs-root-00);
      				 lock(btrfs-uuid-00);
          lock(btrfs-root-00);
      
         *** DEADLOCK ***
      
        1 lock held by btrfs-uuid/7955:
         #0: ffff88bfbafef2a8 (btrfs-uuid-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x39/0x180
      
        stack backtrace:
        CPU: 73 PID: 7955 Comm: btrfs-uuid Kdump: loaded Not tainted 5.8.0-rc7-00167-g0d7ba0c5b375-dirty #925
        Hardware name: Quanta Tioga Pass Single Side 01-0030993006/Tioga Pass Single Side, BIOS F08_3A18 12/20/2018
        Call Trace:
         dump_stack+0x78/0xa0
         check_noncircular+0x165/0x180
         __lock_acquire+0x1272/0x2310
         lock_acquire+0x9e/0x360
         ? __btrfs_tree_read_lock+0x39/0x180
         ? btrfs_root_node+0x1c/0x1d0
         down_read_nested+0x3e/0x140
         ? __btrfs_tree_read_lock+0x39/0x180
         __btrfs_tree_read_lock+0x39/0x180
         __btrfs_read_lock_root_node+0x3a/0x50
         btrfs_search_slot+0x4bd/0x990
         btrfs_find_root+0x45/0x1b0
         btrfs_read_tree_root+0x61/0x100
         btrfs_get_root_ref.part.50+0x143/0x630
         btrfs_uuid_tree_iterate+0x207/0x314
         ? btree_readpage+0x20/0x20
         btrfs_uuid_rescan_kthread+0x12/0x50
         kthread+0x133/0x150
         ? kthread_create_on_node+0x60/0x60
         ret_from_fork+0x1f/0x30
      
      This problem exists because we have two different rescan threads,
      btrfs_uuid_scan_kthread which creates the uuid tree, and
      btrfs_uuid_tree_iterate that goes through and updates or deletes any out
      of date roots.  The problem is they both do things in different order.
      btrfs_uuid_scan_kthread() reads the tree_root, and then inserts entries
      into the uuid_root.  btrfs_uuid_tree_iterate() scans the uuid_root, but
      then does a btrfs_get_fs_root() which can read from the tree_root.
      
      It's actually easy enough to not be holding the path in
      btrfs_uuid_scan_kthread() when we add a uuid entry, as we already drop
      it further down and re-start the search when we loop.  So simply move
      the path release before we add our entry to the uuid tree.
      
      This also fixes a problem where we're holding a path open after we do
      btrfs_end_transaction(), which has its own problems.
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
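The ABBA pattern in the two chains above reduces to a small sketch: releasing the first tree's path before locking the second removes the inversion. Flags stand in for the tree locks and the names are illustrative, not btrfs code:

```c
/* Thread 1 (scan) used to hold the uuid tree while locking the root
 * tree; thread 2 (iterate) takes them in the opposite order.  By
 * releasing the uuid path first, thread 1 never holds both at once. */
static int uuid_locked;
static int root_locked;
static int entries_added;

static int add_uuid_entry(void)
{
    uuid_locked = 1;          /* btrfs_search_slot on the uuid tree */
    uuid_locked = 0;          /* the fix: btrfs_release_path here   */

    root_locked = 1;          /* now safe to read the tree root     */
    int held_both = uuid_locked && root_locked;
    entries_added++;
    root_locked = 0;
    return held_both;         /* 0 means no lock-order inversion */
}
```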
    • btrfs: block-group: fix free-space bitmap threshold · e3e39c72
      Marcos Paulo de Souza authored
      [BUG]
      After commit 9afc6649 ("btrfs: block-group: refactor how we read one
      block group item"), cache->length is being assigned after calling
      btrfs_create_block_group_cache. This causes a problem since
      set_free_space_tree_thresholds calculates the free-space threshold to
      decide if the free-space tree should convert from extents to bitmaps.
      
      The current code calls set_free_space_tree_thresholds with cache->length
      being 0, which then makes cache->bitmap_high_thresh zero. This implies
      the system will always use bitmap instead of extents, which is not
      desired if the block group is not fragmented.
      
      This behavior can be seen by a test that expects to repair systems
      with FREE_SPACE_EXTENT and FREE_SPACE_BITMAP, but the current code only
      created FREE_SPACE_BITMAP.
      
      [FIX]
      Call set_free_space_tree_thresholds after setting cache->length. A
      WARN_ON has been added to set_free_space_tree_thresholds to help
      prevent the same mistake from happening again in the future.
      
      Link: https://github.com/kdave/btrfs-progs/issues/251
      
      
      Fixes: 9afc6649 ("btrfs: block-group: refactor how we read one block group item")
      CC: stable@vger.kernel.org # 5.8+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Marcos Paulo de Souza <mpdesouza@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
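The ordering bug is easy to model: the threshold is a function of the block group's length, so computing it while length is still 0 yields a threshold of 0, and every group "exceeds" it and converts to bitmaps. The formula below is a stand-in for illustration, not the real btrfs calculation:

```c
#include <stdint.h>

/* Stand-in threshold: stay extent-based until the entry count
 * exceeds some fraction of the group size. */
static uint64_t bitmap_high_thresh(uint64_t length)
{
    return length / 4096 / 2;
}

/* With length still 0 (the pre-fix call order), the threshold is 0
 * and even a single free-space extent triggers bitmap conversion. */
static int converts_to_bitmap(uint64_t extent_count, uint64_t length)
{
    return extent_count > bitmap_high_thresh(length);
}
```

Calling the threshold calculation only after length is assigned (the fix) gives a nonzero threshold and preserves extent-based tracking for unfragmented groups.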
    • io_uring: clear req->result on IOPOLL re-issue · 56450c20
      Jens Axboe authored
      
      
      Make sure we clear req->result, which was set to -EAGAIN for retry
      purposes, when moving it to the reissue list. Otherwise we can end up
      retrying a request more than once, which leads to weird results in
      the io-wq handling (and other spots).
      
      Cc: stable@vger.kernel.org
      Reported-by: Andres Freund <andres@anarazel.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. Aug 26, 2020
    • xfs: fix boundary test in xfs_attr_shortform_verify · f4020438
      Eric Sandeen authored
      
      
      The boundary test for the fixed-offset parts of xfs_attr_sf_entry in
      xfs_attr_shortform_verify is off by one, because the variable array
      at the end is defined as nameval[1] not nameval[].
      Hence we need to subtract 1 from the calculation.
      
      This can be shown by:
      
      # touch file
      # setfattr -n root.a file
      
      and verifications will fail when it's written to disk.
      
      This only matters for a last attribute which has a single-byte name
      and no value, otherwise the combination of namelen & valuelen will
      push endp further out and this test won't fail.
      
      Fixes: 1e1bbd8e ("xfs: create structure verifier function for shortform xattrs")
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
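The off-by-one comes straight from sizeof arithmetic on an old-style one-element trailing array. A struct shaped like xfs_attr_sf_entry (fields simplified for illustration) shows why the fixed-offset part is sizeof minus 1:

```c
#include <stddef.h>
#include <stdint.h>

/* Old C idiom: the variable-length tail is declared nameval[1], not
 * nameval[], so sizeof() already counts one byte of name/value data. */
struct sf_entry {
    uint8_t namelen;
    uint8_t valuelen;
    uint8_t flags;
    uint8_t nameval[1];          /* name bytes, then value bytes */
};

/* Fixed-offset portion: subtract the one placeholder byte. */
static size_t fixed_part(void)
{
    return sizeof(struct sf_entry) - 1;
}

/* Total size of one entry with the given name/value lengths. */
static size_t entry_size(uint8_t namelen, uint8_t valuelen)
{
    return fixed_part() + namelen + valuelen;
}
```

The failing case from the commit, a single-byte name with no value, occupies exactly 4 bytes; forgetting the subtraction computes an end pointer one byte past the real end, which is why only a last-position minimal attribute trips the verifier.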
    • xfs: fix off-by-one in inode alloc block reservation calculation · 657f1019
      Brian Foster authored
      
      
      The inode chunk allocation transaction reserves inobt_maxlevels-1
      blocks to accommodate a full split of the inode btree. A full split
      requires an allocation for every existing level and a new root
      block, which means inobt_maxlevels is the worst case block
      requirement for a transaction that inserts to the inobt. This can
      lead to a transaction block reservation overrun when tmpfile
      creation allocates an inode chunk and expands the inobt to its
      maximum depth. This problem has been observed in conjunction with
      overlayfs, which makes frequent use of tmpfiles internally.
      
      The existing reservation code goes back as far as the Linux git repo
      history (v2.6.12). It was likely never observed as a problem because
      the traditional file/directory creation transactions also include
      worst case block reservation for directory modifications, which most
      likely is able to make up for a single block deficiency in the inode
      allocation portion of the calculation. tmpfile support is relatively
      more recent (v3.15), less heavily used, and only includes the inode
      allocation block reservation as tmpfiles aren't linked into the
      directory tree on creation.
      
      Fix up the inode alloc block reservation macro and a couple of the
      block allocator minleft parameters that enforce an allocation to
      leave enough free blocks in the AG for a full inobt split.
      
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
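The worst case described above is simple arithmetic: a full split allocates one new block per existing level plus a new root. A sketch of the counting (not the actual XFS reservation macro):

```c
/* Blocks needed for a full split of a btree with `levels` levels:
 * every existing level splits (one new block each) and one new root
 * block is added on top. */
static unsigned int full_split_blocks(unsigned int levels)
{
    return levels + 1;
}

/* The tallest tree that can still grow has maxlevels - 1 levels, so
 * the reservation must cover maxlevels blocks; the old macro reserved
 * maxlevels - 1 and could fall one block short. */
static unsigned int required_reservation(unsigned int maxlevels)
{
    return full_split_blocks(maxlevels - 1);   /* == maxlevels */
}
```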
    • xfs: finish dfops on every insert range shift iteration · 9c516e0e
      Brian Foster authored
      
      
      The recent change to make insert range an atomic operation used the
      incorrect transaction rolling mechanism. The explicit transaction
      roll does not finish deferred operations. This means that intents
      for rmapbt updates caused by extent shifts are not logged until the
      final transaction commits. Thus if a crash occurs during an insert
      range, log recovery might leave the rmapbt in an inconsistent state.
      This was discovered by repeated runs of generic/455.
      
      Update insert range to finish dfops on every shift iteration. This
      is similar to collapse range and ensures that intents are logged
      with the transactions that make associated changes.
      
      Fixes: dd87f87d ("xfs: rework insert range into an atomic operation")
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
    • io_uring: make offset == -1 consistent with preadv2/pwritev2 · 0fef9483
      Jens Axboe authored
      
      
      The man page for io_uring generally claims we're consistent with what
      preadv2 and pwritev2 accept, but it turns out there's a slight
      discrepancy in how offset == -1 is handled for pipes/streams. preadv
      doesn't allow it, but preadv2 does. This currently causes io_uring to
      return -EINVAL if that is attempted, but we should allow it as
      documented.
      
      This change makes us consistent with preadv2/pwritev2 for just passing
      in a NULL ppos for streams if the offset is -1.
      
      Cc: stable@vger.kernel.org # v5.7+
      Reported-by: Benedikt Ames <wisp3rwind@posteo.eu>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  6. Aug 25, 2020
  7. Aug 24, 2020
    • ceph: don't allow setlease on cephfs · 496ceaf1
      Jeff Layton authored
      
      
      Leases don't currently work correctly on kcephfs, as they are not broken
      when caps are revoked. They could eventually be implemented similarly to
      how we did them in libcephfs, but for now don't allow them.
      
      [ idryomov: no need for simple_nosetlease() in ceph_dir_fops and
        ceph_snapdir_fops ]
      
      Signed-off-by: Jeff Layton <jlayton@kernel.org>
      Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
    • ceph: fix inode number handling on arches with 32-bit ino_t · ebce3eb2
      Jeff Layton authored
      Tuan and Ulrich mentioned that they were hitting a problem on s390x,
      which has a 32-bit ino_t value, even though it's a 64-bit arch (for
      historical reasons).
      
      I think the current handling of inode numbers in the ceph driver is
      wrong. It tries to use 32-bit inode numbers on 32-bit arches, but that's
      actually not a problem. 32-bit arches can deal with 64-bit inode numbers
      just fine when userland code is compiled with LFS support (the common
      case these days).
      
      What we really want to do is just use 64-bit numbers everywhere, unless
      someone has mounted with the ino32 mount option. In that case, we want
      to ensure that we hash the inode number down to something that will fit
      in 32 bits before presenting the value to userland.
      
      Add new helper functions that do this, and only do the conversion before
      presenting these values to userland in getattr and readdir.
      
      The inode table hashvalue is changed to just cast the inode number to
      unsigned long, as low-order bits are the most likely to vary anyway.
      
      While it's not strictly required, we do want to put something in
      inode->i_ino. Instead of basing it on BITS_PER_LONG, however, base it on
      the size of the ino_t type.
      
      NOTE: This is a user-visible change on 32-bit arches:
      
      1/ inode numbers will be seen to have changed between kernel versions.
         32-bit arches will see large inode numbers now instead of the hashed
         ones they saw before.
      
      2/ any really old software not built with LFS support may start failing
         stat() calls with -EOVERFLOW on inode numbers >2^32. Nothing much we
         can do about these, but hopefully the intersection of people running
         such code on ceph will be very small.
      
      The workaround for both problems is to mount with "-o ino32".
      
      [ idryomov: changelog tweak ]
      
      URL: https://tracker.ceph.com/issues/46828
      
      
      Reported-by: default avatarUlrich Weigand <Ulrich.Weigand@de.ibm.com>
      Reported-and-Tested-by: default avatarTuan Hoang1 <Tuan.Hoang1@ibm.com>
      Signed-off-by: default avatarJeff Layton <jlayton@kernel.org>
      Reviewed-by: default avatar"Yan, Zheng" <zyan@redhat.com>
      Signed-off-by: default avatarIlya Dryomov <idryomov@gmail.com>
      ebce3eb2
    • Bob Peterson's avatar
      gfs2: add some much needed cleanup for log flushes that fail · 462582b9
      Bob Peterson authored
      
      
      When a log flush fails due to io errors, it signals the failure but does
      not clean up after itself very well. This is because buffers are added to
      the transaction tr_buf and tr_databuf queue, but the io error causes
      gfs2_log_flush to bypass the "after_commit" functions responsible for
      dequeueing the bd elements. If the bd elements are added to the ail list
      before the error, function ail_drain takes care of dequeueing them.
      But if they haven't gotten that far, the elements are simply forgotten,
      and the transactions can never be freed.
      
      This patch introduces new function trans_drain which drains the bd
      elements from the transaction so they can be freed properly.
      
      Signed-off-by: default avatarBob Peterson <rpeterso@redhat.com>
      Signed-off-by: default avatarAndreas Gruenbacher <agruenba@redhat.com>
      462582b9
  8. Aug 23, 2020
  9. Aug 22, 2020
  10. Aug 21, 2020
    • David Howells's avatar
      afs: Fix NULL deref in afs_dynroot_depopulate() · 5e0b17b0
      David Howells authored
      
      
      If an error occurs during the construction of an afs superblock, it's
      possible that an error occurs after a superblock is created, but before
      we've created the root dentry.  If the superblock has a dynamic root
      (ie.  what's normally mounted on /afs), the afs_kill_super() will call
      afs_dynroot_depopulate() to unpin any created dentries - but this will
      oops if the root hasn't been created yet.
      
      Fix this by skipping that bit of code if there is no root dentry.
      
      The bug leads to an oops looking like:
      
      	general protection fault, ...
      	KASAN: null-ptr-deref in range [0x0000000000000068-0x000000000000006f]
      	...
      	RIP: 0010:afs_dynroot_depopulate+0x25f/0x529 fs/afs/dynroot.c:385
      	...
      	Call Trace:
      	 afs_kill_super+0x13b/0x180 fs/afs/super.c:535
      	 deactivate_locked_super+0x94/0x160 fs/super.c:335
      	 afs_get_tree+0x1124/0x1460 fs/afs/super.c:598
      	 vfs_get_tree+0x89/0x2f0 fs/super.c:1547
      	 do_new_mount fs/namespace.c:2875 [inline]
      	 path_mount+0x1387/0x2070 fs/namespace.c:3192
      	 do_mount fs/namespace.c:3205 [inline]
      	 __do_sys_mount fs/namespace.c:3413 [inline]
      	 __se_sys_mount fs/namespace.c:3390 [inline]
      	 __x64_sys_mount+0x27f/0x300 fs/namespace.c:3390
      	 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      which is oopsing on this line:
      
      	inode_lock(root->d_inode);
      
      presumably because sb->s_root was NULL.
      
      Fixes: 0da0b7fd ("afs: Display manually added cells in dynamic root mount")
      Reported-by: default avatar <syzbot+c1eff8205244ae7e11a6@syzkaller.appspotmail.com>
      Signed-off-by: default avatarDavid Howells <dhowells@redhat.com>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      5e0b17b0
    • Phillip Lougher's avatar
      squashfs: avoid bio_alloc() failure with 1Mbyte blocks · f26044c8
      Phillip Lougher authored
      
      
      This is a regression introduced by the patch "migrate from ll_rw_block
      usage to BIO".
      
      Bio_alloc() is limited to 256 pages (1 Mbyte).  This can cause a failure
      when reading 1 Mbyte block filesystems.  The problem is that a datablock
      can be fully (or almost fully) uncompressed, requiring 256 pages, but,
      because blocks are not aligned to page boundaries, it may take 257 pages
      to read.
      
      Bio_kmalloc() can handle 1024 pages, and so use this for the edge
      condition.
      
      Fixes: 93e72b3c ("squashfs: migrate from ll_rw_block usage to BIO")
      Reported-by: default avatarNicolas Prochazka <nicolas.prochazka@gmail.com>
      Reported-by: default avatarTomoatsu Shimada <shimada@walbrix.com>
      Signed-off-by: default avatarPhillip Lougher <phillip@squashfs.org.uk>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Reviewed-by: default avatarGuenter Roeck <groeck@chromium.org>
      Cc: Philippe Liard <pliard@google.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Adrien Schildknecht <adrien+dev@schischi.me>
      Cc: Daniel Rosenberg <drosen@google.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200815035637.15319-1-phillip@squashfs.org.uk
      
      
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      f26044c8
    • Jann Horn's avatar
      romfs: fix uninitialized memory leak in romfs_dev_read() · bcf85fce
      Jann Horn authored
      
      
      romfs has a superblock field that limits the size of the filesystem; data
      beyond that limit is never accessed.
      
      romfs_dev_read() fetches a caller-supplied number of bytes from the
      backing device.  It returns 0 on success or an error code on failure;
      therefore, its API can't represent short reads, it's all-or-nothing.
      
      However, when romfs_dev_read() detects that the requested operation would
      cross the filesystem size limit, it currently silently truncates the
      requested number of bytes.  This means, for example, that when the content of a
      file with size 0x1000 starts one byte before the filesystem size limit,
      ->readpage() will only fill a single byte of the supplied page while
      leaving the rest uninitialized, leaking that uninitialized memory to
      userspace.
      
      Fix it by returning an error code instead of truncating the read when the
      requested read operation would go beyond the end of the filesystem.
      
      Fixes: da4458bd ("NOMMU: Make it possible for RomFS to use MTD devices directly")
      Signed-off-by: default avatarJann Horn <jannh@google.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Reviewed-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: <stable@vger.kernel.org>
      Link: http://lkml.kernel.org/r/20200818013202.2246365-1-jannh@google.com
      
      
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      bcf85fce
    • Boris Burkov's avatar
      btrfs: detect nocow for swap after snapshot delete · a84d5d42
      Boris Burkov authored
      
      
      can_nocow_extent and btrfs_cross_ref_exist both rely on a heuristic for
      detecting a must cow condition which is not exactly accurate, but saves
      unnecessary tree traversal. The incorrect assumption is that if the
      extent was created in a generation smaller than the last snapshot
      generation, it must be referenced by that snapshot. That is true, except
      the snapshot could have since been deleted, without affecting the last
      snapshot generation.
      
      The original patch claimed a performance win from this check, but it
      also leads to a bug where you are unable to use a swapfile if you ever
      snapshotted the subvolume it's in. Make the check slower and more strict
      for the swapon case, without modifying the general cow checks as a
      compromise. Turning swap on does not seem to be a particularly
      performance sensitive operation, so incurring a possibly unnecessary
      btrfs_search_slot seems worthwhile for the added usability.
      
      Note: Until the snapshot is completely cleaned up after deletion,
      check_committed_refs will still cause the logic to think that cow is
      necessary, so the user must wait until 'btrfs subvolume sync' has
      finished before activating the swapfile with swapon.
      
      CC: stable@vger.kernel.org # 5.4+
      Suggested-by: default avatarOmar Sandoval <osandov@osandov.com>
      Signed-off-by: default avatarBoris Burkov <boris@bur.io>
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      a84d5d42
    • Josef Bacik's avatar
      btrfs: check the right error variable in btrfs_del_dir_entries_in_log · fb2fecba
      Josef Bacik authored
      
      
      With my new locking code dbench is so much faster that I tripped over a
      transaction abort from ENOSPC.  This turned out to be because
      btrfs_del_dir_entries_in_log was checking for ret == -ENOSPC, but this
      function sets err on error, and returns err.  So instead of properly
      marking the inode as needing a full commit, we were returning -ENOSPC
      and aborting in __btrfs_unlink_inode.  Fix this by checking the proper
      variable so that we return the correct thing in the case of ENOSPC.
      
      The ENOENT needs to be checked, because btrfs_lookup_dir_item_index()
      can return -ENOENT if the dir item isn't in the tree log (which would
      happen if we hadn't fsync'ed this guy).  We actually handle that case in
      __btrfs_unlink_inode, so it's an expected error to get back.
      
      Fixes: 4a500fd1 ("Btrfs: Metadata ENOSPC handling for tree log")
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: default avatarFilipe Manana <fdmanana@suse.com>
      Signed-off-by: default avatarJosef Bacik <josef@toxicpanda.com>
      Reviewed-by: default avatarDavid Sterba <dsterba@suse.com>
      [ add note and comment about ENOENT ]
      Signed-off-by: default avatarDavid Sterba <dsterba@suse.com>
      fb2fecba
  11. Aug 20, 2020