Commit 15a4ca98 authored by Niu Yawei, committed by Greg Kroah-Hartman

staging: lustre: ptlrpc: update replay cursor when close during replay



The replay cursor should be updated properly when a close happens
during replay; otherwise ptlrpc_replay_next() can run into an
endless loop due to an invalid replay cursor (see the sketch after
the list below):

- the replay cursor is moved to an open request during replay;
- the application closes that open file, so rq_replay of the open
  request is cleared;
- ptlrpc_replay_next() calls ptlrpc_free_committed() to free
  committed/closed requests; the open request is removed from the
  committed list, so the replay cursor now points to an empty
  list_head. The open request itself isn't freed yet, since it's
  still held by the pending close request;
- ptlrpc_replay_next() keeps advancing the replay cursor to the
  next entry and ends up in an endless loop;
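
A minimal userspace sketch of the hazard above, assuming a
kernel-style circular list_head; the structures, field names and the
exact point where the cursor is advanced are illustrative stand-ins,
not the actual lustre code:

#include <stdio.h>
#include <stddef.h>

struct list_head {
    struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
    h->next = h;
    h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *h)
{
    n->prev = h->prev;
    n->next = h;
    h->prev->next = n;
    h->prev = n;
}

static void list_del_init(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    list_init(n);   /* the entry now points only at itself */
}

struct req {
    int xid;
    struct list_head link;
};

int main(void)
{
    struct list_head committed, *cursor;
    struct req a = { .xid = 1 }, b = { .xid = 2 }, c = { .xid = 3 };

    list_init(&committed);
    list_add_tail(&a.link, &committed);
    list_add_tail(&b.link, &committed);
    list_add_tail(&c.link, &committed);

    cursor = &b.link;   /* replay cursor parked on request b */

    /*
     * Request b is closed and gets dropped from the committed list.
     * If the cursor is not advanced first, it is left on a
     * self-referencing list_head and walking cursor->next never
     * reaches &committed again -- the endless loop described above.
     */
    if (cursor == &b.link)
        cursor = cursor->next;
    list_del_init(&b.link);

    while (cursor != &committed) {
        struct req *r = (struct req *)((char *)cursor -
                                       offsetof(struct req, link));
        printf("would replay xid %d\n", r->xid);
        cursor = cursor->next;
    }
    return 0;
}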

Another change in this patch is to remove the out-of-date comments
in ptlrpc_replay_next() and to cover the whole process of finding
the replay request within imp_lock, because:

1. With the two separate replay lists and the replay cursor
   introduced, finding the replay request no longer takes as long as
   it used to, so the "lock -> unlock -> lock -> unlock" trick isn't
   necessary anymore;

2. Nowadays various kinds of non-replay requests are allowed during
   recovery, so ptlrpc_free_committed() may run in parallel and
   remove an open request while ptlrpc_replay_next() is iterating
   the open requests list (see the sketch below);
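
A rough userspace analogue of that locking shape, with a pthread
mutex standing in for imp_lock; the request array, flags and helper
names are invented for illustration and are not the lustre symbols:

#include <pthread.h>
#include <stdio.h>

#define NREQ 4

struct fake_req {
    int replay;            /* still needs to be replayed */
    int committed_closed;  /* committed and already closed */
};

static pthread_mutex_t imp_lock = PTHREAD_MUTEX_INITIALIZER;

static struct fake_req reqs[NREQ] = {
    { .replay = 1 },
    { .replay = 1, .committed_closed = 1 },
    { .replay = 1 },
    { .replay = 0 },
};

/* stand-in for ptlrpc_free_committed(): drop committed/closed requests
 * from the set of replay candidates; caller must hold imp_lock */
static void free_committed_locked(void)
{
    for (int i = 0; i < NREQ; i++)
        if (reqs[i].committed_closed)
            reqs[i].replay = 0;
}

/* stand-in for ptlrpc_replay_next(): pick the next request to replay.
 * Both steps run under one hold of imp_lock; the old code dropped the
 * lock part way through the search, and with non-replay traffic now
 * allowed during recovery, another path could unlink the very request
 * the scan was sitting on. */
static struct fake_req *replay_next(void)
{
    struct fake_req *found = NULL;

    pthread_mutex_lock(&imp_lock);
    free_committed_locked();
    for (int i = 0; i < NREQ && !found; i++)
        if (reqs[i].replay)
            found = &reqs[i];
    pthread_mutex_unlock(&imp_lock);

    return found;
}

int main(void)
{
    struct fake_req *r = replay_next();

    printf("next replay candidate: %ld\n", r ? (long)(r - reqs) : -1L);
    return 0;
}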

Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-8765
Reviewed-on: https://review.whamcloud.com/23418


Reviewed-by: Yang Sheng <yang.sheng@intel.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 067e6343