From: Suren Baghdasaryan <surenb@google.com>
Subject: mm: fix a lockdep issue in vma_assert_write_locked
Date: Wed, 12 Jul 2023 12:56:52 -0700
__is_vma_write_locked() can be used only when mmap_lock is write-locked, which
guarantees the stability of vm_lock_seq and mm_lock_seq during the check.
Therefore it asserts this condition before doing any further checks, and it
must not be called unless the caller expects mmap_lock to be write-locked.
vma_assert_locked() cannot make that assumption before ensuring that the VMA
is not read-locked.

Change the order of the checks in vma_assert_locked(): check whether the VMA
is read-locked first, and only if it is not, assert that it is write-locked.
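
To illustrate, here is a sketch of the post-patch check order, written with an
early return for clarity (the actual hunk below expresses the same logic as a
single if()):

	static inline void vma_assert_locked(struct vm_area_struct *vma)
	{
		/* A VMA read lock holds vm_lock->lock, so test that first. */
		if (rwsem_is_locked(&vma->vm_lock->lock))
			return;
		/*
		 * The VMA is not read-locked, so the caller must hold
		 * mmap_lock for writing; only now is it safe to call
		 * __is_vma_write_locked(), which asserts exactly that.
		 */
		vma_assert_write_locked(vma);
	}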
Link: https://lkml.kernel.org/r/20230712195652.969194-1-surenb@google.com
Fixes: 50b88b63e3e4 ("mm: handle userfaults under VMA lock")
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reported-by: Liam R. Howlett <liam.howlett@oracle.com>
Closes: https://lore.kernel.org/all/20230712022620.3yytbdh24b7i4zrn@revolver/
Reported-by: syzbot+339b02f826caafd5f7a8@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/0000000000002db68f05ffb791bc@google.com/
Cc: Christian Brauner <brauner@kernel.org>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michel Lespinasse <michel@lespinasse.org>
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/mm.h | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
--- a/include/linux/mm.h~mm-handle-userfaults-under-vma-lock-fix
+++ a/include/linux/mm.h
@@ -679,6 +679,7 @@ static inline void vma_end_read(struct v
 	rcu_read_unlock();
 }
 
+/* WARNING! Can only be used if mmap_lock is expected to be write-locked */
 static bool __is_vma_write_locked(struct vm_area_struct *vma, int *mm_lock_seq)
 {
 	mmap_assert_write_locked(vma->vm_mm);
@@ -714,22 +715,17 @@ static inline void vma_start_write(struc
 	up_write(&vma->vm_lock->lock);
 }
 
-static inline void vma_assert_locked(struct vm_area_struct *vma)
+static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
 
-	if (__is_vma_write_locked(vma, &mm_lock_seq))
-		return;
-
-	lockdep_assert_held(&vma->vm_lock->lock);
-	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
 }
 
-static inline void vma_assert_write_locked(struct vm_area_struct *vma)
+static inline void vma_assert_locked(struct vm_area_struct *vma)
 {
-	int mm_lock_seq;
-
-	VM_BUG_ON_VMA(!__is_vma_write_locked(vma, &mm_lock_seq), vma);
+	if (!rwsem_is_locked(&vma->vm_lock->lock))
+		vma_assert_write_locked(vma);
 }
 
 static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
_