From: Suren Baghdasaryan <surenb@google.com>
Subject: mm: replace mmap with vma write lock assertions when operating on a vma
Date: Fri, 4 Aug 2023 08:27:21 -0700
The vma write lock assertion always includes the mmap write lock assertion,
plus additional vma lock checks when per-VMA locks are enabled. Replace the
weaker mmap_assert_write_locked() assertions with the stronger
vma_assert_write_locked() ones wherever we operate on a vma that is
expected to be locked.
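
To illustrate why the new assertion is strictly stronger: with
CONFIG_PER_VMA_LOCK enabled, vma_assert_write_locked() performs the old
mmap-level check and then also verifies that this particular vma has been
write-locked. A simplified sketch of that relationship (not verbatim
kernel source; field names follow the per-VMA lock series and may differ
across kernel versions):

```c
#ifdef CONFIG_PER_VMA_LOCK
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
	/* Subsumes the weaker, mmap-level assertion... */
	mmap_assert_write_locked(vma->vm_mm);
	/* ...and additionally checks that this specific vma was
	 * write-locked under the current mmap write lock cycle. */
	VM_BUG_ON_VMA(vma->vm_lock_seq != vma->vm_mm->mm_lock_seq, vma);
}
#else
static inline void vma_assert_write_locked(struct vm_area_struct *vma)
{
	/* Without per-VMA locks this degrades to the mmap assertion. */
	mmap_assert_write_locked(vma->vm_mm);
}
#endif
```

Either way the mmap write lock is asserted, so the conversion below can
never make an assertion weaker.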
Link: https://lkml.kernel.org/r/20230804152724.3090321-4-surenb@google.com
Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/hugetlb.c | 2 +-
mm/memory.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--- a/mm/hugetlb.c~mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma
+++ a/mm/hugetlb.c
@@ -5029,7 +5029,7 @@ int copy_hugetlb_page_range(struct mm_st
src_vma->vm_start,
src_vma->vm_end);
mmu_notifier_invalidate_range_start(&range);
- mmap_assert_write_locked(src);
+ vma_assert_write_locked(src_vma);
raw_write_seqcount_begin(&src->write_protect_seq);
} else {
/*
--- a/mm/memory.c~mm-replace-mmap-with-vma-write-lock-assertions-when-operating-on-a-vma
+++ a/mm/memory.c
@@ -1312,7 +1312,7 @@ copy_page_range(struct vm_area_struct *d
* Use the raw variant of the seqcount_t write API to avoid
* lockdep complaining about preemptibility.
*/
- mmap_assert_write_locked(src_mm);
+ vma_assert_write_locked(src_vma);
raw_write_seqcount_begin(&src_mm->write_protect_seq);
}
_