| From: Suren Baghdasaryan <surenb@google.com> |
| Subject: docs/mm: document latest changes to vm_lock |
| Date: Thu, 13 Feb 2025 14:46:55 -0800 |
| |
| Change the documentation to reflect that vm_lock has been integrated into |
| the vma and replaced with vm_refcnt. Document the newly introduced |
| vma_start_read_locked{_nested} functions. |
| |
| Link: https://lkml.kernel.org/r/20250213224655.1680278-19-surenb@google.com |
| Signed-off-by: Suren Baghdasaryan <surenb@google.com> |
| Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com> |
| Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> |
| Tested-by: Shivank Garg <shivankg@amd.com> |
| Link: https://lkml.kernel.org/r/5e19ec93-8307-47c2-bb13-3ddf7150624e@amd.com |
| Reviewed-by: Vlastimil Babka <vbabka@suse.cz> |
| Cc: Christian Brauner <brauner@kernel.org> |
| Cc: David Hildenbrand <david@redhat.com> |
| Cc: David Howells <dhowells@redhat.com> |
| Cc: Davidlohr Bueso <dave@stgolabs.net> |
| Cc: Hugh Dickins <hughd@google.com> |
| Cc: Jann Horn <jannh@google.com> |
| Cc: Johannes Weiner <hannes@cmpxchg.org> |
| Cc: Jonathan Corbet <corbet@lwn.net> |
| Cc: Klara Modin <klarasmodin@gmail.com> |
| Cc: Lokesh Gidra <lokeshgidra@google.com> |
| Cc: Mateusz Guzik <mjguzik@gmail.com> |
| Cc: Matthew Wilcox <willy@infradead.org> |
| Cc: Mel Gorman <mgorman@techsingularity.net> |
| Cc: Michal Hocko <mhocko@suse.com> |
| Cc: Minchan Kim <minchan@google.com> |
| Cc: Oleg Nesterov <oleg@redhat.com> |
| Cc: Pasha Tatashin <pasha.tatashin@soleen.com> |
| Cc: "Paul E . McKenney" <paulmck@kernel.org> |
| Cc: Peter Xu <peterx@redhat.com> |
| Cc: Peter Zijlstra (Intel) <peterz@infradead.org> |
| Cc: Shakeel Butt <shakeel.butt@linux.dev> |
| Cc: Sourav Panda <souravpanda@google.com> |
| Cc: Suren Baghdasaryan <surenb@google.com> |
| Cc: Wei Yang <richard.weiyang@gmail.com> |
| Cc: Will Deacon <will@kernel.org> |
| Cc: Heiko Carstens <hca@linux.ibm.com> |
| Cc: Stephen Rothwell <sfr@canb.auug.org.au> |
| Signed-off-by: Andrew Morton <akpm@linux-foundation.org> |
| --- |
| |
| Documentation/mm/process_addrs.rst | 46 +++++++++++++++------------ |
| 1 file changed, 27 insertions(+), 19 deletions(-) |
| |
| --- a/Documentation/mm/process_addrs.rst~docs-mm-document-latest-changes-to-vm_lock |
| +++ a/Documentation/mm/process_addrs.rst |
| @@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure |
| critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`, |
| before releasing the RCU lock via :c:func:`!rcu_read_unlock`. |
| |
| -VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for |
| -their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it |
| -via :c:func:`!vma_end_read`. |
| +In cases where the caller already holds the mmap read lock, |
| +:c:func:`!vma_start_read_locked` and :c:func:`!vma_start_read_locked_nested` |
| +can be used instead. They cannot fail due to lock contention, but the caller |
| +should still check their return values in case they fail for other reasons. |
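| + |
| +For example, a caller that already holds the mmap read lock might take the |
| +VMA read lock as follows (an illustrative sketch, not verbatim kernel code): |
| + |
| +.. code-block:: c |
| + |
| +   /* The mmap read lock is already held at this point. */ |
| +   vma = find_vma(mm, addr); |
| +   if (vma && !vma_start_read_locked(vma)) |
| +           vma = NULL; /* the VMA read lock could not be acquired */ |
| +   /* On success, the lock is later released via vma_end_read(). */ |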
| + |
| +VMA read locks increment the :c:member:`!vma.vm_refcnt` reference counter for |
| +their duration, and the caller of :c:func:`!lock_vma_under_rcu` must drop it |
| +via :c:func:`!vma_end_read`. |
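| + |
| +A typical pairing looks roughly like this (a simplified sketch of the pattern |
| +used by the page fault handlers; error handling and retries are omitted): |
| + |
| +.. code-block:: c |
| + |
| +   vma = lock_vma_under_rcu(mm, address); |
| +   if (!vma) |
| +           goto fall_back_to_mmap_lock; /* hypothetical label */ |
| + |
| +   /* ... operate on the read-locked VMA ... */ |
| + |
| +   vma_end_read(vma); /* drops the vma.vm_refcnt reference */ |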
| |
| VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a |
| VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always |
| @@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be |
| lock, releasing or downgrading the mmap write lock also releases the VMA write |
| lock so there is no :c:func:`!vma_end_write` function. |
| |
| -Note that a semaphore write lock is not held across a VMA lock. Rather, a |
| -sequence number is used for serialisation, and the write semaphore is only |
| -acquired at the point of write lock to update this. |
| +Note that when write-locking a VMA, the :c:member:`!vma.vm_refcnt` is temporarily |
| +modified so that readers can detect the presence of a writer. The reference counter |
| +is restored once the VMA's sequence number used for serialisation is updated. |
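| + |
| +In code, write-locking a VMA before modifying it follows this general shape |
| +(an illustrative sketch only): |
| + |
| +.. code-block:: c |
| + |
| +   mmap_write_lock(mm); |
| +   vma = find_vma(mm, addr); |
| +   if (vma) { |
| +           vma_start_write(vma); /* wait for readers, mark VMA write-locked */ |
| +           /* ... modify the VMA ... */ |
| +   } |
| +   mmap_write_unlock(mm); /* also releases the VMA write lock */ |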
| |
| This ensures the semantics we require - VMA write locks provide exclusive write |
| access to the VMA. |
| @@ -738,7 +743,7 @@ Implementation details |
| |
| The VMA lock mechanism is designed to be a lightweight means of avoiding the use |
| of the heavily contended mmap lock. It is implemented using a combination of a |
| -read/write semaphore and sequence numbers belonging to the containing |
| +reference counter and sequence numbers belonging to the containing |
| :c:struct:`!struct mm_struct` and the VMA. |
| |
| Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic |
| @@ -779,28 +784,31 @@ release of any VMA locks on its release |
| keep VMAs locked across entirely separate write operations. It also maintains |
| correct lock ordering. |
| |
| -Each time a VMA read lock is acquired, we acquire a read lock on the |
| -:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that |
| -the sequence count of the VMA does not match that of the mm. |
| - |
| -If it does, the read lock fails. If it does not, we hold the lock, excluding |
| -writers, but permitting other readers, who will also obtain this lock under RCU. |
| +Each time a VMA read lock is acquired, we increment the |
| +:c:member:`!vma.vm_refcnt` reference counter and check that the sequence count |
| +of the VMA does not match that of the mm. |
| + |
| +If it does, the read lock fails and the :c:member:`!vma.vm_refcnt` reference |
| +is dropped. If it does not, we keep the reference counter raised, excluding |
| +writers, but permitting other readers, who can also obtain this lock under RCU. |
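| + |
| +Conceptually, this fast path behaves like the sketch below. The helper names |
| +are placeholders rather than real kernel functions; the actual |
| +:c:func:`!vma_start_read` uses dedicated refcount helpers and memory-ordering |
| +primitives: |
| + |
| +.. code-block:: c |
| + |
| +   if (!vma_refcnt_tryget(vma))       /* hypothetical helper */ |
| +           return false;              /* a writer holds the VMA exclusively */ |
| +   if (vma_seq_matches_mm(vma, mm)) { /* hypothetical helper */ |
| +           vma_refcnt_put(vma);       /* hypothetical helper */ |
| +           return false;              /* the VMA is write-locked */ |
| +   } |
| +   return true;                       /* held until vma_end_read() */ |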
| |
| Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu` |
| are also RCU safe, so the whole read lock operation is guaranteed to function |
| correctly. |
| |
| -On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock` |
| -read/write semaphore, before setting the VMA's sequence number under this lock, |
| -also simultaneously holding the mmap write lock. |
| +On the write side, we set a bit in :c:member:`!vma.vm_refcnt` that readers |
| +cannot modify, and wait for all readers to drop their references. Once there |
| +are no readers, the VMA's sequence number is set to match that of the mm. |
| +The mmap write lock is held for the duration of this entire operation. |
| |
| This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep |
| until these are finished and mutual exclusion is achieved. |
| |
| -After setting the VMA's sequence number, the lock is released, avoiding |
| -complexity with a long-term held write lock. |
| +After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt` |
| +indicating a writer is cleared. From this point on, the VMA's sequence number |
| +indicates its write-locked state until the mmap write lock is dropped or downgraded. |
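| + |
| +Putting the write side together, the sequence is conceptually as follows. The |
| +helper names here are placeholders, not real kernel functions: |
| + |
| +.. code-block:: c |
| + |
| +   /* The mmap write lock is held throughout. */ |
| +   set_writer_bit(&vma->vm_refcnt);            /* readers can now detect a writer */ |
| +   wait_for_readers_to_drain(&vma->vm_refcnt); |
| +   write_vma_sequence_number(vma, mm);         /* mark the VMA write-locked */ |
| +   clear_writer_bit(&vma->vm_refcnt); |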
| |
| -This clever combination of a read/write semaphore and sequence count allows for |
| +This clever combination of a reference counter and sequence count allows for |
| fast RCU-based per-VMA lock acquisition (especially on page fault, though |
| utilised elsewhere) with minimal complexity around lock ordering. |
| |
| _ |