| From: Kevin Brodsky <kevin.brodsky@arm.com> |
| Subject: mm: update lazy_mmu documentation |
| Date: Mon, 8 Sep 2025 08:39:31 +0100 |
| |
| We now support nested lazy_mmu sections on all architectures implementing |
| the API. Update the API comment accordingly. |
| |
| Link: https://lkml.kernel.org/r/20250908073931.4159362-8-kevin.brodsky@arm.com |
| Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com> |
| Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org> |
| Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com> |
| Cc: Alexander Gordeev <agordeev@linux.ibm.com> |
| Cc: Andreas Larsson <andreas@gaisler.com> |
| Cc: Borislav Petkov <bp@alien8.de> |
| Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> |
| Cc: Catalin Marinas <catalin.marinas@arm.com> |
| Cc: Christophe Leroy <christophe.leroy@csgroup.eu> |
| Cc: David Hildenbrand <david@redhat.com> |
| Cc: David S. Miller <davem@davemloft.net> |
| Cc: "H. Peter Anvin" <hpa@zytor.com> |
| Cc: Ingo Molnar <mingo@redhat.com> |
| Cc: Jann Horn <jannh@google.com> |
| Cc: Juergen Gross <jgross@suse.com> |
| Cc: Liam Howlett <liam.howlett@oracle.com> |
| Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> |
| Cc: Madhavan Srinivasan <maddy@linux.ibm.com> |
| Cc: Michael Ellerman <mpe@ellerman.id.au> |
| Cc: Michal Hocko <mhocko@suse.com> |
| Cc: Nicholas Piggin <npiggin@gmail.com> |
| Cc: Peter Zijlstra <peterz@infradead.org> |
| Cc: Ryan Roberts <ryan.roberts@arm.com> |
| Cc: Suren Baghdasaryan <surenb@google.com> |
| Cc: Thomas Gleixner <tglx@linutronix.de> |
| Cc: Vlastimil Babka <vbabka@suse.cz> |
| Cc: Will Deacon <will@kernel.org> |
| Signed-off-by: Andrew Morton <akpm@linux-foundation.org> |
| --- |
| |
| include/linux/pgtable.h | 14 ++++++++++++-- |
| 1 file changed, 12 insertions(+), 2 deletions(-) |
| |
| --- a/include/linux/pgtable.h~mm-update-lazy_mmu-documentation |
| +++ a/include/linux/pgtable.h |
| @@ -228,8 +228,18 @@ static inline int pmd_dirty(pmd_t pmd) |
| * of the lazy mode. So the implementation must assume preemption may be enabled |
| * and cpu migration is possible; it must take steps to be robust against this. |
| * (In practice, for user PTE updates, the appropriate page table lock(s) are |
| - * held, but for kernel PTE updates, no lock is held). Nesting is not permitted |
| - * and the mode cannot be used in interrupt context. |
| + * held, but for kernel PTE updates, no lock is held). The mode cannot be used |
| + * in interrupt context. |
| + * |
| + * Calls may be nested: an arch_{enter,leave}_lazy_mmu_mode() pair may be called |
| + * while the lazy MMU mode has already been enabled. An implementation should |
| + * handle this using the state returned by enter() and taken by the matching |
| + * leave() call; the LAZY_MMU_{DEFAULT,NESTED} flags can be used to indicate |
| + * whether this enter/leave pair is nested inside another or not. (It is up to |
| + * the implementation to track whether the lazy MMU mode is enabled at any point |
| + * in time.) The expectation is that leave() will flush any batched state |
| + * unconditionally, but only leave the lazy MMU mode if the passed state is not |
| + * LAZY_MMU_NESTED. |
| */ |
| #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE |
| typedef int lazy_mmu_state_t; |
| _ |