From: Lance Yang <ioworker0@gmail.com>
Subject: mm/rmap: add helper to restart pgtable walk on changes
Date: Mon, 10 Jun 2024 20:02:07 +0800

Introduce the page_vma_mapped_walk_restart() helper to handle scenarios
where the page table walk needs to be restarted due to changes in the page
table, such as when a PMD is split.  It releases the PTL held during the
previous walk and resets the state, allowing a new walk to start at the
current address stored in pvmw->address.
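
For illustration, a caller that splits a PMD while holding the PMD lock
can then restart the walk to revisit the now PTE-mapped range.  The
sketch below is illustrative only and is not part of this patch; it
assumes a split_huge_pmd_locked()-style helper and try_to_unmap_one()-
style locals (pvmw, vma, folio, flags), so exact names and signatures
may differ:

	while (page_vma_mapped_walk(&pvmw)) {
		/* A PMD-level mapping was found; pvmw.ptl is held. */
		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
			/* Split the PMD while the PMD lock is still held. */
			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
					      false, folio);
			flags &= ~TTU_SPLIT_HUGE_PMD;
			/*
			 * Drop the PTL and rescan from pvmw.address,
			 * which is now mapped by PTEs.
			 */
			page_vma_mapped_walk_restart(&pvmw);
			continue;
		}
		/* ... handle the mapped PTE as usual ... */
	}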

Link: https://lkml.kernel.org/r/20240610120209.66311-3-ioworker0@gmail.com
Signed-off-by: Lance Yang <ioworker0@gmail.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Cc: Bang Li <libang.li@antgroup.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Fangrui Song <maskray@google.com>
Cc: Jeff Xie <xiehuan09@gmail.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Yin Fengwei <fengwei.yin@intel.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/rmap.h |   22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

--- a/include/linux/rmap.h~mm-rmap-add-helper-to-restart-pgtable-walk-on-changes
+++ a/include/linux/rmap.h
@@ -703,6 +703,28 @@ static inline void page_vma_mapped_walk_
 		spin_unlock(pvmw->ptl);
 }
 
+/**
+ * page_vma_mapped_walk_restart - Restart the page table walk.
+ * @pvmw: Pointer to struct page_vma_mapped_walk.
+ *
+ * It restarts the page table walk when changes occur in the page
+ * table, such as splitting a PMD. Ensures that the PTL held during
+ * the previous walk is released and resets the state to allow for
+ * a new walk starting at the current address stored in pvmw->address.
+ */
+static inline void
+page_vma_mapped_walk_restart(struct page_vma_mapped_walk *pvmw)
+{
+	WARN_ON_ONCE(!pvmw->pmd);
+	WARN_ON_ONCE(!pvmw->ptl);
+
+	if (pvmw->ptl)
+		spin_unlock(pvmw->ptl);
+
+	pvmw->ptl = NULL;
+	pvmw->pmd = NULL;
+}
+
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 
 /*
_