From 55df81fc13e7d52519d0b86d994fac5725102ade Mon Sep 17 00:00:00 2001
From: Sasha Levin <sashal@kernel.org>
Date: Thu, 6 Aug 2020 23:26:25 -0700
Subject: khugepaged: khugepaged_test_exit() check mmget_still_valid()

From: Hugh Dickins <hughd@google.com>

[ Upstream commit bbe98f9cadff58cdd6a4acaeba0efa8565dabe65 ]

Move collapse_huge_page()'s mmget_still_valid() check into
khugepaged_test_exit() itself.  collapse_huge_page() is used for anon THP
only, and earned its mmget_still_valid() check because it inserts a huge
pmd entry in place of the page table's pmd entry; whereas
collapse_file()'s retract_page_tables() or collapse_pte_mapped_thp()
merely clears the page table's pmd entry.  But core dumping without mmap
lock must have been as open to mistaking a racily cleared pmd entry for a
page table at physical page 0, as exit_mmap() was.  And we certainly have
no interest in mapping as a THP once dumping core.

Fixes: 59ea6d06cfa9 ("coredump: fix race condition between collapse_huge_page() and core dumping")
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: <stable@vger.kernel.org>	[4.8+]
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2008021217020.27773@eggly.anvils
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/khugepaged.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 04b4c38d0c184..a1b7475c05d04 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -394,7 +394,7 @@ static void insert_to_mm_slots_hash(struct mm_struct *mm,
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
 {
-	return atomic_read(&mm->mm_users) == 0;
+	return atomic_read(&mm->mm_users) == 0 || !mmget_still_valid(mm);
 }
 
 int __khugepaged_enter(struct mm_struct *mm)
@@ -1006,9 +1006,6 @@ static void collapse_huge_page(struct mm_struct *mm,
 	 * handled by the anon_vma lock + PG_lock.
 	 */
 	down_write(&mm->mmap_sem);
-	result = SCAN_ANY_PROCESS;
-	if (!mmget_still_valid(mm))
-		goto out;
 	result = hugepage_vma_revalidate(mm, address, &vma);
 	if (result)
 		goto out;
-- 
2.25.1
