From: Baolin Wang <baolin.wang@linux.alibaba.com>
Subject: mm: vmscan: add validation before splitting shmem large folio
Date: Wed, 7 Aug 2024 15:31:06 +0800
Patch series "support large folio swap-out and swap-in for shmem", v4.
Shmem will support large folio allocation [1] [2] to get better
performance; however, memory reclaim still splits these precious large
folios when trying to swap out shmem, which may lead to memory
fragmentation and loses the benefit of large folios for shmem.
Moreover, the swap code already supports swapping out large folios
without splitting them, and the large folio swap-in series [3] is
queued in the mm-unstable branch. Hence this patch set also supports
large folio swap-out and swap-in for shmem.
Please help to review. Thanks.
Functional testing
==================
Machine environment: 32 Arm cores, 120G memory and 50G swap device.
1. Ran the xfstests suite against a tmpfs filesystem, and I did not
catch any regressions with this patch set.
FSTYP=tmpfs
export TEST_DIR=/mnt/tempfs_mnt
export TEST_DEV=/mnt/tempfs_mnt
export SCRATCH_MNT=/mnt/scratchdir
export SCRATCH_DEV=/mnt/scratchdir
2. Ran all mm selftests in tools/testing/selftests/mm/, and no
regressions were found.
3. I also wrote several shmem swap test cases, including shmem
splitting, shmem swapout, shmem swapin, swapoff during shmem swapout,
shmem reclaim, shmem swapin replacement, etc. I tested these cases
under 4K and 64K shmem folio sizes with a swap device, and shmem swap
functionality works well on my machine.
[1] https://lore.kernel.org/all/cover.1717495894.git.baolin.wang@linux.alibaba.com/
[2] https://lore.kernel.org/all/20240515055719.32577-1-da.gomez@samsung.com/
[3] https://lore.kernel.org/all/20240508224040.190469-6-21cnbao@gmail.com/T/
[4] https://lore.kernel.org/all/8db63194-77fd-e0b8-8601-2bbf04889a5b@google.com/
This patch (of 10):
Page reclaim will not scan the anon LRU if there is no swap space;
however, MADV_PAGEOUT can still split shmem large folios even without a
swap device. Thus add a check for available swap space before splitting
a shmem large folio, to avoid a redundant split.
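For illustration only (not part of this patch), below is a minimal
userspace sketch of that scenario. It assumes a kernel with shmem large
folio allocation enabled, a glibc recent enough to expose memfd_create()
and MADV_PAGEOUT, and an assumed 64K large folio size; with no swap
device active, the madvise() call drives the mapping into the shmem
split path in shrink_folio_list() that this patch short-circuits.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define LEN	(64UL * 1024)	/* assumed 64K shmem large folio size */

int main(void)
{
	/* memfd is shmem-backed, so reclaim sees a swap-backed folio. */
	int fd = memfd_create("shmem-pageout", 0);
	char *p;

	if (fd < 0 || ftruncate(fd, LEN))
		return 1;

	p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Dirty the whole range so a large folio can be allocated. */
	memset(p, 0x5a, LEN);

	/*
	 * Ask reclaim to page the range out.  Without a swap device the
	 * folio cannot be swapped out anyway; before this patch the
	 * shmem large folio would still be split here.
	 */
	if (madvise(p, LEN, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	munmap(p, LEN);
	close(fd);
	return 0;
}

With this patch applied, reclaim bails out via activate_locked when
total_swap_pages is zero instead of splitting the folio.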
Link: https://lkml.kernel.org/r/cover.1723012159.git.baolin.wang@linux.alibaba.com
Link: https://lkml.kernel.org/r/8a8c6dc9df0bc9f6f7f937bea446062be19611b3.1723012159.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Daniel Gomez <da.gomez@samsung.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Pankaj Raghav <p.raghav@samsung.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Barry Song <baohua@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/vmscan.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/mm/vmscan.c~mm-vmscan-add-validation-before-spliting-shmem-large-folio
+++ a/mm/vmscan.c
@@ -1259,6 +1259,14 @@ retry:
 			}
 		} else if (folio_test_swapbacked(folio) &&
 			   folio_test_large(folio)) {
+
+			/*
+			 * Do not split shmem folio if no swap memory
+			 * available.
+			 */
+			if (!total_swap_pages)
+				goto activate_locked;
+
 			/* Split shmem folio */
 			if (split_folio_to_list(folio, folio_list))
 				goto keep_locked;
_