From: Ackerley Tng <ackerleytng@google.com>
Subject: fs: hugetlbfs: set vma policy only when needed for allocating folio
Date: Tue, 2 May 2023 23:56:22 +0000
Call hugetlb_set_vma_policy() only when a folio actually needs to be
allocated.  This avoids setting the vma policy and then immediately
dropping it again on a page cache hit, where no allocation takes place.
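For reference, a simplified sketch of the per-index loop in
hugetlbfs_fallocate() after this change.  This is illustrative only, not
the literal code: locking, RCU around the page cache probe, reservation
handling and error propagation are elided, and the identifiers follow
the function as touched by the diff below.

	/*
	 * Simplified sketch: the real loop also takes the hugetlb fault
	 * mutex, holds RCU around the page cache check and propagates
	 * errors instead of just breaking out.
	 */
	for (index = start; index < end; index++) {
		addr = index * hpage_size;

		/* Page cache hit: no policy was set, nothing to drop. */
		if (page_cache_next_miss(mapping, index, 1) != index)
			continue;

		/* Set the NUMA policy only when a folio is allocated. */
		hugetlb_set_vma_policy(&pseudo_vma, inode, index);
		folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
		hugetlb_drop_vma_policy(&pseudo_vma);
		if (IS_ERR(folio))
			break;

		/* ... add folio to the page cache, unlock, etc. ... */
	}

On the page-cache-hit path the set/drop pair disappears entirely, which
is the point of moving the call.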
Link: https://lkml.kernel.org/r/20230502235622.3652586-1-ackerleytng@google.com
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Erdem Aktas <erdemaktas@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Vishal Annapurve <vannapurve@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
fs/hugetlbfs/inode.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
--- a/fs/hugetlbfs/inode.c~fs-hugetlbfs-set-vma-policy-only-when-needed-for-allocating-folio
+++ a/fs/hugetlbfs/inode.c
@@ -834,9 +834,6 @@ static long hugetlbfs_fallocate(struct f
 			break;
 		}
 
-		/* Set numa allocation policy based on index */
-		hugetlb_set_vma_policy(&pseudo_vma, inode, index);
-
 		/* addr is the offset within the file (zero based) */
 		addr = index * hpage_size;
 
@@ -850,7 +847,6 @@ static long hugetlbfs_fallocate(struct f
 		rcu_read_unlock();
 		if (present) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
-			hugetlb_drop_vma_policy(&pseudo_vma);
 			continue;
 		}
 
@@ -862,6 +858,7 @@ static long hugetlbfs_fallocate(struct f
 		 * folios in these areas, we need to consume the reserves
 		 * to keep reservation accounting consistent.
 		 */
+		hugetlb_set_vma_policy(&pseudo_vma, inode, index);
 		folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
 		hugetlb_drop_vma_policy(&pseudo_vma);
 		if (IS_ERR(folio)) {
_