From e1f1ec3a873f2b426932855bab81c06942d8bc12 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Michal=20Koutn=C3=BD?= <>
Date: Thu, 6 Aug 2020 23:22:18 -0700
Subject: [PATCH] mm/page_counter.c: fix protection usage propagation
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

commit a6f23d14ec7d7d02220ad8bb2774be3322b9aeec upstream.

When a workload runs in cgroups that aren't directly below the root cgroup
and their parent specifies reclaim protection, that protection may end up
ineffective.

The reason is that propagate_protected_usage() is not called all the way up
the hierarchy. All the protected usage is incorrectly accumulated in the
workload's parent. This means that siblings_low_usage is overestimated and
the effective protection underestimated. Even though this is a transitional
phenomenon (the uncharge path does the propagation correctly and fixes the
wrong children_low_usage), it can undermine the intended protection.
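
For illustration, here is a simplified sketch of the charge path (not the
verbatim kernel source; the watermark bookkeeping is elided). Note that the
first argument of propagate_protected_usage() never follows the walk:

/* Simplified sketch of page_counter_charge() before this fix. */
void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
{
	struct page_counter *c;

	for (c = counter; c; c = c->parent) {
		long new = atomic_long_add_return(nr_pages, &c->usage);

		/*
		 * Bug: "counter" never changes inside the loop, so every
		 * level's protection delta is credited to
		 * counter->parent->children_low_usage, and the higher
		 * ancestors are never updated on the charge path.
		 */
		propagate_protected_usage(counter, new);
	}
}

The uncharge path performs the same walk with the correct per-level counter,
which is why the inconsistency is only transient.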
We noticed this problem when we saw swap-out in a descendant of a protected
memcg (an intermediate node) while the parent was comfortably under its
protection limit and the memory pressure was external to that hierarchy.
Michal pinpointed this to the wrong siblings_low_usage, which led to the
unwanted reclaim.

The fix is simply to update children_low_usage in the respective ancestors
in the charging path as well.
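
To see why an overestimated siblings_low_usage weakens protection, recall
the proportional memory.low distribution introduced by the commit named in
the Fixes: tag below. The following self-contained model is an illustration
only (the function name, signature, and numbers are made up; it mimics the
spirit of the elow calculation in mm/memcontrol.c):

#include <stdio.h>

/*
 * Illustrative model: a child's effective low is scaled by its share
 * of the parent's children_low_usage, roughly
 *
 *	elow = min(low, parent_elow * low_usage / siblings_low_usage)
 */
static unsigned long effective_low(unsigned long usage, unsigned long low,
				   unsigned long parent_elow,
				   unsigned long siblings_low_usage)
{
	unsigned long low_usage = usage < low ? usage : low;
	unsigned long share;

	if (!low_usage || !siblings_low_usage)
		return 0;

	share = parent_elow * low_usage / siblings_low_usage;
	return low < share ? low : share;
}

int main(void)
{
	/* A child using 50 MiB with memory.low = 50 MiB; parent elow 100. */
	printf("correct  siblings_low_usage (100): elow = %lu MiB\n",
	       effective_low(50, 50, 100, 100));	/* prints 50 */
	printf("inflated siblings_low_usage (200): elow = %lu MiB\n",
	       effective_low(50, 50, 100, 200));	/* prints 25 */
	return 0;
}

With the correct siblings_low_usage of 100 MiB the child keeps its full
50 MiB of protection; with the inflated 200 MiB it is granted only 25 MiB,
which is how reclaim could reach into a cgroup that was meant to be
protected.
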
Fixes: 230671533d64 ("mm: memory.low hierarchical behavior")
Signed-off-by: Michal Koutný <>
Signed-off-by: Michal Hocko <>
Signed-off-by: Andrew Morton <>
Acked-by: Michal Hocko <>
Acked-by: Roman Gushchin <>
Cc: Johannes Weiner <>
Cc: Tejun Heo <>
Cc: <> [4.18+]
Signed-off-by: Linus Torvalds <>
Signed-off-by: Paul Gortmaker <>
---
diff --git a/mm/page_counter.c b/mm/page_counter.c
index de31470655f6..147ff99187b8 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -77,7 +77,7 @@ void page_counter_charge(struct page_counter *counter, unsigned long nr_pages)
 		long new;
 
 		new = atomic_long_add_return(nr_pages, &c->usage);
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * This is indeed racy, but we can live with some
 		 * inaccuracy in the watermark.
@@ -121,7 +121,7 @@ bool page_counter_try_charge(struct page_counter *counter,
 		new = atomic_long_add_return(nr_pages, &c->usage);
 		if (new > c->max) {
 			atomic_long_sub(nr_pages, &c->usage);
-			propagate_protected_usage(counter, new);
+			propagate_protected_usage(c, new);
 			/*
 			 * This is racy, but we can live with some
 			 * inaccuracy in the failcnt.
@@ -130,7 +130,7 @@ bool page_counter_try_charge(struct page_counter *counter,
 			*fail = c;
 			goto failed;
 		}
-		propagate_protected_usage(counter, new);
+		propagate_protected_usage(c, new);
 		/*
 		 * Just like with failcnt, we can live with some
 		 * inaccuracy in the watermark.