From 8f7136b28d415c0a4d57502d3afb1af4c8920b68 Mon Sep 17 00:00:00 2001
From: Sasha Levin <sashal@kernel.org>
Date: Mon, 1 Jul 2019 17:47:02 +0200
Subject: sched/fair: Fix imbalance due to CPU affinity

From: Vincent Guittot <vincent.guittot@linaro.org>

[ Upstream commit f6cad8df6b30a5d2bbbd2e698f74b4cafb9fb82b ]
load_balance() has a dedicated mechanism to detect when an imbalance
is due to CPU affinity and must be handled at the parent level. In this
case, the imbalance field of the parent's sched_group is set.

The description of sg_imbalanced() gives a typical example of two groups
of 4 CPUs each and 4 tasks each with a cpumask covering 1 CPU of the first
group and 3 CPUs of the second group. Something like:

	{ 0 1 2 3 } { 4 5 6 7 }
	        *     * * *

But load_balance() fails to fix this use case on my octo-core system
made of 2 clusters of quad cores.

While load_balance() is able to detect that the imbalance is due to
CPU affinity, it fails to fix it because the imbalance field is cleared
before the parent level gets a chance to run. In fact, when the imbalance
is detected, load_balance() reruns without the CPU with pinned tasks. But
there are no other running tasks in the situation described above and
everything looks balanced this time, so the imbalance field is immediately
cleared.

The imbalance field should not be cleared if there is no other task to move
when the imbalance is detected.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1561996022-28829-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b314feaf91f46..d8afae1bd5c5e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7929,9 +7929,10 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 out_balanced:
 	/*
 	 * We reach balance although we may have faced some affinity
-	 * constraints. Clear the imbalance flag if it was set.
+	 * constraints. Clear the imbalance flag only if other tasks got
+	 * a chance to move and fix the imbalance.
 	 */
-	if (sd_parent) {
+	if (sd_parent && !(env.flags & LBF_ALL_PINNED)) {
 		int *group_imbalance = &sd_parent->groups->sgc->imbalance;

 		if (*group_imbalance)
-- 
2.20.1