A set of scheduler fixes:

 - Plug a load average accounting race which was introduced by a recent
   optimization, causing the load average to show bogus numbers.

 - Fix the rseq CPU id initialization for new tasks. sched_fork() does not
   update the rseq CPU id so the id is the stale id of the parent task,
   which can cause user space data corruption.

 - Handle a 0 return value of task_h_load() correctly in the load balancer.
   Without this the imbalance never decreases, so the balancer keeps
   pulling tasks until the maximum number of loops is reached, which might
   be all tasks just created by a fork bomb.

sched/fair: handle case of task_h_load() returning 0

task_h_load() can return 0 in some situations, for instance when running
stress-ng mmapfork, which forks thousands of threads, in a sched group on
a 224-core system. The load balancer doesn't handle this correctly because
env->imbalance never decreases, so it stops pulling tasks only after
reaching loop_max, which can be equal to the number of running cfs tasks.
Make sure that the imbalance is decreased by at least 1.
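
As an illustration of the failure mode and of the fix, here is a small
user-space model of the detach loop. It is only a sketch, not kernel code:
all names in it (detach_tasks_model, model_task_h_load, LOOP_MAX, max_ul)
are invented for this example. With a per-task load of 0 the imbalance
never shrinks and the loop only stops at loop_max; clamping the load to at
least 1 makes it terminate after pulling the expected number of tasks. The
actual kernel change applies the same clamping idea in the fair-class
balancer's load accounting.

/*
 * Toy user-space model of the problem, NOT kernel code. All names here
 * are invented for illustration.
 */
#include <stdio.h>

#define LOOP_MAX        32      /* stand-in for the balancer's loop limit */

/*
 * Pretend every task contributes zero hierarchical load, as can happen
 * with thousands of freshly forked threads sharing one sched group.
 */
static unsigned long model_task_h_load(void)
{
        return 0;
}

static unsigned long max_ul(unsigned long a, unsigned long b)
{
        return a > b ? a : b;
}

/* Returns how many tasks the loop "detaches" before it gives up. */
static int detach_tasks_model(unsigned long imbalance, int clamp_fix)
{
        int loop = 0, detached = 0;

        while (imbalance > 0 && loop < LOOP_MAX) {
                unsigned long load = model_task_h_load();

                loop++;

                if (clamp_fix)
                        /* the fix: a task never accounts for less than 1 */
                        load = max_ul(load, 1);

                if (load > imbalance)
                        continue;       /* skip a task that is too heavy */

                imbalance -= load;      /* without the clamp this never shrinks */
                detached++;
        }
        return detached;
}

int main(void)
{
        printf("without clamp: detached %d tasks (stopped only by loop_max)\n",
               detach_tasks_model(8, 0));
        printf("with clamp:    detached %d tasks\n",
               detach_tasks_model(8, 1));
        return 0;
}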

Misfit task handling is the other path that doesn't handle such a
situation correctly, although it's probably harder to hit the problem
there because of the smaller number of CPUs and running tasks on
heterogeneous systems.

We can't simply ensure that task_h_load() returns at least one, because
that would require handling underflow in other places.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: <stable@vger.kernel.org> # v4.4+
Link: https://lkml.kernel.org/r/20200710152426.16981-1-vincent.guittot@linaro.org
1 file changed