From jejb@kernel.org Wed Oct 15 14:19:33 2008
From: Dario Faggioli <raistlin@linux.it>
Date: Fri, 10 Oct 2008 20:15:02 GMT
Subject: sched_rt.c: resch needed in rt_rq_enqueue() for the root rt_rq
To: jejb@kernel.org, stable@kernel.org
Message-ID: <200810102015.m9AKF27B014753@hera.kernel.org>

From: Dario Faggioli <raistlin@linux.it>

commit f6121f4f8708195e88cbdf8dd8d171b226b3f858 upstream
While working on the new version of the code for SCHED_SPORADIC I
noticed something strange in the present throttling mechanism, more
specifically in the throttling timer handler in sched_rt.c
(do_sched_rt_period_timer()) and in rt_rq_enqueue().

The problem is that, when unthrottling a runqueue, rt_rq_enqueue() only
asks for rescheduling if the runqueue has a sched_entity associated with
it (i.e., rt_rq->rt_se != NULL).
If the runqueue is the root rq (which has rt_se == NULL), rescheduling
does not take place and is delayed to some undefined instant in the
future.

This implies some random bandwidth usage by the RT tasks under
throttling. For instance, with rt_runtime_us/rt_period_us = 950ms/1000ms
an RT task will get less than 95%: in our tests we got something varying
between 70% and 95%.
Using smaller time values, e.g. 95ms/100ms, things are even worse, and
we have seen values go down to 20-25%.

The tests we performed simply run 'yes' as a SCHED_FIFO task and check
the CPU usage with top, but we can investigate more thoroughly if you
think it is needed.
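
(The reproduction boils down to the commands below: a configuration
sketch that assumes root privileges, the standard Linux sysctl paths,
and that chrt and top are available, and which must run in an
interactive shell for the %1 job reference to work.)

```shell
# Cap RT bandwidth at 95% (950ms of runtime per 1000ms period).
echo 1000000 > /proc/sys/kernel/sched_rt_period_us
echo 950000  > /proc/sys/kernel/sched_rt_runtime_us

# Run 'yes' as a SCHED_FIFO task at priority 50.
chrt -f 50 yes > /dev/null &

# Watch its CPU usage; without the fix it wanders well below 95%.
top -b -n 5 -d 1 | grep yes

kill %1
```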

Things go much better for us with the patch below. We don't know whether
it is the best approach, but it solved the issue for us.

Signed-off-by: Dario Faggioli <raistlin@linux.it>
Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 kernel/sched_rt.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -96,12 +96,12 @@ static void dequeue_rt_entity(struct sch
 
 static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
 {
+	struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
 	struct sched_rt_entity *rt_se = rt_rq->rt_se;
 
-	if (rt_se && !on_rt_rq(rt_se) && rt_rq->rt_nr_running) {
-		struct task_struct *curr = rq_of_rt_rq(rt_rq)->curr;
-
-		enqueue_rt_entity(rt_se);
+	if (rt_rq->rt_nr_running) {
+		if (rt_se && !on_rt_rq(rt_se))
+			enqueue_rt_entity(rt_se);
 		if (rt_rq->highest_prio < curr->prio)
 			resched_task(curr);
 	}