From 9ee493ce0a60bf42c0f8fd0b0fe91df5704a1cbf Mon Sep 17 00:00:00 2001
From: Mel Gorman <mel@csn.ul.ie>
Date: Thu, 9 Sep 2010 16:38:18 -0700
Subject: mm: page allocator: drain per-cpu lists after direct reclaim allocation fails

From: Mel Gorman <mel@csn.ul.ie>

commit 9ee493ce0a60bf42c0f8fd0b0fe91df5704a1cbf upstream.
When under significant memory pressure, a process enters direct reclaim
and immediately afterwards tries to allocate a page. If it fails and no
further progress is made, it's possible the system will go OOM. However,
on systems with large amounts of memory, it's possible that a significant
number of pages are on per-cpu lists and inaccessible to the calling
process. This leads to a process entering direct reclaim more often than
it should, increasing the pressure on the system and compounding the
problem.

This patch notes that if direct reclaim is making progress but allocations
are still failing, the system is already under heavy pressure. In
this case, it drains the per-cpu lists and tries the allocation a second
time before continuing.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 mm/page_alloc.c |   20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1688,6 +1688,7 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
 	struct page *page = NULL;
 	struct reclaim_state reclaim_state;
 	struct task_struct *p = current;
+	bool drained = false;
 
 	cond_resched();
 
@@ -1706,14 +1707,25 @@ __alloc_pages_direct_reclaim(gfp_t gfp_m
 
 	cond_resched();
 
-	if (order != 0)
-		drain_all_pages();
+	if (unlikely(!(*did_some_progress)))
+		return NULL;
 
-	if (likely(*did_some_progress))
-		page = get_page_from_freelist(gfp_mask, nodemask, order,
+retry:
+	page = get_page_from_freelist(gfp_mask, nodemask, order,
 					zonelist, high_zoneidx,
 					alloc_flags, preferred_zone,
 					migratetype);
+
+	/*
+	 * If an allocation failed after direct reclaim, it could be because
+	 * pages are pinned on the per-cpu lists. Drain them and try again
+	 */
+	if (!page && !drained) {
+		drain_all_pages();
+		drained = true;
+		goto retry;
+	}
+
 	return page;
 }
 
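The drain-once-and-retry control flow the patch introduces can be sketched in user space. The following is a minimal analogue, not kernel code: `get_page()`, `drain()`, and `alloc_after_reclaim()` are hypothetical stand-ins for `get_page_from_freelist()`, `drain_all_pages()`, and the patched `__alloc_pages_direct_reclaim()`, and the integer "lists" merely model pages being visible or stranded.

```c
#include <assert.h>
#include <stdbool.h>

/* Pages visible to the allocator vs. stranded on a per-cpu list.
 * (Illustrative model only; real per-cpu lists are per-zone structures.) */
static int free_list = 0;
static int percpu_list = 1;

/* Stand-in for get_page_from_freelist(): succeeds only if a page is visible */
static bool get_page(void)
{
	if (free_list > 0) {
		free_list--;
		return true;
	}
	return false;
}

/* Stand-in for drain_all_pages(): returns stranded pages to the free list */
static void drain(void)
{
	free_list += percpu_list;
	percpu_list = 0;
}

/* The pattern from the patch: bail out if reclaim made no progress,
 * otherwise try the allocation, drain exactly once on failure, and retry */
static bool alloc_after_reclaim(bool did_some_progress)
{
	bool drained = false;
	bool page;

	if (!did_some_progress)
		return false;
retry:
	page = get_page();
	if (!page && !drained) {
		drain();	/* pages may be pinned on per-cpu lists */
		drained = true;
		goto retry;
	}
	return page;
}
```

With the free list empty and one page stranded per-cpu, the first call only succeeds because of the drain; a second call finds both lists empty and fails even after draining, mirroring how the patch gives the allocator one extra chance without looping forever.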