sl[auo]b: retry allocation once in case of failure.
When we are out of space in the caches, we will try to allocate a new
page. If that allocation fails, the page allocator will already have
tried to free pages through direct reclaim. This means that when an
object allocation fails, we can be sure that no new pages could be
given to us, even though direct reclaim was likely invoked.
However, direct reclaim will also try to shrink objects from registered
shrinkers. They won't necessarily free a full page, but if our cache
happens to be one with a shrinker, this may very well open up the space
we need. So we retry the allocation in this case.
We can't know for sure whether this happened. So the best we can do is
derive from our allocation flags how likely it is that direct reclaim
was invoked, and retry if we conclude that this is highly likely
(GFP_NOWAIT | GFP_FS | !GFP_NORETRY).
The common case is for the allocation to succeed, so we carefully
insert a likely() branch for that case.
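The shape of the retry described above can be sketched in user-space C.
This is only an illustration, not the patch itself: the flag bits,
reclaim_likely(), try_alloc() and alloc_retry() are hypothetical
stand-ins for the kernel's GFP flags and the caches' slow path.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical flag bits standing in for the GFP flags named above. */
#define FLAG_NOWAIT  (1u << 0)  /* caller cannot sleep: no direct reclaim */
#define FLAG_FS      (1u << 1)  /* filesystem activity allowed in reclaim */
#define FLAG_NORETRY (1u << 2)  /* caller asked us not to retry           */

/* Direct reclaim (and hence the shrinkers) very likely ran if the
 * caller could wait, allowed FS activity, and did not forbid retries. */
static bool reclaim_likely(unsigned int flags)
{
	return !(flags & FLAG_NOWAIT) &&
		(flags & FLAG_FS) &&
		!(flags & FLAG_NORETRY);
}

/* Stand-in for the cache's slow-path allocation attempt. */
static void *try_alloc(size_t size)
{
	return malloc(size);
}

/* Allocate, and retry exactly once when a shrinker may have just
 * opened up space in our own cache.  Note there is no loop: a second
 * failure means reclaim really could not help us. */
static void *alloc_retry(size_t size, unsigned int flags)
{
	void *obj = try_alloc(size);

	if (obj)	/* the common, likely() case */
		return obj;
	if (reclaim_likely(flags))
		obj = try_alloc(size);
	return obj;
}
```

The single retry (rather than a loop) keeps the failure path bounded:
if the second attempt also fails, nothing further can be gained from
reclaim having run.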
Signed-off-by: Glauber Costa <email@example.com>
CC: Christoph Lameter <firstname.lastname@example.org>
CC: David Rientjes <email@example.com>
CC: Pekka Enberg <firstname.lastname@example.org>
CC: Andrew Morton <email@example.com>
CC: Mel Gorman <firstname.lastname@example.org>
4 files changed