From a9e35163487a19bea457d7af1c9e4088ebb18d1a Mon Sep 17 00:00:00 2001
From: Jann Horn <>
Date: Tue, 17 Mar 2020 01:28:45 +0100
Subject: [PATCH] mm: slub: add missing TID bump in kmem_cache_alloc_bulk()

commit fd4d9c7d0c71866ec0c2825189ebd2ce35bd95b8 upstream.

When kmem_cache_alloc_bulk() attempts to allocate N objects from a percpu
freelist of length M, and N > M > 0, it will first remove the M elements
from the percpu freelist, then call ___slab_alloc() to allocate the next
element and repopulate the percpu freelist. ___slab_alloc() can re-enable
IRQs via allocate_slab(), so the TID must be bumped before ___slab_alloc()
to properly commit the freelist head change.

Fix it by unconditionally bumping c->tid when entering the slowpath.

Fixes: ebe909e0fdb3 ("slub: improve bulk alloc strategy")
Signed-off-by: Jann Horn <>
Signed-off-by: Linus Torvalds <>
Signed-off-by: Paul Gortmaker <>
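
Note: the following is a minimal user-space sketch, not kernel code, of the
(freelist, tid) transaction that the commit message refers to. The names
(cpu_cache, cmpxchg_double, fastpath_alloc, bulk_take_all) are illustrative
stand-ins for c->freelist, c->tid, this_cpu_cmpxchg_double() and the bulk
path in mm/slub.c, and the "cmpxchg" here is a plain single-threaded check,
not an atomic instruction. It only shows why every freelist manipulation
must be paired with a TID bump: a stale (freelist, tid) snapshot must fail
its compare-and-swap rather than reinstall a head that was already consumed.

#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

struct object { struct object *next; };

struct cpu_cache {
	struct object *freelist;	/* per-CPU freelist head */
	unsigned long tid;		/* transaction id, bumped on every change */
};

/* Stand-in for this_cpu_cmpxchg_double(): commits only if both the
 * freelist head and the tid still match the caller's snapshot. */
static bool cmpxchg_double(struct cpu_cache *c,
			   struct object *old_head, unsigned long old_tid,
			   struct object *new_head, unsigned long new_tid)
{
	if (c->freelist != old_head || c->tid != old_tid)
		return false;
	c->freelist = new_head;
	c->tid = new_tid;
	return true;
}

/* Lockless fastpath: snapshot (head, tid), then try to commit head->next. */
static struct object *fastpath_alloc(struct cpu_cache *c)
{
	struct object *head = c->freelist;
	unsigned long tid = c->tid;

	if (!head)
		return NULL;
	if (!cmpxchg_double(c, head, tid, head->next, tid + 1))
		return NULL;	/* lost the race; a real caller would retry */
	return head;
}

/* Bulk path: drains the freelist directly (the kernel does this with IRQs
 * off). The tid bump is the step this patch adds before ___slab_alloc(). */
static void bulk_take_all(struct cpu_cache *c)
{
	c->freelist = NULL;
	c->tid++;
}

int main(void)
{
	struct object objs[2] = { { &objs[1] }, { NULL } };
	struct cpu_cache c = { .freelist = &objs[0], .tid = 0 };

	/* Normal fastpath allocation commits and bumps the tid. */
	struct object *first = fastpath_alloc(&c);
	printf("fastpath got objs[0]: %s\n", first == &objs[0] ? "yes" : "no");

	/* Snapshot as the fastpath would, then let the bulk path run first. */
	struct object *stale_head = c.freelist;
	unsigned long stale_tid = c.tid;

	bulk_take_all(&c);

	/* With the bump, the stale snapshot is rejected; without it, this
	 * would succeed and reinstall a head the bulk path already took. */
	bool committed = cmpxchg_double(&c, stale_head, stale_tid,
					stale_head->next, stale_tid + 1);
	printf("stale snapshot committed: %s\n",
	       committed ? "yes (bug)" : "no (ok)");
	return 0;
}
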
diff --git a/mm/slub.c b/mm/slub.c
index 5308227e65db..7a89fbb61e59 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3145,6 +3145,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 
 		if (unlikely(!object)) {
 			/*
+			 * We may have removed an object from c->freelist using
+			 * the fastpath in the previous iteration; in that case,
+			 * c->tid has not been bumped yet.
+			 * Since ___slab_alloc() may reenable interrupts while
+			 * allocating memory, we should bump c->tid now.
+			 */
+			c->tid = next_tid(c->tid);
+
+			/*
 			 * Invoking slow path likely have side-effect
 			 * of re-populating per CPU c->freelist