From: Keith Busch <kbusch@kernel.org>
Subject: dmapool: push new blocks in ascending order
Date: Tue, 21 Feb 2023 08:54:00 -0800

Some users of the dmapool need their allocations to happen in ascending
order. The recent optimizations pushed the blocks in reverse order, so
restore the previous behavior by linking the next available block from
low-to-high.

The affected user is the usb/chipidea/udc.c qh_pool called "ci_hw_qh".
My initial thought was that dmapool isn't the right API if you need a
specific order when allocating from it, but I can't readily test any
changes to that driver. Restoring the previous behavior is easy enough.
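
A minimal userspace sketch of the problem, for illustration only (struct
blk, the offsets, and the loop below are made-up stand-ins, not the
kernel's dmapool structures): pushing each new block onto the head of a
singly linked free list hands out the highest-offset block first, i.e.
in descending order.

#include <stdio.h>
#include <stddef.h>

struct blk {
	size_t offset;
	struct blk *next;
};

int main(void)
{
	struct blk blocks[4];
	struct blk *head = NULL;

	/* Head-insert blocks 0..3, as the reverse-order code effectively did. */
	for (size_t i = 0; i < 4; i++) {
		blocks[i].offset = i * 64;	/* pretend the block size is 64 */
		blocks[i].next = head;
		head = &blocks[i];
	}

	/* Popping from the head yields offsets 192, 128, 64, 0. */
	for (struct blk *b = head; b; b = b->next)
		printf("alloc -> offset %zu\n", b->offset);

	return 0;
}
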
Link: https://lkml.kernel.org/r/20230221165400.1595247-1-kbusch@meta.com
Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reported-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
Tested-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
--- a/mm/dmapool.c~dmapool-link-blocks-across-pages-fix
+++ b/mm/dmapool.c
@@ -301,7 +301,7 @@ EXPORT_SYMBOL(dma_pool_create);
static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
{
unsigned int next_boundary = pool->boundary, offset = 0;
- struct dma_block *block;
+ struct dma_block *block, *first = NULL, *last = NULL;
pool_init_page(pool, page);
while (offset + pool->size <= pool->allocation) {
@@ -312,11 +312,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
}
block = page->vaddr + offset;
- pool_block_push(pool, block, page->dma + offset);
+ block->dma = page->dma + offset;
+ block->next_block = NULL;
+
+ if (last)
+ last->next_block = block;
+ else
+ first = block;
+ last = block;
+
offset += pool->size;
pool->nr_blocks++;
}
+ last->next_block = pool->next_block;
+ pool->next_block = first;
+
list_add(&page->page_list, &pool->page_list);
pool->nr_pages++;
}
_
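
For reference, a userspace sketch of the fix above, with simplified
stand-in names (struct blk, init_page, pool_head) rather than the real
dmapool types: each page's blocks are appended low-to-high via
first/last pointers, and the whole run is then spliced ahead of the
pool's existing free list, so allocations come back in ascending order.

#include <stdio.h>
#include <stddef.h>

struct blk {
	size_t offset;
	struct blk *next;
};

static struct blk *pool_head;	/* stands in for pool->next_block */

/* Assumes nr > 0, as the kernel code assumes at least one block per page. */
static void init_page(struct blk *blocks, size_t nr, size_t size)
{
	struct blk *first = NULL, *last = NULL;

	for (size_t i = 0; i < nr; i++) {
		struct blk *b = &blocks[i];

		b->offset = i * size;
		b->next = NULL;
		if (last)
			last->next = b;	/* append: keeps ascending order */
		else
			first = b;
		last = b;
	}

	/* Splice the new ascending run in front of the existing list. */
	last->next = pool_head;
	pool_head = first;
}

int main(void)
{
	struct blk page[4];

	init_page(page, 4, 64);

	/* Allocations now come out 0, 64, 128, 192. */
	for (struct blk *b = pool_head; b; b = b->next)
		printf("alloc -> offset %zu\n", b->offset);

	return 0;
}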