From dhobsong@igel.co.jp Mon Oct 29 00:53:48 2012
From: Damian Hobson-Garcia <dhobsong@igel.co.jp>
Date: Mon, 29 Oct 2012 16:51:05 +0900
Subject: [PATCH v2 48/58] ARM: dma-mapping: fix buffer chunk allocation order
To: greg@kroah.com, laurent.pinchart@ideasonboard.com, horms@verge.net.au
Cc: ltsi-dev@lists.linuxfoundation.org, dhobsong@igel.co.jp
Message-ID: <1351497075-32717-49-git-send-email-dhobsong@igel.co.jp>
From: Marek Szyprowski <m.szyprowski@samsung.com>
The IOMMU-aware dma_alloc_attrs() implementation allocates buffers in
power-of-two chunks to improve performance and to take advantage of the
large page mappings provided by some IOMMU hardware. However, due to a
subtle bug, the current code allocated those chunks in smallest-to-largest
order, which completely defeated the advantage of using chunks larger
than a page. Once a 4KiB chunk had been mapped as the first chunk, the
subsequent chunks were no longer aligned to the power-of-two matching
their size, so IOMMU drivers could not use internal mappings of any size
other than 4KiB (the largest common denominator of alignment and chunk
size).

This patch fixes the issue by switching to the correct largest-to-smallest
chunk size allocation sequence.
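
For illustration only (not part of the patch), here is a minimal userspace
sketch of the two allocation sequences. The kernel's __ffs()/__fls() bit
helpers are approximated with GCC builtins, and the 40-page buffer size is
an arbitrary example chosen to show the misalignment:

	#include <stdio.h>

	/* userspace stand-ins for the kernel's __ffs()/__fls() helpers */
	static int my_ffs(unsigned long x) { return __builtin_ctzl(x); }      /* lowest set bit  */
	static int my_fls(unsigned long x) { return 63 - __builtin_clzl(x); } /* highest set bit */

	static void show(const char *name, int (*bit)(unsigned long))
	{
		unsigned long count = 40; /* 40 pages = 160 KiB buffer, for example */
		unsigned long iova = 0;   /* page offset inside the IOMMU mapping   */

		printf("%s:", name);
		while (count) {
			int order = bit(count);
			/* a chunk of 2^order pages lands at page offset 'iova';
			 * a large IOMMU mapping needs iova aligned to 2^order */
			printf(" %lu@%lu%s", 1UL << order, iova,
			       iova & ((1UL << order) - 1) ? "(unaligned!)" : "");
			iova += 1UL << order;
			count -= 1UL << order;
		}
		printf("\n");
	}

	int main(void)
	{
		show("__ffs (old)", my_ffs); /* prints: 8@0 32@8(unaligned!) */
		show("__fls (new)", my_fls); /* prints: 32@0 8@32            */
		return 0;
	}

With __ffs() the 32-page chunk ends up at page offset 8, which is not
32-page aligned, so the IOMMU driver has to fall back to 4KiB mappings;
with __fls() every chunk is naturally aligned to its own size.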
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
(cherry picked from commit 593f47355467b9ef44293698817e2bdb347e2d11)
Signed-off-by: Damian Hobson-Garcia <dhobsong@igel.co.jp>
---
arch/arm/mm/dma-mapping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -942,7 +942,7 @@ static struct page **__iommu_alloc_buffe
 		return NULL;
 
 	while (count) {
-		int j, order = __ffs(count);
+		int j, order = __fls(count);
 
 		pages[i] = alloc_pages(gfp | __GFP_NOWARN, order);
 		while (!pages[i] && order)