From b9bcc919931611498e856eae9bf66337330d04cc Mon Sep 17 00:00:00 2001
From: Dave P Martin <Dave.Martin@arm.com>
Date: Tue, 16 Jun 2015 17:38:47 +0100
Subject: arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

From: Dave P Martin <Dave.Martin@arm.com>

commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/mm/init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -260,7 +260,7 @@ static void __init free_unused_memmap(vo
		 * memmap entries are valid from the bank end aligned to
		 * MAX_ORDER_NR_PAGES.
		 */
-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
				 MAX_ORDER_NR_PAGES);
	}
