From jejb@kernel.org Wed Oct 15 14:42:49 2008
From: Alan Cox <alan@redhat.com>
Date: Sun, 12 Oct 2008 19:40:08 GMT
Subject: x86, early_ioremap: fix fencepost error
To: jejb@kernel.org, stable@kernel.org
Message-ID: <200810121940.m9CJe8k3024539@hera.kernel.org>
From: Alan Cox <alan@redhat.com>
commit c613ec1a7ff3714da11c7c48a13bab03beb5c376 upstream
The x86 implementation of early_ioremap has an off-by-one error. If we get
an object which ends on the first byte of a page, we undermap it by one page;
this causes a crash on boot with the ASUS P5QL, whose DMI table happens to
hit this alignment.
The size computation is currently:
	last_addr = phys_addr + size - 1;
	size = PAGE_ALIGN(last_addr) - phys_addr;
(Consider a request for 1 byte at alignment 0...)
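To make the fencepost concrete, here is a small standalone sketch (not part
of the patch) that replays the arithmetic in user space. PAGE_SIZE, PAGE_MASK
and PAGE_ALIGN are redefined locally to mirror their x86 definitions with 4K
pages, and mapped_bytes() is a hypothetical helper for this sketch only:

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

/* Returns how many bytes the early_ioremap() size computation would map,
 * with 'fixed' selecting the old (0) or the patched (1) formula. */
static unsigned long mapped_bytes(unsigned long phys_addr, unsigned long size,
				  int fixed)
{
	unsigned long last_addr = phys_addr + size - 1;

	phys_addr &= PAGE_MASK;
	return PAGE_ALIGN(fixed ? last_addr + 1 : last_addr) - phys_addr;
}

int main(void)
{
	/* 1 byte at alignment 0: old formula maps nothing, fix maps 1 page */
	printf("1 byte at 0x0:    old=%lu new=%lu\n",
	       mapped_bytes(0x0, 1, 0), mapped_bytes(0x0, 1, 1));

	/* object ending on the first byte of a page (the DMI table case):
	 * old formula maps 1 page although 2 pages are touched */
	printf("2 bytes at 0xfff: old=%lu new=%lu\n",
	       mapped_bytes(0xfff, 2, 0), mapped_bytes(0xfff, 2, 1));
	return 0;
}

With the old formula the first request maps 0 bytes and the second maps only
one of the two pages touched; with last_addr + 1 both map every page the
object actually occupies.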
Closes #11693
Debugging work by Ian Campbell/Felix Geyer
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
---
arch/x86/mm/ioremap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -595,7 +595,7 @@ void __init *early_ioremap(unsigned long
 	 */
 	offset = phys_addr & ~PAGE_MASK;
 	phys_addr &= PAGE_MASK;
-	size = PAGE_ALIGN(last_addr) - phys_addr;
+	size = PAGE_ALIGN(last_addr + 1) - phys_addr;
 
 	/*
 	 * Mappings have to fit in the FIX_BTMAP area.