From foo@baz Mon Apr 9 17:09:24 CEST 2018
From: Michael Ellerman <mpe@ellerman.id.au>
Date: Thu, 18 May 2017 20:37:31 +1000
Subject: powerpc/mm: Fix virt_addr_valid() etc. on 64-bit hash

From: Michael Ellerman <mpe@ellerman.id.au>

[ Upstream commit e41e53cd4fe331d0d1f06f8e4ed7e2cc63ee2c34 ]

virt_addr_valid() is supposed to tell you if it's OK to call virt_to_page() on
an address. What this means in practice is that it should only return true for
addresses in the linear mapping which are backed by a valid PFN.

We are failing to properly check that the address is in the linear mapping,
because virt_to_pfn() will return a valid-looking PFN for more or less any
address. That bug is actually caused by __pa(), used in virt_to_pfn().

eg: __pa(0xc000000000010000) = 0x10000 # Good
    __pa(0xd000000000010000) = 0x10000 # Bad!
    __pa(0x0000000000010000) = 0x10000 # Bad!

This started happening after commit bdbc29c19b26 ("powerpc: Work around gcc
miscompilation of __pa() on 64-bit") (Aug 2013), where we changed the definition
of __pa() to work around a GCC bug. Prior to that we subtracted PAGE_OFFSET from
the value passed to __pa(), meaning __pa() of a 0xd or 0x0 address would give
you something bogus back.

Until we can verify if that GCC bug is no longer an issue, or come up with
another solution, this commit does the minimal fix to make virt_addr_valid()
work, by explicitly checking that the address is in the linear mapping region.

Fixes: bdbc29c19b26 ("powerpc: Work around gcc miscompilation of __pa() on 64-bit")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Tested-by: Breno Leitao <breno.leitao@gmail.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/include/asm/page.h |   12 ++++++++++++
 1 file changed, 12 insertions(+)

--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,19 @@ extern long long virt_phys_offset;
 #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
 #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * On hash the vmalloc and other regions alias to the kernel region when passed
+ * through __pa(), which virt_to_pfn() uses. That means virt_addr_valid() can
+ * return true for some vmalloc addresses, which is incorrect. So explicitly
+ * check that the address is in the kernel region.
+ */
+#define virt_addr_valid(kaddr)	(REGION_ID(kaddr) == KERNEL_REGION_ID && \
+				 pfn_valid(virt_to_pfn(kaddr)))
+#else
+#define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))
+#endif
 
 /*
  * On Book-E parts we need __va to parse the device tree and we can't