From: Joerg Roedel <jroedel@suse.de>
Date: Tue, 17 Apr 2018 15:27:16 +0200
Subject: x86/mm: Prevent kernel Oops in PTDUMP code with HIGHPTE=y

commit d6ef1f194b7569af8b8397876dc9ab07649d63cb upstream.

The walk_pte_level() function just uses __va to get the virtual address of
the PTE page, but that breaks when the PTE page is not in the direct
mapping with HIGHPTE=y.

The result is an unhandled kernel paging request at some random address
when accessing the current_kernel or current_user file.

Use the correct API to access PTE pages.
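
For illustration, a minimal sketch of the two access patterns (hand-written
for this description, not verbatim kernel code; 'addr' is the pmd_t and
'vaddr' the virtual address being walked, as in walk_pte_level()):

	/* Broken with HIGHPTE=y: pmd_page_vaddr() boils down to __va(),
	 * which is only valid while the PTE page is in the direct
	 * mapping; for a highmem PTE page it returns a bogus pointer. */
	pte_t *pte = (pte_t *)pmd_page_vaddr(addr);

	/* Correct: pte_offset_map() maps the PTE page via kmap_atomic()
	 * when CONFIG_HIGHPTE is enabled, so the temporary mapping must
	 * be torn down again with pte_unmap() after reading the entry. */
	pte_t *pte = pte_offset_map(&addr, vaddr);
	pgprot_t prot = pte_pgprot(*pte);
	pte_unmap(pte);
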
Fixes: fe770bf0310d ('x86: clean up the page table dumper and add 32-bit support')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: jgross@suse.com
Cc: JBeulich@suse.com
Cc: hpa@zytor.com
Cc: aryabinin@virtuozzo.com
Cc: kirill.shutemov@linux.intel.com
Link: https://lkml.kernel.org/r/1523971636-4137-1-git-send-email-joro@8bytes.org

[bwh: Backported to 3.16:
 - Keep using pte_pgprot() to get protection flags
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -16,6 +16,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/seq_file.h>
+#include <linux/highmem.h>
 
 #include <asm/pgtable.h>
 
@@ -263,15 +264,16 @@ static void walk_pte_level(struct seq_fi
 						   unsigned long P)
 {
 	int i;
-	pte_t *start;
+	pte_t *pte;
 
-	start = (pte_t *) pmd_page_vaddr(addr);
 	for (i = 0; i < PTRS_PER_PTE; i++) {
-		pgprot_t prot = pte_pgprot(*start);
+		pgprot_t prot;
 
 		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
+		pte = pte_offset_map(&addr, st->current_address);
+		prot = pte_pgprot(*pte);
 		note_page(m, st, prot, 4);
-		start++;
+		pte_unmap(pte);
 	}
 }
 
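
For reference, the 32-bit definitions that make this work look roughly like
the following (paraphrased from arch/x86/include/asm/pgtable_32.h of this
era, not quoted verbatim):

	#ifdef CONFIG_HIGHPTE
	#define pte_offset_map(dir, address)				\
		((pte_t *)kmap_atomic(pmd_page(*(dir))) +		\
		 pte_index((address)))
	#define pte_unmap(pte) kunmap_atomic((pte))
	#else
	#define pte_offset_map(dir, address)				\
		((pte_t *)page_address(pmd_page(*(dir))) +		\
		 pte_index((address)))
	#define pte_unmap(pte) do { } while (0)
	#endif

With HIGHPTE=y the PTE page is mapped through kmap_atomic(), which is why
every pte_offset_map() in the loop above has to be paired with a
pte_unmap().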