From a742462ef198dc7b7fe5e6c22b70e8b6e9b480e4 Mon Sep 17 00:00:00 2001
From: Joerg Roedel <jroedel@suse.de>
Date: Tue, 21 Jul 2020 11:34:48 +0200
Subject: [PATCH] x86, vmlinux.lds: Page-align end of ..page_aligned sections

commit de2b41be8fcccb2f5b6c480d35df590476344201 upstream.

On x86-32 the idt_table with 256 entries needs only 2048 bytes. It is
page-aligned, but the end of the .bss..page_aligned section is not
guaranteed to be page-aligned.
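For reference, the 2048-byte figure follows from the 32-bit gate descriptor
size; a quick arithmetic check (constants as on x86-32, PAGE_SIZE 4096):

```python
# x86-32 IDT: 256 gate descriptors, 8 bytes each.
IDT_ENTRIES = 256
GATE_DESC_SIZE = 8            # bytes per gate descriptor on 32-bit
PAGE_SIZE = 4096

idt_size = IDT_ENTRIES * GATE_DESC_SIZE
print(idt_size)               # 2048 bytes, half a page
print(PAGE_SIZE - idt_size)   # 2048 bytes of the same page left over
```

So without end-of-section alignment, 2048 bytes of the idt_table's page
remain available to whatever the linker places next.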

As a result, objects from other .bss sections may end up on the same 4k
page as the idt_table, and will accidentally get mapped read-only during
boot, causing unexpected page faults when the kernel writes to them.

This could be worked around by making the objects in the page-aligned
sections page-sized, but that would be wrong.

Explicit sections which store only page-aligned objects have an implicit
guarantee that each object is alone in the page in which it is placed.
That works for all objects except the last one, which is inconsistent.

Enforcing page-sized objects for these sections would also break memory
sanitizers, because the object becomes artificially larger than it should
be and out-of-bounds accesses become legitimate.

Align the end of the .bss..page_aligned and .data..page_aligned sections
on page size so that all objects placed in these sections are guaranteed
to have their own page.
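As a sketch (generic GNU ld syntax; section contents abbreviated, comments
illustrative), the resulting layout rule is:

```
.bss : {
	*(.bss..page_aligned)	/* page-aligned objects, e.g. idt_table */
	. = ALIGN(PAGE_SIZE);	/* pad so following objects start on a fresh page */
	*(.bss)			/* ordinary .bss objects */
}
```

The trailing ALIGN(PAGE_SIZE) advances the location counter past the end
of the last page-aligned object, so no ordinary .bss object can share its
page.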

[ tglx: Amended changelog ]

Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200721093448.10417-1-joro@8bytes.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index b3f6a15b9593..96d8025ea1b0 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -362,6 +362,7 @@ SECTIONS
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
 		__bss_start = .;
 		*(.bss..page_aligned)
+		. = ALIGN(PAGE_SIZE);
 		*(BSS_MAIN)
 		BSS_DECRYPTED
 		. = ALIGN(PAGE_SIZE);
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 088987e9a3ea..e88463e53504 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -284,7 +284,8 @@
 
 #define PAGE_ALIGNED_DATA(page_align)					\
 	. = ALIGN(page_align);						\
-	*(.data..page_aligned)
+	*(.data..page_aligned)						\
+	. = ALIGN(page_align);
 
 #define READ_MOSTLY_DATA(align)						\
 	. = ALIGN(align);						\
@@ -655,7 +656,9 @@
 	. = ALIGN(bss_align);						\
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {				\
 		BSS_FIRST_SECTIONS					\
+		. = ALIGN(PAGE_SIZE);					\
 		*(.bss..page_aligned)					\
+		. = ALIGN(PAGE_SIZE);					\
 		*(.dynbss)						\
 		*(BSS_MAIN)						\
 		*(COMMON)						\
-- 
2.27.0