From 780e0106d468a2962b16b52fdf42898f2639e0a0 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 16 Apr 2019 10:03:35 +0200
Subject: x86/mm/tlb: Revert "x86/mm: Align TLB invalidation info"

From: Peter Zijlstra <peterz@infradead.org>

commit 780e0106d468a2962b16b52fdf42898f2639e0a0 upstream.

Revert the following commit:

  515ab7c41306: ("x86/mm: Align TLB invalidation info")

I found out (the hard way) that under some .config options (notably
L1_CACHE_SHIFT=7) and compiler combinations this on-stack alignment
leads to 320 bytes of stack usage, which then triggers a KASAN stack
warning elsewhere.

Using 320 bytes of stack space for a 40-byte structure is ludicrous and
clearly not right.
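
For illustration, here is a minimal userspace sketch (not the kernel
code itself) of the effect: when an automatic variable carries a large
alignment, the compiler has to over-allocate the stack frame so it can
align the object at run time. The 40-byte payload and the 128-byte
alignment below only mirror the L1_CACHE_SHIFT=7 case described above;
they are assumptions for the example, not code taken from tlb.c:

  #include <stdio.h>
  #include <stdint.h>

  /* Stand-in for the 40-byte flush_tlb_info from the changelog. */
  struct payload {
          uint64_t words[5];
  };

  static void aligned_frame(void)
  {
          /* Mirrors __aligned(SMP_CACHE_BYTES) with 128-byte cache lines. */
          struct payload p __attribute__((aligned(128))) = { { 0 } };

          printf("aligned   object at %p, sizeof=%zu\n", (void *)&p, sizeof(p));
  }

  static void plain_frame(void)
  {
          struct payload p = { { 0 } };

          printf("unaligned object at %p, sizeof=%zu\n", (void *)&p, sizeof(p));
  }

  int main(void)
  {
          aligned_frame();
          plain_frame();
          return 0;
  }

Building this with "gcc -fstack-usage" and comparing the generated .su
entries will typically show aligned_frame() reserving a frame several
times the size of the 40-byte object, while plain_frame() stays close
to it, which is exactly the kind of disproportion the revert gets rid
of.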

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Nadav Amit <namit@vmware.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 515ab7c41306 ("x86/mm: Align TLB invalidation info")
Link: http://lkml.kernel.org/r/20190416080335.GM7905@worktop.programming.kicks-ass.net
[ Minor changelog edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/mm/tlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -731,7 +731,7 @@ void flush_tlb_mm_range(struct mm_struct
 {
 	int cpu;
 
-	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
+	struct flush_tlb_info info = {
 		.mm = mm,
 		.stride_shift = stride_shift,
 		.freed_tables = freed_tables,