From d57f727264f1425a94689bafc7e99e502cb135b5 Mon Sep 17 00:00:00 2001
From: Vineet Gupta <vgupta@synopsys.com>
Date: Thu, 13 Nov 2014 15:54:01 +0530
Subject: ARC: add compiler barrier to LLSC based cmpxchg
From: Vineet Gupta <vgupta@synopsys.com>
commit d57f727264f1425a94689bafc7e99e502cb135b5 upstream.
When auditing cmpxchg call sites, Chuck noted that gcc was optimizing
away some of the desired LDs.
| do {
| new = old = *ipi_data_ptr;
| new |= 1U << msg;
| } while (cmpxchg(ipi_data_ptr, old, new) != old);
was generating the code below:
| 8015cef8: ld r2,[r4,0] <-- First LD
| 8015cefc: bset r1,r2,r1
|
| 8015cf00: llock r3,[r4] <-- atomic op
| 8015cf04: brne r3,r2,8015cf10
| 8015cf08: scond r1,[r4]
| 8015cf0c: bnz 8015cf00
|
| 8015cf10: brne r3,r2,8015cf00 <-- Branch doesn't go to orig LD
Since the retry path never redoes the original LD, a failed cmpxchg keeps
using the stale 'old'/'new' values and the loop can spin without making
progress.
Although this was fixed by adding an ACCESS_ONCE at this call site, it
seems safer (for now at least) to also add a compiler barrier to the
LLSC based cmpxchg.
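For reference, the call site fix amounts to forcing the load on every
iteration; a sketch of what it likely looks like (the exact hunk is not
part of this patch):
| do {
| 	new = old = ACCESS_ONCE(*ipi_data_ptr);
| 	new |= 1U << msg;
| } while (cmpxchg(ipi_data_ptr, old, new) != old);
The "memory" clobber added below covers all callers instead: it tells gcc
the asm may read/write memory, so register-cached copies of memory values
can't be reused across the cmpxchg.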
Reported-by: Chuck Jordan <cjordan@synopsys.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
arch/arc/include/asm/cmpxchg.h | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -33,10 +33,11 @@ __cmpxchg(volatile void *ptr, unsigned l
" scond %3, [%1] \n"
" bnz 1b \n"
"2: \n"
- : "=&r"(prev)
- : "r"(ptr), "ir"(expected),
- "r"(new) /* can't be "ir". scond can't take limm for "b" */
- : "cc");
+ : "=&r"(prev) /* Early clobber, to prevent reg reuse */
+ : "r"(ptr), /* Not "m": llock only supports reg direct addr mode */
+ "ir"(expected),
+ "r"(new) /* can't be "ir". scond can't take LIMM for "b" */
+ : "cc", "memory"); /* so that gcc knows memory is being written here */
smp_mb();