From 7c8746a9eb287642deaad0e7c2cdf482dce5e4be Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@arm.com>
Date: Fri, 7 Feb 2014 19:12:32 +0100
Subject: ARM: 7955/1: spinlock: ensure we have a compiler barrier before sev

From: Will Deacon <will.deacon@arm.com>

commit 7c8746a9eb287642deaad0e7c2cdf482dce5e4be upstream.

When unlocking a spinlock, we require the following, strictly ordered
sequence of events:

	<barrier>	/* dmb */
	<unlock>
	<barrier>	/* dsb */
	<sev>

Whilst the code does indeed reflect this in terms of the architecture,
the final <barrier> + <sev> have been contracted into a single inline
asm without a "memory" clobber, therefore the compiler is at liberty to
reorder the unlock to the end of the above sequence. In such a case,
a waiting CPU may be woken up before the lock has been unlocked, leading
to extremely poor performance.

This patch reworks the dsb_sev() function to make use of the dsb()
macro and ensure ordering against the unlock.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm/include/asm/spinlock.h |   15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

--- a/arch/arm/include/asm/spinlock.h
+++ b/arch/arm/include/asm/spinlock.h
@@ -37,18 +37,9 @@

 static inline void dsb_sev(void)
 {
-#if __LINUX_ARM_ARCH__ >= 7
-	__asm__ __volatile__ (
-		"dsb ishst\n"
-		SEV
-	);
-#else
-	__asm__ __volatile__ (
-		"mcr p15, 0, %0, c7, c10, 4\n"
-		SEV
-		: : "r" (0)
-	);
-#endif
+
+	dsb(ishst);
+	__asm__(SEV);
 }

 /*