| From ae120d9edfe96628f03d87634acda0bfa7110632 Mon Sep 17 00:00:00 2001 |
| From: Marc Zyngier <Marc.Zyngier@arm.com> |
| Date: Fri, 21 Jun 2013 12:06:19 +0100 |
| Subject: ARM: 7767/1: let the ASID allocator handle suspended animation |
| |
| From: Marc Zyngier <Marc.Zyngier@arm.com> |
| |
| commit ae120d9edfe96628f03d87634acda0bfa7110632 upstream. |
| |
| When a CPU is running a process, the ASID for that process is |
| held in a per-CPU variable (the "active ASIDs" array). When |
| the ASID allocator handles a rollover, it copies the active |
| ASIDs into a "reserved ASIDs" array to ensure that a process |
| currently running on another CPU will continue to run unaffected. |
| The active array is zeroed to indicate that a rollover occurred. |
| |
| Because of this mechanism, a reserved ASID is only remembered for |
| a single rollover. A subsequent rollover will completely refill |
| the reserved ASIDs array. |
| |
| In a severely oversubscribed environment where a CPU can be |
| prevented from running for extended periods of time (think virtual |
| machines), the above has a horrible side effect: |
| |
| [P{a} denotes process P running with ASID a] |
| |
| CPU-0 CPU-1 |
| |
| A{x} [active = <x 0>] |
| |
| [suspended] runs B{y} [active = <x y>] |
| |
| [rollover: |
| active = <0 0> |
| reserved = <x y>] |
| |
| runs B{y} [active = <0 y> |
| reserved = <x y>] |
| |
| [rollover: |
| active = <0 0> |
| reserved = <0 y>] |
| |
| runs C{x} [active = <0 x>] |
| |
| [resumes] |
| |
| runs A{x} |
| |
| At that stage, both A and C have the same ASID, with deadly |
| consequences. |
| |
| The fix is to preserve reserved ASIDs across rollovers if |
| the CPU doesn't have an active ASID when the rollover occurs. |
| |
| Acked-by: Will Deacon <will.deacon@arm.com> |
| Acked-by: Catalin Marinas <catalin.marinas@arm.com> |
| Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> |
| Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> |
| Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
| |
| --- |
| arch/arm/mm/context.c | 9 +++++++++ |
| 1 file changed, 9 insertions(+) |
| |
| --- a/arch/arm/mm/context.c |
| +++ b/arch/arm/mm/context.c |
| @@ -128,6 +128,15 @@ static void flush_context(unsigned int c |
| asid = 0; |
| } else { |
| asid = atomic64_xchg(&per_cpu(active_asids, i), 0); |
| + /* |
| + * If this CPU has already been through a |
| + * rollover, but hasn't run another task in |
| + * the meantime, we must preserve its reserved |
| + * ASID, as this is the only trace we have of |
| + * the process it is still running. |
| + */ |
| + if (asid == 0) |
| + asid = per_cpu(reserved_asids, i); |
| __set_bit(ASID_TO_IDX(asid), asid_map); |
| } |
| per_cpu(reserved_asids, i) = asid; |