| From ce9ec37bddb633404a0c23e1acb181a264e7f7f2 Mon Sep 17 00:00:00 2001 |
| From: Will Deacon <will.deacon@arm.com> |
| Date: Tue, 28 Oct 2014 13:16:28 -0700 |
| Subject: zap_pte_range: update addr when forcing flush after TLB batching failure |
| |
| From: Will Deacon <will.deacon@arm.com> |
| |
| commit ce9ec37bddb633404a0c23e1acb181a264e7f7f2 upstream. |
| |
| When unmapping a range of pages in zap_pte_range, the page being |
| unmapped is added to an mmu_gather_batch structure for asynchronous |
| freeing. If we run out of space in the batch structure before the range |
| has been completely unmapped, then we break out of the loop, force a |
| TLB flush and free the pages that we have batched so far. If there are |
| further pages to unmap, then we resume the loop where we left off. |
| |
| Unfortunately, we forget to update addr when we break out of the loop, |
| which causes us to truncate the range being invalidated as the end |
| address is exclusive. When we re-enter the loop at the same address, the |
| page has already been freed and the pte_present test will fail, meaning |
| that we do not reconsider the address for invalidation. |
| |
| This patch fixes the problem by incrementing addr by PAGE_SIZE |
| before breaking out of the loop on batch failure. |
| |
| Signed-off-by: Will Deacon <will.deacon@arm.com> |
| Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> |
| Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
| |
| --- |
| mm/memory.c | 1 + |
| 1 file changed, 1 insertion(+) |
| |
| --- a/mm/memory.c |
| +++ b/mm/memory.c |
| @@ -1147,6 +1147,7 @@ again: |
| print_bad_pte(vma, addr, ptent, page); |
| if (unlikely(!__tlb_remove_page(tlb, page))) { |
| force_flush = 1; |
| + addr += PAGE_SIZE; |
| break; |
| } |
| continue; |