mm: Disable lock elision for zap pte range

zap_pte_range() tends to exceed the transactional memory capacity
while walking the page tables under the PTE lock.

Since the lock is shared with other VM paths, this is a difficult
situation for the automatic adaptation to handle, because the
adaptation relies on per-lock state.

So disable elision for it explicitly.
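As a rough illustration of the intended semantics, a disable/reenable
pair like this is typically a per-thread nesting counter: elision is
only attempted when the counter is zero, so nested disables compose
safely. This is a hypothetical userspace model, not the kernel
implementation; the names mirror the patch but the mechanism is an
assumption.

```c
#include <stdbool.h>

/* Illustrative model of disable_txn()/reenable_txn(): a per-thread
 * nesting counter. Elision is attempted only while the counter is
 * zero; calls may nest. Mechanism is assumed, not taken from the
 * kernel source. */
static __thread int txn_disable_count;

static void disable_txn(void)
{
	txn_disable_count++;
}

static void reenable_txn(void)
{
	txn_disable_count--;
}

/* A lock-acquisition path would consult this before trying elision. */
static bool txn_elision_allowed(void)
{
	return txn_disable_count == 0;
}
```

With this shape, the pte_offset_map_lock()/pte_unmap_unlock() critical
section in the hunk below simply brackets itself with the pair, and any
elision attempt inside it falls back to taking the lock normally.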

Signed-off-by: Andi Kleen <ak@linux.intel.com>
diff --git a/mm/memory.c b/mm/memory.c
index af84bc0..c4c0807 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1101,6 +1101,8 @@
 
 again:
 	init_rss_vec(rss);
+
+	disable_txn();	/* Likely to exceed capacity */
 	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	pte = start_pte;
 	arch_enter_lazy_mmu_mode();
@@ -1196,6 +1198,7 @@
 	add_mm_rss_vec(mm, rss);
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(start_pte, ptl);
+	reenable_txn();
 
 	/*
 	 * mmu_gather ran out of room to batch pages, we break out of