From: Christian Zander <christian@nervanasys.com>
Date: Wed, 10 Jun 2015 09:41:45 -0700
Subject: iommu/vt-d: fix range computation when making room for large pages

commit ba2374fd2bf379f933773811fdb06cb6a5445f41 upstream.

In preparation for the installation of a large page, any small page
tables that may still exist in the target IOV address range are
removed.  However, if a scatter/gather list entry is large enough to
fit more than one large page, the address space for any subsequent
large pages is not cleared of conflicting small page tables.

This can cause legitimate mapping requests to fail with errors of the
form below, potentially followed by a series of IOMMU faults:

ERROR: DMA PTE for vPFN 0xfde00 already set (to 7f83a4003 not 7e9e00083)

In this example, a 4MiB scatter/gather list entry resulted in the
successful installation of a large page @ vPFN 0xfdc00, followed by
a failed attempt to install another large page @ vPFN 0xfde00, due to
the presence of a pointer to a small page table @ 0x7f83a4000.

To address this problem, compute the number of large pages that fit
into a given scatter/gather list entry, and use it to derive the
last vPFN covered by the large page(s).

Signed-off-by: Christian Zander <christian@nervanasys.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
[bwh: Backported to 3.2:
 - Add the lvl_pages variable, added by an earlier commit upstream
 - Also change arguments to dma_pte_clear_range(), which is called by
   dma_pte_free_pagetable() upstream]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Zefan Li <lizefan@huawei.com>
---
 drivers/iommu/intel-iommu.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1827,13 +1827,20 @@ static int __domain_mapping(struct dmar_
 			return -ENOMEM;
 		/* It is large page*/
 		if (largepage_lvl > 1) {
+			unsigned long nr_superpages, end_pfn, lvl_pages;
+
 			pteval |= DMA_PTE_LARGE_PAGE;
-			/* Ensure that old small page tables are removed to make room
-			   for superpage, if they exist. */
-			dma_pte_clear_range(domain, iov_pfn,
-					    iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
-			dma_pte_free_pagetable(domain, iov_pfn,
-					       iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
+			lvl_pages = lvl_to_nr_pages(largepage_lvl);
+
+			nr_superpages = sg_res / lvl_pages;
+			end_pfn = iov_pfn + nr_superpages * lvl_pages - 1;
+
+			/*
+			 * Ensure that old small page tables are
+			 * removed to make room for superpage(s).
+			 */
+			dma_pte_clear_range(domain, iov_pfn, end_pfn);
+			dma_pte_free_pagetable(domain, iov_pfn, end_pfn);
 		} else {
 			pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
 		}