mm/rmap.c: fix pgoff calculation to handle hugepage correctly

I triggered VM_BUG_ON() in vma_address() when I tried to migrate an
anonymous hugepage with mbind() on kernel v3.16-rc3.  This happens
because the pgoff calculation in rmap_walk_anon() fails to take
compound_order() into account, yielding an incorrect value.

This patch introduces page_to_pgoff(), which gets the page's offset in
PAGE_CACHE_SIZE units.

Kirill pointed out that the page cache tree should natively handle
hugepages, and that to make hugetlbfs fit it, page->index of a
hugetlbfs page should be in PAGE_CACHE_SIZE units.  That is beyond the
scope of this patch, but page_to_pgoff() concentrates the point to be
fixed into a single function.

Signed-off-by: Naoya Horiguchi <>
Acked-by: Kirill A. Shutemov <>
Cc: Joonsoo Kim <>
Cc: Hugh Dickins <>
Cc: Rik van Riel <>
Cc: Hillf Danton <>
Cc: Naoya Horiguchi <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
3 files changed