From: Hugh Dickins <hughd@google.com>
Subject: tmpfs: fix data loss from failed fallocate
Date: Sun, 4 Dec 2022 16:51:50 -0800 (PST)

Fix tmpfs data loss when the fallocate system call is interrupted by a
signal, or fails for some other reason. The partial folio handling in
shmem_undo_range() forgot to consider this unfalloc case, and was liable
to erase or truncate out data which had been committed earlier.

It turns out that none of the partial folio handling there is appropriate
for the unfalloc case, which just wants to proceed to removal of whole
folios: find_get_entries() supplies those, even when they are only
partially covered by the range.
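
For illustration, a minimal user-space sketch of the failing sequence
follows. It is not the reproducer from the report: the mount point,
mount options, file name and sizes are assumptions, and it provokes the
failure with ENOSPC rather than with a signal. It writes data to a file
on a huge=always tmpfs, issues a fallocate which overlaps that data but
is too large to succeed, then checks whether the earlier data survived
the undo; on an affected kernel it may read back as zeroes.

/*
 * Hypothetical sketch, not the reproducer from the report.
 * Build:   gcc -O2 -o falloc-undo falloc-undo.c
 * Assumes: mount -t tmpfs -o size=8m,huge=always none /mnt/tmp
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MB (1024 * 1024)

int main(void)
{
	static char buf[2 * MB];
	int fd = open("/mnt/tmp/file", O_RDWR | O_CREAT | O_TRUNC, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Commit 2MB of data first, ideally backed by one large folio. */
	memset(buf, 'x', sizeof(buf));
	if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
		perror("pwrite");
		return 1;
	}

	/*
	 * fallocate a range which starts inside the committed data but is
	 * larger than the tmpfs can hold, so that it fails with ENOSPC and
	 * the kernel undoes whatever it had allocated.
	 */
	if (fallocate(fd, 0, 1 * MB, 32 * MB) == 0)
		fprintf(stderr, "fallocate unexpectedly succeeded\n");
	else
		perror("fallocate (failure expected)");

	/* Re-read the committed data which lay inside the failed range. */
	memset(buf, 0, sizeof(buf));
	if (pread(fd, buf, MB, 1 * MB) != MB) {
		perror("pread");
		return 1;
	}
	printf("data at 1MB is %s\n", buf[0] == 'x' ? "intact" : "lost");

	close(fd);
	return 0;
}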

Original patch by Rui Wang.

Link: https://lore.kernel.org/linux-mm/33b85d82.7764.1842e9ab207.Coremail.chenguoqic@163.com/
Link: https://lkml.kernel.org/r/a5dac112-cf4b-7af-a33-f386e347fd38@google.com
Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Guoqi Chen <chenguoqic@163.com>
Link: https://lore.kernel.org/all/20221101032248.819360-1-kernel@hev.cc/
Cc: Rui Wang <kernel@hev.cc>
Cc: Huacai Chen <chenhuacai@loongson.cn>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Cc: <stable@vger.kernel.org> [5.17+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/shmem.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/mm/shmem.c~tmpfs-fix-data-loss-from-failed-fallocate
+++ a/mm/shmem.c
@@ -948,6 +948,15 @@ static void shmem_undo_range(struct inod
 		index++;
 	}
 
+	/*
+	 * When undoing a failed fallocate, we want none of the partial folio
+	 * zeroing and splitting below, but shall want to truncate the whole
+	 * folio when !uptodate indicates that it was added by this fallocate,
+	 * even when [lstart, lend] covers only a part of the folio.
+	 */
+	if (unfalloc)
+		goto whole_folios;
+
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
 	folio = shmem_get_partial_folio(inode, lstart >> PAGE_SHIFT);
 	if (folio) {
@@ -973,6 +982,8 @@ static void shmem_undo_range(struct inod
 		folio_put(folio);
 	}
 
+whole_folios:
+
 	index = start;
 	while (index < end) {
 		cond_resched();
_