erofs: fix managed cache race for unaligned extents
After unaligned compressed extents were introduced, the following race
could occur:
[Thread 1]                                [Thread 2]
(z_erofs_fill_bio_vec)
 <handle a Z_EROFS_PREALLOCATED_FOLIO folio>
  ...
  filemap_add_folio()    (1)
                                          (z_erofs_bind_cache)
                                           <the same folio is found>
                                            ...
  folio_attach_private() (2)
                                            filemap_add_folio() again (3)
After (1) has executed but before (2), another thread can find the same
managed folio in z_erofs_bind_cache() for a different pcluster and call
filemap_add_folio() again, since folio->private is still
Z_EROFS_PREALLOCATED_FOLIO.
Fix this by explicitly clearing folio->private before making the folio
visible in the managed cache, so that another pcluster can simply wait
on the locked managed folio as we do for other shared cases [1].
This only impacts unaligned data compression (`-E48bit` with zstd,
for example).
[1] Commit 9e2f9d34dd12 ("erofs: handle overlapped pclusters out of
crafted images properly") was originally introduced to handle crafted
overlapped extents, but it addresses unaligned extents as well.
Fixes: 7361d1e3763b ("erofs: support unaligned encoded data")
Reported-by: Arseniy Krasnov <avkrasnov@salutedevices.com>
Closes: https://lore.kernel.org/r/4a2f3801-fac1-42fe-ae75-da315822e088@salutedevices.com
Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>