Test setup

Processor: x86_64, Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz, 2 vCPUs

Storage: Cloud disk, 3000 IOPS upper limit

OS Kernel: Linux 6.2

Software: LZ4 1.9.3, erofs-utils 1.6, squashfs-tools 4.5.1

Disclaimer: Test results may vary with different hardware and/or data patterns, so the following numbers are ONLY for reference.

Benchmark on multiple files

The rootfs of the Debian docker image is used as the dataset, which contains 7000+ files and directories. Note that the dataset may be updated over time; the SHA1 of the snapshot “rootfs.tar.xz” used here is “aee9b01a530078dbef8f08521bfcabe65b244955”.

Image size

| Size | Filesystem | Cluster size | Build options |
| --- | --- | --- | --- |
| 124669952 | erofs | uncompressed | -T0 [^1] |
| 124522496 | squashfs | uncompressed | -noD -noI -noX -noF -no-xattrs -all-time 0 -no-duplicates [^2] |
| 73601024 | squashfs | 4096 | -b 4096 -comp lz4 -Xhc -no-xattrs -all-time 0 |
| 73121792 | erofs | 4096 | -zlz4hc,12 [^3] -C4096 -Efragments -T0 |
| 67162112 | squashfs | 16384 | -b 16384 -comp lz4 -Xhc -no-xattrs -all-time 0 |
| 65478656 | erofs | 16384 | -zlz4hc,12 -C16384 -Efragments -T0 |
| 61456384 | squashfs | 65536 | -b 65536 -comp lz4 -Xhc -no-xattrs -all-time 0 |
| 59834368 | erofs | 65536 | -zlz4hc,12 -C65536 -Efragments -T0 |
| 59150336 | squashfs | 131072 | -b 131072 -comp lz4 -Xhc -no-xattrs -all-time 0 |
| 58515456 | erofs | 131072 | -zlz4hc,12 -C131072 -Efragments -T0 |

[^1]: All timestamps are forcibly reset to match squashfs on-disk basic inodes for now.
[^2]: erofs-utils currently doesn't actively de-duplicate identical files, although the on-disk format supports it.
[^3]: squashfs uses level 12 for LZ4HC by default.
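The options above are passed to mkfs.erofs and mksquashfs respectively. At the largest cluster size the two images end up very close in size; a quick check using the 131072-byte rows copied from the table:

```shell
# image sizes in bytes, taken from the 131072-cluster rows of the table above
erofs=58515456
squashfs=59150336
awk -v e="$erofs" -v s="$squashfs" \
  'BEGIN { printf "erofs image is %.2f%% smaller\n", (s - e) / s * 100 }'
```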

Sequential data access

hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "tar cf - . | cat > /dev/null"
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| squashfs | 4096 | 10.257 s ± 0.031 s |
| erofs | uncompressed | 1.111 s ± 0.022 s |
| squashfs | uncompressed | 1.034 s ± 0.020 s |
| squashfs | 131072 | 941.3 ms ± 7.5 ms |
| erofs | 4096 | 848.1 ms ± 17.8 ms |
| erofs | 131072 | 724.2 ms ± 11.0 ms |
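hyperfine's -p (--prepare) option runs the given command before each timed run, so every measurement here starts with a cold page cache. The loop it performs is roughly the following sketch (dropping caches needs root, so a harmless sync stands in, and demo.txt is a throwaway file):

```shell
# sketch of hyperfine's measurement loop: prepare, then time the command
printf 'hello' > demo.txt
prepare='sync'                       # stand-in for: echo 3 > /proc/sys/vm/drop_caches
cmd='tar cf - demo.txt | cat > /dev/null'
for run in 1 2 3; do
  sh -c "$prepare"
  start=$(date +%s%N)
  sh -c "$cmd"
  end=$(date +%s%N)
  echo "run $run: $(( (end - start) / 1000000 )) ms"
done
```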

Sequential metadata access

hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "tar cf /dev/null ."
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| erofs | uncompressed | 419.6 ms ± 8.2 ms |
| squashfs | 4096 | 142.5 ms ± 5.4 ms |
| squashfs | uncompressed | 129.2 ms ± 3.9 ms |
| squashfs | 131072 | 125.4 ms ± 4.0 ms |
| erofs | 4096 | 75.5 ms ± 3.5 ms |
| erofs | 131072 | 65.8 ms ± 3.6 ms |
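The difference between the two tar invocations matters: GNU tar special-cases /dev/null as the output archive and skips reading file contents entirely, which is why the data benchmark pipes through cat while the metadata benchmark writes straight to /dev/null. A small demonstration (demo/ is a throwaway directory):

```shell
# writing the archive to /dev/null makes GNU tar skip file data entirely
mkdir -p demo && dd if=/dev/zero of=demo/big bs=1M count=8 status=none
bytes=$(tar cf - demo | wc -c)       # data path: the whole archive is produced
echo "archive: $bytes bytes"
tar cf /dev/null demo && echo "metadata-only pass finished"
```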

Note that erofs-utils currently doesn't perform well in this case because of how metadata is arranged at build time; this will be fixed in later versions.

Small random data access (~7%)

find mnt -type f -printf "%p\n" | sort -R | head -n 500 > list.txt
hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "cat list.txt | xargs cat > /dev/null"
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| squashfs | 4096 | 1.386 s ± 0.032 s |
| squashfs | uncompressed | 1.083 s ± 0.044 s |
| squashfs | 131072 | 1.067 s ± 0.046 s |
| erofs | 4096 | 249.6 ms ± 6.5 ms |
| erofs | uncompressed | 237.8 ms ± 6.3 ms |
| erofs | 131072 | 189.6 ms ± 7.8 ms |
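500 paths out of the 7000+ entries is roughly the ~7% the heading refers to. Note that sort -R produces a different subset on every run; for reproducible subsets, GNU shuf with a fixed random source can replace it. A sketch over a throwaway tree (mnt/d and the seed file are assumptions for the demo):

```shell
# deterministic sampling: the same seed yields the same subset on every run
mkdir -p mnt/d && touch mnt/d/a mnt/d/b mnt/d/c mnt/d/e mnt/d/f
yes seed | head -c 1024 > seed.bin   # fixed bytes as the randomness source
find mnt -type f -printf "%p\n" | sort | shuf -n 2 --random-source=seed.bin > list.txt
wc -l < list.txt
```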

Small random metadata access (~7%)

find mnt -type f -printf "%p\n" | sort -R | head -n 500 > list.txt
hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "cat list.txt | xargs stat"
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| squashfs | 4096 | 817.0 ms ± 34.5 ms |
| squashfs | 131072 | 801.0 ms ± 40.1 ms |
| squashfs | uncompressed | 741.3 ms ± 18.2 ms |
| erofs | uncompressed | 197.8 ms ± 4.1 ms |
| erofs | 4096 | 63.1 ms ± 2.0 ms |
| erofs | 131072 | 60.7 ms ± 3.6 ms |

Full random data access (~100%)

find mnt -type f -printf "%p\n" | sort -R > list.txt
hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "cat list.txt | xargs cat > /dev/null"
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| squashfs | 4096 | 20.668 s ± 0.040 s |
| squashfs | uncompressed | 12.543 s ± 0.041 s |
| squashfs | 131072 | 11.753 s ± 0.412 s |
| erofs | uncompressed | 1.493 s ± 0.023 s |
| erofs | 4096 | 1.223 s ± 0.013 s |
| erofs | 131072 | 598.2 ms ± 6.6 ms |

Full random metadata access (~100%)

find mnt -type f -printf "%p\n" | sort -R > list.txt
hyperfine -p "echo 3 > /proc/sys/vm/drop_caches; sleep 1" "cat list.txt | xargs stat"
| Filesystem | Cluster size | Time |
| --- | --- | --- |
| squashfs | 131072 | 9.212 s ± 0.467 s |
| squashfs | 4096 | 8.905 s ± 0.147 s |
| squashfs | uncompressed | 7.961 s ± 0.045 s |
| erofs | 4096 | 661.2 ms ± 14.9 ms |
| erofs | uncompressed | 125.8 ms ± 6.6 ms |
| erofs | 131072 | 119.6 ms ± 5.5 ms |

FIO benchmark on a single large file

silesia.tar (203 MiB) is used for benchmarking; it can be generated by unzipping silesia.zip and tarring the extracted files.

Image size

| Size | Filesystem | Cluster size | Build options |
| --- | --- | --- | --- |
| 114339840 | squashfs | 4096 | -b 4096 -comp lz4 -Xhc -no-xattrs |
| 104972288 | erofs | 4096 | -zlz4hc,12 -C4096 |
| 98033664 | squashfs | 16384 | -b 16384 -comp lz4 -Xhc -no-xattrs |
| 89571328 | erofs | 16384 | -zlz4hc,12 -C16384 |
| 85143552 | squashfs | 65536 | -b 65536 -comp lz4 -Xhc -no-xattrs |
| 81211392 | squashfs | 131072 | -b 131072 -comp lz4 -Xhc -no-xattrs |
| 80519168 | erofs | 65536 | -zlz4hc,12 -C65536 |
| 78888960 | erofs | 131072 | -zlz4hc,12 -C131072 |

Sequential I/Os

fio -filename=silesia.tar -bs=4k -rw=read -name=job1
| Filesystem | Cluster size | Bandwidth |
| --- | --- | --- |
| erofs | 65536 | 624 MiB/s |
| erofs | 16384 | 600 MiB/s |
| erofs | 4096 | 569 MiB/s |
| erofs | 131072 | 535 MiB/s |
| squashfs | 131072 | 236 MiB/s |
| squashfs | 65536 | 157 MiB/s |
| squashfs | 16384 | 55.2 MiB/s |
| squashfs | 4096 | 12.5 MiB/s |
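The spread in this table is large. Taking the fastest and slowest rows (numbers copied from the table above):

```shell
# erofs at 65536 clusters (624 MiB/s) vs squashfs at 4096 clusters (12.5 MiB/s)
awk -v fast=624 -v slow=12.5 'BEGIN { printf "spread: %.1fx\n", fast / slow }'
```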

Full Random I/Os

fio -filename=silesia.tar -bs=4k -rw=randread -name=job1
| Filesystem | Cluster size | Bandwidth |
| --- | --- | --- |
| erofs | 131072 | 242 MiB/s |
| squashfs | 131072 | 232 MiB/s |
| erofs | 65536 | 198 MiB/s |
| squashfs | 65536 | 150 MiB/s |
| erofs | 16384 | 96.4 MiB/s |
| squashfs | 16384 | 49.5 MiB/s |
| erofs | 4096 | 33.7 MiB/s |
| squashfs | 4096 | 6817 KiB/s |

Small Random I/Os (~5%)

fio -filename=silesia.tar -bs=4k -rw=randread --io_size=10m -name=job1
| Filesystem | Cluster size | Bandwidth |
| --- | --- | --- |
| erofs | 131072 | 19.2 MiB/s |
| erofs | 65536 | 16.9 MiB/s |
| squashfs | 131072 | 15.1 MiB/s |
| erofs | 16384 | 14.7 MiB/s |
| squashfs | 65536 | 13.8 MiB/s |
| erofs | 4096 | 13.0 MiB/s |
| squashfs | 16384 | 11.7 MiB/s |
| squashfs | 4096 | 4376 KiB/s |
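One rough way to read the random-read tables: a 4 KiB read that misses the cache must decompress the whole cluster containing it, so worst-case read amplification grows linearly with cluster size. This is a simplification — decompressed data is cached, which is why large clusters can still win once most of the file has been touched — but the arithmetic gives a feel for the trade-off:

```shell
# worst-case data decompressed per cold 4 KiB read, by cluster size in bytes
for c in 4096 16384 65536 131072; do
  awk -v c="$c" 'BEGIN { printf "cluster %6d B: %2dx\n", c, c / 4096 }'
done
```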