| ================================================================================ | 
 | WHAT IS Flash-Friendly File System (F2FS)? | 
 | ================================================================================ | 
 |  | 
NAND flash memory-based storage devices, such as SSDs, eMMC, and SD cards, have
been equipped on a variety of systems ranging from mobile devices to servers.
Since they are known to have characteristics different from those of
conventional rotating disks, a file system, as the upper layer to the storage
device, should account for these differences from scratch at the design level.
 |  | 
F2FS is a file system designed to exploit NAND flash memory-based storage
devices, and it is based on the Log-structured File System (LFS). The design
focuses on addressing the fundamental issues of LFS: the snowball effect of the
wandering tree and the high cleaning overhead.
 |  | 
Since a NAND flash memory-based storage device shows different characteristics
depending on its internal geometry or flash memory management scheme, namely
the FTL, F2FS and its tools support various parameters not only for configuring
the on-disk layout, but also for selecting allocation and cleaning algorithms.
 |  | 
 | The following git tree provides the file system formatting tool (mkfs.f2fs), | 
 | a consistency checking tool (fsck.f2fs), and a debugging tool (dump.f2fs). | 
 | >> git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git | 
 |  | 
 | For reporting bugs and sending patches, please use the following mailing list: | 
 | >> linux-f2fs-devel@lists.sourceforge.net | 
 |  | 
 | ================================================================================ | 
 | BACKGROUND AND DESIGN ISSUES | 
 | ================================================================================ | 
 |  | 
 | Log-structured File System (LFS) | 
 | -------------------------------- | 
 | "A log-structured file system writes all modifications to disk sequentially in | 
 | a log-like structure, thereby speeding up  both file writing and crash recovery. | 
 | The log is the only structure on disk; it contains indexing information so that | 
 | files can be read back from the log efficiently. In order to maintain large free | 
 | areas on disk for fast writing, we divide  the log into segments and use a | 
 | segment cleaner to compress the live information from heavily fragmented | 
 | segments." from Rosenblum, M. and Ousterhout, J. K., 1992, "The design and | 
 | implementation of a log-structured file system", ACM Trans. Computer Systems | 
 | 10, 1, 26–52. | 
 |  | 
 | Wandering Tree Problem | 
 | ---------------------- | 
In LFS, when file data is updated and written to the end of the log, its direct
pointer block is updated due to the changed location. Then the indirect pointer
block is also updated due to the direct pointer block update. In this manner,
the upper index structures such as the inode, inode map, and checkpoint block
are also updated recursively. This is called the wandering tree problem [1],
and in order to enhance performance, the update propagation should be
eliminated or relaxed as much as possible.
 |  | 
 | [1] Bityutskiy, A. 2005. JFFS3 design issues. http://www.linux-mtd.infradead.org/ | 
 |  | 
 | Cleaning Overhead | 
 | ----------------- | 
Since LFS is based on out-of-place writes, it produces many obsolete blocks
scattered across the whole storage. In order to serve new empty log space, it
needs to reclaim these obsolete blocks seamlessly from the user's point of
view. This job is called the cleaning process.
 |  | 
The process consists of four operations as follows.
1. A victim segment is selected by referencing the segment usage table.
2. It loads the parent index structures of all the data in the victim,
   identified by segment summary blocks.
3. It checks the cross-reference between the data and its parent index
   structure.
4. It moves valid data selectively.
 |  | 
This cleaning job may cause unexpectedly long delays, so the most important
goal is to hide those latencies from users. It should also reduce the amount of
valid data to be moved and move it quickly. A simplified sketch of this loop
follows.
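
The loop below merely restates those four steps in C for illustration; the
helper functions, types, and the block count are hypothetical stand-ins, not
f2fs code.

  /* Conceptual sketch only; data structures are toy stand-ins, not f2fs's. */
  #include <stdbool.h>

  #define BLOCKS_PER_SEG	512	/* assumed for the sketch */

  struct victim_block {
  	unsigned int parent_nid;	/* owner node, from the segment summary */
  	unsigned int offset_in_parent;
  };

  /* Hypothetical helpers the sketch assumes. */
  unsigned int select_victim_segment(void);			       /* step 1 */
  struct victim_block read_summary(unsigned int seg, unsigned int off); /* step 2 */
  bool parent_still_points_here(struct victim_block b,
  				unsigned int seg, unsigned int off);   /* step 3 */
  void move_to_end_of_log(unsigned int seg, unsigned int off);	       /* step 4 */

  void clean_one_segment(void)
  {
  	unsigned int victim = select_victim_segment();
  	unsigned int off;

  	for (off = 0; off < BLOCKS_PER_SEG; off++) {
  		struct victim_block b = read_summary(victim, off);

  		if (!parent_still_points_here(b, victim, off))
  			continue;	/* obsolete block; nothing to move */
  		move_to_end_of_log(victim, off);
  	}
  }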
 |  | 
 | ================================================================================ | 
 | KEY FEATURES | 
 | ================================================================================ | 
 |  | 
 | Flash Awareness | 
 | --------------- | 
- Enlarge the random write area for better performance, but provide high
  spatial locality
- Align FS data structures with the operational units in the FTL on a
  best-effort basis
 |  | 
 | Wandering Tree Problem | 
 | ---------------------- | 
- Use the term "node" to represent inodes as well as various pointer blocks
- Introduce the Node Address Table (NAT), containing the locations of all the
  "node" blocks; this cuts off the update propagation (see the sketch below).
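
As a rough sketch of why this works (toy data structures, not the kernel's): a
parent block stores only a node ID, and the NAT maps that ID to the node's
current on-disk address, so relocating a node touches a single NAT entry
instead of every ancestor in the index tree.

  /* Conceptual sketch: parents refer to nodes by node ID; the NAT maps
   * node IDs to on-disk block addresses. All names are illustrative. */
  #define MAX_NID	4096		/* arbitrary size for the sketch */

  typedef unsigned int nid_t;
  typedef unsigned int block_t;

  struct nat_sketch {
  	block_t addr[MAX_NID];		/* on-disk address of each node */
  };

  static block_t node_address(const struct nat_sketch *nat, nid_t nid)
  {
  	return nat->addr[nid];		/* one lookup, no tree traversal */
  }

  /* Relocating a node only rewrites its NAT entry; the inode and indirect
   * blocks that reference it by nid are untouched, so no update propagates. */
  static void relocate_node(struct nat_sketch *nat, nid_t nid, block_t new_addr)
  {
  	nat->addr[nid] = new_addr;
  }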
 |  | 
 | Cleaning Overhead | 
 | ----------------- | 
 | - Support a background cleaning process | 
 | - Support greedy and cost-benefit algorithms for victim selection policies | 
 | - Support multi-head logs for static/dynamic hot and cold data separation | 
 | - Introduce adaptive logging for efficient block allocation | 
 |  | 
 | ================================================================================ | 
 | MOUNT OPTIONS | 
 | ================================================================================ | 
 |  | 
background_gc=%s       Turn on/off cleaning operations, namely garbage
                       collection, which are triggered in the background when
                       the I/O subsystem is idle. background_gc=on turns
                       garbage collection on, background_gc=off turns it off,
                       and background_gc=sync turns on synchronous garbage
                       collection running in the background. The default value
                       is on, so garbage collection is enabled by default.
 | disable_roll_forward   Disable the roll-forward recovery routine | 
 | norecovery             Disable the roll-forward recovery routine, mounted read- | 
 |                        only (i.e., -o ro,disable_roll_forward) | 
 | discard                Issue discard/TRIM commands when a segment is cleaned. | 
no_heap                Disable heap-style segment allocation, which finds free
                       segments for data from the beginning of the main area,
                       while those for nodes are found from the end of the
                       main area.
 | nouser_xattr           Disable Extended User Attributes. Note: xattr is enabled | 
 |                        by default if CONFIG_F2FS_FS_XATTR is selected. | 
 | noacl                  Disable POSIX Access Control List. Note: acl is enabled | 
 |                        by default if CONFIG_F2FS_FS_POSIX_ACL is selected. | 
 | active_logs=%u         Support configuring the number of active logs. In the | 
 |                        current design, f2fs supports only 2, 4, and 6 logs. | 
 |                        Default number is 6. | 
disable_ext_identify   Disable the extension list configured by mkfs, so f2fs
                       is not aware of cold files such as media files.
 | inline_xattr           Enable the inline xattrs feature. | 
inline_data            Enable the inline data feature: newly created small
                       (<~3.4KB) files can be written into the inode block.
inline_dentry          Enable the inline dir feature: data of newly created
                       directory entries can be written into the inode block.
                       The space of the inode block which is used to store
                       inline dentries is limited to ~3.4KB.
flush_merge            Merge concurrent cache_flush commands as much as
                       possible to eliminate redundant command issues. If the
                       underlying device handles the cache_flush command
                       relatively slowly, it is recommended to enable this
                       option.
nobarrier              This option can be used if the underlying storage
                       guarantees that its cached data will be written to the
                       non-volatile area. If this option is set, no cache_flush
                       commands are issued, but f2fs still guarantees the write
                       ordering of all the data writes.
fastboot               This option is used when a system wants to reduce mount
                       time as much as possible, even though normal performance
                       can be sacrificed.
extent_cache           Enable an rb-tree based extent cache, which can cache as
                       many extents as possible, each mapping a range of
                       contiguous logical addresses to contiguous physical
                       addresses per inode, increasing the cache hit ratio. Set
                       by default.
noextent_cache         Disable the rb-tree based extent cache explicitly; see
                       the extent_cache mount option above.
noinline_data          Disable the inline data feature; the inline data feature
                       is enabled by default.
data_flush             Enable data flushing before checkpoint in order to
                       persist data of regular files and symlinks.
 |  | 
 | ================================================================================ | 
 | DEBUGFS ENTRIES | 
 | ================================================================================ | 
 |  | 
/sys/kernel/debug/f2fs/ contains information about all the partitions mounted
as f2fs. Each file shows information covering every mounted f2fs file system.
 |  | 
 | /sys/kernel/debug/f2fs/status includes: | 
 |  - major file system information managed by f2fs currently | 
 |  - average SIT information about whole segments | 
 |  - current memory footprint consumed by f2fs. | 
 |  | 
 | ================================================================================ | 
 | SYSFS ENTRIES | 
 | ================================================================================ | 
 |  | 
Information about mounted f2fs file systems can be found in /sys/fs/f2fs.
Each mounted filesystem will have a directory in /sys/fs/f2fs based on its
device name (i.e., /sys/fs/f2fs/sda). The files in each per-device directory
are shown in the table below.
 |  | 
 | Files in /sys/fs/f2fs/<devname> | 
 | (see also Documentation/ABI/testing/sysfs-fs-f2fs) | 
 | .............................................................................. | 
 |  File                         Content | 
 |  | 
 |  gc_max_sleep_time            This tuning parameter controls the maximum sleep | 
 |                               time for the garbage collection thread. Time is | 
 |                               in milliseconds. | 
 |  | 
 |  gc_min_sleep_time            This tuning parameter controls the minimum sleep | 
 |                               time for the garbage collection thread. Time is | 
 |                               in milliseconds. | 
 |  | 
 |  gc_no_gc_sleep_time          This tuning parameter controls the default sleep | 
 |                               time for the garbage collection thread. Time is | 
 |                               in milliseconds. | 
 |  | 
 |  gc_idle                      This parameter controls the selection of victim | 
 |                               policy for garbage collection. Setting gc_idle = 0 | 
 |                               (default) will disable this option. Setting | 
 |                               gc_idle = 1 will select the Cost Benefit approach | 
 |                               & setting gc_idle = 2 will select the greedy approach. | 
 |  | 
 reclaim_segments             This parameter controls the number of prefree
                              segments to be reclaimed. If the number of
                              prefree segments is larger than the number of
                              segments corresponding to this percentage of the
                              total volume size, f2fs tries to conduct a
                              checkpoint to reclaim the prefree segments as
                              free segments. By default, 5% of the total
                              number of segments.
 |  | 
 max_small_discards           This parameter controls the number of discard
                              commands that consist of small blocks smaller
                              than 2MB. The candidates to be discarded are
                              cached until a checkpoint is triggered, and they
                              are issued during the checkpoint. By default, it
                              is disabled (0).
 |  | 
 trim_sections                This parameter controls the number of sections
                              to be trimmed out in batch mode when FITRIM is
                              conducted. 32 sections is set by default.
 |  | 
 |  ipu_policy                   This parameter controls the policy of in-place | 
 |                               updates in f2fs. There are five policies: | 
 |                                0x01: F2FS_IPU_FORCE, 0x02: F2FS_IPU_SSR, | 
 |                                0x04: F2FS_IPU_UTIL,  0x08: F2FS_IPU_SSR_UTIL, | 
 |                                0x10: F2FS_IPU_FSYNC. | 
 |  | 
 min_ipu_util                 This parameter controls the threshold to trigger
                              in-place-updates. The number indicates the
                              percentage of filesystem utilization, and it is
                              used by the F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL
                              policies.
 |  | 
 min_fsync_blocks             This parameter controls the threshold to trigger
                              in-place-updates when the F2FS_IPU_FSYNC mode is
                              set. The number indicates the number of dirty
                              pages that fsync needs to flush on its call path.
                              If that number is less than this value, f2fs
                              triggers in-place-updates (see the sketch after
                              this table).
 |  | 
 max_victim_search            This parameter controls the number of trials to
                              find a victim segment when conducting SSR and
                              cleaning operations. The default value is 4096,
                              which covers an 8GB block address range.
 |  | 
 dir_level                    This parameter controls the directory level used
                              to support large directories. If a directory has
                              a large number of files, file lookup latency can
                              be reduced by increasing this dir_level value.
                              Otherwise, it should be decreased to reduce the
                              space overhead. The default value is 0.
 |  | 
 ram_thresh                   This parameter controls the memory footprint used
                              by free nids and cached NAT entries. By default,
                              10 is set, which indicates 10 MB per 1 GB of RAM.
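
As a rough illustration of how the in-place-update tunables above interact, the
following sketch combines ipu_policy, min_ipu_util, and min_fsync_blocks. The
helper is a hypothetical simplification, not the kernel's decision logic; only
the policy flag values come from the table above.

  /* Illustrative only: combines ipu_policy, min_ipu_util and
   * min_fsync_blocks as described above; names are hypothetical. */
  #define F2FS_IPU_FORCE     0x01
  #define F2FS_IPU_SSR       0x02
  #define F2FS_IPU_UTIL      0x04
  #define F2FS_IPU_SSR_UTIL  0x08
  #define F2FS_IPU_FSYNC     0x10

  static int should_update_in_place(unsigned int policy, unsigned int util,
  				    unsigned int min_ipu_util, int is_fsync,
  				    unsigned int dirty_pages,
  				    unsigned int min_fsync_blocks)
  {
  	if (policy & F2FS_IPU_FORCE)
  		return 1;
  	if ((policy & F2FS_IPU_UTIL) && util > min_ipu_util)
  		return 1;
  	if ((policy & F2FS_IPU_FSYNC) && is_fsync &&
  	    dirty_pages < min_fsync_blocks)
  		return 1;
  	return 0;	/* otherwise, write out of place as usual */
  }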
 |  | 
 | ================================================================================ | 
 | USAGE | 
 | ================================================================================ | 
 |  | 
 | 1. Download userland tools and compile them. | 
 |  | 
2. Skip this step if f2fs was compiled statically into the kernel.
   Otherwise, insert the f2fs.ko module.
 |  # insmod f2fs.ko | 
 |  | 
3. Create a directory to be used as the mount point
 |  # mkdir /mnt/f2fs | 
 |  | 
4. Format the block device, and then mount it as f2fs
 |  # mkfs.f2fs -l label /dev/block_device | 
 |  # mount -t f2fs /dev/block_device /mnt/f2fs | 
 |  | 
 | mkfs.f2fs | 
 | --------- | 
mkfs.f2fs is used to format a partition as an f2fs filesystem, building the
basic on-disk layout.
 |  | 
 | The options consist of: | 
-l [label]   : Give a volume label, up to 512 unicode characters.
 | -a [0 or 1]  : Split start location of each area for heap-based allocation. | 
 |                1 is set by default, which performs this. | 
 | -o [int]     : Set overprovision ratio in percent over volume size. | 
 |                5 is set by default. | 
 | -s [int]     : Set the number of segments per section. | 
 |                1 is set by default. | 
 | -z [int]     : Set the number of sections per zone. | 
 |                1 is set by default. | 
 | -e [str]     : Set basic extension list. e.g. "mp3,gif,mov" | 
 | -t [0 or 1]  : Disable discard command or not. | 
 |                1 is set by default, which conducts discard. | 
 |  | 
 | fsck.f2fs | 
 | --------- | 
fsck.f2fs is a tool that checks the consistency of an f2fs-formatted partition,
examining whether the filesystem metadata and user data are cross-referenced
correctly or not.
Note that the initial version of the tool does not fix any inconsistency.
 |  | 
 | The options consist of: | 
 |   -d debug level [default:0] | 
 |  | 
 | dump.f2fs | 
 | --------- | 
dump.f2fs is used to debug the on-disk data structures of the f2fs filesystem.
It shows the on-disk inode information identified by a given inode number, and
is able to dump all the SSA and SIT entries into predefined files, ./dump_ssa
and ./dump_sit respectively.
 |  | 
 | The options consist of: | 
 |   -d debug level [default:0] | 
 |   -i inode no (hex) | 
 |   -s [SIT dump segno from #1~#2 (decimal), for all 0~-1] | 
 |   -a [SSA dump segno from #1~#2 (decimal), for all 0~-1] | 
 |  | 
 | Examples: | 
 | # dump.f2fs -i [ino] /dev/sdx | 
 | # dump.f2fs -s 0~-1 /dev/sdx (SIT dump) | 
 | # dump.f2fs -a 0~-1 /dev/sdx (SSA dump) | 
 |  | 
 | ================================================================================ | 
 | DESIGN | 
 | ================================================================================ | 
 |  | 
 | On-disk Layout | 
 | -------------- | 
 |  | 
F2FS divides the whole volume into a number of segments, each of which is fixed
to 2MB in size. A section is composed of consecutive segments, and a zone
consists of a set of sections. By default, the section and zone sizes are both
set to one segment, but users can easily modify the sizes with mkfs.
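
With the mkfs defaults described earlier under mkfs.f2fs (-s and -z, both 1 by
default), a section and a zone are each a single 2MB segment. The small sketch
below only restates this size hierarchy; the helper names are illustrative.

  /* Sketch of the size hierarchy; segs_per_sec and secs_per_zone correspond
   * to the mkfs.f2fs -s and -z options described earlier. */
  #define SEGMENT_BYTES	(2ULL * 1024 * 1024)	/* a segment is fixed at 2MB */

  static unsigned long long section_bytes(unsigned int segs_per_sec)
  {
  	return segs_per_sec * SEGMENT_BYTES;
  }

  static unsigned long long zone_bytes(unsigned int segs_per_sec,
  				       unsigned int secs_per_zone)
  {
  	return secs_per_zone * section_bytes(segs_per_sec);
  }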
 |  | 
F2FS splits the entire volume into six areas, and all the areas except the
superblock consist of multiple segments as described below.
 |  | 
 |                                             align with the zone size <-| | 
 |                  |-> align with the segment size | 
 |      _________________________________________________________________________ | 
 |     |            |            |   Segment   |    Node     |   Segment  |      | | 
 |     | Superblock | Checkpoint |    Info.    |   Address   |   Summary  | Main | | 
 |     |    (SB)    |   (CP)     | Table (SIT) | Table (NAT) | Area (SSA) |      | | 
 |     |____________|_____2______|______N______|______N______|______N_____|__N___| | 
 |                                                                        .      . | 
 |                                                              .                . | 
 |                                                  .                            . | 
 |                                     ._________________________________________. | 
 |                                     |_Segment_|_..._|_Segment_|_..._|_Segment_| | 
 |                                     .           . | 
 |                                     ._________._________ | 
 |                                     |_section_|__...__|_ | 
 |                                     .            . | 
 | 		                    .________. | 
 | 	                            |__zone__| | 
 |  | 
- Superblock (SB)
 : It is located at the beginning of the partition, and there are two copies
   to guard against file system corruption. It contains basic partition
   information and some default parameters of f2fs.
 |  | 
 | - Checkpoint (CP) | 
 |  : It contains file system information, bitmaps for valid NAT/SIT sets, orphan | 
 |    inode lists, and summary entries of current active segments. | 
 |  | 
 | - Segment Information Table (SIT) | 
 |  : It contains segment information such as valid block count and bitmap for the | 
 |    validity of all the blocks. | 
 |  | 
 | - Node Address Table (NAT) | 
 |  : It is composed of a block address table for all the node blocks stored in | 
 |    Main area. | 
 |  | 
- Segment Summary Area (SSA)
 : It contains summary entries which contain the owner information of all the
   data and node blocks stored in the Main area.
 |  | 
 | - Main Area | 
 |  : It contains file and directory data including their indices. | 
 |  | 
In order to avoid misalignment between the file system and the flash-based
storage, F2FS aligns the start block address of the CP with the segment size.
It also aligns the start block address of the Main area with the zone size by
reserving some segments in the SSA area.
 |  | 
Refer to the following survey for additional technical details.
 | https://wiki.linaro.org/WorkingGroups/Kernel/Projects/FlashCardSurvey | 
 |  | 
 | File System Metadata Structure | 
 | ------------------------------ | 
 |  | 
F2FS adopts a checkpointing scheme to maintain file system consistency. At
mount time, F2FS first tries to find the last valid checkpoint data by scanning
the CP area. In order to reduce the scanning time, F2FS uses only two copies of
the CP. One of them always indicates the last valid data, which is called the
shadow copy mechanism. In addition to the CP, the NAT and SIT also adopt the
shadow copy mechanism.

For file system consistency, each CP points to which NAT and SIT copies are
valid, as shown below (a simplified selection sketch follows the figure).
 |  | 
 |   +--------+----------+---------+ | 
 |   |   CP   |    SIT   |   NAT   | | 
 |   +--------+----------+---------+ | 
 |   .         .          .          . | 
 |   .            .              .              . | 
 |   .               .                 .                 . | 
 |   +-------+-------+--------+--------+--------+--------+ | 
 |   | CP #0 | CP #1 | SIT #0 | SIT #1 | NAT #0 | NAT #1 | | 
 |   +-------+-------+--------+--------+--------+--------+ | 
 |      |             ^                          ^ | 
 |      |             |                          | | 
 |      `----------------------------------------' | 
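
A simplified sketch of the checkpoint selection implied above follows; the
version field and validity test are illustrative only, and the real on-disk
checkpoint carries more fields and checksums.

  /* Illustrative only: pick the newer of the two checkpoint packs whose
   * contents are self-consistent. */
  #include <stddef.h>

  struct cp_pack_sketch {
  	unsigned long long version;	/* monotonically increasing */
  	int valid;			/* header/footer versions matched */
  };

  static const struct cp_pack_sketch *
  pick_checkpoint(const struct cp_pack_sketch *cp0,
  		const struct cp_pack_sketch *cp1)
  {
  	if (cp0->valid && cp1->valid)
  		return cp0->version > cp1->version ? cp0 : cp1;
  	if (cp0->valid)
  		return cp0;
  	if (cp1->valid)
  		return cp1;
  	return NULL;	/* no usable checkpoint: the volume cannot be mounted */
  }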
 |  | 
 | Index Structure | 
 | --------------- | 
 |  | 
The key data structure used to manage data locations is the "node". Similar to
traditional file structures, F2FS has three types of node: inode, direct node,
and indirect node. F2FS assigns a 4KB block to an inode, which contains 923
data block indices, two direct node pointers, two indirect node pointers, and
one double indirect node pointer, as described below. One direct node block
contains 1018 data block addresses, and one indirect node block contains the
addresses of 1018 node blocks. Thus, one inode block (i.e., a file) covers:
 |  | 
 |   4KB * (923 + 2 * 1018 + 2 * 1018 * 1018 + 1018 * 1018 * 1018) := 3.94TB. | 
 |  | 
 |    Inode block (4KB) | 
 |      |- data (923) | 
 |      |- direct node (2) | 
 |      |          `- data (1018) | 
 |      |- indirect node (2) | 
 |      |            `- direct node (1018) | 
 |      |                       `- data (1018) | 
 |      `- double indirect node (1) | 
 |                          `- indirect node (1018) | 
 | 			              `- direct node (1018) | 
 | 	                                         `- data (1018) | 
 |  | 
Note that all the node blocks are mapped by the NAT, which means that the
location of each node is translated through the NAT. This is how F2FS cuts off
the propagation of node updates caused by leaf data writes, addressing the
wandering tree problem.
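
The ~3.94TB figure above can be sanity-checked with straightforward arithmetic;
the snippet below is only a back-of-the-envelope check of that calculation.

  /* Back-of-the-envelope check of the ~3.94TB figure above. */
  #include <stdio.h>

  int main(void)
  {
  	unsigned long long ptrs = 1018ULL;
  	unsigned long long blocks = 923 + 2 * ptrs + 2 * ptrs * ptrs
  				    + ptrs * ptrs * ptrs;	/* 1,057,053,439 */
  	double tb = blocks * 4096.0 / (1ULL << 40);

  	printf("%llu blocks, %.2f TB\n", blocks, tb);	/* ~3.94 TB */
  	return 0;
  }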
 |  | 
 | Directory Structure | 
 | ------------------- | 
 |  | 
A directory entry occupies 11 bytes and consists of the following attributes.
 |  | 
 | - hash		hash value of the file name | 
 | - ino		inode number | 
 | - len		the length of file name | 
 | - type		file type such as directory, symlink, etc | 
 |  | 
A dentry block consists of 214 dentry slots and file names. A bitmap is used
therein to represent whether each dentry is valid or not. A dentry block
occupies 4KB with the following composition (see also the structure sketch
after the figure).

  Dentry Block (4KB) = bitmap (27 bytes) + reserved (3 bytes) +
                       dentries (11 * 214 bytes) + file names (8 * 214 bytes)
 |  | 
 |                          [Bucket] | 
 |              +--------------------------------+ | 
 |              |dentry block 1 | dentry block 2 | | 
 |              +--------------------------------+ | 
 |              .               . | 
 |        .                             . | 
 |   .       [Dentry Block Structure: 4KB]       . | 
 |   +--------+----------+----------+------------+ | 
 |   | bitmap | reserved | dentries | file names | | 
 |   +--------+----------+----------+------------+ | 
 |   [Dentry Block: 4KB] .   . | 
 | 		 .               . | 
 |             .                          . | 
 |             +------+------+-----+------+ | 
 |             | hash | ino  | len | type | | 
 |             +------+------+-----+------+ | 
 |             [Dentry Structure: 11 bytes] | 
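
Expressed as C structures, the 11-byte entry and the 4KB dentry block described
above look roughly like the sketch below; the field names are illustrative, and
the on-disk fields are little-endian.

  /* Sketch of the layouts described above; illustrative field names. */
  #include <stdint.h>

  #define NR_DENTRY_IN_BLOCK	214
  #define FNAME_SLOT_LEN	8	/* one file-name slot */

  struct dir_entry_sketch {		/* 11 bytes per entry */
  	uint32_t hash;			/* hash value of the file name */
  	uint32_t ino;			/* inode number */
  	uint16_t name_len;		/* length of the file name */
  	uint8_t  file_type;		/* dir, symlink, regular, ... */
  } __attribute__((packed));

  struct dentry_block_sketch {		/* 27 + 3 + 11*214 + 8*214 = 4096 bytes */
  	uint8_t  bitmap[27];		/* validity of each of the 214 slots */
  	uint8_t  reserved[3];
  	struct dir_entry_sketch dentry[NR_DENTRY_IN_BLOCK];
  	uint8_t  filename[NR_DENTRY_IN_BLOCK][FNAME_SLOT_LEN];
  } __attribute__((packed));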
 |  | 
F2FS implements multi-level hash tables for its directory structure. Each level
has a hash table with a dedicated number of hash buckets as shown below. Note
that "A(2B)" means that a bucket includes 2 data blocks.
 |  | 
 | ---------------------- | 
 | A : bucket | 
 | B : block | 
 | N : MAX_DIR_HASH_DEPTH | 
 | ---------------------- | 
 |  | 
 | level #0   | A(2B) | 
 |            | | 
 | level #1   | A(2B) - A(2B) | 
 |            | | 
 | level #2   | A(2B) - A(2B) - A(2B) - A(2B) | 
 |      .     |   .       .       .       . | 
 | level #N/2 | A(2B) - A(2B) - A(2B) - A(2B) - A(2B) - ... - A(2B) | 
 |      .     |   .       .       .       . | 
 | level #N   | A(4B) - A(4B) - A(4B) - A(4B) - A(4B) - ... - A(4B) | 
 |  | 
The numbers of blocks and buckets are determined by:
 |  | 
 |                             ,- 2, if n < MAX_DIR_HASH_DEPTH / 2, | 
 |   # of blocks in level #n = | | 
 |                             `- 4, Otherwise | 
 |  | 
 |                              ,- 2^(n + dir_level), | 
 | 			     |        if n + dir_level < MAX_DIR_HASH_DEPTH / 2, | 
 |   # of buckets in level #n = | | 
 |                              `- 2^((MAX_DIR_HASH_DEPTH / 2) - 1), | 
 | 			              Otherwise | 
 |  | 
When F2FS looks up a file name in a directory, the hash value of the file name
is calculated first. Then, F2FS scans the hash table in level #0 to find the
dentry consisting of the file name and its inode number. If not found, F2FS
scans the next hash table in level #1. In this way, F2FS scans the hash tables
in each level incrementally from 1 to N. In each level F2FS needs to scan only
one bucket determined by the following equation, which gives O(log(# of files))
complexity.
 |  | 
 |   bucket number to scan in level #n = (hash value) % (# of buckets in level #n) | 
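
The block and bucket formulas above, together with the bucket selected for a
lookup, can be transcribed directly into C as in the sketch below;
MAX_DIR_HASH_DEPTH is given an assumed value here, and the helpers are not
meant to mirror the kernel source.

  /* Direct transcription of the formulas above; illustrative only. */
  #define MAX_DIR_HASH_DEPTH	64	/* assumed value for the sketch */

  static unsigned int blocks_in_level(unsigned int level)
  {
  	return level < MAX_DIR_HASH_DEPTH / 2 ? 2 : 4;
  }

  static unsigned int buckets_in_level(unsigned int level, unsigned int dir_level)
  {
  	if (level + dir_level < MAX_DIR_HASH_DEPTH / 2)
  		return 1U << (level + dir_level);
  	return 1U << (MAX_DIR_HASH_DEPTH / 2 - 1);
  }

  static unsigned int bucket_to_scan(unsigned int hash, unsigned int level,
  				     unsigned int dir_level)
  {
  	return hash % buckets_in_level(level, dir_level);
  }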
 |  | 
In the case of file creation, F2FS finds empty consecutive slots that cover the
file name. F2FS searches for the empty slots in the hash tables of all levels
from 1 to N in the same way as the lookup operation.
 |  | 
 | The following figure shows an example of two cases holding children. | 
 |        --------------> Dir <-------------- | 
 |        |                                 | | 
 |     child                             child | 
 |  | 
 |     child - child                     [hole] - child | 
 |  | 
 |     child - child - child             [hole] - [hole] - child | 
 |  | 
 |    Case 1:                           Case 2: | 
 |    Number of children = 6,           Number of children = 3, | 
 |    File size = 7                     File size = 7 | 
 |  | 
 | Default Block Allocation | 
 | ------------------------ | 
 |  | 
At runtime, F2FS manages six active logs inside the "Main" area: Hot/Warm/Cold
node and Hot/Warm/Cold data (see the sketch after the list).

- Hot node	contains direct node blocks of directories.
- Warm node	contains direct node blocks except hot node blocks.
- Cold node	contains indirect node blocks.
- Hot data	contains dentry blocks.
- Warm data	contains data blocks except hot and cold data blocks.
- Cold data	contains multimedia data or migrated data blocks.
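
As a sketch, the six logs can be viewed as an enumeration of (temperature,
type) pairs; the identifiers below are illustrative and need not match the
kernel's.

  /* Illustrative enumeration of the six active logs. */
  enum log_type_sketch {
  	LOG_HOT_DATA,	/* dentry blocks */
  	LOG_WARM_DATA,	/* ordinary data blocks */
  	LOG_COLD_DATA,	/* multimedia or migrated data blocks */
  	LOG_HOT_NODE,	/* direct node blocks of directories */
  	LOG_WARM_NODE,	/* other direct node blocks */
  	LOG_COLD_NODE,	/* indirect node blocks */
  	NR_LOG_TYPES,
  };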
 |  | 
LFS has two schemes for free space management: threaded log and
copy-and-compaction. The copy-and-compaction scheme, which is known as
cleaning, is well-suited for devices showing very good sequential write
performance, since free segments are served all the time for writing new data.
However, it suffers from cleaning overhead under high utilization. Conversely,
the threaded log scheme suffers from random writes, but no cleaning process is
needed. F2FS adopts a hybrid scheme where copy-and-compaction is used by
default, but the policy is dynamically changed to the threaded log scheme
according to the file system status.
 |  | 
In order to align F2FS with the underlying flash-based storage, F2FS allocates
a segment in the unit of a section. F2FS expects the section size to be the
same as the garbage collection unit size in the FTL. Furthermore, with respect
to the mapping granularity in the FTL, F2FS allocates each section of the
active logs from a different zone as much as possible, since the FTL can write
the data in the active logs into one allocation unit according to its mapping
granularity.
 |  | 
 | Cleaning process | 
 | ---------------- | 
 |  | 
F2FS does cleaning both on demand and in the background. On-demand cleaning is
triggered when there are not enough free segments to serve VFS calls. The
background cleaner is operated by a kernel thread and triggers the cleaning job
when the system is idle.
 |  | 
F2FS supports two victim selection policies: the greedy and cost-benefit
algorithms. The greedy algorithm selects the victim segment having the smallest
number of valid blocks. The cost-benefit algorithm selects a victim segment
according to the segment age and the number of valid blocks, in order to
address the log block thrashing problem seen with the greedy algorithm. F2FS
adopts the greedy algorithm for the on-demand cleaner, while the background
cleaner adopts the cost-benefit algorithm. A rough sketch of both policies
follows.
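
The sketch below illustrates both policies, using the classic cost-benefit
formulation from the LFS paper cited earlier, (1 - u) * age / (1 + u) with u
being the fraction of still-valid blocks; the per-segment bookkeeping structure
is hypothetical.

  /* Hypothetical per-segment bookkeeping for victim selection. */
  struct seg_info_sketch {
  	unsigned int valid_blocks;	/* still-live blocks in the segment */
  	unsigned long long mtime;	/* last modification time (age proxy) */
  };

  /* Greedy: pick the segment with the fewest valid blocks. */
  static unsigned int greedy_victim(const struct seg_info_sketch *seg,
  				    unsigned int nr_segs)
  {
  	unsigned int i, best = 0;

  	for (i = 1; i < nr_segs; i++)
  		if (seg[i].valid_blocks < seg[best].valid_blocks)
  			best = i;
  	return best;
  }

  /* Cost-benefit: weigh free space gained against copying cost and age. */
  static double cost_benefit(const struct seg_info_sketch *s,
  			     unsigned int blocks_per_seg,
  			     unsigned long long now)
  {
  	double u = (double)s->valid_blocks / blocks_per_seg;
  	double age = (double)(now - s->mtime);

  	return (1.0 - u) * age / (1.0 + u);	/* larger is a better victim */
  }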
 |  | 
In order to identify whether the data in the victim segment are valid or not,
F2FS manages a bitmap. Each bit represents the validity of a block, and the
bitmap is composed of a bit stream covering all blocks in the Main area.
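
A minimal sketch of that validity test, assuming a plain byte-array bitmap (the
actual bitmap is maintained per segment in the SIT):

  /* Illustrative validity check against the block validity bitmap. */
  static int block_is_valid(const unsigned char *valid_map, unsigned int offset)
  {
  	return (valid_map[offset / 8] >> (offset % 8)) & 1;
  }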