| From: Sergey Senozhatsky <senozhatsky@chromium.org> |
| Subject: zsmalloc: remove insert_zspage() ->inuse optimization |
| Date: Sat, 4 Mar 2023 12:48:32 +0900 |
| |
| Patch series "zsmalloc: fine-grained fullness and new compaction |
| algorithm", v4. |
| |
| Existing zsmalloc page fullness grouping leads to suboptimal page |
| selection for both zs_malloc() and zs_compact(). This patchset reworks |
| zsmalloc fullness grouping/classification. |
| |
Additionally, it implements a new compaction algorithm that is
expected to use fewer CPU cycles (as it potentially does fewer
memcpy() calls in zs_object_copy()).
| |
| Test (synthetic) results can be seen in patch 0003. |
| |
| |
| This patch (of 4): |
| |
This optimization in practice has no effect.  It only orders a zspage
relative to the list head at the moment the zspage is added to its
fullness list: a zspage whose "inuse" counter is not lower than the
head's becomes the new head, the rest are placed right after the old
head.  The intention was to keep busy zspages at the head, so they
could be filled up and moved to the ZS_FULL fullness group more
quickly.  However, this doesn't work: obj_free() can lower a zspage's
"inuse" counter while the zspage still belongs to the same fullness
list, in which case fix_fullness_group() never repositions it.  The
ordering relative to the head's "inuse" counter therefore decays into
a largely random order of zspages within the fullness list.
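
To illustrate the last point, below is a minimal standalone sketch
(userspace C, not the kernel source; the helpers are simplified
stand-ins for their mm/zsmalloc.c namesakes and the threshold is an
assumption) of why a zspage keeps its list position for as long as
its fullness group is unchanged:

/*
 * Standalone illustration (userspace C, not the kernel source).
 * The helpers below only model their mm/zsmalloc.c namesakes.
 */
#include <stdio.h>

enum fullness_group { ZS_EMPTY, ZS_ALMOST_EMPTY, ZS_ALMOST_FULL, ZS_FULL };

#define OBJS_PER_ZSPAGE	93

struct zspage {
	int inuse;			/* objects currently allocated */
	enum fullness_group fg;		/* group cached at insert time */
};

/* Rough stand-in for get_fullness_group(): ZS_ALMOST_EMPTY up to ~3/4 full */
static enum fullness_group get_fullness_group(struct zspage *zspage)
{
	if (zspage->inuse == 0)
		return ZS_EMPTY;
	if (zspage->inuse == OBJS_PER_ZSPAGE)
		return ZS_FULL;
	if (zspage->inuse <= 3 * OBJS_PER_ZSPAGE / 4)
		return ZS_ALMOST_EMPTY;
	return ZS_ALMOST_FULL;
}

/* Models fix_fullness_group(): same group => position never revisited */
static void fix_fullness_group(struct zspage *zspage)
{
	enum fullness_group newfg = get_fullness_group(zspage);

	if (newfg == zspage->fg)
		return;			/* list position left untouched */

	/* remove_zspage() + insert_zspage() would run here */
	zspage->fg = newfg;
}

int main(void)
{
	struct zspage zspage = { .inuse = 67, .fg = ZS_ALMOST_EMPTY };

	/* obj_free() lowers ->inuse, the group stays ZS_ALMOST_EMPTY */
	while (zspage.inuse > 36) {
		zspage.inuse--;
		fix_fullness_group(&zspage);
	}
	printf("inuse=%d fg=%d, never moved\n", zspage.inuse, zspage.fg);
	return 0;
}

No matter how far obj_free() drags "inuse" down within the same
group, the early return above means the insertion-time ordering is
never corrected.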
| |
For instance, consider a printout of the "inuse" counters of the
first few zspages in a class that holds 93 objects per zspage:
| |
| ZS_ALMOST_EMPTY: 36 67 68 64 35 54 63 52 |
| |
As we can see, one of the least-used zspages sits at the head of the
fullness list, while busier zspages follow it in no particular order.
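
The decay can be reproduced with another standalone sketch (again
userspace C, illustrative only: a plain singly linked list instead of
list_head, made-up counters) that applies the head-comparison
insertion this patch removes and then models obj_free() churn:

/*
 * Standalone simulation (userspace C, illustrative only) of the
 * head-comparison heuristic removed by this patch.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_ZSPAGES	8

struct zspage {
	int inuse;
	struct zspage *next;
};

static struct zspage *head;

/* The removed heuristic: compare against the list head only */
static void insert_zspage(struct zspage *zspage)
{
	if (head && zspage->inuse < head->inuse) {
		zspage->next = head->next;	/* right after the head */
		head->next = zspage;
	} else {
		zspage->next = head;		/* becomes the new head */
		head = zspage;
	}
}

int main(void)
{
	struct zspage pages[NR_ZSPAGES];
	struct zspage *p;
	int i;

	srand(1);
	for (i = 0; i < NR_ZSPAGES; i++) {
		pages[i].inuse = 1 + rand() % 93;
		insert_zspage(&pages[i]);
	}

	/*
	 * obj_free() churn: ->inuse drops, and as long as the fullness
	 * group stays the same (assumed here), nothing repositions the
	 * zspage within the list.
	 */
	for (p = head; p; p = p->next)
		p->inuse -= rand() % p->inuse;

	for (p = head; p; p = p->next)
		printf("%d ", p->inuse);	/* largely random order */
	printf("\n");
	return 0;
}

Printing the list after the churn gives an ordering much like the one
above: whatever the head comparison achieved at insertion time is
quickly lost.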
| |
Remove this pointless "optimization".
| |
| Link: https://lkml.kernel.org/r/20230304034835.2082479-1-senozhatsky@chromium.org |
| Link: https://lkml.kernel.org/r/20230304034835.2082479-2-senozhatsky@chromium.org |
| Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> |
| Acked-by: Minchan Kim <minchan@kernel.org> |
| Cc: Yosry Ahmed <yosryahmed@google.com> |
| Signed-off-by: Andrew Morton <akpm@linux-foundation.org> |
| --- |
| |
| mm/zsmalloc.c | 13 +------------ |
| 1 file changed, 1 insertion(+), 12 deletions(-) |
| |
| --- a/mm/zsmalloc.c~zsmalloc-remove-insert_zspage-inuse-optimization |
| +++ a/mm/zsmalloc.c |
| @@ -762,19 +762,8 @@ static void insert_zspage(struct size_cl |
| struct zspage *zspage, |
| enum fullness_group fullness) |
| { |
| - struct zspage *head; |
| - |
| class_stat_inc(class, fullness, 1); |
| - head = list_first_entry_or_null(&class->fullness_list[fullness], |
| - struct zspage, list); |
| - /* |
| - * We want to see more ZS_FULL pages and less almost empty/full. |
| - * Put pages with higher ->inuse first. |
| - */ |
| - if (head && get_zspage_inuse(zspage) < get_zspage_inuse(head)) |
| - list_add(&zspage->list, &head->list); |
| - else |
| - list_add(&zspage->list, &class->fullness_list[fullness]); |
| + list_add(&zspage->list, &class->fullness_list[fullness]); |
| } |
| |
| /* |
| _ |