| author | Andreas Kling <awesomekling@gmail.com> | 2019-05-02 18:11:36 +0200 |
| --- | --- | --- |
| committer | Andreas Kling <awesomekling@gmail.com> | 2019-05-02 18:11:36 +0200 |
| commit | 66e401d668a1f18020ba600546e5650c5edb0578 (patch) | |
| tree | 02fdf131de1ce78f555e5de640c6aeecc57b2b50 /LibC/malloc.cpp | |
| parent | b4e7925e31c34e4751dfe560ab63532cbb225f6c (diff) | |
| download | serenity-66e401d668a1f18020ba600546e5650c5edb0578.zip | |
LibC: Tune the number of empty ChunkedBlocks we keep around.
At the moment, both mmap() and munmap() are kind of slow. Compiling with GCC
suffered quite badly from munmap() slowness, so let's keep a few more
ChunkedBlocks around after they become empty, to avoid having to munmap() them.
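For context, here is a minimal sketch of the idea behind this tunable. The types and helper names below are hypothetical and not the actual LibC allocator code: each size class caches up to `number_of_chunked_blocks_to_keep_around_per_size_class` empty ChunkedBlocks, only falling back to munmap() once the cache is full, and allocation reuses a cached block before reaching for mmap() again.

```cpp
// Hypothetical sketch of the "keep empty blocks around" idea; not the real
// SerenityOS malloc.cpp, which stores blocks in intrusive lists.
#include <cstddef>
#include <sys/mman.h>

struct ChunkedBlock; // opaque here; in LibC this holds the per-block chunk freelist

static const size_t number_of_chunked_blocks_to_keep_around_per_size_class = 32;

struct SizeClass {
    ChunkedBlock* empty_blocks[number_of_chunked_blocks_to_keep_around_per_size_class] {};
    size_t empty_block_count { 0 };
};

// Called when the last chunk in a block is freed. Cache the block if there is
// room; only munmap() it once the per-size-class cache is full.
static void block_became_empty(SizeClass& size_class, ChunkedBlock* block, size_t block_size)
{
    if (size_class.empty_block_count < number_of_chunked_blocks_to_keep_around_per_size_class) {
        size_class.empty_blocks[size_class.empty_block_count++] = block;
        return;
    }
    munmap(block, block_size);
}

// On allocation, prefer reusing a cached empty block over calling mmap() again.
static ChunkedBlock* take_cached_block(SizeClass& size_class)
{
    if (size_class.empty_block_count == 0)
        return nullptr;
    return size_class.empty_blocks[--size_class.empty_block_count];
}
```

Raising the limit from 4 to 32 simply lets more empty blocks sit in this cache, trading a bit of resident memory for fewer mmap()/munmap() round trips during allocation-heavy workloads such as compiling with GCC.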
Diffstat (limited to 'LibC/malloc.cpp')
-rw-r--r-- | LibC/malloc.cpp | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/LibC/malloc.cpp b/LibC/malloc.cpp
index 633ec71f3f..d79a918474 100644
--- a/LibC/malloc.cpp
+++ b/LibC/malloc.cpp
@@ -15,7 +15,7 @@
 #define MAGIC_BIGALLOC_HEADER 0x42697267
 #define PAGE_ROUND_UP(x) ((((size_t)(x)) + PAGE_SIZE-1) & (~(PAGE_SIZE-1)))
 
-static const size_t number_of_chunked_blocks_to_keep_around_per_size_class = 4;
+static const size_t number_of_chunked_blocks_to_keep_around_per_size_class = 32;
 
 static bool s_log_malloc = false;
 static bool s_scrub_malloc = true;
```