path: root/Kernel/Heap
Age  Commit message  Author
2021-05-15  AK+LibC: Implement malloc_good_size() and use it for Vector/HashTable  (Gunnar Beutner)
This implements the macOS API malloc_good_size() which returns the true allocation size for a given requested allocation size. This allows us to make use of all the available memory in a malloc chunk. For example, for a malloc request of 35 bytes our malloc would internally use a chunk of size 64, however the remaining 29 bytes would be unused. Knowing the true allocation size allows us to request more usable memory that would otherwise be wasted and make that available for Vector, HashTable and potentially other callers in the future.
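As a rough illustration of the pattern this enables, a container can ask for the true chunk size up front and use every byte of it. The helper below is a sketch, not the actual AK::Vector code, and it assumes malloc_good_size() is declared in <malloc.h>.

    #include <malloc.h>
    #include <stdlib.h>

    // Sketch: size a buffer to the full chunk malloc will hand back, so the
    // slack (e.g. 64 - 35 = 29 bytes) becomes usable capacity instead of waste.
    void* allocate_at_least(size_t requested_bytes, size_t& usable_bytes)
    {
        usable_bytes = malloc_good_size(requested_bytes); // e.g. 35 -> 64
        return malloc(usable_bytes);
    }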
2021-05-14  Kernel: Add the ability to verify we don't kmalloc under spinlock.  (Brian Gianforcaro)
Ideally we would never allocate under a spinlock, as doing so has many performance pitfalls and can cause functional problems such as deadlocks. We violate that rule in many places today, but we need a tool to track them all down and fix them. This change introduces a new macro option named `KMALLOC_VERIFY_NO_SPINLOCK_HELD` which can catch these situations at runtime via an assert.
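As a sketch of the kind of guard such an option enables inside kmalloc(): the spinlock-tracking helper here is hypothetical, standing in for whatever per-CPU bookkeeping the kernel actually uses.

    // Hypothetical helpers standing in for the kernel's real bookkeeping:
    bool current_processor_holds_any_spinlock();
    void* allocate_from_subheaps(size_t);

    void* kmalloc(size_t size)
    {
    #ifdef KMALLOC_VERIFY_NO_SPINLOCK_HELD
        // Catch "allocating while holding a spinlock" at runtime.
        VERIFY(!current_processor_holds_any_spinlock());
    #endif
        return allocate_from_subheaps(size);
    }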
2021-05-13  Kernel: Declare operator new/delete noexcept for MAKE_SLAB_ALLOCATED  (Brian Gianforcaro)
2021-05-13  Kernel: Declare operator new/delete noexcept for MAKE_ALIGNED_ALLOCATED  (Brian Gianforcaro)
2021-05-13  Kernel: Declare operator new/delete as noexcept for the Kernel  (Brian Gianforcaro)
For Kernel OOM hardening to work correctly, we need to be able to call a "nothrow" version of operator new. Unfortunately the default "throwing" version of operator new assumes that an allocation never returns null on failure and instead always throws an exception. This isn't true in the Kernel, as we don't have exceptions. So if we call the normal/throwing new and kmalloc returns NULL, the generated code will happily go and dereference that NULL pointer by invoking the constructor before we have a chance to handle the failure. To fix this we declare operator new as noexcept in the Kernel headers, which allows the caller to actually handle allocation failure. The delete implementations need to match the prototype of the new which allocated them, so we need to define delete as noexcept as well. GCC then errors out, declaring that you should implement sized delete as well, so this change provides those stubs in order to compile cleanly. Finally, the standard operator new definitions are declared with [[nodiscard]] to avoid potential memory leaks, so let's declare the kernel versions that way as well.
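A sketch of the shape of those declarations and of the failure handling they make possible; the exact prototypes in the SerenityOS headers may differ, and the calling code at the end is invented for illustration.

    // Non-throwing, [[nodiscard]] kernel allocation operators (sketch).
    [[nodiscard]] void* operator new(size_t size) noexcept;
    [[nodiscard]] void* operator new[](size_t size) noexcept;
    void operator delete(void* ptr) noexcept;
    void operator delete(void* ptr, size_t size) noexcept;   // sized delete stub
    void operator delete[](void* ptr) noexcept;
    void operator delete[](void* ptr, size_t size) noexcept; // sized delete stub

    // Because operator new is noexcept, the compiler emits a null check before
    // the constructor runs, so OOM can actually be handled:
    auto* object = new SomeKernelObject; // hypothetical type
    if (!object)
        return ENOMEM; // error-code style shown for illustration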
2021-04-29  Everywhere: Use "the SerenityOS developers." in copyright headers  (Linus Groh)
We had some inconsistencies before:
- Sometimes "The", sometimes "the"
- Sometimes trailing ".", sometimes no trailing "."
I picked the most common one (lowercase "the", trailing ".") and applied it to all copyright headers. By using the exact same string everywhere we can ensure nothing gets missed during a global search (and replace), and that these inconsistencies are not spread any further (as copyright headers are commonly copied to new files).
2021-04-22  Everything: Move to SPDX license identifiers in all files.  (Brian Gianforcaro)
SPDX License Identifiers are a more compact / standardized way of representing file license information. See: https://spdx.dev/resources/use/#identifiers This was done with the `ambr` search and replace tool:
    ambr --no-parent-ignore --key-from-file --rep-from-file key.txt rep.txt *
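For illustration, a file header combining this change with the copyright-string cleanup above would look roughly like this (the BSD-2-Clause identifier is shown as an example):

    /*
     * Copyright (c) 2021, the SerenityOS developers.
     *
     * SPDX-License-Identifier: BSD-2-Clause
     */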
2021-04-09  Kernel: Do some basic metadata integrity verification in kmalloc/kfree  (Andreas Kling)
Use BitmapView::set_range_and_verify_that_all_bits_flip() to validate the heap chunk metadata bits as we go through them in kmalloc/kfree.
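The idea, sketched below: every metadata bit set during allocation must previously have been clear (and the reverse on free), so a bit that fails to flip means the heap's own bookkeeping was corrupted. The surrounding names and the exact BitmapView signature are assumptions.

    // Sketch: mark `chunk_count` chunks starting at `first_chunk` as allocated,
    // asserting that none of them were already marked.
    void mark_chunks_allocated(BitmapView bitmap, size_t first_chunk, size_t chunk_count)
    {
        bitmap.set_range_and_verify_that_all_bits_flip(first_chunk, chunk_count, true);
    }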
2021-04-09  Kernel: Add some basic double-kfree() detection  (Andreas Kling)
Double kfree() is exceedingly rare in our kernel since we use automatic memory management and smart pointers for almost all code. However, it doesn't hurt to do some basic checking that might one day catch bugs. This patch makes us VERIFY that we don't already consider the first chunk of a kmalloc() allocation free when kfree()'ing it.
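Roughly, the check sits in the heap's free path as sketched here; the member names and the "bit set == allocated" convention are assumptions for illustration.

    void Heap::deallocate(void* ptr)
    {
        size_t first_chunk = chunk_index_of(ptr); // hypothetical helper
        // If the first chunk is already considered free, someone kfree()'d this
        // allocation before us -- catch the double free right here.
        VERIFY(m_chunk_bitmap.get(first_chunk));
        // ... clear the bits, update bookkeeping ...
    }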
2021-03-21  Kernel::CPU: Move headers into common directory  (Hendiadyoin1)
A lot of code is shared between i386/i686/x86 and x86_64, and a lot of it will probably be used for compatibility modes. So we start by moving the headers into one directory. We will probably be able to move some cpp files as well.
2021-03-11  Kernel: Suppress logging during kmalloc heap expansion  (Andreas Kling)
The system is extremely sensitive to heap allocations during heap expansion. This was causing frequent OOM panics under various loads. Work around the issue for now by putting the logging behind KMALLOC_DEBUG. Ideally dmesgln() & friends would not require any heap allocations, but we're not there right now. Fixes #5724.
2021-03-11  Kernel: Add MAKE_ALIGNED_ALLOCATED helper macro  (Andreas Kling)
This macro inserts operator new/delete into a class, allowing you to very easily specify a specific heap alignment.
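Usage looks roughly like this, assuming the macro takes the class name and the desired alignment; the class itself is made up.

    class AlignedThing {
        MAKE_ALIGNED_ALLOCATED(AlignedThing, 4096); // injects operator new/delete
        // ...
    };

    // Heap storage for AlignedThing now comes back 4096-byte (page) aligned:
    auto* thing = new AlignedThing;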
2021-03-11  Kernel: Allow kmalloc_aligned() alignment up to 4096  (Andreas Kling)
This allows us to get kmalloc() memory aligned to the VM page size.
2021-03-09  Kernel: Remove some unused things in kmalloc.cpp  (Andreas Kling)
2021-03-09  Kernel: Convert klog() => dmesgln() in kmalloc  (Andreas Kling)
2021-03-04  Kernel: Remove unused KMALLOC_DEBUG_LARGE_ALLOCATIONS mode  (Andreas Kling)
This was a thing back when the system was so little that any kernel allocation above 1 MiB was basically guaranteed to be a bug. :^)
2021-03-04  Kernel: Use BitmapView instead of Bitmap::wrap()  (Andreas Kling)
2021-02-28  Kernel: Use default con/de-structors  (Ben Wiederhake)
This may seem like a no-op change, however it shrinks down the Kernel by a bit:
    .text              -432
    .unmap_after_init   -60
    .data              -480
    .debug_info        -673
    .debug_aranges        8
    .debug_ranges      -232
    .debug_line        -558
    .debug_str         -308
    .debug_frame        -40
With '= default', the compiler can do more inlining, hence the savings. I intentionally omitted some opportunities for '= default', because they would increase the Kernel size.
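The mechanism behind the size win, sketched on a made-up class: a defaulted destructor visible in the header is trivial and can be inlined away, whereas an empty one defined out-of-line forces a call.

    // SomeDevice.h (hypothetical)
    class SomeDevice {
    public:
        ~SomeDevice() = default;   // trivial, inlinable, often generates no code
        // ~SomeDevice();          // would force an out-of-line definition and a call
    };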
2021-02-25  Kernel: Take some baby steps towards x86_64  (Andreas Kling)
Make more of the kernel compile in 64-bit mode, and make some things pointer-size-agnostic (by using FlatPtr.) There's a lot of work to do here before the kernel will even compile.
2021-02-23  Everywhere: Rename ASSERT => VERIFY  (Andreas Kling)
(...and ASSERT_NOT_REACHED => VERIFY_NOT_REACHED) Since all of these checks are done in release builds as well, let's rename them to VERIFY to prevent confusion, as everyone is used to assertions being compiled out in release. We can introduce a new ASSERT macro that is specifically for debug checks, but I'm doing this wholesale conversion first since we've accumulated thousands of these already, and it's not immediately obvious which ones are suitable for ASSERT.
2021-02-23  Kernel: Fix a dmesgln() format error  (AnotherTest)
2021-02-19  Kernel: Slap UNMAP_AFTER_INIT on a whole bunch of functions  (Andreas Kling)
There's no real system here, I just added it to various functions that I don't believe we ever want to call after initialization has finished. With these changes, we're able to unmap 60 KiB of kernel text after init. :^)
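For reference, applying the attribute is as simple as the sketch below; the function is a made-up example of init-only code.

    // Placed in the .unmap_after_init section and discarded once boot completes.
    UNMAP_AFTER_INIT void initialize_some_early_boot_tables()
    {
        // ... runs exactly once during kernel initialization ...
    }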
2021-02-14  Kernel: Mark a handful of things in kmalloc.cpp as READONLY_AFTER_INIT  (Andreas Kling)
2021-02-14  Kernel: Assert if rounding-up-to-page-size would wrap around to 0  (Andreas Kling)
If we try to align a number above 0xfffff000 to the next multiple of the page size (4 KiB), it would wrap around to 0. This is most likely never what we want, so let's assert if that happens.
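A sketch of the guarded round-up, assuming 4 KiB pages and 32-bit arithmetic; the helper name is illustrative.

    #include <stdint.h>

    using FlatPtr = uint32_t; // pointer-sized integer; 32-bit here to show the wrap

    FlatPtr page_round_up_checked(FlatPtr value)
    {
        constexpr FlatPtr page_mask = 0xfff; // 4 KiB pages
        FlatPtr rounded = (value + page_mask) & ~page_mask;
        // e.g. 0xfffff001 + 0xfff wraps past zero, "rounding up" to 0x0 -- assert instead.
        VERIFY(rounded >= value);
        return rounded;
    }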
2021-02-14  Kernel: Use PANIC() in a bunch of places :^)  (Andreas Kling)
2021-02-14  Kernel: Remove user/kernel flags from Region  (Andreas Kling)
Now that we no longer need to support the signal trampolines being user-accessible inside the kernel memory range, we can get rid of the "kernel" and "user-accessible" flags on Region and simply use the address of the region to determine whether it's kernel or user. This also tightens the page table mapping code, since it can now set user-accessibility based solely on the virtual address of a page.
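Conceptually the check collapses to an address comparison like the following; the split address is an assumption here, not the actual SerenityOS constant.

    constexpr FlatPtr kernel_base = 0xc0000000; // hypothetical user/kernel split

    bool is_user_address(VirtualAddress vaddr)
    {
        return vaddr.get() < kernel_base;
    }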
2021-01-26  Meta: Split debug defines into multiple headers.  (asynts)
The following script was used to make these changes:
    #!/bin/bash
    set -e
    tmp=$(mktemp -d)
    echo "tmp=$tmp"
    find Kernel \( -name '*.cpp' -o -name '*.h' \) | sort > $tmp/Kernel.files
    find . \( -path ./Toolchain -prune -o -path ./Build -prune -o -path ./Kernel -prune \) -o \( -name '*.cpp' -o -name '*.h' \) -print | sort > $tmp/EverythingExceptKernel.files
    cat $tmp/Kernel.files | xargs grep -Eho '[A-Z0-9_]+_DEBUG' | sort | uniq > $tmp/Kernel.macros
    cat $tmp/EverythingExceptKernel.files | xargs grep -Eho '[A-Z0-9_]+_DEBUG' | sort | uniq > $tmp/EverythingExceptKernel.macros
    comm -23 $tmp/Kernel.macros $tmp/EverythingExceptKernel.macros > $tmp/Kernel.unique
    comm -1 $tmp/Kernel.macros $tmp/EverythingExceptKernel.macros > $tmp/EverythingExceptKernel.unique
    cat $tmp/Kernel.unique | awk '{ print "#cmakedefine01 "$1 }' > $tmp/Kernel.header
    cat $tmp/EverythingExceptKernel.unique | awk '{ print "#cmakedefine01 "$1 }' > $tmp/EverythingExceptKernel.header
    for macro in $(cat $tmp/Kernel.unique)
    do
        cat $tmp/Kernel.files | xargs grep -l $macro >> $tmp/Kernel.new-includes ||:
    done
    cat $tmp/Kernel.new-includes | sort > $tmp/Kernel.new-includes.sorted
    for macro in $(cat $tmp/EverythingExceptKernel.unique)
    do
        cat $tmp/Kernel.files | xargs grep -l $macro >> $tmp/Kernel.old-includes ||:
    done
    cat $tmp/Kernel.old-includes | sort > $tmp/Kernel.old-includes.sorted
    comm -23 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.new
    comm -13 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.old
    comm -12 $tmp/Kernel.new-includes.sorted $tmp/Kernel.old-includes.sorted > $tmp/Kernel.includes.mixed
    for file in $(cat $tmp/Kernel.includes.new)
    do
        sed -i -E 's/#include <AK\/Debug\.h>/#include <Kernel\/Debug\.h>/' $file
    done
    for file in $(cat $tmp/Kernel.includes.mixed)
    do
        echo "mixed include in $file, requires manual editing."
    done
2021-01-25  Everywhere: Hook up remaining debug macros to Debug.h.  (asynts)
2021-01-25  Everywhere: Remove unnecessary debug comments.  (asynts)
It would be tempting to uncomment these statements, but that won't work with the new changes. This was done with the following commands:
    find . \( -name '*.cpp' -o -name '*.h' -o -name '*.in' \) -not -path './Toolchain/*' -not -path './Build/*' -exec awk -i inplace '$0 !~ /\/\/#define/ { if (!toggle) { print; } else { toggle = !toggle } } ; $0 ~/\/\/#define/ { toggle = 1 }' {} \;
    find . \( -name '*.cpp' -o -name '*.h' -o -name '*.in' \) -not -path './Toolchain/*' -not -path './Build/*' -exec awk -i inplace '$0 !~ /\/\/ #define/ { if (!toggle) { print; } else { toggle = !toggle } } ; $0 ~/\/\/ #define/ { toggle = 1 }' {} \;
2021-01-22  Kernel: Move kmalloc heaps and super pages inside .bss segment  (Jean-Baptiste Boric)
The kernel ignored the first 8 MiB of RAM while parsing the memory map because the kmalloc heaps and the super physical pages lived here. Move all that stuff inside the .bss segment so that those memory regions are accounted for, otherwise we risk overwriting boot modules placed next to the kernel.
2021-01-11  Everywhere: Replace a bundle of dbg with dbgln.  (asynts)
These changes are arbitrarily divided into multiple commits to make it easier to find potentially introduced bugs with git bisect.
2021-01-04  Kernel: Specify default memory order for some non-synchronizing Atomics  (Tom)
2021-01-01  Kernel: Merge PurgeableVMObject into AnonymousVMObject  (Tom)
This implements memory commitments and lazy-allocation of committed memory.
2021-01-01  Kernel: Memory purging improvements  (Tom)
This adds the ability for a Region to define volatile/nonvolatile areas within mapped memory using madvise(). This also means that memory purging takes into account all views of the PurgeableVMObject and only purges memory that is not needed by all of them. When calling madvise() to change an area to nonvolatile memory, return whether memory from that area was purged. At that time also try to remap all memory that is requested to be nonvolatile, and if insufficient pages are available notify the caller of that fact.
2020-12-31  Kernel: Fix heap expansions deadlock  (Tom)
If a heap expansion is triggered by allocating from e.g. the RangeAllocator, which may be holding a spin lock, we cannot immediately allocate another block of backup memory, which could require the same locks to be acquired. So, defer allocating the backup memory. Fixes #4675.
2020-12-26  Kernel: Remove subheap from list before removing memory  (Tom)
When the ExpandableHeap calls the remove_memory function, the subheap is assumed to be removed and freed entirely. remove_memory may drop the underlying memory at any time, but it also may cause further allocation requests. Not removing it from the list before calling remove_memory could cause a memory allocation in that subheap while remove_memory is executing, which then causes issues once the underlying memory is actually freed.
2020-11-04  Kernel: Defer kmalloc heap contraction  (Tom)
Because allocating/freeing regions may require locks that need to wait on other processors for completion, this needs to be delayed until it's safer. Otherwise it is possible to deadlock because we're holding the global heap lock.
2020-11-01  Kernel: kmalloc_eternal should align pointers  (Tom)
2020-09-25  Meta+Kernel: Make clang-format-10 clean  (Ben Wiederhake)
2020-09-09  Kernel: Fix heap expansion loop  (Tom)
By being a bit too greedy and only allocating how much we need for the failing allocation, we can end up in an infinite loop trying to expand the heap further. That's because there are other allocations (e.g. logging, vmobjects, regions, ...) that happen before we finally retry the failed allocation request. Also fix allocating in page size increments, which led to an assertion when the heap had to grow more than the 1 MiB backup.
2020-09-02  Kernel: Use removed memory as backup if backup hasn't been allocated  (Tom)
It may be impossible to allocate more backup memory after expanding the heap if memory is running low. In that case we wouldn't allocate backup memory until trying to expand the heap again. But we also wouldn't take advantage of using removed memory as backup, which means that no backup memory would be available when the heap needs to grow again, causing subsequent expansion to fail because there is no backup memory.
2020-09-02  Kernel: Prevent recursive expansion or removing memory while expanding it  (Tom)
The process of expanding memory requires allocations and deallocations on the heap itself. So, while we're trying to expand the heap, don't remove memory just because we might briefly not need it. Also prevent recursive expansion attempts.
2020-08-30  Kernel: Make Heap implementation reusable, and make kmalloc expandable  (Tom)
Add an ExpandableHeap and switch kmalloc to use it, which allows the kmalloc heap to grow as needed. In order to make heap expansion work, we keep around a 1 MiB backup memory region, because creating a region would require space in the same heap. This means the heap will grow as soon as the reported utilization is less than 1 MiB. It will also return memory if an entire subheap is no longer needed, although that is rarely possible.
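A very rough sketch of the backup-region dance: growth consumes the pre-allocated backup so that expanding the heap never has to allocate from the heap being expanded, and the backup is replenished afterwards. All names below are illustrative.

    void ExpandableHeapSketch::expand()
    {
        // The new subheap's storage is memory we already own, so no allocation
        // from the (exhausted) heap is needed at this point.
        add_subheap(m_backup_memory, m_backup_size);
        m_backup_memory = nullptr;
        // Only afterwards, outside the critical path, try to grab ~1 MiB of
        // fresh backup for the next expansion.
        schedule_backup_allocation();
    }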
2020-08-25  Kernel: Optimize SlabAllocator to be lock-free  (Tom)
2020-08-25  Kernel: Fix kmalloc memory corruption  (Tom)
Rather than hardcoding where the kmalloc pool should be, place it at the end of the kernel image instead. This avoids corrupting global variables or other parts of the kernel as it grows. Fixes #3257
2020-08-22  Revert "Kernel: Fix kmalloc memory corruption"  (Andreas Kling)
This reverts commit b306f240a4a3ef4a8f5797734457572e0026cc0c.
2020-08-22  Kernel: Fix kmalloc memory corruption  (Tom)
Rather than hardcoding where the kmalloc pool should be, place it at the end of the kernel image instead. This avoids corrupting global variables or other parts of the kernel as it grows. Fixes #3257
2020-08-16  AK: Rename KB, MB, GB to KiB, MiB, GiB  (Nico Weber)
The SI prefixes "k", "M", "G" mean "10^3", "10^6", "10^9". The IEC prefixes "Ki", "Mi", "Gi" mean "2^10", "2^20", "2^30". Let's use the correct name, at least in code. Only changes the name of the constants, no other behavior change.
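For reference, the renamed constants are plain powers of two; declarations along these lines (the exact AK spelling may differ):

    constexpr size_t KiB = 1024;       // 2^10
    constexpr size_t MiB = 1024 * KiB; // 2^20
    constexpr size_t GiB = 1024 * MiB; // 2^30

    // e.g. the 1 MiB kmalloc backup region mentioned above is 1 * MiB == 1048576 bytes.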
2020-08-14  Kernel: mark kmalloc with attributes  (Muhammad Zahalqa)
2020-08-10  Kernel: Include the 128 byte slab allocator in for_each_allocator  (Tom)