path: root/Kernel/Heap
2023-01-21  Kernel+LibC: Move name length constants to Kernel/API from limits.h  (Andrew Kaster)
Reduce inclusion of limits.h as much as possible at the same time. This does mean that kmalloc.h is now including Kernel/API/POSIX/limits.h instead of LibC/limits.h, but the scope could be limited a lot more. Basically every file in the kernel includes kmalloc.h, and needs the limits.h include for PAGE_SIZE.
2023-01-02  Kernel: Remove unused includes of Kernel/Debug.h  (Ben Wiederhake)
These instances were detected by searching for files that include Kernel/Debug.h, but don't match the regex: \bdbgln_if\(|_DEBUG\b This regex is pessimistic, so there might be more files that don't check for any real *_DEBUG macro. There seem to be no corner cases anyway. In theory, one might use LibCPP to detect things like this automatically, but let's do this one step after another.
2023-01-02  Kernel: Turn lock ranks into template parameters  (kleines Filmröllchen)
This step would ideally not have been necessary (it increases the amount of refactoring and templates required, which in turn increases build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now obtain any lock rank just via the template parameter. This was not previously possible with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this covers the majority of cases and lets us see where proper ranks are still missing.
- The type itself tells us what lock rank a lock has, which aids code readability and (possibly, if gdb cooperates) lock-mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we wanted in the first place (or made use of). Locks randomly changing their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are taken in the right order (with the right lock rank checking implementation), since rank information is fully statically known.
This refactoring exposes even more clearly that Mutex has no lock rank capabilities, which is not fixed here.
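The first point above can be sketched in isolation. This is a hypothetical, simplified stand-in (the LockRank values, the shape of SpinlockProtected, and std::mutex in place of the kernel Spinlock are all assumptions, not the real Kernel API), illustrating how a non-type template parameter makes the rank a fully static property of the type:

```cpp
#include <cassert>
#include <mutex>

// Simplified stand-in for the kernel's lock rank enumeration.
enum class LockRank { None, MemoryManager, Process, Thread };

template<typename T, LockRank Rank>
class SpinlockProtected {
public:
    // The rank is a compile-time constant: it can never change at runtime,
    // and tooling can recover it from the type alone.
    static constexpr LockRank rank = Rank;

    template<typename Callback>
    auto with(Callback callback)
    {
        std::lock_guard guard(m_lock); // stand-in for the kernel Spinlock
        return callback(m_value);
    }

private:
    std::mutex m_lock;
    T m_value {};
};

// A combination like Singleton<SpinlockProtected<...>> now needs no
// instance-creator magic; the rank travels with the type.
using ProtectedCounter = SpinlockProtected<int, LockRank::Process>;
```

Because the rank is part of the type, two locks with different ranks are different types, which is what enables static ordering checks in principle.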
2022-12-28  Kernel: Remove i686 support  (Liav A)
2022-12-21  Kernel: Use AK::is_power_of_two instead of AK::popcount in kmalloc_impl  (Timon Kruiper)
AK::popcount will use floating-point instructions, which in the aarch64 kernel are not allowed, and will result in an exception.
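A minimal sketch of why the replacement works: the power-of-two check only needs the classic bit trick below, which compiles to plain integer ALU instructions, whereas a generic popcount may be lowered to instructions the aarch64 kernel forbids. (This is the standard trick, not a copy of the AK implementation.)

```cpp
#include <cassert>
#include <cstddef>

// A power of two has exactly one set bit, so clearing the lowest set bit
// (value & (value - 1)) must yield zero. Zero is explicitly excluded.
constexpr bool is_power_of_two(size_t value)
{
    return value != 0 && (value & (value - 1)) == 0;
}
```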
2022-12-14  Kernel: Start implementing `kmalloc_aligned` more efficiently  (Tim Schumacher)
This now only requires `size + alignment` bytes while searching for a free memory location. For the actual allocation, the memory area is properly trimmed to the required alignment.
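The search strategy described above can be illustrated with a hypothetical userspace sketch (malloc() stands in for the kernel heap, and the function name is invented): reserve size + alignment bytes, then trim the front so the returned pointer is aligned.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstdlib>

void* allocate_aligned_sketch(size_t size, size_t alignment)
{
    // Over-allocating by the alignment guarantees an aligned address
    // exists somewhere inside the block, with `size` bytes after it.
    void* base = malloc(size + alignment);
    if (!base)
        return nullptr;
    uintptr_t address = reinterpret_cast<uintptr_t>(base);
    // Round up to the next multiple of `alignment` (assumed a power of two).
    uintptr_t aligned = (address + alignment - 1) & ~(alignment - 1);
    // A real implementation would return the trimmed-off head and tail to
    // the heap and remember `base` so the block can be freed later; this
    // sketch deliberately omits that bookkeeping.
    return reinterpret_cast<void*>(aligned);
}
```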
2022-12-07  Kernel: Return nullptr instead of PANICking in KmallocSlabHeap  (Thomas Queiroz)
I dared to return nullptr :^)
2022-12-05  Kernel: Don't memset() allocated memory twice in kcalloc()  (Andreas Kling)
This patch adds a way to ask the allocator to skip its internal scrubbing memset operation. Before this change, kcalloc() would scrub twice: once internally in kmalloc() and then again in kcalloc(). The same mechanism already existed in LibC malloc, and this patch brings it over to the kernel heap allocator as well. This solves one FIXME in kcalloc(). :^)
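A simplified model of that mechanism (the names are stand-ins patterned after LibC malloc's flag, not the exact kernel API, and malloc() stands in for the heap): the caller tells the allocator to skip its internal scrub-memset when it is about to overwrite the memory itself.

```cpp
#include <cstdlib>
#include <cstring>

enum class CallerWillInitializeMemory { No, Yes };

static int g_memset_calls = 0; // instrumentation for this example only

void* kmalloc_sketch(size_t size, CallerWillInitializeMemory caller_inits)
{
    void* ptr = malloc(size);
    if (ptr && caller_inits == CallerWillInitializeMemory::No) {
        memset(ptr, 0xAB, size); // scrub with a marker byte
        ++g_memset_calls;
    }
    return ptr;
}

void* kcalloc_sketch(size_t count, size_t size)
{
    size_t total = count * size; // a real version must check for overflow
    // Skip the scrub: we zero the memory ourselves below, so scrubbing
    // first would be a second, redundant memset over the same bytes.
    void* ptr = kmalloc_sketch(total, CallerWillInitializeMemory::Yes);
    if (ptr) {
        memset(ptr, 0, total);
        ++g_memset_calls;
    }
    return ptr;
}
```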
2022-12-03  Everywhere: Run clang-format  (Linus Groh)
2022-10-20  Kernel/aarch64: Force kmalloc to return 16 byte aligned pointers  (Timon Kruiper)
KUBSAN complained about a misaligned address when trying to construct the Thread class.
2022-10-16  Kernel: Add formal Processor::verify_no_spinlocks_held() API  (Brian Gianforcaro)
In a few places we check `!Processor::in_critical()` to validate that the current processor doesn't hold any kernel spinlocks. Instead, let's give that check a first-class name for readability. I'll also be adding more of these, so I would rather add more usages of a nice API than rely on this implicit, assumed logic.
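A toy model of the relationship the commit relies on (the counter and enter/leave helpers are simplified stand-ins; only the two names the commit mentions are taken from the source): holding a kernel spinlock raises a per-CPU critical-section count, so "no spinlocks held" reduces to `!in_critical()`.

```cpp
#include <cassert>

struct Processor {
    static inline int s_critical_count = 0; // per-CPU in the real kernel

    static bool in_critical() { return s_critical_count > 0; }

    // The first-class name: call sites no longer spell out !in_critical().
    static void verify_no_spinlocks_held()
    {
        // The real kernel would VERIFY(...) and panic on failure.
        assert(!in_critical());
    }

    static void enter_critical() { ++s_critical_count; }
    static void leave_critical() { --s_critical_count; }
};
```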
2022-08-22  Kernel: Stop taking MM lock while using PD/PT quickmaps  (Andreas Kling)
This is no longer required as these quickmaps are now per-CPU. :^)
2022-08-19  Kernel: Require lock rank for Spinlock construction  (kleines Filmröllchen)
All users that relied on the default constructor use a None lock rank for now. This will make it easier to remove LockRank in the future, and to actually annotate the ranks, by searching for None.
2022-08-18  Kernel: Fix inconsistent lock acquisition order in kmalloc  (Andreas Kling)
We always want to grab the page directory lock before the MM lock. This fixes a deadlock I encountered when building DOOM with make -j4.
2022-07-27  Everywhere: Make the codebase more architecture aware  (Undefine)
2022-07-14  Kernel+Userland: Rename prefix of user_physical => physical  (Liav A)
Since there is no supervisor-pages concept, there's no need to refer to physical pages with the "user_physical" prefix anymore.
2022-04-05  Kernel: Move allocate_unbacked_region_anywhere() to MemoryManager  (Andreas Kling)
This didn't need to be in RegionTree, and since it's specific to kernel VM anyway, let's move it to MemoryManager.
2022-04-03  Kernel: Add kmalloc.cpp to aarch64  (James Mintram)
2022-04-03  Kernel: Use intrusive RegionTree solution for kernel regions as well  (Andreas Kling)
This patch ports MemoryManager to RegionTree as well. The biggest difference between this and the userspace code is that kernel regions are owned by extant OwnPtr<Region> objects spread around the kernel, while userspace regions are owned by the AddressSpace itself. For kernelspace, there are a couple of situations where we need to make large VM reservations that never get backed by regular VMObjects (for example the kernel image reservation, or the big kmalloc range.) Since we can't make a VM reservation without a Region object anymore, this patch adds a way to create unbacked Region objects that can be used for this exact purpose. They have no internal VMObject.
2022-04-01  Everywhere: Run clang-format  (Idan Horowitz)
2022-03-15  AK+Kernel: Avoid double memory clearing of HashTable buckets  (Daniel Bertalan)
Since the allocated memory is going to be zeroed immediately anyway, let's avoid redundantly scrubbing it with MALLOC_SCRUB_BYTE just before that. The latest versions of gcc and Clang can automatically do this malloc + memset -> calloc optimization, but I've seen a couple of places where it failed to be done. This commit also adds a naive kcalloc function to the kernel that doesn't (yet) eliminate the redundancy like the userland does.
2022-03-14  Kernel: Try to reuse empty slabheaps before expanding the kmalloc-heap  (Hendiadyoin1)
2022-03-08  Kernel: Implement kmalloc_good_size for the new kmalloc  (Idan Horowitz)
This lets kmalloc-aware data structures like Vector and HashTable use up the extra wasted space we allocate in the slab heaps & heap chunks.
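A sketch of what such a helper can compute, using the slab sizes this log mentions elsewhere (16 through 512 bytes). The function name and the 8-byte chunk granularity above the slab range are illustrative assumptions, not the real implementation:

```cpp
#include <cstddef>

size_t good_size_sketch(size_t requested)
{
    static constexpr size_t slab_sizes[] = { 16, 32, 64, 128, 256, 512 };
    for (size_t slab_size : slab_sizes) {
        if (requested <= slab_size)
            return slab_size; // the whole slab slot is usable, so report it
    }
    // Above the slab range: round up to a hypothetical chunk multiple.
    constexpr size_t chunk = 8;
    return (requested + chunk - 1) / chunk * chunk;
}
```

A Vector asking for 20 bytes of capacity would learn that 32 bytes are actually available, and can grow into the slack for free.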
2022-02-05  Kernel: Put kmalloc heap expansion debug spam behind KMALLOC_DEBUG  (Andreas Kling)
2022-01-24  Kernel: Include slabheaps in kmalloc statistics  (Idan Horowitz)
2022-01-13  Kernel: Skip unnecessary TLB flush when growing kmalloc heap  (Andreas Kling)
When adding entirely new page table mappings, we don't need to flush the TLB since they were not present before.
2022-01-11  Kernel: Allow preventing kmalloc and kfree  (kleines Filmröllchen)
For "destructive" disallowance of allocations throughout the system, Thread gains a member that controls whether allocations are currently allowed or not. kmalloc checks this member on both allocations and deallocations (with the exception of early boot) and panics the kernel if allocations are disabled. This allows critical sections that must not allocate to fail fast, making for easier debugging. PS: My first proper Kernel commit :^)
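A minimal model of the described mechanism. The Thread shape, the current-thread lookup, the scope guard, and the panic flag are all hypothetical stand-ins; the point is that a scope can forbid allocation, and any kmalloc/kfree inside it then fails fast.

```cpp
struct Thread {
    bool allocation_enabled = true; // the member the commit describes
};

static Thread g_current_thread; // stand-in for Thread::current()
static bool g_panicked = false; // stand-in for PANIC()

// Called on both allocation and deallocation (except during early boot).
void check_allocation_allowed()
{
    if (!g_current_thread.allocation_enabled)
        g_panicked = true; // the real kernel would PANIC here
}

// RAII helper marking a scope as allocation-free.
struct NoAllocationScope {
    NoAllocationScope() { g_current_thread.allocation_enabled = false; }
    ~NoAllocationScope() { g_current_thread.allocation_enabled = true; }
};
```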
2021-12-28  Kernel: Propagate overflow errors from Memory::page_round_up  (Guilherme Goncalves)
Fixes #11402.
2021-12-28  Kernel: Remove old comment about kmalloc() being Q&D :^)  (Andreas Kling)
We've finally gotten kmalloc to a point where it feels decent enough to drop this comment. There's still a lot of room for improvement, and we'll continue working on it.
2021-12-28  Kernel: VERIFY that addresses passed to kfree_sized() look valid  (Andreas Kling)
Let's do some simple pointer arithmetic to verify that the address being freed is at least within one of the two valid kmalloc VM ranges.
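The pointer arithmetic in question amounts to a containment check against the two ranges. The base addresses and sizes below are invented for illustration; only the idea of checking the initial kmalloc memory and the expandable kmalloc region comes from the log.

```cpp
#include <cstdint>

struct VirtualRange {
    uintptr_t base;
    uintptr_t size;
    bool contains(uintptr_t address) const
    {
        return address >= base && address < base + size;
    }
};

// Hypothetical placements of the two kmalloc VM ranges.
static constexpr VirtualRange initial_kmalloc_range { 0xc0000000, 0x200000 };
static constexpr VirtualRange expanded_kmalloc_range { 0xd0000000, 0x4000000 };

bool address_looks_valid_for_kfree(uintptr_t address)
{
    // kfree_sized() would VERIFY() this, panicking on a bogus pointer
    // instead of silently corrupting heap metadata.
    return initial_kmalloc_range.contains(address)
        || expanded_kmalloc_range.contains(address);
}
```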
2021-12-28  Kernel: Rename kmalloc_pool_heap => initial_kmalloc_memory  (Andreas Kling)
2021-12-28  Kernel: Remove the kmalloc_eternal heap :^)  (Andreas Kling)
This was a premature optimization from the early days of SerenityOS. The eternal heap was a simple bump pointer allocator over a static byte array. My original idea was to avoid heap fragmentation and improve data locality, but both ideas were rooted in cargo culting, not data. We would reserve 4 MiB at boot and only ended up using ~256 KiB, wasting the rest. This patch replaces all kmalloc_eternal() usage by regular kmalloc().
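For reference, the allocator being removed here is easy to sketch: a bump pointer over a static byte array. Sizes and the rounding granularity are illustrative, not the kernel's.

```cpp
#include <cstddef>

static unsigned char g_eternal_storage[4096];
static size_t g_eternal_used = 0;

void* kmalloc_eternal_sketch(size_t size)
{
    // Round to 8 bytes so returned pointers stay suitably aligned.
    size = (size + 7) & ~size_t(7);
    if (g_eternal_used + size > sizeof(g_eternal_storage))
        return nullptr; // nothing is ever freed, so the pool can only run out
    void* ptr = &g_eternal_storage[g_eternal_used];
    g_eternal_used += size;
    return ptr;
}
```

The appeal is obvious (allocation is a pointer bump, no fragmentation), but as the commit notes, the reserved pool was mostly wasted in practice.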
2021-12-28  Kernel: Use type alias for Kmalloc SubHeap and SlabBlock list types  (Brian Gianforcaro)
We've moved to this pattern for the majority of usages of IntrusiveList in the Kernel, might as well be consistent. :^)
2021-12-26  Kernel: Scrub kmalloc slabs when allocated and deallocated  (Andreas Kling)
This matches the behavior of the generic subheaps (and the old slab allocator implementation.)
2021-12-26  Kernel: Remove old SlabAllocator :^)  (Andreas Kling)
This is no longer useful since kmalloc() does automatic slab allocation without any of the limitations of the old SlabAllocator. :^)
2021-12-26  Kernel: Add FIXME about allocation waste in kmalloc slabheap  (Andreas Kling)
2021-12-26  Kernel: Use slab allocation automagically for small kmalloc() requests  (Andreas Kling)
This patch adds generic slab allocators to kmalloc. In this initial version, the slab sizes are 16, 32, 64, 128, 256 and 512 bytes. Slabheaps are backed by 64 KiB block-aligned blocks with freelists, similar to what we do in LibC malloc and LibJS Heap.
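A toy freelist slab in the spirit described above: one storage block is carved into fixed-size slots that are chained through an intrusive freelist. The class shape is a hypothetical sketch; block and slot counts are illustrative (the commit uses 64 KiB blocks and slab sizes of 16 to 512 bytes).

```cpp
#include <cstddef>

template<size_t SlotSize, size_t SlotCount>
class SlabHeapSketch {
public:
    SlabHeapSketch()
    {
        // Chain every slot onto the freelist; a free slot's first bytes
        // are reused to store the link, so there is no side metadata.
        for (size_t i = 0; i < SlotCount; ++i)
            deallocate(&m_storage[i * SlotSize]);
    }

    void* allocate()
    {
        if (!m_freelist)
            return nullptr; // the real kmalloc would grab another block
        FreeSlot* slot = m_freelist;
        m_freelist = slot->next;
        return slot;
    }

    void deallocate(void* ptr)
    {
        auto* slot = static_cast<FreeSlot*>(ptr);
        slot->next = m_freelist;
        m_freelist = slot;
    }

private:
    struct FreeSlot {
        FreeSlot* next;
    };
    alignas(16) unsigned char m_storage[SlotSize * SlotCount];
    FreeSlot* m_freelist { nullptr };
};
```

Both allocation and deallocation are O(1) pointer swaps, which is why routing small kmalloc() requests through slabs pays off.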
2021-12-26  Kernel: Remove arbitrary alignment requirement from kmalloc_aligned()  (Andreas Kling)
We were not allowing alignments greater than PAGE_SIZE for some reason.
2021-12-26  Kernel: Log purported size of bogus kfree_sized() requests  (Andreas Kling)
2021-12-26  Kernel: Remove kfree(), leaving only kfree_sized() :^)  (Andreas Kling)
There are no more users of the C-style kfree() API in the kernel, so let's get rid of it and enjoy the new world where we always know how much memory we are freeing. :^)
2021-12-26  Kernel: Consolidate kmalloc_aligned() and use kfree_sized() within  (Andreas Kling)
This patch does two things: - Combines kmalloc_aligned() and kmalloc_aligned_cxx(). Templatizing the alignment parameter doesn't seem like a valuable enough optimization to justify having two almost-identical implementations. - Stores the real allocation size of an aligned allocation along with the other alignment metadata, and uses it to call kfree_sized() instead of kfree().
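The metadata layout described in the second bullet can be sketched as follows: stash both the offset back to the real block and the real allocation size just below the aligned pointer, so the free path can recover everything it needs for a sized free. The struct and function names are invented, and malloc()/free() stand in for the underlying heap.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

struct AlignedMetadata {
    size_t real_size;  // full size handed to the underlying allocator
    ptrdiff_t offset;  // distance back to the start of the real block
};

void* aligned_alloc_sketch(size_t size, size_t alignment)
{
    size_t real_size = size + alignment + sizeof(AlignedMetadata);
    void* base = malloc(real_size); // stand-in for the underlying kmalloc
    if (!base)
        return nullptr;
    // Leave room for the metadata below the aligned pointer.
    uintptr_t start = reinterpret_cast<uintptr_t>(base) + sizeof(AlignedMetadata);
    uintptr_t aligned = (start + alignment - 1) & ~(alignment - 1);
    auto* metadata = reinterpret_cast<AlignedMetadata*>(aligned) - 1;
    metadata->real_size = real_size;
    metadata->offset = aligned - reinterpret_cast<uintptr_t>(base);
    return reinterpret_cast<void*>(aligned);
}

void aligned_free_sketch(void* ptr)
{
    auto* metadata = reinterpret_cast<AlignedMetadata*>(ptr) - 1;
    void* base = static_cast<unsigned char*>(ptr) - metadata->offset;
    // metadata->real_size is what makes a kfree_sized()-style call possible.
    free(base);
}
```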
2021-12-26  Kernel: Use kfree_sized() in SlabAllocator  (Andreas Kling)
2021-12-26  Kernel: Assert that a KmallocSubheap fits inside a page  (Idan Horowitz)
Since we allocate the subheap in the first page of the given storage let's assert that the subheap can actually fit in a single page, to prevent the possible future headache of trying to debug the cause of random kernel memory corruption :^)
2021-12-26  Kernel: Make kmalloc expansions scale to incoming allocation request  (Andreas Kling)
This allows kmalloc() to satisfy arbitrary allocation requests instead of being limited to a static subheap expansion size.
2021-12-26  Kernel: Allocate page tables for the entire kmalloc VM range up front  (Andreas Kling)
This avoids getting caught with our pants down when heap expansion fails due to missing page tables. It also avoids a circular dependency on kmalloc() by way of HashMap::set() in MemoryManager::ensure_pte().
2021-12-26  Kernel: Write to debug log when creating new kmalloc subheaps  (Andreas Kling)
2021-12-25  Kernel: Set NX bit on expanded kmalloc memory mappings if supported  (Andreas Kling)
We never want to execute kmalloc memory.
2021-12-25  Kernel: Remove unused function declaration for kmalloc_impl()  (Andreas Kling)
2021-12-25  Kernel: Make kmalloc heap expansion kmalloc-free  (Andreas Kling)
Previously, the heap expansion logic could end up calling kmalloc recursively, which was quite messy and hard to reason about. This patch redesigns heap expansion so that it's kmalloc-free:
- We make a single large virtual range allocation at startup.
- When expanding, we bump allocate VM from that region.
- When expanding, we populate page tables directly ourselves, instead of going via MemoryManager.
This makes heap expansion a great deal simpler. However, do note that it introduces two new flaws that we'll need to deal with eventually:
- The single virtual range allocation is limited to 64 MiB and once exhausted, kmalloc() will fail. (Actually, it will PANIC for now.)
- The kmalloc heap can no longer shrink once expanded. Subheaps stay in place once constructed.
2021-12-09  Kernel: Add missing include to SlabAllocator  (Hendiadyoin1)