Age | Commit message | Author |
|
This method will be used to ease working with the structure when we
need to do virtual pointer arithmetic.
|
|
The file does not contain any architecture-specific code, thus it can
be moved to the Kernel/Arch directory.
|
|
|
|
If we unregister from the RegionTree before unmapping, there's a race
where a new region can get inserted at the same address that we're about
to unmap. If this happens, ~Region() will then unmap the newly inserted
region, which now finds itself with cleared-out page table entries.
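A minimal sketch of the corrected teardown order (the helper names here are illustrative, not the exact kernel API):

    Region::~Region()
    {
        // Unmap our page table entries first, while this Region still
        // occupies its slot in the RegionTree, so no new region can be
        // placed at the same address yet.
        if (is_mapped())
            unmap();
        // Only now unregister from the RegionTree, making the range
        // available for reuse.
        unregister_from_region_tree();
    }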
|
|
Let's not have a way to grab at the RegionTree from outside of MM.
|
|
This didn't need to be in RegionTree, and since it's specific to kernel
VM anyway, let's move it to MemoryManager.
|
|
This had no business being in RegionTree, since RegionTree doesn't track
identity-mapped regions anyway. (We allow *any* address to be identity
mapped, not just the ones that are part of the RegionTree's range.)
|
|
Let's encapsulate looking up regions so clients don't have to dig into
RegionTree internals.
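For illustration, such an encapsulated lookup could look roughly like this (name and signature are assumptions):

    Region* RegionTree::find_region_containing(VirtualAddress address)
    {
        SpinlockLocker locker(m_lock);
        // Regions are keyed by base address; the candidate is the region
        // with the highest base that does not exceed `address`.
        auto* candidate = m_regions.find_largest_not_above(address.get());
        if (!candidate || !candidate->contains(address))
            return nullptr;
        return candidate;
    }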
|
|
This allows clients to remove a region from the tree without reaching
into the RegionTree internals.
|
|
This patch adds RegionTree::get_lock() which exposes the internal lock
inside RegionTree. We can then lock it from the outside when doing
lookups or traversal.
This solution is not very beautiful; we should find a way to protect
this data with SpinlockProtected or something similar. This is a
stopgap patch to try and fix the currently flaky CI.
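Callers would then do something along these lines (a stopgap sketch, assuming a regions() accessor):

    // Hold the tree's own lock for the whole traversal so it cannot
    // change underneath us.
    SpinlockLocker locker(region_tree.get_lock());
    for (auto& region : region_tree.regions()) {
        // ... inspect `region` while the tree is stable ...
    }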
|
|
Since there is no separate virtual range allocator anymore, this is
no longer used for anything.
|
|
|
|
|
|
This lets us skip an O(log n) tree traversal.
|
|
Since find_largest_not_above returns the region with the highest base
address below the end of the requested range, no region after it can
intersect with that range.
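So a single predecessor query is enough to answer the intersection question; a sketch (helper names assumed):

    bool RegionTree::has_intersecting_region(VirtualRange const& range)
    {
        // Any region that starts at or after range.end() cannot intersect,
        // so the only candidate worth checking is the one with the highest
        // base address below range.end(). Earlier regions end before the
        // candidate begins and therefore cannot reach into the range either.
        auto* candidate = m_regions.find_largest_not_above(range.end().get() - 1);
        return candidate && candidate->range().intersects(range);
    }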
|
|
Thanks to Idan for spotting this! :^)
|
|
Thanks to Idan for spotting this! :^)
|
|
|
|
|
|
It's a bit nicer if functions that allocate ranges have some kind of
name that includes both "allocate" and "range". :^)
|
|
|
|
Functions that allocate and/or place a Region now take a parameter
that tells them whether to randomize unspecified addresses.
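One way such a parameter could look (the enum name is an assumption):

    enum class RandomizeVirtualAddress {
        No,
        Yes,
    };

    // Callers now state their intent explicitly:
    TRY(region_tree.place_anywhere(*region, RandomizeVirtualAddress::Yes, size, alignment));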
|
|
...and remove the last remaining client of the API. It's no longer
possible to ask the RegionTree for a VM range. You can only ask it to
place your Region somewhere in available space.
|
|
This patch moves AddressSpace (the per-process memory manager) to using
the new atomic "place" APIs in RegionTree as well, just like we did for
MemoryManager in the previous commit.
This required updating quite a few places where VM allocation and
actually committing a Region object to the AddressSpace were separated
by other code.
All you have to do now is call into AddressSpace once and it'll take
care of everything for you.
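From a caller's point of view, the flow collapses into a single call along these lines (name and parameters are illustrative):

    // One call reserves address space, constructs the Region and
    // registers it with the AddressSpace, all under the same lock.
    auto* region = TRY(address_space.allocate_region(
        RandomizeVirtualAddress::Yes, requested_address, size, alignment,
        "hypothetical mapping"sv, prot));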
|
|
Instead of first allocating the VM range, and then inserting a region
with that range into the MM region tree, we now do both things in a
single atomic operation:
- RegionTree::place_anywhere(Region&, size, alignment)
- RegionTree::place_specifically(Region&, address, size)
To reduce the number of things we do while locking the region tree,
we also require callers to provide a constructed Region object.
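A rough usage sketch of the two operations (error handling and helper shape are assumptions):

    // The caller constructs the Region up front, so the tree lock is only
    // held for the placement itself.
    ErrorOr<void> place_region(Memory::RegionTree& tree, Memory::Region& region,
        Optional<VirtualAddress> requested_address, size_t size, size_t alignment)
    {
        if (requested_address.has_value())
            return tree.place_specifically(region, requested_address.value(), size);
        return tree.place_anywhere(region, size, alignment);
    }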
|
|
This has been replaced with the allocation-free RegionTree. :^)
|
|
This patch ports MemoryManager to RegionTree as well. The biggest
difference between this and the userspace code is that kernel regions
are owned by extant OwnPtr<Region> objects spread around the kernel,
while userspace regions are owned by the AddressSpace itself.
For kernelspace, there are a couple of situations where we need to make
large VM reservations that never get backed by regular VMObjects
(for example the kernel image reservation, or the big kmalloc range.)
Since we can't make a VM reservation without a Region object anymore,
this patch adds a way to create unbacked Region objects that can be
used for this exact purpose. They have no internal VMObject.
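A sketch of such a reservation (the factory name follows the description above; the constants are placeholders):

    // Reserve a large chunk of kernel VM that will never be backed by a
    // VMObject; the Region exists only so the RegionTree knows the range
    // is taken.
    auto reservation = TRY(Memory::Region::create_unbacked());
    TRY(region_tree.place_specifically(*reservation, kernel_image_base, kernel_image_size));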
|
|
RegionTree holds an IntrusiveRedBlackTree of Region objects and vends a
set of APIs for allocating memory ranges.
It's used by AddressSpace at the moment, and will be used by MM soon.
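In outline (member names are illustrative):

    class RegionTree {
    public:
        // ... range allocation / placement / lookup APIs ...
    private:
        // Regions link themselves into the tree, keyed by base address,
        // so insertion and removal never allocate.
        IntrusiveRedBlackTree<&Region::m_tree_node> m_regions;
        VirtualRange m_total_range; // the address range this tree manages
        Spinlock m_lock;
    };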
|
|
This patch stops using VirtualRangeAllocator in AddressSpace and instead
looks for holes in the region tree when allocating VM space.
There are many benefits:
- VirtualRangeAllocator is non-intrusive and would call kmalloc/kfree
when used. This new solution is allocation-free. This was a source
of unpleasant MM/kmalloc deadlocks.
- We consolidate authority on what the address space looks like in a
single place. Previously, both the range allocator *and* the region
tree were used to determine if an address was valid.
Now there is only the region tree.
- Deallocation of VM when splitting regions is no longer complicated,
as we don't need to keep two separate trees in sync.
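The hole-scanning approach can be sketched like this (simplified; alignment, guard gaps and randomization are left out):

    // Walk the sorted, intrusive region tree looking for a gap of at
    // least `size` bytes between neighbouring regions.
    Optional<VirtualAddress> find_hole(auto const& regions, VirtualRange total_range, size_t size)
    {
        VirtualAddress cursor = total_range.base();
        for (auto const& region : regions) {
            if (region.vaddr().get() - cursor.get() >= size)
                return cursor; // the gap before this region is big enough
            cursor = region.range().end();
        }
        if (total_range.end().get() - cursor.get() >= size)
            return cursor; // the tail gap after the last region
        return {};
    }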
|
|
This means we never need to allocate when inserting/removing regions
from the address space.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Now that we reclaim the memory range that is created by KASLR before
the start of the kernel image, there's no need to be conservative with
the KASLR offset.
|
|
This ensures we don't just waste the memory range between the default
base load address and the actual load address that was shifted by the
KASLR offset.
|
|
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#cother-other-default-operation-rules
"The compiler is more likely to get the default semantics right and
you cannot implement these functions better than the compiler."
|
|
Allocating a WeakPtr can fail, so this lets us properly propagate said
failure.
|
|
If we crashed in the middle of mapping in Regions, some of the regions
may not have a page directory yet, which results in a crash when
Region::remap() is called.
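A guard along these lines avoids the crash (a sketch; the real fix may place the check in the caller instead):

    void Region::remap()
    {
        // A Region that was never mapped has no page directory yet, so
        // there is nothing to remap.
        if (!m_page_directory)
            return;
        // ... re-apply protection bits to the existing page table entries ...
    }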
|
|
This reduces the window of time during which not-fully-initialized
Regions are present inside an AddressSpace's region tree.
|
|
|
|
|
|
|
|
If someone specifically wants contiguous memory in the low-physical-
address-for-DMA range ("super pages"), they can use the
allocate_dma_buffer_pages() helper.
|
|
Function-local `static constexpr` variables can be `constexpr`. This
can reduce memory consumption and binary size, and enable additional
compiler optimizations.
These changes result in a stripped x86_64 kernel binary size reduction
of 592 bytes.
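A generic example of the shape of the change (not taken from the kernel sources):

    size_t clamp_entry_count(size_t n)
    {
        // Before: static constexpr size_t max_entries = 32;
        // `static` gives the constant static storage duration, which can
        // force it to be emitted into the binary.
        constexpr size_t max_entries = 32; // plain constexpr can be folded away
        return n < max_entries ? n : max_entries;
    }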
|
|
As make<T> is infallible, it really should not be used anywhere in the
Kernel. Instead, replace it with fallible `new (nothrow)` calls that
will eventually be error-propagated.
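One possible shape of the replacement (the type and arguments are placeholders; treat the helper spelling as an assumption):

    // Before: make<Thing>(args) cannot report allocation failure.
    // After: failure becomes an Error that callers can propagate.
    auto thing = TRY(adopt_nonnull_own_or_enomem(new (nothrow) Thing(args)));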
|
|
This reverts commit 1c5ffaae41be4e67f81b46c3bfdce7f54a1dc8e0.
This broke shared memory as used by OutOfProcessWebView. Let's do
a revert until we can figure out what went wrong.
|
|
When a page fault led to the mapping of a new physical page, we were
updating the page tables for *every* region that shared the same
underlying VMObject.
Let's just not do that, avoiding a bunch of unnecessary page table
updates and TLB invalidations.
|