Age | Commit message | Author |
|
This replaces the previous owning address space pointer. This commit
should not change any of the existing functionality, but it lays down
the groundwork needed to let us properly access the region table under
the address space spinlock during page fault handling.
|
|
|
|
We add this basic functionality to the Kernel so Userspace can request a
particular virtual memory mapping to be immutable. This will be useful
later on in the DynamicLoader code.
Annotating a particular Kernel Region as immutable means the following
operations on it are prohibited:
- Changing the region's protection bits
- Unmapping the region
- Annotating the region with other virtual memory flags
- Applying further memory advice to the region
- Changing the region name
- Re-mapping the region
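
A rough sketch of the resulting userspace-visible behavior
(request_immutable() is a hypothetical placeholder, not the actual
interface; everything else is plain POSIX):

    void* p = mmap(nullptr, 4096, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    request_immutable(p, 4096); // hypothetical: ask the Kernel to seal the mapping

    // Every operation listed above is now refused for this region, e.g.:
    mprotect(p, 4096, PROT_READ | PROT_WRITE); // fails: protection change
    munmap(p, 4096);                           // fails: unmapping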
|
|
Now that AddressSpace itself is always SpinlockProtected, we don't
need to also wrap the RegionTree. Whoever has the AddressSpace locked
is free to poke around its tree.
|
|
This allows sys$mprotect() to honor the original readable & writable
flags of the open file description as they were at the point we did the
original sys$mmap().
IIUC, this is what Dr. POSIX wants us to do:
https://pubs.opengroup.org/onlinepubs/9699919799/functions/mprotect.html
Also, remove the bogus and racy "W^X" checking we did against mappings
based on their current inode metadata. If we want to do this, we can do
it properly. For now, it was not only racy, but also did blocking I/O
while holding a spinlock.
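
For illustration, the POSIX behavior from userspace (any readable file
works; /etc/passwd is just an example):

    #include <assert.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/mman.h>

    int main()
    {
        int fd = open("/etc/passwd", O_RDONLY);
        void* p = mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, 0);
        assert(fd >= 0 && p != MAP_FAILED);

        // The open file description was never writable, so upgrading the
        // mapping to PROT_WRITE must fail with EACCES.
        int rc = mprotect(p, 4096, PROT_READ | PROT_WRITE);
        assert(rc == -1 && errno == EACCES);
        return 0;
    }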
|
|
We were holding the MM lock across all of the region unmapping code.
This was previously necessary since the quickmaps used during unmapping
required holding the MM lock.
Now that it's no longer necessary, we can leave the MM lock alone here.
|
|
This makes locking them much more straightforward, and we can remove
a bunch of confusing use of AddressSpace::m_lock. That lock will also
be converted to use of SpinlockProtected in a subsequent patch.
|
|
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
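
A sketch of the external-locking pattern this is heading towards
(Thing is just a stand-in type):

    // Plain AK::RefPtr, protected from the outside:
    SpinlockProtected<RefPtr<Thing>> m_thing;

    m_thing.with([](RefPtr<Thing>& thing) {
        // The pointer is only touched while the spinlock is held.
    });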
|
|
|
|
The MM lock is not required for this, it's just a simple ref-counted
pointer assignment.
|
|
At the point at which we try to map the Region it was already added to
the Process region tree, so we have to make sure to remove it before
freeing it in the mapping failure path; otherwise the tree will contain
a dangling pointer to the freed instance.
|
|
This should avoid some allocations during simple cases of munmap,
mprotect, and msync, where you usually don't have a lot of regions
anyway.
|
|
This is basically unchanged since the beginning of 2020, which is a year
before we had proper ASLR.
Now that we have a proper ASLR implementation, we can turn this down a
bit, as it is no longer our only protection against predictable dynamic
loader addresses, and it actually obstructs the default loading address
on x86_64 quite frequently.
|
|
Let's encapsulate looking up regions so clients don't have to dig into
RegionTree internals.
|
|
This allows clients to remove a region from the tree without reaching
into the RegionTree internals.
|
|
This patch adds RegionTree::get_lock() which exposes the internal lock
inside RegionTree. We can then lock it from the outside when doing
lookups or traversal.
This solution is not very beautiful; we should find a way to protect
this data with SpinlockProtected or something similar. This is a stopgap
patch to try and fix the currently flaky CI.
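
Call sites then look roughly like this (the lookup helper name is
illustrative):

    SpinlockLocker locker(region_tree.get_lock());
    if (auto* region = region_tree.find_region_containing(vaddr)) {
        // Safe to use *region while the RegionTree lock is held.
    }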
|
|
Since there is no separate virtual range allocator anymore, this is
no longer used for anything.
|
|
This lets us skip an O(log n) tree traversal.
|
|
Functions that allocate and/or place a Region now take a parameter
that tells them whether to randomize unspecified addresses.
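
Concretely, a parameter along these lines is threaded through the
allocation and placement paths (name as a sketch):

    enum class RandomizeVirtualAddress {
        No,
        Yes,
    };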
|
|
This patch moves AddressSpace (the per-process memory manager) to using
the new atomic "place" APIs in RegionTree as well, just like we did for
MemoryManager in the previous commit.
This required updating quite a few places where VM allocation and
actually committing a Region object to the AddressSpace were separated
by other code.
All you have to do now is call into AddressSpace once and it'll take
care of everything for you.
|
|
RegionTree holds an IntrusiveRedBlackTree of Region objects and vends a
set of APIs for allocating memory ranges.
It's used by AddressSpace at the moment, and will be used by MM soon.
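
Its rough shape (signatures abbreviated and approximate):

    class RegionTree {
    public:
        ErrorOr<VirtualRange> try_allocate_anywhere(size_t size, size_t alignment = PAGE_SIZE);
        ErrorOr<VirtualRange> try_allocate_specific(VirtualAddress base, size_t size);
    private:
        IntrusiveRedBlackTree<&Region::m_tree_node> m_regions;
        VirtualRange m_total_range;
    };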
|
|
This patch stops using VirtualRangeAllocator in AddressSpace and instead
looks for holes in the region tree when allocating VM space.
There are many benefits:
- VirtualRangeAllocator is non-intrusive and would call kmalloc/kfree
when used. This new solution is allocation-free. This was a source
of unpleasant MM/kmalloc deadlocks.
- We consolidate authority on what the address space looks like in a
single place. Previously, both the range allocator *and* the region
tree were used to determine if an address was valid.
Now there is only the region tree.
- Deallocation of VM when splitting regions is no longer complicated,
as we don't need to keep two separate trees in sync.
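
Conceptually, allocation now works along these lines (a simplified
sketch, not the exact kernel code):

    // Walk the regions in address order and take the first gap that fits.
    Optional<VirtualRange> find_hole(size_t size) const
    {
        VirtualAddress cursor = m_total_range.base();
        for (auto const& region : m_regions) {
            if (region.vaddr() >= cursor.offset(size))
                return VirtualRange { cursor, size }; // gap before this region fits
            cursor = max(cursor, region.range().end());
        }
        if (cursor.offset(size) <= m_total_range.end())
            return VirtualRange { cursor, size };     // room after the last region
        return {};
    }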
|
|
This means we never need to allocate when inserting/removing regions
from the address space.
|
|
|
|
|
|
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#cother-other-default-operation-rules
"The compiler is more likely to get the default semantics right and
you cannot implement these functions better than the compiler."
|
|
This reduces the window of time during which not-yet-fully-initialized
Regions are present inside an AddressSpace's region tree.
|
|
|
|
We don't need to hold these locks when tearing down the region tree.
Release them as soon as unmapping is finished.
|
|
...and deal with the fallout by adding missing includes everywhere.
|
|
|
|
|
|
When mapping or unmapping completely inaccessible memory regions,
we don't need to update the page tables at all. This saves a bunch of
time in some situations, most notably during dynamic linking, where we
make a large VM reservation and immediately throw it away. :^)
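
The gist is a cheap early-out in the map/unmap paths, roughly
(accessor names approximate):

    if (!region.is_readable() && !region.is_writable() && !region.is_executable())
        return; // PROT_NONE region: nothing was ever present in the page tables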
|
|
This optimization was added when region lookup was O(n), before we had
the O(log n) RedBlackTree. Let's remove it to simplify the code, as we
have no evidence that it remains valuable.
|
|
The CPU will not cache TLB entries for non-present mappings, so we
can simply skip flushing the TLB after creating entirely new mappings.
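
In code terms, installing a brand-new mapping can skip the flush,
roughly (the ShouldFlushTLB-style parameter is shown as a sketch):

    region->map(page_directory, ShouldFlushTLB::No); // nothing to invalidate yet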
|
|
Grab the page directory and MM locks once at the start of address space
teardown, then hold onto them across all the region unmapping work.
|
|
When deleting an entire AddressSpace, we don't need to do TLB flushes
at all (since the entire page directory is going away anyway).
We also don't need to deallocate VM ranges one by one, since the entire
VM range allocator will be deleted anyway.
|
|
|
|
Fixes #11402.
|
|
|
|
|
|
Instead of signalling allocation failure with a bool return value
(false), we now use ErrorOr<void> and return ENOMEM as appropriate.
This allows us to use TRY() and MUST() with Vector. :^)
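
The pattern now looks like this, using AK::Vector's try_append():

    ErrorOr<void> collect_values(Vector<int>& numbers)
    {
        TRY(numbers.try_append(1)); // allocation failure propagates as ENOMEM
        TRY(numbers.try_append(2));
        return {};
    }

    // ...or, where failure would be a bug / is unrecoverable:
    MUST(numbers.try_append(3));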
|
|
We now use AK::Error and AK::ErrorOr<T> in both kernel and userspace!
This was a slightly tedious refactoring that took a long time, so it's
not unlikely that some bugs crept in.
Nevertheless, it does pass basic functionality testing, and it's just
real nice to finally see the same pattern in all contexts. :^)
|
|
PVS-Studio flagged this as a potential optimization.
|
|
There are a number of places that don't have an error propagation path
right now, so I've added FIXMEs about that.
|
|
...and use TRY() at the call sites to propagate errors. :^)
|
|
This allows us to use TRY() in a few places.
|
|
|
|
This achieves two things:
- The allocator can report more specific errors
- Callers can (and now do) use TRY() :^)
|
|
- Return KResultOr<T> in places
- Propagate errors
- Use TRY()
|