This ensures that both mutable and immutable access to the protected
data of a process is serialized.
Note that there may still be multiple TOCTOU issues around this, as we
have a bunch of convenience accessors that make it easy to introduce
them. We'll need to audit those as well.
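A minimal sketch of the accessor pair this implies, assuming a
SpinlockProtected-style pattern inside Process (all names here are
illustrative):

    // Both accessors funnel through the same spinlock, so concurrent
    // readers and writers of the protected data are serialized.
    template<typename Callback>
    void with_protected_data(Callback callback) const
    {
        SpinlockLocker locker(m_protected_data_lock);
        callback(m_protected_values);
    }

    template<typename Callback>
    void with_mutable_protected_data(Callback callback)
    {
        SpinlockLocker locker(m_protected_data_lock);
        // (the real code would also toggle write-protection on the
        // underlying pages here)
        callback(m_protected_values);
    }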
|
|
By protecting all the RefPtr<Custody> objects that may be accessed from
multiple threads at the same time (with spinlocks), we remove the need
for using LockRefPtr<Custody> (which is basically a RefPtr with a
built-in spinlock.)
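For illustration, a hedged sketch of the resulting shape, using
m_current_directory as an assumed member name:

    // Before: LockRefPtr<Custody> m_current_directory;
    // After: a plain RefPtr guarded by an explicit spinlock wrapper.
    SpinlockProtected<RefPtr<Custody>> m_current_directory;

    RefPtr<Custody> Process::current_directory()
    {
        return m_current_directory.with([](auto& custody) { return custody; });
    }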
|
|
Instead, allocate while holding the lock on the m_fds struct. This is
safer with respect to mutating m_fds, since we don't use the big
process lock in this syscall.
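A rough sketch of the resulting shape, assuming a MutexProtected-style
with_exclusive() accessor on the fd table (names are illustrative):

    return m_fds.with_exclusive([&](auto& fds) -> ErrorOr<FlatPtr> {
        // Allocate the descriptor only once we hold the table lock, so
        // the slot cannot be raced away between allocation and use.
        auto fd_allocation = TRY(fds.allocate());
        fds[fd_allocation.fd].set(move(description), fd_flags);
        return fd_allocation.fd;
    });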
|
|
I missed one instance of these. Thanks Anthony Iacono for spotting it!
|
|
This patch adds the NGROUPS_MAX constant and enforces it in
sys$setgroups() to ensure that no process has more than 32 supplementary
group IDs.
The number doesn't mean anything in particular, just had to pick a
number. Perhaps one day we'll have a reason to change it.
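A hedged sketch of the enforcement; the header location and the EINVAL
error code are assumptions:

    // Kernel/API/POSIX/limits.h (assumed location)
    #define NGROUPS_MAX 32

    // sys$setgroups(): reject oversized supplementary group lists.
    if (count > NGROUPS_MAX)
        return EINVAL;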
|
|
Now that these operate on the neatly atomic and immutable Credentials
object, they should no longer require the process big lock for
synchronization. :^)
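As an illustration of why the big lock is no longer needed, a sketch
of one such syscall (the assertion macro name is an assumption):

    ErrorOr<FlatPtr> Process::sys$getuid()
    {
        VERIFY_NO_PROCESS_BIG_LOCK(this);
        // Reading from an immutable, ref-counted Credentials snapshot
        // needs no further synchronization.
        return credentials()->uid().value();
    }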
|
|
This patch adds a new object to hold a Process's user credentials:
- UID, EUID, SUID
- GID, EGID, SGID, extra GIDs
Credentials are immutable and child processes initially inherit the
Credentials object from their parent.
Whenever a process changes one or more of its user/group IDs, a new
Credentials object is constructed.
Any code that wants to inspect and act on a set of credentials can now
do so without worrying about data races.
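A minimal sketch of the object's shape (member list abridged, base
class and exact names assumed):

    class Credentials final : public AtomicRefCounted<Credentials> {
    public:
        static ErrorOr<NonnullRefPtr<Credentials>> create(
            UserID uid, GroupID gid, UserID euid, GroupID egid,
            UserID suid, GroupID sgid, Span<GroupID const> extra_gids);

        UserID uid() const { return m_uid; }
        UserID euid() const { return m_euid; }
        GroupID gid() const { return m_gid; }
        GroupID egid() const { return m_egid; }
        Span<GroupID const> extra_gids() const { return m_extra_gids.span(); }

    private:
        // All members are set in the constructor and never change, so a
        // Credentials object can be shared across threads freely.
        UserID m_uid, m_euid, m_suid;
        GroupID m_gid, m_egid, m_sgid;
        FixedArray<GroupID> m_extra_gids;
    };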
|
|
|
|
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
|
|
Instead of having two separate implementations of AK::RefCounted, one
for userspace and one for kernelspace, there is now RefCounted and
AtomicRefCounted.
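A minimal sketch of the atomic variant, assuming AK::Atomic semantics
comparable to std::atomic:

    class AtomicRefCountedBase {
    public:
        void ref() const
        {
            // Relaxed is enough for increments; the caller already owns
            // a reference.
            m_ref_count.fetch_add(1, AK::memory_order_relaxed);
        }

        // Returns the new reference count.
        u32 deref_base() const
        {
            // Acquire-release so everything done to the object happens-
            // before its destruction on the last-unref thread.
            return m_ref_count.fetch_sub(1, AK::memory_order_acq_rel) - 1;
        }

    private:
        mutable Atomic<u32> m_ref_count { 1 };
    };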
|
|
|
|
All users that relied on the default constructor use a None lock rank
for now. This will make it easier to remove LockRank in the future, and
to actually annotate the ranks, by searching for None.
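A small sketch of what a call site looks like after this change (rank
names other than None are illustrative):

    // Previously: Spinlock m_lock;  (default-constructed, unranked)
    // Now the rank is always spelled out, so unranked locks are easy
    // to grep for:
    Spinlock m_lock { LockRank::None };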
|
|
Unlike Clang, GCC does not support 8-byte atomics on i686 with the
-mno-80387 flag set, so until that is fixed, implement a minimal set of
atomics that are currently required.
|
|
|
|
|
|
Signal dispatch is already protected by the global scheduler lock, but
in some cases we also took Thread::m_lock for some reason. This led to
a number of different deadlocks that started showing up with 4+ CPUs
attached to the system.
As a first step towards solving this, simply don't take the thread lock
and let the scheduler lock cover it.
Eventually, we should work in the other direction and break the
scheduler lock into much finer-grained locks, but let's get out of the
deadlock swamp first.
|
|
|
|
This is not necessary, and is a leftover from before Thread started
using the ListedRefCounted pattern to be safely removed from lists on
the last call to unref().
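For context, a rough sketch of the ListedRefCounted idea referenced
above (helper names are assumptions):

    void Thread::unref() const
    {
        bool was_last_ref = false;
        Thread::all_instances().with([&](auto& list) {
            // The last deref happens while the global list is locked,
            // so no other thread can find this object in the list and
            // re-ref it while it is being removed.
            was_last_ref = (deref_base() == 0);
            if (was_last_ref)
                list.remove(const_cast<Thread&>(*this));
        });
        if (was_last_ref)
            delete this;
    }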
|
|
We can use simple atomic variables with relaxed ordering for this,
and avoid locking altogether.
|
|
We only need to hold the VMObject lock while inspecting and/or updating
the physical page array in the VMObject.
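A sketch of the narrowed locking scope (member names assumed):

    RefPtr<PhysicalPage> page;
    {
        SpinlockLocker locker(vmobject.m_lock);
        page = vmobject.physical_pages()[page_index];
    }
    // The strong RefPtr keeps the page alive; everything below runs
    // without holding the VMObject lock.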
|
|
As soon as we've saved CR2 (the faulting address), we can re-enable
interrupt processing. This should make the kernel more responsive under
heavy fault loads.
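Roughly the ordering this implies in the fault handler (the helper
names around the CR2 read are assumptions):

    void page_fault_handler(TrapFrame* trap)
    {
        // CR2 holds the faulting address and would be clobbered by any
        // nested page fault, so read it before re-enabling interrupts.
        auto fault_address = read_cr2();
        sti();
        handle_page_fault(trap, fault_address);
    }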
|
|
This fixes an issue where a sharing process would map the "lazy
committed page" early and then get stuck with that page even after
it had been replaced in the VMObject by a page fault.
Regressed in 27c1135d307efde8d9baef2affb26be568d50263, which made it
happen every time with the backing bitmaps used for WebContent.
|
|
Region::physical_page() now takes the VMObject lock while accessing the
physical pages array, and returns a RefPtr<PhysicalPage>. This ensures
that the array access is safe.
Region::physical_page_slot() now VERIFY()'s that the VMObject lock is
held by the caller. Since we're returning a reference to the physical
page slot in the VMObject's physical page array, this is the best we
can do here.
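A hedged sketch of the two accessors as described (the lock-assertion
helper is an assumption):

    RefPtr<PhysicalPage> Region::physical_page(size_t index) const
    {
        SpinlockLocker locker(vmobject().m_lock);
        return vmobject().physical_pages()[first_page_index() + index];
    }

    RefPtr<PhysicalPage>& Region::physical_page_slot(size_t index)
    {
        // Returning a reference into the array is only safe if the
        // caller is already holding the VMObject lock.
        VERIFY(vmobject().m_lock.is_locked_by_current_processor());
        return vmobject().physical_pages()[first_page_index() + index];
    }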
|
|
Note that SMP is still off by default, but this basically removes the
weird "SMP on but threads don't get scheduled" behavior we had by
default. If you pass "smp=on" to the kernel, you now get SMP. :^)
|
|
We really only need the VMObject lock when accessing the physical pages
array, so once we have a strong pointer to the physical page we want to
remap, we can give up the VMObject lock.
This fixes a deadlock I encountered while building DOOM on SMP.
|
|
|
|
|
|
The MM lock is not required for this; it's just a simple ref-counted
pointer assignment.
|
|
We always want to grab the page directory lock before the MM lock.
This fixes a deadlock I encountered when building DOOM with make -j4.
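In other words, call sites should consistently look like this ordering
sketch:

    // Same order everywhere: page directory lock first, then MM lock,
    // so two paths can never deadlock by acquiring them in opposite
    // orders.
    SpinlockLocker pd_locker(page_directory.get_lock());
    SpinlockLocker mm_locker(s_mm_lock);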
|
|
When handling a page fault, we only need to remap the faulting region in
the current process. There's no need to traverse *all* regions that map
the same VMObject and remap them cross-process as well.
Those other regions will get remapped lazily by their own page fault
handlers eventually. Or maybe they won't and we avoided some work. :^)
|
|
- Instead of holding the VMObject lock across physical page allocation
and quick-map + copy, we now only hold it when updating the VMObject's
physical page slot.
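A hedged sketch of that ordering for a copy-on-write fault (helper
names are assumptions):

    // 1. Allocate and fill the new page without holding the VMObject
    //    lock; this part may be slow (allocation, quick-map + copy).
    auto new_page = TRY(MM.allocate_physical_page());
    copy_physical_page_contents(old_page, new_page);

    // 2. Take the lock only for the brief slot update.
    {
        SpinlockLocker locker(vmobject.m_lock);
        vmobject.physical_pages()[page_index] = move(new_page);
    }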
|
|
Make sure we reject the unveil attempt with EPERM if the veil was locked
by another thread while we were parsing arguments (and not holding the
veil state spinlock.)
Thanks Brian for spotting this! :^)
Amendment to #14907.
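A sketch of the re-check under the veil spinlock (names other than
EPERM are assumptions):

    return m_unveil_data.with([&](auto& unveil_data) -> ErrorOr<FlatPtr> {
        // The veil may have been locked by another thread while we were
        // parsing the arguments outside the lock, so re-check here.
        if (unveil_data.state == VeilState::Locked)
            return EPERM;
        // ... apply the new unveil node ...
        return 0;
    });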
|
|
To ensure that we stay on the same CPU that acquired the spinlock until
we're completely unlocked, we now leave the critical section *before*
re-enabling interrupts.
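Concretely, the unlock path now looks roughly like this sketch (the
interrupt-flag handling is simplified):

    void Spinlock::unlock(u32 prev_flags)
    {
        VERIFY(is_locked());
        m_lock.store(0, AK::memory_order_release);

        // Leave the critical section first, *then* restore the
        // interrupt flag, so we cannot be migrated to another CPU while
        // still logically inside the lock.
        Processor::leave_critical();
        if (prev_flags & 0x200)
            sti();
        else
            cli();
    }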
|
|
Protecting it with a mutex meant that anyone unref()'ing a Custody
might need to block on said mutex.
|
|
|
|
We want to grab g_scheduler_lock *before* Thread::m_block_lock.
This appears to have fixed a deadlock that I encountered while building
DOOM with make -j2.
|
|
Path resolution may do blocking I/O so we must not do it while holding
a spinlock. There are tons of problems like this throughout the kernel
and we need to find and fix all of them.
|
|
This fixes an issue where we could get preempted after acquiring the
current Processor pointer, but before calling methods on it.
I strongly suspect this was the cause of "Processor::current() == this"
assertion failures.
|
|
I don't think this code needs to be ALWAYS_INLINE, and moving it out
of line will make backtraces nicer.
|
|
The unveil syscall uses the UnveilData struct which is already
SpinlockProtected, so there is no need to take the big lock.
|
|
This matches our general macro use, and specifically other verification
macros like VERIFY(), VERIFY_NOT_REACHED(), VERIFY_INTERRUPTS_ENABLED(),
and VERIFY_INTERRUPTS_DISABLED().
|
|
This system call mainly accesses the file descriptor table, and this is
already guarded by MutexProtected.
|
|
Instead of locking it twice, we now frontload all the work that doesn't
touch the fd table, and then only lock it towards the end of the
syscall.
The benefit here is simplicity. The downside is that we do a bit of
unnecessary work in the EMFILE error case, but we don't need to optimize
that case anyway.
|
|
If the final copy_to_user() call fails when writing the file descriptors
to the output array, we have to make sure the file descriptors don't
remain in the process file descriptor table. Otherwise they are
basically leaked, as userspace is not aware of them.
This matches the behavior of our sys$socketpair() implementation.
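A hedged sketch of the cleanup path (names are assumptions):

    // Hand the two new descriptors to userspace; if that fails, unwind
    // them so they don't linger in the fd table without userspace ever
    // learning about them.
    if (auto result = copy_to_user(userspace_fds, fds, sizeof(fds)); result.is_error()) {
        m_fds.with_exclusive([&](auto& table) {
            table[fds[0]] = {};
            table[fds[1]] = {};
        });
        return result.release_error();
    }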
|
|
This system call mainly accesses the file descriptor table, and this is
already guarded by MutexProtected.
|
|
We don't need to explicitly check for EMFILE conditions before doing
anything in sys$pipe(). The fd allocation code will take care of it
for us anyway.
|
|
We now make sure that SharedInodeVMObject::sync takes the inode lock
before calling the Inode's virtual write_bytes() method directly, to
avoid hitting the assertion on an unlocked inode lock that regressed
recently. This is not a complete fix: requiring every call path to take
the lock before calling write_bytes() is error-prone and can lead to
hard-to-find bugs, so this commit only addresses the problem
temporarily.
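Roughly the shape of the temporary fix (the lock member name and its
accessibility here are assumptions):

    // SharedInodeVMObject::sync(): lock the inode ourselves, since we
    // call the virtual write_bytes() directly rather than through a
    // wrapper that would normally take the lock for us.
    MutexLocker locker(inode.m_inode_lock);
    TRY(inode.write_bytes(page_index * PAGE_SIZE, PAGE_SIZE, buffer, nullptr));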
|
|
Previously, when starved for pages, *all* clean file-backed memory
would be released, which is quite excessive.
This patch instead releases just 1 page, since only 1 page is needed
to satisfy the request to `allocate_physical_page()`.
|
|
Previously, we could only release *all* clean pages.
This patch makes it possible to release a specific number of clean
pages. If the number of pages requested exceeds the number of clean
pages available, all clean pages will be released.
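A hedged sketch of what the count-limited release might look like
(names are assumptions):

    size_t InodeVMObject::try_release_clean_pages(size_t page_amount)
    {
        SpinlockLocker locker(m_lock);
        size_t released = 0;
        for (size_t i = 0; i < page_count() && released < page_amount; ++i) {
            // A page is "clean" if it is resident and not marked dirty.
            if (m_physical_pages[i] && !m_dirty_pages.get(i)) {
                m_physical_pages[i] = nullptr;
                ++released;
            }
        }
        return released;
    }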
|
|
This knocks 70 MiB off our idle footprint (from 350 MiB to 280 MiB.)
|