|
These two methods are big-lock specific, so verify our mutex's behavior.
|
|
If the regular exclusive and shared lists were empty (which they
always should be for the big lock), we were not unblocking any waiters.
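A minimal sketch of the corrected wake-up path, assuming the big lock
parks its waiters on a dedicated list (all names here are illustrative,
not necessarily the actual ones):

    void Mutex::unblock_waiters()
    {
        if (m_behavior == MutexBehavior::BigLock) {
            // The regular lists are always empty for the big lock, so its
            // dedicated list must be consulted, or no waiter is ever woken.
            for (auto& waiter : m_blocked_threads_list_big_lock)
                waiter.unblock();
            return;
        }
        for (auto& waiter : m_blocked_threads_list_exclusive)
            waiter.unblock();
    }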
|
|
When we lock a mutex, eventually `Thread::block` is invoked, which could
in turn invoke `Process::big_lock().restore_exclusive_lock()`. This
would then try to add the current thread to a different blocked thread
list than the one in use for the original mutex being locked, and
because it's an intrusive list, the thread is removed from its original
list during the `.append()`. When the original mutex eventually
unblocks, we no longer have the thread in the intrusive blocked threads
list and we panic.
Solve this by making the big lock mutex special and giving it its own
blocked thread list. Because the process big lock is temporary and is
being actively removed from e.g. syscalls, it's a matter of time before
we can also remove the fix introduced by this commit.
Fixes issue #9401.
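A sketch of why the shared intrusive node corrupts the first list
(types and member names are illustrative):

    // Both lists link threads through the SAME intrusive node member:
    IntrusiveList<&Thread::m_blocked_threads_list_node> mutex_waiters;
    IntrusiveList<&Thread::m_blocked_threads_list_node> big_lock_waiters;

    mutex_waiters.append(thread);    // thread blocks on the original mutex
    big_lock_waiters.append(thread); // silently unlinks the node from
                                     // mutex_waiters before re-linking it!
    // mutex_waiters no longer contains `thread`, so unblocking it later
    // panics. Fix: a separate node member (and list) reserved for the big
    // lock, so both memberships can coexist.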
|
|
|
|
|
|
This allows you to instantiate SpinlockProtected<T> where T requires
constructor arguments.
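Presumably this is done with a forwarding constructor along these lines
(a simplified sketch, not the exact implementation; FixedBuffer is a
hypothetical type):

    template<typename T>
    class SpinlockProtected {
    public:
        template<typename... Args>
        SpinlockProtected(Args&&... args)
            : m_value(forward<Args>(args)...)
        {
        }
        // ...

    private:
        T m_value;
        Spinlock m_spinlock;
    };

    // Usage with a T that has no default constructor:
    SpinlockProtected<FixedBuffer> buffer(4 * KiB);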
|
|
It's more accurate to say that we're blocking on a mutex, rather than
blocking on a lock. The previous terminology made sense when this code
was using something called Kernel::Lock, but since it was renamed to
Kernel::Mutex, this update brings the language back in sync.
|
|
This commit removes the usage of HashMap in Mutex, thereby making Mutex
allocation-free.
In order to achieve this, several simplifications were made to Mutex,
removing unused code paths and extra VERIFYs (a sketch of the resulting
state follows the list):
* We no longer support 'upgrading' a shared lock holder to an
  exclusive holder when it is the only shared holder and it did not
  unlock the lock before relocking it as exclusive. NOTE: Unlike the
  rest of these changes, this scenario is not VERIFY-able in an
  allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG
  debug flag was added, which lets Mutex allocate in order to detect
  such cases when debugging a deadlock.
* We no longer support checking if a Mutex is locked by the current
  thread when the Mutex was not locked exclusively; the shared version
  of this check was not used anywhere.
* We no longer support force unlocking/relocking a Mutex if the Mutex
  was not locked exclusively; the shared version of these functions
  was not used anywhere.
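A sketch of what the allocation-free bookkeeping can look like after
these changes (field names illustrative):

    class Mutex {
        // ...
        Thread* m_holder { nullptr }; // exclusive holder, if any
        u32 m_shared_holders { 0 };   // a plain counter replaces the HashMap
    #if LOCK_SHARED_UPGRADE_DEBUG
        // Debug-only and allowed to allocate: tracks exactly which threads
        // hold the lock shared, so an illegal shared->exclusive upgrade can
        // be pinpointed while hunting a deadlock.
        HashMap<Thread*, u32> m_shared_holders_map;
    #endif
    };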
|
|
|
|
Clang-tidy pointed out that the `need_to_block = true;` block was
duplicated, and if we collapse these if statements, we should do so
fully.
|
|
As per clang-tidy.
|
|
|
|
Spinlocks are tied to the platform they are built for, which is why they
have been moved into the Arch folder. They are still available via
"Locking/Spinlock.h".
An Aarch64 stub has been created.
|
|
The platform-independent Processor.h file contains the shared processor
code and includes the platform-specific header file.
All references to the Arch/x86/Processor.h file have been replaced with
a reference to Arch/Processor.h.
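The dispatching header presumably follows the usual pattern, roughly
(macro and path spellings per general SerenityOS convention, not
verified against this exact commit):

    // Kernel/Arch/Processor.h (sketch)
    #pragma once

    // ... platform-independent processor declarations ...

    #if ARCH(X86_64) || ARCH(I386)
    #    include <Kernel/Arch/x86/Processor.h>
    #elif ARCH(AARCH64)
    #    include <Kernel/Arch/aarch64/Processor.h>
    #else
    #    error "Unknown architecture"
    #endif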
|
|
There are a few violations with signal handling that I won't be able to
fix until later this week. So let's put lock rank enforcement under a
debug option for now so other folks don't hit these crashes until rank
enforcement is more fleshed out.
|
|
This makes the user-facing type only take the node member pointer, and
lets the compiler figure out the other needed types from that.
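Roughly this shape, via a partial specialization over the member
pointer (a simplified sketch of the template machinery):

    // Callers now write only: IntrusiveList<&Thread::m_list_node> list;
    template<auto member>
    class IntrusiveList;

    // The specialization recovers the class type and the node type from
    // the single non-type template parameter:
    template<class T, class NodeType, NodeType T::* member>
    class IntrusiveList<member> {
        // T (e.g. Thread) and NodeType are deduced; no extra arguments.
    };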
|
|
|
|
|
|
This change adds a static lock hierarchy / ranking to the Kernel with
the goal of reducing / finding deadlocks when running with SMP enabled.
We have seen quite a few lock ordering deadlocks (locks taken in a
different order, on two different code paths). As we properly annotate
locks in the system, these facilities will find such locking protocol
violations automatically.
The `LockRank` enum documents the various locks in the system and their
rank. The implementation guarantees that a thread holding one or more
locks of a lower rank cannot acquire an additional lock with a rank
that is greater than or equal to any of the currently held locks.
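A sketch of the enforcement rule (enum values and member names
illustrative):

    enum class LockRank : int {
        None,
        // ... one entry per documented lock in the system ...
    };

    void Thread::track_lock_acquire(LockRank rank)
    {
        if (rank == LockRank::None)
            return;
        // The new lock's rank must be strictly below every rank already
        // held; acquiring a greater-or-equal rank violates the hierarchy.
        VERIFY(m_lowest_held_rank == LockRank::None || rank < m_lowest_held_rank);
        m_lowest_held_rank = rank;
    }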
|
|
The default template argument is only used in one place, and it
looks like it was probably just an oversight. The rest of the Kernel
code all uses u8 as the type. So let's make that the default and remove
the unused template argument, as there doesn't seem to be a reason to
allow the size to be customizable.
|
|
This will provide better debuggability when the size comparison fails.
|
|
This is the idiomatic way to declare type aliases in modern C++.
Flagged by Sonar Cloud as a "Code Smell", but I happen to agree
with this particular one. :^)
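For example (the alias name here is hypothetical):

    typedef u32 ProcessorId;  // old spelling, flagged as a Code Smell
    using ProcessorId = u32;  // the modern, idiomatic equivalent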
|
|
...to is_owned_by_current_processor(). As Tom pointed out, this is
much more accurate. :^)
|
|
Rename these APIs to make it clearer what they are checking.
|
|
|
|
This should help prevent deadlocks where a thread blocks on a Mutex
while interrupts are disabled, and makes it impossible for the holder of
the Mutex to make forward progress because it cannot be scheduled in.
Hide it behind a new debug macro LOCK_IN_CRITICAL_DEBUG for now, because
Ext2FS takes a series of Mutexes from the page fault handler, which
executes with interrupts disabled.
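A sketch of the guarded check, assuming it sits at the top of the
blocking path (placement and helper name illustrative):

    void Mutex::lock(Mode mode)
    {
    #if LOCK_IN_CRITICAL_DEBUG
        // Blocking with interrupts off can deadlock: the current holder
        // can never be scheduled in to release the Mutex.
        VERIFY_INTERRUPTS_ENABLED();
    #endif
        // ... normal locking path ...
    }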
|
|
|
|
This closes the race window where a context switch could happen between
Processor::current() and the subsequent in_irq() call.
|
|
This matches MutexLocker, and doesn't sound like it's a lock itself.
|
|
|
|
Same treatment as MutexProtected<T>: the inheritance and helper class
are removed; SpinLockProtected now holds a T and a SpinLock.
|
|
|
|
This patch removes the MutexContendedResource<T> helper class,
and MutexProtected<T> no longer inherits from T.
Instead, MutexProtected<T> simply has a T and a Mutex.
The LockedResource<T, LockMode> helper class is made a private nested
class in MutexProtected.
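A simplified sketch of the resulting shape (the with_exclusive helper
name is illustrative):

    template<typename T>
    class MutexProtected {
    public:
        template<typename Callback>
        decltype(auto) with_exclusive(Callback callback)
        {
            MutexLocker locker(m_mutex);
            return callback(m_value);
        }

    private:
        T m_value;     // has-a: no longer inherits from T
        Mutex m_mutex;
    };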
|
|
Let's make it obvious what we're protecting it with.
|
|
|
|
|
|
Enable the LOCK_DEBUG functionality for these new APIs, as it looks
like we want to move the whole system to use this in the not-so-distant
future. :^)
|
|
The LOCK_DEBUG conditional code is pretty ugly for a feature that we
only use rarely. We can remove a significant amount of this code by
utilizing a zero-sized fake type when not building in LOCK_DEBUG mode.
This lets us keep the same API, but just lets the compiler optimize it
away when we don't actually care about the location the caller came
from.
|
|
Introduce a zero-sized type to represent a SourceLocation, for when we
don't want to compile with SourceLocation support.
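A sketch of the pattern (the LockLocation alias is an assumption):

    // An empty stand-in mirroring the surface of AK::SourceLocation:
    struct SourceLocationStub {
        static constexpr SourceLocationStub current() { return {}; }
    };

    #if LOCK_DEBUG
    using LockLocation = SourceLocation;     // real file/line/function info
    #else
    using LockLocation = SourceLocationStub; // empty; calls optimize away
    #endif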
|
|
We have to disable interrupts before capturing the current Processor*,
or we risk storing the wrong one if we get preempted and resume on a
different CPU.
Caught by the VERIFY in RecursiveSpinLock::unlock().
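A sketch of the corrected ordering (simplified; the real code also
saves and restores the interrupt flag):

    void RecursiveSpinLock::lock()
    {
        cli(); // disable interrupts BEFORE capturing the current processor
        auto& processor = Processor::current();
        // Recording &processor as the owner is safe now: with interrupts
        // off we can no longer be preempted and migrate to another CPU.
        // ...
    }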
|
|
Leave interrupts enabled so that we can still process IRQs. Critical
sections should only prevent preemption by another thread.
Co-authored-by: Tom <tomut@yahoo.com>
|
|
By making these functions static we close a window where we could get
preempted after calling Processor::current() and move to another
processor.
Co-authored-by: Tom <tomut@yahoo.com>
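A sketch of the difference (treat the exact static signature as an
assumption):

    // Racy: two steps; preemption in between can mix up two CPUs' state.
    bool was_in_irq = Processor::current().in_irq();

    // Safe: a single static call that reads the per-CPU value in one
    // non-preemptible access (e.g. via the per-CPU segment register).
    bool now_in_irq = Processor::current_in_irq();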
|
|
A protected value is a variable with enforced locking semantics. The
value is protected with a Mutex and can only be accessed through a
Locked object that holds a MutexLocker to said Mutex. Therefore, the
value itself cannot be accessed except through the proper locking
mechanism, which enforces correct locking semantics.
The Locked object has knowledge of shared and exclusive lock types and
will only return const-correct references and pointers. This should
help catch incorrect locking usage where a shared lock is acquired but
the user then modifies the locked value.
This is not a perfect solution because dereferencing the Locked object
returns the value, so the caller could defeat the protected value
semantics once it acquires a lock by keeping a pointer or a reference
to the value around. Then again, this is C++ and we can't protect
against malicious users from within the kernel anyways, but we can
raise the threshold above "didn't pay attention".
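A usage sketch of the described semantics (method names illustrative):

    ProtectedValue<String> hostname;

    {
        auto locked = hostname.lock_shared(); // Locked holds a MutexLocker
        dbgln("hostname: {}", *locked);       // shared => const-only access
    }
    {
        auto locked = hostname.lock_exclusive();
        *locked = "courage";                  // exclusive permits mutation
    }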
|
|
This is some syntactic sugar to use a ContendedResource object with
reference counting. This effectively dissociates merely holding a
reference to an object from actually using the object in a way that
requires locking it against concurrent use.
|
|
|
|
|
|
|
|
|
|
|
|
|