path: root/Kernel/Locking
Age  Commit message  Author
2023-01-02  Kernel: Turn lock ranks into template parameters  (kleines Filmröllchen)
This step would ideally not have been necessary (it increases the amount of refactoring and templates required, which in turn increases build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now obtain any lock rank just via the template parameter. It was not previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we wanted in the first place (or made use of). Locks randomly changing their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are taken in the right order (with the right lock rank checking implementation), as rank information is fully statically known.
This refactoring further exposes the fact that Mutex has no lock rank capabilities, which is not fixed here.
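A minimal sketch of the resulting shape (toy code, not the kernel's actual definitions; the type and member names here are illustrative):

    // Toy model: the rank is part of the SpinlockProtected type, so it is
    // statically known, mandatory, and visible to wrappers such as Singleton.
    enum class LockRank { None, Thread, Process };

    template<typename T, LockRank Rank>
    class SpinlockProtected {
    public:
        template<typename Callback>
        decltype(auto) with(Callback callback)
        {
            // A real implementation would acquire a Spinlock of rank `Rank` here.
            return callback(m_value);
        }

    private:
        T m_value {};
    };

    // The rank can no longer be omitted or chosen at runtime:
    SpinlockProtected<int, LockRank::Process> g_ranked_counter;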
2022-12-03  Everywhere: Run clang-format  (Linus Groh)
2022-08-26  Kernel: Move Spinlock functions back to arch independent Locking folder  (Timon Kruiper)
Now that the Spinlock code no longer depends on architecture-specific code, we can move it back to the Locking folder. This also means that the Spinlock implementation is now used for the aarch64 kernel.
2022-08-26  Kernel: Use InterruptsState in Spinlock code  (Timon Kruiper)
This commit updates the lock function of Spinlock and RecursiveSpinlock to return the InterruptsState of the processor instead of the raw processor flags. The unlock functions would only look at the interrupt flag of the processor flags, so we now use the InterruptsState enum to clarify the intent, and so that we can use the same Spinlock code for the aarch64 build. To not break the build, all call sites are updated as well.
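At a call site the change looks roughly like this (a sketch; the exact signatures in the tree may differ):

    // Before: lock() returned the raw processor flags, and unlock() only ever
    // looked at the interrupt flag inside them.
    //   auto prev_flags = spinlock.lock();
    //   spinlock.unlock(prev_flags);

    // After: the interrupt state is an explicit, architecture-neutral enum.
    InterruptsState previous_interrupts_state = spinlock.lock();
    // ... critical section ...
    spinlock.unlock(previous_interrupts_state);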
2022-08-20  Kernel: Make self-contained locking smart pointers their own classes  (Andreas Kling)
Until now, our kernel has reimplemented a number of AK classes to provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of using external locking.
2022-08-19  Kernel: Require lock rank for Spinlock construction  (kleines Filmröllchen)
All users that relied on the default constructor use a None lock rank for now. This will make it easier, in the future, to remove LockRank and actually annotate the ranks by searching for None.
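In practice this turns declarations like the following (the member names are made up for illustration) from optional-rank into explicit-rank:

    // Before: the default constructor silently produced an unranked lock.
    //   Spinlock m_lock;

    // After: every construction names a rank; LockRank::None marks the locks
    // that still need a proper annotation.
    Spinlock m_lock { LockRank::None };
    RecursiveSpinlock m_other_lock { LockRank::None };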
2022-07-19  Kernel: Don't check that interrupts are enabled during early boot  (kleines Filmröllchen)
The interrupts-enabled check in the kernel Mutex is there so that we don't lock mutexes within a spinlock, because mutexes re-enable interrupts, and that will mess up the spinlock in more ways than one if the thread moves processors. This check is guarded behind a debug flag because it's too hard to fix all the problems at once, but we regressed and weren't even getting to init stage 2 with it enabled. With this commit, we get to stage 2 again.

In early boot, no interrupts are enabled and no spinlocks are used, so we can more or less safely ignore the interrupt state. There might be a better solution with another boot state flag that checks whether APs are up (because they have interrupts enabled from the start), but that seems like overkill.
2022-04-09  Kernel: Verify mutex big lock behavior  (Jelle Raaijmakers)
These two methods are specific to the big lock, so verify our mutex's behavior.
2022-04-09  Kernel: Unblock big lock waiters correctly  (Jelle Raaijmakers)
If the regular exclusive and shared lists were empty (which they always should be for the big lock), we were not unblocking any waiters.
2022-04-06  Kernel: Track big lock blocked threads in separate list  (Jelle Raaijmakers)
When we lock a mutex, eventually `Thread::block` is invoked, which could in turn invoke `Process::big_lock().restore_exclusive_lock()`. This would then try to add the current thread to a different blocked thread list than the one in use for the original mutex being locked, and because it's an intrusive list, the thread is removed from its original list during the `.append()`. When the original mutex eventually unblocks, we no longer have the thread in the intrusive blocked threads list and we panic.

Solve this by making the big lock mutex special and giving it its own blocked thread list. Because the process big lock is temporary and is actively being removed from e.g. syscalls, it's a matter of time before we can also remove the fix introduced by this commit.

Fixes issue #9401.
2022-04-05  Kernel: Protect Mutex's thread lists with a spinlock  (Andreas Kling)
2022-04-01  Everywhere: Run clang-format  (Idan Horowitz)
2022-03-08  Kernel: Make SpinlockProtected constructor forward all arguments  (Andreas Kling)
This allows you to instantiate SpinlockProtected<T> where T requires constructor arguments.
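The mechanism is ordinary perfect forwarding; a self-contained sketch of the shape (std::forward stands in for AK's forward here, and the Spinlock member is omitted):

    #include <utility>

    template<typename T>
    class SpinlockProtected {
    public:
        // Forward any constructor arguments straight to T, so T no longer has
        // to be default-constructible.
        template<typename... Args>
        SpinlockProtected(Args&&... args)
            : m_value(std::forward<Args>(args)...)
        {
        }

    private:
        T m_value;
        // Spinlock m_lock; // omitted from this sketch
    };

    struct Config {
        Config(int capacity, bool verbose) { (void)capacity; (void)verbose; }
    };

    // T's constructor arguments now pass straight through:
    SpinlockProtected<Config> g_config { 64, true };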
2022-01-30  Kernel: Update terminology around Thread's "blocking mutex"  (Andreas Kling)
It's more accurate to say that we're blocking on a mutex, rather than blocking on a lock. The previous terminology made sense when this code was using something called Kernel::Lock, but since it was renamed to Kernel::Mutex, this update brings the language back in sync.
2022-01-29  Kernel: Stop using HashMap in Mutex  (Idan Horowitz)
This commit removes the usage of HashMap in Mutex, thereby making Mutex allocation-free. In order to achieve this, several simplifications were made to Mutex, removing unused code paths and extra VERIFYs:
* We no longer support 'upgrading' a shared lock holder to an exclusive holder when it is the only shared holder and it did not unlock the lock before relocking it as exclusive. NOTE: Unlike the rest of these changes, this scenario is not VERIFY-able in an allocation-free way; as a result, the new LOCK_SHARED_UPGRADE_DEBUG debug flag was added, which lets Mutex allocate in order to detect such cases when debugging a deadlock.
* We no longer support checking if a Mutex is locked by the current thread when the Mutex was not locked exclusively; the shared version of this check was not used anywhere.
* We no longer support force unlocking/relocking a Mutex if the Mutex was not locked exclusively; the shared version of these functions was not used anywhere.
2021-12-26  Kernel: Remove no-longer-used Lockable template  (Andreas Kling)
2021-12-15  Kernel: Collapse blocking logic for exclusive Mutex' restore_lock()  (Hendiadyoin1)
Clang-tidy pointed out that the `need_to_block = true;` block was duplicated, and if we collapse these if statements, we should do so fully.
2021-12-15  Kernel: Add implied auto-specifiers in Locking  (Hendiadyoin1)
As per clang-tidy.
2021-12-15  Kernel: Add missing includes in Locking  (Hendiadyoin1)
2021-10-15  Kernel: Move spinlock into Arch  (James Mintram)
Spinlocks are tied to the platform they are built for; this is why they have been moved into the Arch folder. They are still available via "Locking/Spinlock.h". An Aarch64 stub has been created.
2021-10-14  Kernel: Add per platform Processor.h headers  (James Mintram)
The platform-independent Processor.h file includes the shared processor code and the platform-specific header file. All references to the Arch/x86/Processor.h file have been replaced with a reference to Arch/Processor.h.
2021-09-14  Kernel: Disable lock rank enforcement by default for now  (Brian Gianforcaro)
There are a few violations with signal handling that I won't be able to fix until later this week. So let's put lock rank enforcement under a debug option for now, so other folks don't hit these crashes until rank enforcement is more fleshed out.
2021-09-10  AK+Everywhere: Reduce the number of template parameters of IntrusiveList  (Ali Mohammad Pur)
This makes the user-facing type only take the node member pointer, and lets the compiler figure out the other needed types from that.
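Sketched before/after at a declaration site (the type and member names are illustrative, not the exact ones in Kernel/Locking):

    // Before: the element type, the pointer type, and the member pointer all
    // had to be spelled out.
    //   IntrusiveList<Thread, RawPtr<Thread>, &Thread::m_blocked_threads_list_node> m_blocked_threads;

    // After: only the node member pointer is given; the compiler deduces the
    // element type (and the rest) from it.
    IntrusiveList<&Thread::m_blocked_threads_list_node> m_blocked_threads;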
2021-09-08  Kernel: Fix a typo in LockRank::Process's comment  (Idan Horowitz)
2021-09-07  Kernel/Locking: Add lock rank tracking to Spinlock/RecursiveSpinlock  (Brian Gianforcaro)
2021-09-07  Kernel/Locking: Add lock rank tracking per thread to find deadlocks  (Brian Gianforcaro)
This change adds a static lock hierarchy / ranking to the Kernel, with the goal of reducing / finding deadlocks when running with SMP enabled. We have seen quite a few lock ordering deadlocks (locks taken in a different order on two different code paths). As we properly annotate locks in the system, these facilities will find such locking protocol violations automatically.

The `LockRank` enum documents the various locks in the system and their rank. The implementation guarantees that a thread holding one or more locks of a lower rank cannot acquire an additional lock with a rank that is greater than or equal to any of the currently held locks.
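A toy model of the rule described above (not the kernel's implementation; the rank values and bookkeeping are simplified, and release tracking is omitted):

    #include <cassert>

    enum class LockRank : int { None = 0, Process = 1, Thread = 2, Memory = 3 }; // illustrative values

    struct ThreadLockRankTracking {
        // Smallest rank currently held; a large sentinel means "nothing held".
        int lowest_held_rank { 1 << 30 };

        void track_lock_acquire(LockRank rank)
        {
            if (rank == LockRank::None)
                return; // unranked locks are exempt from the check
            // Acquiring a rank greater than or equal to any currently held
            // rank is a locking protocol violation per the rule above.
            assert(static_cast<int>(rank) < lowest_held_rank);
            lowest_held_rank = static_cast<int>(rank);
        }
    };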
2021-09-05  Kernel: Make all Spinlocks use u8 for storage, remove template  (Brian Gianforcaro)
The default template argument is only used in one place, and it looks like it was probably just an oversight. The rest of the Kernel code all uses u8 as the type. So let's make that the default and remove the unused template argument, as there doesn't seem to be a reason to allow the size to be customizable.
2021-09-05  Kernel: Switch static_asserts of a type size to AK::AssertSize  (Brian Gianforcaro)
This will provide better debuggability when the size comparison fails.
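The idea, sketched in a self-contained way (AK's actual AssertSize helper may differ in detail): putting the type and the expected size into template parameters makes the compiler's error output name both when the check fails, instead of only reporting a failed static_assert.

    #include <cstddef>

    template<typename T, std::size_t ExpectedSize, std::size_t ActualSize = sizeof(T)>
    constexpr bool assert_size()
    {
        static_assert(ExpectedSize == ActualSize, "Type does not have the expected size");
        return true;
    }

    struct PackedHeader {
        char magic[4];
        unsigned int length;
    };

    // Before: static_assert(sizeof(PackedHeader) == 8, "PackedHeader grew");
    static_assert(assert_size<PackedHeader, 8>());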
2021-09-05  Kernel: Declare type aliases with "using" instead of "typedef"  (Brian Gianforcaro)
This is the idiomatic way to declare type aliases in modern C++. Flagged by Sonar Cloud as a "Code Smell", but I happen to agree with this particular one. :^)
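For reference, the two forms side by side (the aliases here are made up, not the ones actually converted):

    typedef unsigned long ThreadID;     // old style
    using ProcessID = unsigned long;    // modern equivalent, reads left to right

    // "using" also supports alias templates, which typedef cannot express:
    template<typename T>
    using NonnullPtr = T*;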
2021-08-29  Kernel: Rename Spinlock::is_owned_by_current_thread()  (Andreas Kling)
...to is_owned_by_current_processor(). As Tom pointed out, this is much more accurate. :^)
2021-08-29  Kernel: {Mutex,Spinlock}::own_lock() => is_locked_by_current_thread()  (Andreas Kling)
Rename these APIs to make it clearer what they are checking.
2021-08-29  Kernel: Use StringView instead of C strings in Mutex  (Andreas Kling)
2021-08-28  Kernel: Verify interrupts are disabled when interacting with Mutexes  (Andrew Kaster)
This should help prevent deadlocks where a thread blocks on a Mutex while interrupts are disabled, making it impossible for the holder of the Mutex to make forward progress because it cannot be scheduled in. Hide it behind a new debug macro, LOCK_IN_CRITICAL_DEBUG, for now, because Ext2FS takes a series of Mutexes from the page fault handler, which executes with interrupts disabled.
2021-08-23  Kernel: Remove unused ScopedLockRelease class  (Andreas Kling)
2021-08-23  Kernel: Convert Processor::in_irq() to static current_in_irq()  (Andreas Kling)
This closes the race window between Processor::current() and a context switch happening before in_irq().
2021-08-22  Kernel: Rename ScopedSpinlock => SpinlockLocker  (Andreas Kling)
This matches MutexLocker, and doesn't sound like it's a lock itself.
2021-08-22  Kernel: Rename SpinLock => Spinlock  (Andreas Kling)
2021-08-22  Kernel: Simplify SpinLockProtected<T>  (Andreas Kling)
Same treatment as MutexProtected<T>: the inheritance and helper class are removed, and SpinLockProtected now holds a T and a SpinLock.
2021-08-22  Kernel: Rename SpinLockProtectedValue<T> => SpinLockProtected<T>  (Andreas Kling)
2021-08-22  Kernel: Simplify MutexProtected<T>  (Andreas Kling)
This patch removes the MutexContendedResource<T> helper class, and MutexProtected<T> no longer inherits from T. Instead, MutexProtected<T> simply has a T and a Mutex. The LockedResource<T, LockMode> helper class is made a private nested class in MutexProtected.
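A rough sketch of the resulting shape (std::mutex and std::lock_guard stand in for Kernel::Mutex and MutexLocker here; the shared/exclusive distinction and the LockedResource details are omitted):

    #include <mutex>

    template<typename T>
    class MutexProtected {
    public:
        // Access is only granted while the mutex is held.
        template<typename Callback>
        decltype(auto) with_exclusive(Callback callback)
        {
            std::lock_guard<std::mutex> locker(m_mutex);
            return callback(m_value);
        }

    private:
        T m_value {};       // owned by composition rather than inherited from
        std::mutex m_mutex;
    };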
2021-08-22  Kernel: Rename ProtectedValue<T> => MutexProtected<T>  (Andreas Kling)
Let's make it obvious what we're protecting it with.
2021-08-22  Kernel: Remove some unused classes from Kernel/Locking/  (Andreas Kling)
2021-08-13  Kernel: Convert lock debug APIs to east const  (Brian Gianforcaro)
2021-08-13  Kernel: Add lock debugging to ProtectedValue / RefCountedContended  (Brian Gianforcaro)
Enable the LOCK_DEBUG functionality for these new APIs, as it looks like we want to move the whole system to use this in the not so distant future. :^)
2021-08-13  Kernel: Reduce LOCK_DEBUG ifdefs by utilizing Kernel::LockLocation  (Brian Gianforcaro)
The LOCK_DEBUG conditional code is pretty ugly for a feature that we only use rarely. We can remove a significant amount of this code by utilizing a zero-sized fake type when not building in LOCK_DEBUG mode. This lets us keep the same API and just lets the compiler optimize it away when we don't actually care about the location the caller came from.
2021-08-13  Kernel: Introduce LockLocation abstraction from SourceLocation  (Brian Gianforcaro)
Introduce a zero-sized type to represent a SourceLocation for when we don't want to compile with SourceLocation support.
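The gist of the technique, simplified (the real definitions and the SourceLocation type live in AK and may differ in detail):

    // With LOCK_DEBUG, LockLocation is a real source location; without it, the
    // type is empty and zero-sized, so the lock APIs keep a single signature
    // and the unused parameter simply optimizes away.
    #if LOCK_DEBUG
    using LockLocation = SourceLocation;
    #else
    struct LockLocation {
        static constexpr LockLocation current() { return {}; }
    };
    #endif

    // Call sites and signatures look the same in both configurations:
    void lock(LockLocation const& location = LockLocation::current());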
2021-08-11  Kernel/SMP: Fix RecursiveSpinLock remembering the wrong CPU when locking  (Andreas Kling)
We have to disable interrupts before capturing the current Processor*, or we risk storing the wrong one if we get preempted and resume on a different CPU. Caught by the VERIFY in RecursiveSpinLock::unlock().
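In simplified form (the helper and member names below are illustrative, not the exact in-tree code):

    // Buggy ordering: a preemption between current() and disabling interrupts
    // can leave us holding a Processor* for a CPU we are no longer running on.
    //   auto& processor = Processor::current();
    //   previous_state = disable_interrupts();
    //   track_holder(&processor);

    // Fixed ordering: interrupts go off first, then we capture the processor.
    //   previous_state = disable_interrupts();
    //   track_holder(&Processor::current());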
2021-08-10  Kernel/SMP: Change critical sections to not disable interrupts  (Andreas Kling)
Leave interrupts enabled so that we can still process IRQs. Critical sections should only prevent preemption by another thread.
Co-authored-by: Tom <tomut@yahoo.com>
2021-08-10  Kernel/SMP: Make entering/leaving critical sections multi-processor safe  (Andreas Kling)
By making these functions static, we close a window where we could get preempted after calling Processor::current() and then be moved to another processor.
Co-authored-by: Tom <tomut@yahoo.com>
2021-08-07  Kernel: Introduce ProtectedValue  (Jean-Baptiste Boric)
A protected value is a variable with enforced locking semantics. The value is protected with a Mutex and can only be accessed through a Locked object that holds a MutexLocker to said Mutex. Therefore, the value itself cannot be accessed except through the proper locking mechanism, which enforces correct locking semantics.

The Locked object has knowledge of shared and exclusive lock types and will only return const-correct references and pointers. This should help catch incorrect locking usage where a shared lock is acquired but the user then modifies the locked value.

This is not a perfect solution, because dereferencing the Locked object returns the value, so the caller could defeat the protected value semantics once it acquires a lock by keeping a pointer or reference to the value around. Then again, this is C++ and we can't protect against malicious users from within the kernel anyway, but we can raise the threshold above "didn't pay attention".
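A toy model of the idea (std::mutex and std::unique_lock stand in for Kernel::Mutex and MutexLocker; the real class also distinguishes shared from exclusive access and enforces const-correctness):

    #include <mutex>

    template<typename T>
    class ProtectedValue {
    public:
        // The value is only reachable through a Locked guard, which keeps the
        // mutex held for its whole lifetime.
        class Locked {
        public:
            Locked(T& value, std::mutex& mutex)
                : m_value(value)
                , m_locker(mutex)
            {
            }

            T* operator->() { return &m_value; }
            T& operator*() { return m_value; }

        private:
            T& m_value;
            std::unique_lock<std::mutex> m_locker;
        };

        Locked lock() { return Locked(m_value, m_mutex); }

    private:
        T m_value {};
        std::mutex m_mutex;
    };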