path: root/Kernel/Scheduler.cpp
2023-01-27 Kernel: Factor out PreviousMode into RegisterState::previous_mode (Timon Kruiper)
Various places in the kernel were manually checking the cs register for x86_64; to share this with aarch64, a function is added to RegisterState and the call sites are updated. While we're here, the PreviousMode enum is renamed to ExecutionMode.
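A minimal sketch of the resulting shape (the x86_64 check follows the commit; the aarch64 branch and the exact member layout are assumptions):

    #include <cstdint>

    enum class ExecutionMode {
        Kernel,
        User,
    };

    struct RegisterState {
        uint64_t cs { 0 }; // x86_64 code segment selector saved at trap entry

        ExecutionMode previous_mode() const
        {
    #if defined(__x86_64__)
            // The low two bits of CS hold the privilege level; ring 3 is user mode.
            return (cs & 3) == 3 ? ExecutionMode::User : ExecutionMode::Kernel;
    #else
            // aarch64 would derive this from the saved SPSR_EL1 instead
            // (assumption for this sketch).
            return ExecutionMode::Kernel;
    #endif
        }
    };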
2023-01-02 Kernel: Turn lock ranks into template parameters (kleines Filmröllchen)
This step would ideally not have been necessary (it increases the amount of refactoring and templates required, which in turn increases build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now obtain any lock rank just via the template parameter. It was not previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we wanted in the first place (or made use of). Locks randomly changing their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are taken in the right order (with the right lock rank checking implementation), since rank information is fully statically known.
This refactoring further exposes the fact that Mutex has no lock rank capabilities, which is not fixed here.
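A condensed sketch of the new shape (rank names and members are illustrative, not the kernel's full definitions):

    enum class LockRank {
        None,
        Thread,
        Process,
    };

    template<LockRank Rank>
    class Spinlock {
    public:
        // The rank is part of the type: statically known at every call
        // site, and impossible to change at runtime.
        static constexpr LockRank rank = Rank;
        void lock();
        void unlock();
    };

    template<typename T, LockRank Rank>
    class SpinlockProtected {
    private:
        T m_value;
        Spinlock<Rank> m_lock; // the rank arrives via template parameter,
                               // which is what lets Singleton wrap this type
    };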
2022-12-29 Kernel: Add Processor::wait_for_interrupt and use it in Scheduler (Timon Kruiper)
This removes the x86-specific hlt instruction from the scheduler, and allows us to run the scheduler code for aarch64 by implementing Processor::wait_for_interrupt for aarch64.
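A sketch of the per-architecture implementation (the inline assembly is an assumption; the real kernel code may differ):

    struct Processor {
        static void wait_for_interrupt()
        {
    #if defined(__x86_64__)
            asm volatile("hlt"); // halt this CPU until the next interrupt
    #elif defined(__aarch64__)
            asm volatile("wfi"); // aarch64 wait-for-interrupt instruction
    #endif
        }
    };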
2022-12-29 Kernel: Remove debug printing of code segment (Timon Kruiper)
This allows us to use the same code for aarch64.
2022-12-28 Kernel: Remove i686 support (Liav A)
2022-10-18 Kernel: Call Processor::are_interrupts_enabled in Scheduler::idle_loop (Timon Kruiper)
This expresses the intent better, and we shouldn't be calling global functions anyway.
2022-10-18 Kernel: Add even more AARCH64 stubs (Gunnar Beutner)
2022-10-17 Kernel: Move InterruptDisabler out of Arch directory (Timon Kruiper)
The code in this file is not architecture specific, so it can be moved to the base Kernel directory.
2022-10-16 Kernel: Don't directly include <Kernel/Arch/x86/TrapFrame.h> (Gunnar Beutner)
This adds a new arch-independent header which in turn includes the correct header for the build architecture.
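The dispatch header follows the usual pattern, roughly like this (shown with the compiler's own architecture macros; the kernel uses its ARCH() macro, and the aarch64 path is an assumption):

    #pragma once

    #if defined(__x86_64__)
    #    include <Kernel/Arch/x86/TrapFrame.h>
    #elif defined(__aarch64__)
    #    include <Kernel/Arch/aarch64/TrapFrame.h>
    #else
    #    error "Unknown architecture"
    #endif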
2022-10-14 Kernel: Move Scheduler current time method to the TimeManagement code (Liav A)
2022-10-14 Kernel: Abstract platform-specific current time methods from Scheduler (Liav A)
This change ensures that the scheduler doesn't depend on platform-specific or arch-specific code when it initializes itself; rather, we ensure that at compile time we will generate the appropriate code to find the correct arch-specific current time methods.
2022-10-12 Kernel: Only use the TSC when it is invariant (Markus Pfeifenberger)
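On x86, invariant-TSC support is reported via CPUID leaf 0x80000007, EDX bit 8. A self-contained sketch of the check (not the kernel's exact code):

    #include <cpuid.h>

    static bool has_invariant_tsc()
    {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
            return false;
        return (edx & (1u << 8)) != 0; // EDX bit 8: invariant TSC
    }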
2022-09-20 Kernel/x86: Move RTC and CMOS code to x86 arch-specific subdirectory (Liav A)
The RTC and CMOS are currently only supported on x86 platforms and use x86-specific instructions to perform their operations; therefore, we move them to the arch-specific Arch/x86 directory.
2022-08-26 Kernel: Reorganize and colorize the scheduler thread list dump (Tim Schumacher)
2022-08-26 Kernel: Show more (b)locking info when dumping the process list (Tim Schumacher)
2022-08-26 Kernel: Use InterruptsState in Spinlock code (Timon Kruiper)
This commit updates the lock function from Spinlock and RecursiveSpinlock to return the InterruptsState of the processor, instead of the processor flags. The unlock functions would only look at the interrupt flag of the processor flags, so we now use the InterruptsState enum to clarify the intent, and so that we can use the same Spinlock code for the aarch64 build. To not break the build, all the call sites are updated as well.
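A sketch of the new contract (the interrupt helpers below are placeholders standing in for the arch-specific primitives):

    enum class InterruptsState {
        Enabled,
        Disabled,
    };

    // Placeholders for the arch-specific primitives.
    bool are_interrupts_enabled();
    void disable_interrupts();
    void enable_interrupts();

    class Spinlock {
    public:
        InterruptsState lock()
        {
            auto previous = are_interrupts_enabled() ? InterruptsState::Enabled
                                                     : InterruptsState::Disabled;
            disable_interrupts();
            // ... spin until the lock is acquired ...
            return previous;
        }

        void unlock(InterruptsState previous_state)
        {
            // ... release the lock ...
            if (previous_state == InterruptsState::Enabled)
                enable_interrupts();
        }
    };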
2022-08-26 Kernel: Remove global MM lock in favor of SpinlockProtected (Andreas Kling)
Globally shared MemoryManager state is now kept in a GlobalData struct and wrapped in SpinlockProtected. A small set of members are left outside the GlobalData struct as they are only set during boot initialization, and then remain constant. This allows us to access those members without taking any locks.
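The resulting pattern looks roughly like this (member and method names invented for illustration; std::mutex stands in for the kernel's Spinlock so the sketch is self-contained):

    #include <cstddef>
    #include <mutex>

    template<typename T>
    class SpinlockProtected {
    public:
        template<typename F>
        decltype(auto) with(F callback)
        {
            std::lock_guard lock(m_lock); // the kernel uses a Spinlock here
            return callback(m_value);
        }

    private:
        std::mutex m_lock;
        T m_value;
    };

    class MemoryManager {
        // Set once during boot initialization, constant afterwards:
        // safe to read without taking any locks.
        std::size_t m_total_physical_pages { 0 };

        struct GlobalData {
            // All mutable, globally shared state lives in here.
            std::size_t allocated_page_count { 0 };
        };
        SpinlockProtected<GlobalData> m_global_data;

        void note_page_allocated()
        {
            m_global_data.with([](GlobalData& data) {
                ++data.allocated_page_count; // only reachable under the lock
            });
        }
    };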
2022-08-20 Kernel: Make self-contained locking smart pointers their own classes (Andreas Kling)
Until now, our kernel has reimplemented a number of AK classes to provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the kernel classes so that they can coexist with the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of using external locking.
2022-08-19 Kernel: Require lock rank for Spinlock construction (kleines Filmröllchen)
All users which relied on the default constructor use a None lock rank for now. This will make it easier to remove LockRank in the future and to actually annotate the ranks, by searching for None.
2022-08-18 Kernel: Schedule threads on all processors when SMP is enabled (Andreas Kling)
Note that SMP is still off by default, but this basically removes the weird "SMP on but threads don't get scheduled" behavior we had by default. If you pass "smp=on" to the kernel, you now get SMP. :^)
2022-07-12 Everywhere: Add sv suffix to strings relying on StringView(char const*) (sin-ack)
Each of these strings would previously rely on StringView's char const* constructor overload, which would call __builtin_strlen on the string. Since we now have operator ""sv, we can replace these with much simpler versions. This opens the door to being able to remove StringView(char const*). No functional changes.
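For example:

    #include <AK/StringView.h>

    StringView before = "Scheduler";  // implicit StringView(char const*),
                                      // which calls __builtin_strlen
    StringView after = "Scheduler"sv; // length known at compile time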
2022-06-05 Kernel: Unify Kernel task names for consistency (Brian Gianforcaro)
This change unifies the naming convention for kernel tasks. The goals of this change are to:
- Make the task names more descriptive, so users can more easily understand their purpose in System Monitor.
- Unify the naming convention so they are consistent.
2022-06-02 Kernel: Implement InterruptDisabler using generic Processor functions (Timon Kruiper)
Now that the code does not use architecture-specific functionality, it is moved to the generic Arch directory and the paths are modified accordingly.
2022-04-01 Everywhere: Run clang-format (Idan Horowitz)
2022-03-22 Kernel: Fix typo in a comment (Linus Groh)
2022-02-21 Kernel: Try to dispatch pending signals on context switch (Idan Horowitz)
This ensures that processes that don't perform any syscalls will also eventually receive signals.
2022-01-30 Kernel: Don't mark current thread as inactive after successful exec() (Andreas Kling)
At the end of sys$execve(), we perform a context switch from the old executable into the new executable. However, the Kernel::Thread object we are switching to is the *same* thread as the one we are switching from, so we must not assume that from_thread and to_thread are different threads.

We had a bug caused by this misconception, where the "from" thread would always get marked as "inactive" when switching to a new thread. This meant that threads would always get switched into "inactive" mode on the first context switch into them.

If a thread then tried blocking on a kernel mutex within its first time slice, we'd end up in Thread::block(Mutex&) with an inactive thread. Once a thread is inactive, the scheduler believes it's okay to reactivate the thread (by scheduling it). If a thread got re-scheduled prematurely while setting up a mutex block, things would fall apart and we'd crash in Thread::block() due to the thread state being "Runnable" instead of the expected "Running".
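The shape of the fix, reduced to a sketch (simplified, not the literal scheduler code):

    class Thread {
    public:
        void set_active(bool);
        // ...
    };

    void context_switch(Thread* from_thread, Thread* to_thread)
    {
        // After sys$execve(), from_thread and to_thread can be the *same*
        // thread; only mark it inactive if we are genuinely leaving it.
        if (from_thread != to_thread)
            from_thread->set_active(false);

        // ... save from_thread's context and switch into to_thread ...
    }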
2022-01-30 Kernel: Remove unused bool return values from scheduler functions (Andreas Kling)
Turns out nobody actually cared whether the scheduler switched to a new thread or not (which is what we were returning.)
2022-01-30 Kernel: Simplify x86 IOPL sanity check (Andreas Kling)
Move this architecture-specific sanity check (IOPL must be 0) out of Scheduler and into the x86 enter_thread_context(). Also do this for every thread and not just userspace ones.
2022-01-30 Kernel: VERIFY that Scheduler::context_switch() always has a from-thread (Andreas Kling)
We always context_switch() from somewhere, so there's no need to handle the case where from_thread is null.
2022-01-30 Kernel: Make Thread::State an `enum class` and use it consistently (Andreas Kling)
It was annoyingly hard to spot these when we were using them with different amounts of qualification everywhere. This patch uses Thread::State::Foo everywhere instead of Thread::Foo or just Foo.
2022-01-30 Kernel: Don't dispatch signals in Processor::enter_current() (Andreas Kling)
Signal dispatch is already taken care of elsewhere, so there appears to be no need for the hack in enter_current(). This also allows us to remove the Thread::m_in_block flag, simplifying thread blocking logic somewhat. Verified with the original repro for #4336 which this was meant to fix.
2022-01-16 Kernel: Use kernelputstr instead of dbgln when printing backtraces (Idan Horowitz)
This will allow us to eventually switch dbgln in the kernel to an allocation-free (although length-bounded) formatter.
2021-12-30 Kernel: Simplify some if statements (Hendiadyoin1)
2021-12-30 Kernel: Add some implied auto qualifiers (Hendiadyoin1)
2021-12-28 Kernel: Remove the KString::try_create(String::formatted(...)) pattern (Daniel Bertalan)
We can now directly create formatted KStrings with KString::formatted. :^)
2021-12-21 AK+Everywhere: Replace __builtin bit functions (Nick Johnson)
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount}, this commit removes all calls to these functions and replaces them with the equivalent functions in AK/BuiltinWrappers.h.
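An illustrative substitution (wrapper names as in AK/BuiltinWrappers.h; the exact signatures are assumed here):

    #include <AK/BuiltinWrappers.h>
    #include <AK/Types.h>

    u32 lowest_set_bit(u32 value)
    {
        // Previously: __builtin_ffs(value)
        return bit_scan_forward(value);
    }

    size_t set_bit_count(u32 value)
    {
        // Previously: __builtin_popcount(value)
        return popcount(value);
    }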
2021-12-01 Kernel: Add an x86 include check+error in x86/TrapFrame.h (James Mintram)
2021-09-10 AK+Everywhere: Reduce the number of template parameters of IntrusiveList (Ali Mohammad Pur)
This makes the user-facing type only take the node member pointer, and lets the compiler figure out the other needed types from that.
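In practice (class name illustrative):

    #include <AK/IntrusiveList.h>

    class Thread {
    public:
        IntrusiveListNode<Thread> m_list_node;
    };

    // Before: IntrusiveList<Thread, RawPtr<Thread>, &Thread::m_list_node>
    // After: the other types are deduced from the node member pointer.
    using ThreadList = IntrusiveList<&Thread::m_list_node>;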
2021-09-07 Kernel: Store process names as KString (Andreas Kling)
2021-09-06 Kernel: Make Threads always have a name (Andreas Kling)
We previously allowed Thread to exist in a state where its m_name was null, and had to work around that in various places. This patch removes that possibility and forces those who would create a thread (or change the name of one) to provide a NonnullOwnPtr<KString> with the name.
2021-08-29 Kernel: Rename Spinlock::is_owned_by_current_thread() (Andreas Kling)
...to is_owned_by_current_processor(). As Tom pointed out, this is much more accurate. :^)
2021-08-29 Kernel: {Mutex,Spinlock}::own_lock() => is_locked_by_current_thread() (Andreas Kling)
Rename these APIs to make it clearer what they are checking.
2021-08-29 Kernel: Move "in-scheduler" flag from SchedulerData to Processor (Andreas Kling)
This avoids a race between getting the processor-specific SchedulerData and accessing it. (Switching to a different CPU in that window means that we're operating on the wrong SchedulerData.) Co-authored-by: Tom <tomut@yahoo.com>
2021-08-23 Kernel: Rename Processor::id() => current_id() (Andreas Kling)
And let id() be the non-static version that gives you the ID of a Processor object.
2021-08-23 Kernel: Convert Processor::in_irq() to static current_in_irq() (Andreas Kling)
This closes the race window between Processor::current() and a context switch happening before in_irq().
2021-08-22 Kernel: Rename ScopedSpinlock => SpinlockLocker (Andreas Kling)
This matches MutexLocker, and doesn't sound like it's a lock itself.
2021-08-22 Kernel: Rename SpinLock => Spinlock (Andreas Kling)
2021-08-22 Kernel: Rename SpinLockProtectedValue<T> => SpinLockProtected<T> (Andreas Kling)
2021-08-15 Kernel: Lock thread list while in Thread::unref() (Andreas Kling)
This patch does three things:
- Convert the global thread list from a HashMap to an IntrusiveList
- Combine the thread list and its lock into a SpinLockProtectedValue
- Customize Thread::unref() so it locks the list while unreffing
This closes the same race window for Thread as @sin-ack's recent changes did for Process. Note that the HashMap->IntrusiveList conversion means that we lose O(1) lookups, but the majority of clients of this list are doing traversal, not lookup. Once we have an intrusive hashing solution, we should port this to use that, but for now, this gets rid of heap allocations during a sensitive time.
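A simplified sketch of the customized unref() (the all_instances() accessor and ref-count details are assumptions; the real code is more careful about memory ordering and edge cases):

    void Thread::unref() const
    {
        // Take the global thread list lock *before* the final decrement,
        // so no other CPU can find this thread in the list while it is
        // being destroyed.
        all_instances().with([&](auto& list) {
            if (m_ref_count.fetch_sub(1, std::memory_order_acq_rel) == 1) {
                list.remove(const_cast<Thread&>(*this));
                delete this;
            }
        });
    }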