Various places in the kernel were manually checking the cs register on
x86_64. To share this logic with aarch64, a function is added to
RegisterState and the call sites are updated. While we're here, the
PreviousMode enum is renamed to ExecutionMode.
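A minimal sketch of what such a helper can look like on x86_64 (the
real signature may differ; ExecutionMode per the rename above):

    ExecutionMode RegisterState::previous_mode() const
    {
        // The low two bits of cs hold the privilege level; ring 3 is user mode.
        return (cs & 3) == 3 ? ExecutionMode::User : ExecutionMode::Kernel;
    }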
|
|
This step would ideally not have been necessary (it increases the
amount of refactoring and templating required, which in turn increases
build times), but it gives us a couple of nice properties:
- SpinlockProtected inside Singleton (a very common combination) can now
obtain any lock rank just via the template parameter. It was not
previously possible to do this with SingletonInstanceCreator magic.
- SpinlockProtected's lock rank is now mandatory; this is the majority
of cases and allows us to see where we're still missing proper ranks.
- The type already informs us what lock rank a lock has, which aids code
readability and (possibly, if gdb cooperates) lock mismatch debugging.
- The rank of a lock can no longer be dynamic, which is not something we
wanted in the first place (or made use of). Locks randomly changing
their rank sounds like a disaster waiting to happen.
- In some places, we might be able to statically check that locks are
taken in the right order (with the right lock rank checking
implementation) as rank information is fully statically known.
This refactoring further exposes the fact that Mutex has no lock rank
capabilities, which is not fixed here.
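In sketch form, the rank becomes part of the type (Spinlock internals
and LockRank values simplified here):

    template<typename T, LockRank Rank>
    class SpinlockProtected {
    public:
        template<typename Callback>
        decltype(auto) with(Callback callback)
        {
            SpinlockLocker locker(m_spinlock);
            return callback(m_value);
        }

    private:
        Spinlock<Rank> m_spinlock;
        T m_value;
    };

    // A Singleton can now pass the rank straight through the template:
    // Singleton<SpinlockProtected<Data, LockRank::Process>> s_data;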
|
|
This removes the x86-specific hlt instruction from the scheduler and
allows us to run the scheduler code on aarch64 by implementing
Processor::wait_for_interrupt for aarch64.
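On aarch64 this can be a thin wrapper around the wfi ("wait for
interrupt") instruction, mirroring what hlt does on x86; roughly:

    void Processor::wait_for_interrupt() const
    {
        asm volatile("wfi");
    }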
|
|
This allows us to use the same code for aarch64.
|
|
|
|
This expresses the intent better, and we shouldn't be calling global
functions anyway.
|
|
|
|
The code in this file is not architecture specific, so it can be moved
to the base Kernel directory.
|
|
This adds a new arch-independent header which in turn includes the
correct header for the build architecture.
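The dispatch header follows the usual pattern (a sketch; the header
name is a placeholder and the ARCH() macro is assumed from
AK/Platform.h):

    #if ARCH(X86_64)
    #    include <Kernel/Arch/x86_64/SomeHeader.h>
    #elif ARCH(AARCH64)
    #    include <Kernel/Arch/aarch64/SomeHeader.h>
    #else
    #    error "Unknown architecture"
    #endif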
|
|
|
|
This change ensures that the scheduler doesn't depend on
platform-specific or arch-specific code when it initializes itself.
Instead, we ensure at compile time that the appropriate code is
generated to find the correct arch-specific current time methods.
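Conceptually (function names here are hypothetical):

    u64 Scheduler::current_time()
    {
    #if ARCH(X86_64)
        return read_tsc_based_time();  // hypothetical x86 time source
    #elif ARCH(AARCH64)
        return read_generic_timer();   // hypothetical aarch64 time source
    #endif
    }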
|
|
|
|
The RTC and CMOS code is currently only supported on x86 platforms and
relies on x86-specific instructions for its operations; therefore, we
move it to the x86-specific Arch/x86 directory.
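For reference, CMOS access goes through x86 port I/O, which has no
aarch64 equivalent; a minimal sketch using the kernel's IO::out8 and
IO::in8 helpers:

    u8 cmos_read(u8 index)
    {
        IO::out8(0x70, index); // select the CMOS register...
        return IO::in8(0x71);  // ...then read its value
    }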
|
|
|
|
|
|
This commit updates the lock functions of Spinlock and
RecursiveSpinlock to return the InterruptsState of the processor
instead of the raw processor flags. The unlock functions would only
look at the interrupt flag within those flags, so the InterruptsState
enum clarifies the intent and lets us share the same Spinlock code
with the aarch64 build.
To not break the build, all the call sites are updated as well.
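The resulting shape, in simplified sketch form:

    enum class InterruptsState {
        Enabled,
        Disabled,
    };

    InterruptsState Spinlock::lock()
    {
        InterruptsState previous_state = Processor::interrupts_state();
        Processor::disable_interrupts();
        // ... spin until the lock is acquired ...
        return previous_state;
    }

    void Spinlock::unlock(InterruptsState previous_state)
    {
        // ... release the lock ...
        Processor::restore_interrupts_state(previous_state);
    }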
|
|
Globally shared MemoryManager state is now kept in a GlobalData struct
and wrapped in SpinlockProtected.
A small set of members are left outside the GlobalData struct as they
are only set during boot initialization, and then remain constant.
This allows us to access those members without taking any locks.
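The pattern looks roughly like this (member and function names
hypothetical):

    struct GlobalData {
        // ... shared MemoryManager state ...
    };

    SpinlockProtected<GlobalData> m_global_data;

    void MemoryManager::example_accessor()
    {
        m_global_data.with([&](GlobalData& global_data) {
            // ... read or mutate shared state under the lock ...
        });
    }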
|
|
Until now, our kernel has reimplemented a number of AK classes to
provide automatic internal locking:
- RefPtr
- NonnullRefPtr
- WeakPtr
- Weakable
This patch renames the Kernel classes so that they can coexist with
the original AK classes:
- RefPtr => LockRefPtr
- NonnullRefPtr => NonnullLockRefPtr
- WeakPtr => LockWeakPtr
- Weakable => LockWeakable
The goal here is to eventually get rid of the Lock* classes in favor of
using external locking.
|
|
All users that relied on the default constructor use a None lock rank
for now. This will make it easier to remove LockRank in the future and
to actually annotate the ranks, by searching for None.
|
|
Note that SMP is still off by default, but this basically removes the
weird "SMP on but threads don't get scheduled" behavior we had by
default. If you pass "smp=on" to the kernel, you now get SMP. :^)
|
|
Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
No functional changes.
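For example:

    // Before: implicitly calls StringView(char const*), i.e. a runtime
    // __builtin_strlen:
    //   StringView name = "hello";
    // After: operator""sv encodes the length at compile time:
    StringView name = "hello"sv;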
|
|
This change unifies the naming convention for kernel tasks.
The goal of this change is to:
- Make the task names more descriptive, so users can more
easily understand their purpose in System Monitor.
- Unify the naming convention so they are consistent.
|
|
Now that the code is no longer architecture-specific, it is moved to
the generic Arch directory and the paths are modified accordingly.
|
|
|
|
|
|
This ensures that processes that don't perform any syscalls will also
eventually receive signals.
|
|
At the end of sys$execve(), we perform a context switch from the old
executable into the new executable.
However, the Kernel::Thread object we are switching to is the *same*
thread as the one we are switching from. So we must not assume the
from_thread and to_thread are different threads.
We had a bug caused by this misconception, where the "from" thread
would always get marked as "inactive" when switching to a new thread.
Since "from" and "to" are the same thread in this case, this meant
that threads would always get switched into "inactive" mode on first
context switch into them.
If a thread then tried blocking on a kernel mutex within its first time
slice, we'd end up in Thread::block(Mutex&) with an inactive thread.
Once a thread is inactive, the scheduler believes it's okay to
reactivate the thread (by scheduling it.) If a thread got re-scheduled
prematurely while setting up a mutex block, things would fall apart and
we'd crash in Thread::block() due to the thread state being "Runnable"
instead of the expected "Running".
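The fix amounts to guarding the deactivation (a sketch; set_active is
illustrative):

    // In the context switch path:
    if (from_thread != to_thread)
        from_thread->set_active(false); // only deactivate a genuinely different thread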
|
|
Turns out nobody actually cared whether the scheduler switched to a new
thread or not (which is what we were returning.)
|
|
Move this architecture-specific sanity check (IOPL must be 0) out of
Scheduler and into the x86 enter_thread_context(). Also do this for
every thread and not just userspace ones.
|
|
We always context_switch() from somewhere, so there's no need to handle
the case where from_thread is null.
|
|
It was annoyingly hard to spot these when we were using them with
different amounts of qualification everywhere.
This patch uses Thread::State::Foo everywhere instead of Thread::Foo
or just Foo.
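I.e.:

    // Before (varying qualification):
    //   thread.set_state(Thread::Runnable);
    // After (always fully qualified):
    thread.set_state(Thread::State::Runnable);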
|
|
Signal dispatch is already taken care of elsewhere, so there appears to
be no need for the hack in enter_current().
This also allows us to remove the Thread::m_in_block flag, simplifying
thread blocking logic somewhat.
Verified with the original repro for #4336 which this was meant to fix.
|
|
This will allow us to eventually switch dbgln in the kernel to an
allocation-free (although length-bounded) formatter.
|
|
|
|
|
|
We can now directly create formatted KStrings with KString::formatted.
:^)
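For example (a sketch; assumes the ErrorOr/TRY-style propagation used
in the kernel, with name and pid illustrative):

    auto message = TRY(KString::formatted("{} ({})", name, pid));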
|
|
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount},
this commit removes all calls to these functions and replaces them with
the equivalent functions in AK/BuiltinWrappers.h.
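For instance (a sketch using wrappers such as bit_scan_forward and
popcount from AK/BuiltinWrappers.h):

    #include <AK/BuiltinWrappers.h>

    // Before:
    //   int first_set = __builtin_ffs(mask);
    //   int ones = __builtin_popcount(mask);
    // After:
    int first_set = bit_scan_forward(mask);
    int ones = popcount(mask);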
|
|
|
|
This makes the user-facing type only take the node member pointer, and
lets the compiler figure out the other needed types from that.
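I.e., roughly (member name illustrative):

    // Before: element type, pointer type and member pointer all spelled out:
    //   IntrusiveList<Thread, RawPtr<Thread>, &Thread::m_list_node> m_threads;
    // After: everything is deduced from the node member pointer:
    IntrusiveList<&Thread::m_list_node> m_threads;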
|
|
|
|
We previously allowed Thread to exist in a state where its m_name was
null, and had to work around that in various places.
This patch removes that possibility and forces those who would create a
thread (or change the name of one) to provide a NonnullOwnPtr<KString>
with the name.
|
|
...to is_owned_by_current_processor(). As Tom pointed out, this is
much more accurate. :^)
|
|
Rename these APIs to make it clearer what they are checking.
|
|
This avoids a race between getting the processor-specific SchedulerData
and accessing it. (Switching to a different CPU in that window means
that we're operating on the wrong SchedulerData.)
Co-authored-by: Tom <tomut@yahoo.com>
|
|
And let id() be the non-static version that gives you the ID of a
Processor object.
|
|
This closes the race window between Processor::current() and a context
switch happening before in_irq().
|
|
This matches MutexLocker, and doesn't sound like it's a lock itself.
|
|
|
|
|
|
This patch does three things:
- Convert the global thread list from a HashMap to an IntrusiveList
- Combine the thread list and its lock into a SpinLockProtectedValue
- Customize Thread::unref() so it locks the list while unreffing
This closes the same race window for Thread as @sin-ack's recent changes
did for Process.
Note that the HashMap->IntrusiveList conversion means that we lose O(1)
lookups, but the majority of clients of this list are doing traversal,
not lookup. Once we have an intrusive hashing solution, we should port
this to use that, but for now, this gets rid of heap allocations during
a sensitive time.
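In sketch form (names approximate):

    // The list and its lock are combined:
    static SpinLockProtectedValue<Thread::GlobalList>& all_instances();

    void Thread::unref() const
    {
        // Hold the list lock across the final deref so concurrent
        // traversals can't observe a thread that is being destroyed.
        all_instances().with([&](auto&) {
            if (deref_base())
                delete this;
        });
    }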
|