We previously allowed Thread to exist in a state where its m_name was
null, and had to work around that in various places.
This patch removes that possibility and forces those who would create a
thread (or change the name of one) to provide a NonnullOwnPtr<KString>
with the name.
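For illustration, a hedged sketch of what a call site looks like after this change (the surrounding code and the thread variable are assumed; KString::try_create is the kernel's fallible string constructor):
// Hypothetical call site: the name must be a NonnullOwnPtr<KString>,
// so allocation failure is handled before the thread is ever named.
auto name_or_error = KString::try_create("io-worker"sv);
if (name_or_error.is_error())
    return name_or_error.release_error();
thread->set_name(name_or_error.release_value());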
|
|
...to is_owned_by_current_processor(). As Tom pointed out, this is
much more accurate. :^)
|
|
Rename these APIs to make it clearer what they are checking.
|
|
This avoids a race between getting the processor-specific SchedulerData
and accessing it. (Switching to a different CPU in that window means
that we're operating on the wrong SchedulerData.)
Co-authored-by: Tom <tomut@yahoo.com>
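A hedged illustration of the racy two-step pattern this closes (the accessor and method names are paraphrased, not taken from the actual diff):
// Before: two separate steps, with a preemption window between them.
auto& data = Processor::current().get_scheduler_data(); // on CPU A
// <-- preempted here, resumed on CPU B
data.enqueue(thread); // oops: still mutating CPU A's SchedulerData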
|
|
And let id() be the non-static version that gives you the ID of a
Processor object.
|
|
This closes the race window between Processor::current() and a context
switch happening before in_irq().
|
|
This matches MutexLocker, and doesn't sound like it's a lock itself.
|
|
This patch does three things:
- Convert the global thread list from a HashMap to an IntrusiveList
- Combine the thread list and its lock into a SpinLockProtectedValue
- Customize Thread::unref() so it locks the list while unreffing
This closes the same race window for Thread as @sin-ack's recent changes
did for Process.
Note that the HashMap->IntrusiveList conversion means that we lose O(1)
lookups, but the majority of clients of this list are doing traversal,
not lookup. Once we have an intrusive hashing solution, we should port
this to use that, but for now, this gets rid of heap allocations during
a sensitive time.
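A hedged sketch of the SpinLockProtectedValue pattern described above (the list type and function names are illustrative):
// Every access goes through with(), so the list can never be observed
// without its lock held; Thread::unref() takes the same lock before the
// final unref removes the thread from the list.
static SpinLockProtectedValue<Thread::GlobalList> s_all_threads;

void for_each_thread(Function<void(Thread&)> callback)
{
    s_all_threads.with([&](auto& list) {
        for (auto& thread : list)
            callback(thread);
    });
}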
|
|
By making these functions static, we close a window where we could get
preempted after calling Processor::current() and end up on another
processor.
Co-authored-by: Tom <tomut@yahoo.com>
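A hedged before/after illustration, using in_irq() (from the earlier commit) as the example:
// Before: two steps; we can be preempted between current() and in_irq(),
// and then read the flag of a CPU we're no longer running on.
bool racy = Processor::current().in_irq();

// After: the static accessor reads the current CPU's state in one step.
bool safe = Processor::in_irq();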
|
|
Co-authored-by: Tom <tomut@yahoo.com>
|
|
To add a new per-CPU data structure, add an ID for it to the
ProcessorSpecificDataID enum.
Then call ProcessorSpecific<T>::initialize() when you are ready to
construct the per-CPU data structure on the current CPU. It can then
be accessed via ProcessorSpecific<T>::get().
This patch replaces the existing hard-coded mechanisms for Scheduler
and MemoryManager per-CPU data structures.
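A hedged sketch of the workflow (the struct and the enum entry are hypothetical examples, not from the patch):
// 1. Add an ID for the structure (hypothetical entry shown as a comment):
//    enum class ProcessorSpecificDataID { ..., ExampleCounters };
struct ExampleCounters {
    u64 irqs_handled { 0 };
};

// 2. Construct it on the current CPU during that CPU's initialization:
ProcessorSpecific<ExampleCounters>::initialize();

// 3. Access it later from code running on the same CPU:
ProcessorSpecific<ExampleCounters>::get().irqs_handled++;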
|
|
All of them "static member accessed through instance".
|
|
Depending on the value, it can be difficult to tell whether a number is
decimal or hexadecimal, so let's make this more obvious. This also
allows copying and pasting those numbers into GNOME Calculator, and
probably other apps that auto-detect the base.
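For example (hedged; this assumes AK's alternate-form {:#x} format specifier):
dbgln("region base: {}", base);    // is "4096" decimal or hex?
dbgln("region base: {:#x}", base); // "0x1000" is unambiguous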
|
|
The non-CPU-specific code of the kernel shouldn't need to deal with
architecture-specific registers, and should instead deal with an
abstract view of the machine. This allows us to remove a variety of
architecture-specific ifdefs and helps keep the code slightly more
portable.
We do this by exposing an abstract representation of the instruction
pointer, stack pointer, base pointer, return register, etc. on the
RegisterState struct.
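A hedged sketch of the idea (the field subsets and accessor names are representative, not the full struct):
struct RegisterState {
#if ARCH(I386)
    u32 eax, ebp, eip; // ...
    FlatPtr ip() const { return eip; }
    FlatPtr bp() const { return ebp; }
    FlatPtr return_register() const { return eax; }
#else
    u64 rax, rbp, rip; // ...
    FlatPtr ip() const { return rip; }
    FlatPtr bp() const { return rbp; }
    FlatPtr return_register() const { return rax; }
#endif
};
// Callers can now write regs.ip() instead of ifdef'ing on eip vs. rip.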
|
|
This switches CPU usage tracking to more accurately measure the time
spent in user and kernel land, using either the TSC or another time
source.
This will also come in handy when implementing a tickless kernel mode.
|
|
As threads come and go, we can't simply count the time slices used by
the threads that exist at any given point; we also need to account for
threads that have since disappeared. This means we also need to track
how many time slices have expired globally.
However, because this doesn't account for context switches that happen
outside of the system timer tick, values may still be under-reported.
To solve this we will need to track more accurate time information on
each context switch.
This also fixes top's CPU usage calculation, which was still based on
the number of context switches.
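A hedged sketch of the accounting (names are illustrative):
// The global counter keeps growing even after a thread exits, so a
// dead thread's slices still count against the total.
static u64 g_total_time_slices_expired; // bumped on every timer tick

u32 cpu_usage_percent(u64 thread_time_slices_used)
{
    if (g_total_time_slices_expired == 0)
        return 0;
    return static_cast<u32>(thread_time_slices_used * 100 / g_total_time_slices_expired);
}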
Fixes #6473
|
|
This will dump stack traces of all threads when pressing
Ctrl+Shift+Alt+F12
|
|
Threads that don't make syscalls still need to be killed, and we can
do that at any time we want, so long as the thread is in user mode and
not somehow blocked (e.g. on a page fault).
|
|
This reverts commit 3c3a1726df847aff9db73862040d9f7a3b9fc907.
We cannot blindly kill threads just because they're not executing in a
system call. Being blocked (including in a page fault) needs proper
unblocking and potentially kernel stack cleanup before we can mark a
thread as Dying.
Fixes #8691
|
|
This rearranges the order in which things are initialized so that we
try to initialize process and thread management earlier. This is
necessary because a lot of the code uses the Lock class, which really
needs to have a running scheduler in place so that we can properly
preempt.
This also enables us to potentially initialize some things in parallel.
|
|
If no other thread is ready to be run, we don't need to switch to the
idle thread and wait for the next timer interrupt. We can just give
the current thread another time slice and keep it running.
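A hedged sketch of that fast path (helper names are illustrative):
// If the current thread is the only runnable one, skip the context
// switch entirely and just refill its time slice.
Thread* next_thread = peek_next_runnable_thread();
if (!next_thread || next_thread == Thread::current()) {
    Thread::current()->set_ticks_left(time_slice_for(*Thread::current()));
    return;
}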
|
|
This was an old SerenityOS-specific syscall for donating the remainder
of the calling thread's time-slice to another thread within the same
process.
Now that Threading::Lock uses a pthread_mutex_t internally, we no
longer need this syscall, which allows us to get rid of a surprising
amount of unnecessary scheduler logic. :^)
|
|
We're using software context switches, so calling this struct tss is
somewhat misleading.
|
|
This adds just enough stubs to make the kernel compile on x86_64. Obviously
it won't do anything useful - in fact it won't even attempt to boot because
Multiboot doesn't support ELF64 binaries - but it gets those compiler errors
out of the way so more progress can be made getting all the missing
functionality in place.
|
|
This also removes a lot of CPU.h includes in favor of Sections.h.
|
|
This does not introduce any functional changes.
|
|
Steps to reproduce:
$ cat loop.c
int main() { for (;;); }
$ gcc -o loop loop.c
$ ./loop
Terminating this process wasn't previously possible because we only
checked whether the thread should be terminated on syscall exit.
|
|
After marking a thread for death we might end up finalizing the thread
while it still has code to run, e.g. via:
Thread::block -> Thread::dispatch_one_pending_signal
-> Thread::dispatch_signal -> Process::terminate_due_to_signal
-> Process::die -> Process::kill_all_threads -> Thread::set_should_die
This marks the thread for death. It isn't destroyed at this point
though.
The scheduler then gets invoked via:
Thread::block -> Thread::relock_process
At that point we still have a registered blocker on the stack frame
which belongs to Thread::block. Thread::relock_process drops the
critical section which allows the scheduler to run.
When the thread is then scheduled out the scheduler sets the thread
state to Thread::Dying which allows the finalizer to destroy the Thread
object and its associated resources including the kernel stack.
This probably also affects objects other than blockers that rely on
their destructors being run. However, the problem was most noticeable
with blockers, because they are allocated on the stack of the dying
thread and cause an access violation when another thread touches a
blocker that belonged to the now-dead thread.
Fixes #7823.
|
|
There were a few cases where we could end up logging profiling events
before or after the associated process or thread exists in the profile:
- After enabling profiling, we might end up with CPU samples before we
  had a chance to synthesize process/thread creation events.
- After a thread exits, we would still log associated kmalloc/kfree
  events.
Instead, we now just ignore those events.
|
|
They were sorta unneeded. :^)
|
|
Since `s_mm_lock` is a RecursiveSpinlock, if a kernel thread gets
preempted while accidentally holding the lock during switch_context,
another thread running on the same processor could end up manipulating
the state of the memory manager even though it should not be able to:
it will just bump the recursion count and keep going.
This appears to be the root cause of weird bugs like #7359, where page
protection magically appears to be wrong during execution.
To avoid this unfortunate case, let's guard it and make sure it can
never go unnoticed again.
The assert was Tom's idea to help debug this, so I am going to tag him
as co-author of this commit.
Co-Authored-By: Tom <tomut@yahoo.com>
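A hedged sketch of the guard (its placement and the accessor name are assumptions):
// In the context-switch path: if the outgoing thread still owns
// s_mm_lock, crash loudly instead of letting the recursion count
// silently paper over it.
VERIFY(!s_mm_lock.own_lock());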
|
|