path: root/Kernel/Thread.h
Age  Commit message  Author
2020-08-19  Kernel: Remove an unimplemented function (#3210)  [Muhammad Zahalqa]
2020-08-16  AK: Rename KB, MB, GB to KiB, MiB, GiB  [Nico Weber]
The SI prefixes "k", "M", "G" mean "10^3", "10^6", "10^9". The IEC prefixes "Ki", "Mi", "Gi" mean "2^10", "2^20", "2^30". Let's use the correct name, at least in code. Only changes the name of the constants, no other behavior change.
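For illustration, binary-prefix constants of this kind are typically defined along these lines (a minimal sketch; the exact AK definitions may differ):

    #include <cstddef>

    // IEC binary prefixes: powers of two, as opposed to the SI powers of ten.
    constexpr size_t KiB = 1024;
    constexpr size_t MiB = 1024 * KiB;
    constexpr size_t GiB = 1024 * MiB;

    // Example usage: a 64 KiB kernel stack (size chosen only for illustration).
    constexpr size_t default_kernel_stack_size = 64 * KiB;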
2020-08-15  Kernel: Briefly resume stopped threads when being killed  [Tom]
We need to briefly put Stopped threads back into Running state so that the kernel stacks can get cleaned up when they're being killed. Fixes #3130
2020-08-11  Kernel: Always return from Thread::wait_on  [Tom]
We need to always return from Thread::wait_on, even when a thread is being killed. This is necessary so that the kernel call stack can clean up and release references held by it. Then, right before transitioning back to user mode, we check if the thread is supposed to die, and at that point change the thread state to Dying to prevent further scheduling of this thread. This addresses some possible resource leaks similar to #3073
2020-08-10  Kernel: More PID/TID typing  [Ben Wiederhake]
2020-08-10  Kernel: PID/TID typing  [Ben Wiederhake]
This compiles, and contains exactly the same bugs as before. The regex 'FIXME: PID/' should reveal all markers that I left behind, including:
- Incomplete conversion
- Issues or things that look fishy
- Actual bugs that will go wrong during runtime
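A strong-typedef approach roughly like the following (hypothetical names; the kernel uses its own AK helpers for this) is what lets such a conversion catch mix-ups at compile time:

    #include <cstdint>

    // Distinct wrapper types: a ThreadID can no longer be passed where a
    // ProcessID is expected, even though both wrap the same integer type.
    class ProcessID {
    public:
        explicit ProcessID(int32_t value) : m_value(value) {}
        int32_t value() const { return m_value; }
    private:
        int32_t m_value { 0 };
    };

    class ThreadID {
    public:
        explicit ThreadID(int32_t value) : m_value(value) {}
        int32_t value() const { return m_value; }
    private:
        int32_t m_value { 0 };
    };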
2020-08-06  Kernel: Dequeue dying threads from WaitQueue  [Tom]
If a thread is waiting but getting killed, we need to dequeue the thread from the WaitQueue so that a potential wake before finalization doesn't happen.
2020-08-03  Kernel: Consolidate timeout logic  [Tom]
Allow passing in an optional timeout to Thread::block and move the timeout check out of Thread::Blocker. This way all Blockers implicitly support timeouts and don't need to implement it themselves. Do however allow them to override timeouts (e.g. for sockets).
2020-08-03  Kernel: Fix a few Thread::block related races  [Tom]
We need to have a Thread lock to protect threading related operations, such as Thread::m_blocker which is used in Thread::block. Also, if a Thread::Blocker indicates that it should be unblocking immediately, don't actually block the Thread and instead return immediately in Thread::block.
2020-08-02  Kernel: Fix signal delivery when no syscall is made  [Tom]
This fixes a regression introduced by the new software context switching where the Kernel would not deliver a signal unless the process is making system calls. This is because the TSS no longer updates the CS value, so the scheduler never considered delivery as the process always appeared to be in kernel mode. With software context switching we can just set up the signal trampoline at any time and when the processor returns back to user mode it'll get executed. This should fix e.g. killing programs that are stuck in some tight loop that doesn't make any system calls and is only pre-empted by the timer interrupt. Fixes #2958
2020-08-02  Kernel: Remove ProcessInspectionHandle and make Process RefCounted  [Tom]
By making the Process class RefCounted we don't really need ProcessInspectionHandle anymore. This also fixes some race conditions where a Process may be deleted while still being used by ProcFS. Also make sure to acquire the Process' lock when accessing regions. Last but not least, there's no reason why a thread can't be scheduled while being inspected, though in practice it won't happen anyway because the scheduler lock is held at the same time.
2020-07-25  Kernel: Allow Thread::sleep for more than 388 days  [Ben Wiederhake]
Because Thread::sleep is an internal interface, it's easy to check that there are only few callers: Process::sys$sleep, usleep, and nanosleep are happy with this increased size, because now they support the entire range of their arguments (assuming small-ish values for ticks_per_second()). SyncTask doesn't care. Note that the old behavior wasn't "cap out at 388 days", which would have been reasonable. Instead, the code resulted in unsigned overflow, meaning that a very long sleep would "on average" end after about 194 days, sometimes much quicker.
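For context: with a 32-bit tick counter and roughly 128 scheduler ticks per second (an assumption for the sake of the arithmetic), the counter covers 2^32 / 128 ≈ 33,554,432 seconds ≈ 388 days, which is presumably where the old limit came from; wrapping around that counter is what produced the much shorter effective sleeps described above.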
2020-07-07  Kernel: Fix checking BlockResult  [Tom]
We now have BlockResult::WokeNormally and BlockResult::NotBlocked, both of which indicate no error. We can no longer just check for BlockResult::WokeNormally and assume anything else must be an interruption.
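A sketch of what the check looks like with two non-error results (only WokeNormally and NotBlocked are taken from the commit message; the other enumerators here are assumptions for illustration):

    enum class BlockResult {
        WokeNormally,
        NotBlocked,
        InterruptedBySignal,   // assumed additional values
        InterruptedByTimeout,
    };

    // Both WokeNormally and NotBlocked mean "no error"; checking only for
    // WokeNormally would incorrectly treat NotBlocked as an interruption.
    inline bool was_interrupted(BlockResult result)
    {
        return result != BlockResult::WokeNormally && result != BlockResult::NotBlocked;
    }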
2020-07-07  Kernel+LibELF: Expose ELF Auxiliary Vector to Userspace  [Andrew Kaster]
The AT_* entries are placed after the environment variables, so that they can be found by iterating until the end of the envp array, and then going even further beyond :^)
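Userspace can then locate the auxiliary vector by walking past the envp terminator, roughly like this (a sketch assuming a Linux-style <elf.h> that provides Elf32_auxv_t and the AT_* constants):

    #include <elf.h>

    int main(int argc, char** argv, char** envp)
    {
        // Skip past the environment variables; the AT_* entries start right
        // after the NULL pointer that terminates envp.
        char** p = envp;
        while (*p)
            ++p;
        auto* auxv = reinterpret_cast<Elf32_auxv_t*>(p + 1);
        for (; auxv->a_type != AT_NULL; ++auxv) {
            // auxv->a_type identifies the entry (AT_ENTRY, AT_PAGESZ, ...),
            // auxv->a_un.a_val holds its value.
        }
        return 0;
    }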
2020-07-06  Kernel: Enhance WaitQueue to remember pending wakes  [Tom]
If WaitQueue::wake_all, WaitQueue::wake_one, or WaitQueue::wake_n is called but nobody is currently waiting, we should remember that fact and prevent someone from waiting after such a request. This solves a race condition where the Finalizer thread is notified to finalize a thread, but it is not (yet) waiting on this queue. Fixes #2693
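The idea can be sketched as a counter of wakes that arrived while nobody was waiting (a simplified, single-threaded model, not the real WaitQueue, which is lock-protected):

    #include <deque>

    class WaitQueue {
    public:
        // Called by the waker. If nobody is waiting yet, remember the wake.
        void wake_one()
        {
            if (m_waiters.empty()) {
                ++m_pending_wakes;
                return;
            }
            // ... dequeue one waiter and make it runnable ...
            m_waiters.pop_front();
        }

        // Called by the would-be waiter. Returns false if a wake already
        // happened, in which case the caller must not block at all.
        bool enqueue(void* waiter)
        {
            if (m_pending_wakes > 0) {
                --m_pending_wakes;
                return false;
            }
            m_waiters.push_back(waiter);
            return true;
        }

    private:
        std::deque<void*> m_waiters;
        unsigned m_pending_wakes { 0 };
    };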
2020-07-06  Kernel: Various context switch fixes  [Tom]
These changes solve a number of problems with the software context switching:
* The scheduler lock really should be held throughout context switches
* Transitioning from the initial (idle) thread to another needs to hold the scheduler lock
* Transitioning from a dying thread to another also needs to hold the scheduler lock
* Dying threads cannot necessarily be finalized if they haven't been switched out yet, so flag them as active while a processor is running them (the Running state may be switched to Dying while the thread is still actually running)
2020-07-06  Kernel: Require a reason to be passed to Thread::wait_on  [Tom]
The Lock class still permits no reason, but for everything else require a reason to be passed to Thread::wait_on. This makes it easier to diagnose why a Thread is in Queued state.
2020-07-03  Kernel: Fix retrieving frame pointer from a thread  [Tom]
If we're trying to walk the stack for another thread, we can no longer retrieve the EBP register from Thread::m_tss. Instead, we need to look at the top of the kernel stack, because all threads not currently running were last in kernel mode. Context switches now always trigger a brief switch to kernel mode, and Thread::m_tss is only used to save ESP and EIP. Fixes #2678
2020-07-03  Kernel: Fix signal delivery  [Tom]
When delivering urgent signals to the current thread we need to check if we should be unblocked, and if not we need to yield to another process. We also need to make sure that we suppress context switches during Process::exec() so that we don't clobber the registers that it sets up (eip mainly) by a context switch. To be able to do that we add the concept of a critical section, which are similar to Process::m_in_irq but different in that they can be requested at any time. Calls to Scheduler::yield and Scheduler::donate_to will return instantly without triggering a context switch, but the processor will then asynchronously trigger a context switch once the critical section is left.
2020-07-02  Kernel: Remove no-longer-used GDT selector from Thread  [Andreas Kling]
Now that we use software context switching, each thread no longer has its own GDT entry (yay!) so we can get rid of this Thread member. :^)
2020-07-01  Kernel: Turn Thread::current and Process::current into functions  [Tom]
This allows us to query the current thread and process on a per-processor basis.
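A rough model of the change, with illustrative names only (the real accessors read a per-CPU structure set up by the Processor code, not a thread_local):

    class Thread;
    class Process;

    // Each CPU gets its own Processor structure; the current thread/process are
    // looked up through it instead of through a single global variable.
    struct Processor {
        Thread* current_thread { nullptr };
        Process* current_process { nullptr };

        // Stand-in for "the Processor of the CPU we are running on"; real kernel
        // code would derive this from a per-CPU register.
        static Processor& current()
        {
            static thread_local Processor s_processor;
            return s_processor;
        }
    };

    inline Thread* current_thread() { return Processor::current().current_thread; }
    inline Process* current_process() { return Processor::current().current_process; }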
2020-07-01  Kernel/LibCore: Expose processor id where a thread last ran  [Tom]
2020-07-01  Kernel: Implement software context switching and Processor structure  [Tom]
Moving certain globals into a new Processor structure for each CPU allows us to eventually run an instance of the scheduler on each CPU.
2020-06-22  LibC: Implement pselect  [Nico Weber]
pselect() is similar to select(), but it takes its timeout as a timespec instead of a timeval, and it takes an additional sigmask parameter. Change the sys$select parameters to match pselect() and implement select() in terms of pselect().
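The select()-on-top-of-pselect() wrapper is essentially a timeout conversion plus a null signal mask; a sketch (the real LibC implementation may differ in details such as reporting the remaining time):

    #include <sys/select.h>
    #include <time.h>

    int select_via_pselect(int nfds, fd_set* readfds, fd_set* writefds,
                           fd_set* exceptfds, struct timeval* timeout)
    {
        struct timespec ts;
        struct timespec* ts_ptr = nullptr;
        if (timeout) {
            ts.tv_sec = timeout->tv_sec;
            ts.tv_nsec = timeout->tv_usec * 1000; // microseconds -> nanoseconds
            ts_ptr = &ts;
        }
        // No signal mask: behave exactly like select().
        return pselect(nfds, readfds, writefds, exceptfds, ts_ptr, nullptr);
    }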
2020-05-20  AK+Kernel: Help the compiler inline a bunch of trivial methods  [Sergey Bugaev]
If these methods get inlined, the compiler is able to statically eliminate most of the assertions. Alas, it doesn't realize this, and believes inlining them to be too expensive. So give it a strong hint that it's not the case. This *decreases* the kernel binary size.
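The usual form of such a hint is an always_inline attribute on the trivial accessors, e.g. (a sketch; AK wraps the attribute in its own macro):

    // GCC/Clang attribute forcing inlining regardless of the cost model.
    #define ALWAYS_INLINE __attribute__((always_inline)) inline

    class Example {
    public:
        ALWAYS_INLINE int state() const { return m_state; }
    private:
        int m_state { 0 };
    };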
2020-04-26  Kernel: Add timeout support to Thread::wait_on  [Brian Gianforcaro]
This change plumbs a new optional timeout option to wait_on. The timeout is enabled by enqueuing a timer on the timer queue while we are waiting. We can then see if we were woken up or timed out by checking if we are still on the wait queue or not.
2020-04-22  Kernel: Make Process and Thread non-copyable and non-movable  [Andreas Kling]
2020-04-13  ptrace: Add PT_SETREGS  [Itamar]
PT_SETREGS sets the registers of the traced thread. It can only be used when the tracee is stopped. Also, refactor ptrace. The implementation was getting long and cluttered the already large Process.cpp file. This commit moves the bulk of the implementation to Kernel/Ptrace.cpp, and factors out peek & poke to separate methods of the Process class.
2020-04-13  CPU: Handle breakpoint trap  [Itamar]
Also, start working on the debugger app.
2020-04-11  Kernel: Include the current instruction pointer in profile samples  [Andreas Kling]
We were missing the innermost instruction pointer when sampling. This makes the instruction-level profile info a lot cooler! :^)
2020-03-28  Kernel: Add 'ptrace' syscall  [Itamar]
This commit adds a basic implementation of the ptrace syscall, which allows one process (the tracer) to control another process (the tracee). While a process is being traced, it is stopped whenever a signal is received (other than SIGCONT). The tracer can start tracing another thread with PT_ATTACH, which causes the tracee to stop. From there, the tracer can use PT_CONTINUE to continue the execution of the tracee, or use other request codes (which haven't been implemented yet) to modify the state of the tracee. Additional request codes are PT_SYSCALL, which causes the tracee to continue execution but stop at the next entry or exit from a syscall, and PT_GETREGS which fetches the last saved register set of the tracee (can be used to inspect syscall arguments and return value). A special request code is PT_TRACE_ME, which is issued by the tracee and causes it to stop when it calls execve and wait for the tracer to attach.
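From the tracer's side, the request codes described above are used roughly like this (a sketch using the BSD-style PT_* names from the commit message; error handling and exact argument conventions omitted):

    #include <sys/ptrace.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void trace_and_resume(pid_t tracee)
    {
        // Attach: the tracee stops, and we wait for that stop to be reported.
        ptrace(PT_ATTACH, tracee, nullptr, 0);
        waitpid(tracee, nullptr, WUNTRACED);

        // Resume the tracee; with PT_SYSCALL it would instead stop again at the
        // next syscall entry or exit.
        ptrace(PT_CONTINUE, tracee, nullptr, 0);
    }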
2020-03-23  AK: Reduce header dependency graph of String.h  [Andreas Kling]
String.h no longer pulls in StringView.h. We do this by moving a bunch of String functions out-of-line.
2020-03-08  AK: Add global FlatPtr typedef. It's u32 or u64, based on sizeof(void*)  [Andreas Kling]
Use this instead of uintptr_t throughout the codebase. This makes it possible to pass a FlatPtr to something that has u32 and u64 overloads.
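In standard C++ terms the typedef amounts to something like the following (a sketch; AK uses its own Conditional helper rather than <type_traits>):

    #include <cstdint>
    #include <type_traits>

    // u32 on 32-bit targets, u64 on 64-bit targets.
    using FlatPtr = std::conditional_t<sizeof(void*) == 8, uint64_t, uint32_t>;
    static_assert(sizeof(FlatPtr) == sizeof(void*), "FlatPtr must match pointer size");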
2020-03-01  Kernel: Restore the previous thread state on SIGCONT after SIGSTOP  [Andreas Kling]
When stopping a thread with the SIGSTOP signal, we now store the thread state in Thread::m_stop_state. That state is then restored on SIGCONT. This fixes an issue where previously-blocked threads would unblock upon resume. Now they simply resume in the Blocked state, and it's up to the regular unblocking mechanism to unblock them. Fixes #1326.
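A minimal sketch of the mechanism (illustrative class and method names, not the real Thread):

    class Thread {
    public:
        enum class State { Invalid, Runnable, Running, Blocked, Stopped };

        void on_sigstop()
        {
            m_stop_state = m_state;        // remember e.g. Blocked
            m_state = State::Stopped;
        }

        void on_sigcont()
        {
            m_state = m_stop_state;        // resume in the previous state,
            m_stop_state = State::Invalid; // not unconditionally Runnable
        }

    private:
        State m_state { State::Runnable };
        State m_stop_state { State::Invalid };
    };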
2020-02-18  Kernel: Reset FPU state on exec()  [Andreas Kling]
2020-02-17  Kernel: Replace "current" with Thread::current and Process::current  [Andreas Kling]
Suggested by Sergey. The currently running Thread and Process are now Thread::current and Process::current respectively. :^)
2020-02-16  Kernel: Reduce header dependencies of Process and Thread  [Andreas Kling]
2020-02-16  Kernel: Add forward declaration header  [Andreas Kling]
2020-02-16  Kernel: Move all code into the Kernel namespace  [Andreas Kling]
2020-02-16  Kernel: Rename RegisterDump => RegisterState  [Andreas Kling]
2020-02-02  Kernel: Update Thread::raw_backtrace() signature to use uintptr_t  [Andreas Kling]
2020-01-27  Kernel: Expose the signal that stopped a thread via sys$waitpid()  [Andreas Kling]
2020-01-26  Kernel: read()/write() should respect timeouts when used on a socket  [Andreas Kling]
Move timeout management to the ReadBlocker and WriteBlocker classes. Also get rid of the specialized ReceiveBlocker since it no longer does anything that ReadBlocker can't do.
2020-01-20  Kernel: Use the templated copy_to/from_user() in more places  [Andreas Kling]
These ensure that the "to" and "from" pointers have the same type, and also that we copy the correct number of bytes.
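The typed wrappers are roughly of this shape (a sketch; the kernel's raw byte-count overloads are assumed to exist elsewhere):

    #include <cstddef>
    #include <type_traits>

    // Raw, byte-count-based primitive (declared here for the sketch).
    bool copy_to_user(void* dest, const void* src, size_t n);

    // Typed wrapper: "to" and "from" must have the same type, and the byte count
    // is derived from that type, so it cannot be wrong.
    template<typename T>
    [[nodiscard]] bool copy_to_user(T* dest, const T* src)
    {
        static_assert(std::is_trivially_copyable_v<T>);
        return copy_to_user(static_cast<void*>(dest), static_cast<const void*>(src), sizeof(T));
    }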
2020-01-18  Meta: Add license header to source files  [Andreas Kling]
As suggested by Joshua, this commit adds the 2-clause BSD license as a comment block to the top of every source file. For the first pass, I've just added myself for simplicity. I encourage everyone to add themselves as copyright holders of any file they've added or modified in some significant way. If I've added myself in error somewhere, feel free to replace it with the appropriate copyright holder instead. Going forward, all new source files should include a license header.
2020-01-12  Kernel: Fix Lock racing to the WaitQueue  [Andreas Kling]
There was a time window between releasing Lock::m_lock and calling into the lock's WaitQueue where someone else could take m_lock and bring two threads into a deadlock situation. Fix this issue by holding Lock::m_lock until interrupts are disabled by either Thread::wait_on() or WaitQueue::wake_one().
2020-01-10  Kernel: Fix kernel null deref on process crash during join_thread()  [Andreas Kling]
The join_thread() syscall is not supposed to be interruptible by signals, but it was. And since the process death mechanism piggybacked on signal interrupts, it was possible to interrupt a pthread_join() by killing the process that was doing it, leading to confusion due to some assumptions being made by Thread::finalize() for threads that have a pending joiner. This patch fixes the issue by making "interrupted by death" a distinct block result separate from "interrupted by signal". Then we handle that state in join_thread() and tidy things up so that thread finalization doesn't get confused by the pending joiner being gone. Test: Tests/Kernel/null-deref-crash-during-pthread_join.cpp
2020-01-09  Kernel: Remove unused variable Thread::m_userspace_stack_region  [Andreas Kling]
2020-01-01  Kernel: Switch to eagerly restoring x86 FPU state on context switch  [Andreas Kling]
Lazy FPU restore is well known to be vulnerable to timing attacks, and eager restore is a lot simpler anyway, so let's just do it eagerly.
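Eager restore boils down to an FXSAVE of the outgoing thread's state and an FXRSTOR of the incoming one on every switch, instead of deferring via the CR0.TS trap; a sketch assuming x86 with 16-byte-aligned 512-byte save areas:

    struct FPUState {
        alignas(16) unsigned char buffer[512];
    };

    // Called during every context switch.
    inline void switch_fpu_state(FPUState& outgoing, FPUState& incoming)
    {
        asm volatile("fxsave %0" : "=m"(outgoing));
        asm volatile("fxrstor %0" : : "m"(incoming));
    }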
2019-12-30  Kernel: Also add a process boosting mechanism  [Andreas Kling]
Let's also have set_process_boost() for giving all threads in a process the same boost.
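A sketch of the process-level helper simply forwarding to the per-thread boost (illustrative classes and names, not the real Process/Thread):

    #include <cstdint>
    #include <vector>

    class Thread {
    public:
        void set_priority_boost(uint32_t boost) { m_priority_boost = boost; }
    private:
        uint32_t m_priority_boost { 0 };
    };

    class Process {
    public:
        // Give every thread in the process the same boost.
        void set_process_boost(uint32_t boost)
        {
            for (auto& thread : m_threads)
                thread.set_priority_boost(boost);
        }
    private:
        std::vector<Thread> m_threads;
    };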