|
When switching to the new address space, we also have to switch the
Process::m_master_tls_* variables as they may refer to a region in
the old address space.
This was causing `su` to not run correctly.
Regression from 65641187ffb15e3512fcf9c260c02287f83b5d09.
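A minimal sketch of the shape of the fix, with standard C++ types
standing in for the kernel's own (adopt_new_space is a made-up name):
the TLS bookkeeping moves together with the address space, so it can
never point into a space that was just replaced.

    #include <cstddef>
    #include <memory>
    #include <utility>

    struct AddressSpace { /* page tables, region tree, ... */ };

    struct MasterTls {
        void* region { nullptr }; // points into the owning AddressSpace
        std::size_t size { 0 };
        std::size_t alignment { 0 };
    };

    struct Process {
        std::unique_ptr<AddressSpace> m_space;
        MasterTls m_master_tls;

        // Swap the TLS fields in the same step as the address space, so
        // m_master_tls never refers to a region of the old space.
        void adopt_new_space(std::unique_ptr<AddressSpace> new_space, MasterTls new_tls)
        {
            m_space = std::move(new_space);
            m_master_tls = new_tls;
        }
    };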
|
|
|
|
The idea was to remove this file in bd2011406, but that did not actually
happen. Let's actually remove it.
|
|
|
|
And make sure to also restore it in sys$sigreturn.
|
|
With this change, we are now able to successfully boot into text
mode! :^)
|
|
Specifically, this commit implements two setters, set_userspace_sp and
set_ip, in RegisterState.h, and also adds a stack pointer getter (sp)
in ThreadRegisters.h. Contributed by konrad, thanks for that.
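Roughly the shape of those accessors (the register names below are
illustrative; the real per-architecture headers differ):

    #include <cstdint>

    struct RegisterState {
        uint64_t elr_el1 { 0 }; // return address / instruction pointer
        uint64_t sp_el0 { 0 };  // userspace stack pointer

        void set_ip(uint64_t value) { elr_el1 = value; }
        void set_userspace_sp(uint64_t value) { sp_el0 = value; }
    };

    struct ThreadRegisters {
        uint64_t sp_el0 { 0 };

        uint64_t sp() const { return sp_el0; }
    };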
|
|
And also vice versa. Contributed by konrad, thanks for that.
|
|
This commit also removes the unnecessary ifdefs from
sys/arch/aarch64/regs.h. Contributed by konrad, thanks for that.
|
|
This makes the code architecture independent, and thus makes it work for
aarch64.
|
|
Setting the page table base register (ttbr0_el1) is not enough and
will not flush the TLB caches, contrary to x86_64, where setting the
CR3 register will actually flush the caches. This commit adds the
necessary code to properly flush the TLB caches when context
switching. This commit also changes Processor::flush_tlb_local to use
the vmalle1 variant, as previously we would be flushing the TLBs of
all the cores in the inner-shareable domain.
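The required sequence looks roughly like this, assuming an EL1 kernel
(the exact barriers are a conservative sketch, not lifted from the
commit):

    #include <cstdint>

    static void activate_space(uint64_t ttbr0)
    {
        // Install the new translation table base...
        asm volatile("msr ttbr0_el1, %0" :: "r"(ttbr0));
        asm volatile("isb");
        // ...then invalidate stale translations ourselves; unlike a CR3
        // write on x86_64, the msr above does not touch the TLB.
        asm volatile("tlbi vmalle1"); // this core only; "vmalle1is"
                                      // would broadcast to every core in
                                      // the inner-shareable domain
        asm volatile("dsb nsh");
        asm volatile("isb");
    }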
|
|
For now just return 0 as we have no RTC support on aarch64 yet, and add
a FIXME to return the correct value.
|
|
Previously we had a race condition in the page fault handling: We were
relying on the affected Region staying alive while handling the page
fault, but this was not actually guaranteed, as an munmap from another
thread could result in the region being removed concurrently.
This commit closes that hole by extending the lifetime of the region
affected by the page fault until the handling of the page fault is
complete. This is achieved by maintaining a pseudo-reference count on
the region which counts the number of in-progress page faults being
handled on this region, and extending the lifetime of the region
while this counter is non-zero.
Since both the increment of the counter by the page fault handler and
the spin loop waiting for it to reach 0 during Region destruction are
serialized using the appropriate AddressSpace spinlock, eventual
progress is guaranteed: As soon as the region is removed from the
tree, no more page faults on the region can start.
And similarly correctness is ensured: The counter is incremented under
the same lock, so any page faults that are being handled will have
already incremented the counter before the region is deallocated.
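Sketched with std::mutex and std::atomic standing in for the kernel's
spinlock and counter, the scheme looks like this:

    #include <atomic>
    #include <mutex>

    struct Region {
        std::atomic<unsigned> in_progress_faults { 0 };
    };

    std::mutex space_lock; // stands in for the AddressSpace spinlock

    void handle_page_fault(Region& region)
    {
        {
            std::lock_guard guard(space_lock);
            // The region was looked up in the tree under this lock, so
            // it is still alive here; pin it before dropping the lock.
            region.in_progress_faults.fetch_add(1);
        }
        // ... service the fault without holding the lock ...
        region.in_progress_faults.fetch_sub(1);
    }

    void remove_region(Region& region)
    {
        {
            std::lock_guard guard(space_lock);
            // ... unlink the region from the tree; no new faults can
            // start against it from this point on ...
        }
        while (region.in_progress_faults.load() != 0) {
            // spin: the count can only shrink now, so this terminates
        }
        // safe to free the region here
    }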
|
|
This replaces the previous owning address space pointer. This commit
should not change any of the existing functionality, but it lays down
the groundwork needed to let us properly access the region table under
the address space spinlock during page fault handling.
|
|
Instead of setting up the new address space on its own and only
swapping to it at the end, we now immediately swap to the new address
space (while still keeping the old one alive), and only revert back
to the old one if we fail at any point.
This is done to ensure that the process' active address space (aka
the contents of m_space) always matches the actual address space in
use by it. That should allow us to eventually make the page fault
handler process-aware, which will let us properly take the process
address space lock.
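In shape (modeled with std::unique_ptr; load_executable is a
placeholder for the parts of exec that can fail):

    #include <memory>
    #include <utility>

    struct AddressSpace {};

    struct Process {
        std::unique_ptr<AddressSpace> m_space;

        bool exec()
        {
            auto old_space = std::exchange(m_space, std::make_unique<AddressSpace>());
            // From here on, m_space matches the space actually in use.
            if (!load_executable()) {
                m_space = std::move(old_space); // revert on any failure
                return false;
            }
            return true; // the old space dies here
        }

        bool load_executable() { return true; } // placeholder
    };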
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
For some reason GCC did not complain about this.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
All accesses to shared mutable data are already serialized behind the
process address space spinlock.
|
|
The current way we handle sync commands is very ugly and depends on a
lot of preconditions. Now that we have an end_io handler for a
request, we can use WaitQueue to do sync commands more elegantly.
This does depend on the block layer sending one request at a time,
but this change is a step forward towards better IO handling.
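The pattern, sketched with std primitives in place of the kernel's
WaitQueue:

    #include <condition_variable>
    #include <mutex>

    struct SyncCommand {
        std::mutex lock;
        std::condition_variable queue;
        bool completed { false };

        // Registered as the request's end_io handler; runs on completion.
        void end_io()
        {
            std::lock_guard guard(lock);
            completed = true;
            queue.notify_one();
        }

        // Submitter side: sleep until end_io wakes us.
        void wait_until_complete()
        {
            std::unique_lock guard(lock);
            queue.wait(guard, [&] { return completed; });
        }
    };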
|
|
There was a private variable named m_current_request which was used
to track a single request at a time. That single-request guarantee is
provided by the block layer, which waits on each IO. This design will
break down in the driver once the block layer removes that
constraint.
Redesign the IO handling in a completely asynchronous way by
maintaining requests up to the queue depth. An NVMeIO struct is
introduced to track a submitted IO, along with other information such
as whether the IO is still being processed and an endio callback
which will be called at the end of a request.
A private hashmap is created, keyed on the command id of a request,
with NVMeIO as the value. The endio handler comes in handy if we are
doing a sync request and we want to wake up the wait queue at the
end.
This change also simplifies the code by removing some special
conditions in the submit_sqe function that were marked as FIXME for a
long time.
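Roughly the shape of the new tracking (field names follow the commit
message; the details are illustrative):

    #include <cstdint>
    #include <functional>
    #include <unordered_map>

    struct NVMeIO {
        bool used { false };          // is the IO still being processed?
        std::function<void()> end_io; // invoked at the end of a request
    };

    struct NVMeQueue {
        // Keyed on the command id of the submission queue entry.
        std::unordered_map<uint16_t, NVMeIO> m_requests;

        void complete_request(uint16_t cid)
        {
            auto it = m_requests.find(cid);
            if (it == m_requests.end())
                return;
            if (it->second.end_io)
                it->second.end_io(); // e.g. wakes a sync IO's WaitQueue
            m_requests.erase(it);
        }
    };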
|
|
Using sq_tail as the cid makes an inherent assumption that we send
only one IO at a time. Use an atomic variable instead for the command
id of a submission queue entry.
As sq_tail is not used as cid anymore, remove m_prev_sq_tail which used
to hold the last used sq_tail value.
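A sketch of the idea (the wrap/reuse policy is an assumption):

    #include <atomic>
    #include <cstdint>

    struct SubmissionQueue {
        std::atomic<uint16_t> m_cid { 0 };

        uint16_t next_cid()
        {
            // Wraps at 16 bits; in practice the queue depth bounds how
            // many cids are in flight at once.
            return m_cid.fetch_add(1, std::memory_order_relaxed);
        }
    };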
|
|
This function is already serialized by access to process protected data.
|
|
The SID was duplicated between the process credentials and protected
data. And to make matters worse, the credentials SID was not updated in
sys$setsid.
This patch fixes this by removing the SID from protected data and
updating the credentials SID everywhere.
|
|
This function is now serialized by access to the process group list,
and to the current process's protected data.
|
|
This closes two race windows:
- ProcessGroup removed itself from the "all process groups" list in its
destructor. It was possible to walk the list between the last unref()
and the destructor invocation, and grab a pointer to a ProcessGroup
that was about to get deleted.
- sys$setsid() could end up creating a process group that already
existed, as there was a race window between checking if the PGID
is used, and actually creating a ProcessGroup with that PGID.
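Both races close if lookup, creation, and removal all happen under
one lock over the "all process groups" list; sketched here with std
types:

    #include <map>
    #include <memory>
    #include <mutex>

    using pgid_t = int;
    struct ProcessGroup {};

    std::mutex all_groups_lock;
    std::map<pgid_t, std::weak_ptr<ProcessGroup>> all_groups;

    std::shared_ptr<ProcessGroup> find_or_create(pgid_t pgid)
    {
        std::lock_guard guard(all_groups_lock);
        // A group mid-destruction fails the lock() and is not handed out.
        if (auto existing = all_groups[pgid].lock())
            return existing; // don't create a duplicate group
        auto group = std::make_shared<ProcessGroup>();
        all_groups[pgid] = group; // removal happens under this lock too
        return group;
    }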
|
|
No need for LockRefPtr here, as the pointer never changes after
initialization.
|
|
|
|
Now that it's no longer using LockRefPtr, we can actually move it into
protected data. (LockRefPtr couldn't be stored there because protected
data is immutable at times, and LockRefPtr uses some of its own bits
for locking.)
|
|
TTY was only stored in Process::m_tty, so make that a SpinlockProtected.
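The SpinlockProtected pattern, modeled with std::mutex: the wrapped
value is only reachable through a callback that holds the lock.

    #include <memory>
    #include <mutex>

    template<typename T>
    class SpinlockProtected {
    public:
        template<typename F>
        decltype(auto) with(F f)
        {
            std::lock_guard guard(m_lock);
            return f(m_value);
        }

    private:
        std::mutex m_lock;
        T m_value {};
    };

    struct TTY {};

    struct Process {
        SpinlockProtected<std::shared_ptr<TTY>> m_tty;
    };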
|
|
This was some pre-SMP historical artifact.
|
|
|
|
These syscalls are already protected by existing locking mechanisms,
including the mutex inside InodeWatcher.
|
|
Same as sys$kill, nothing here that isn't already protected by existing
locks.
|
|
This syscall sends a signal to other threads, or to the calling
thread itself. Signal delivery is already guarded by its own locking,
and is widely used within the kernel without help from the big lock.
|
|
These are artifacts from the pre-SMP times.
|
|
Same deal as sys$times, nothing here that needs locking at the moment.
|
|
Instead of templatizing on a bool parameter, use an enum for clarity.
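The clarity win, with a made-up enum (not the one from the commit):

    enum class Interrupts {
        Enabled,
        Disabled,
    };

    template<Interrupts interrupts>
    void do_work()
    {
        if constexpr (interrupts == Interrupts::Enabled) {
            // ... interrupt-enabled path ...
        }
    }

    // do_work<Interrupts::Disabled>() documents itself at the call site
    // in a way that do_work<false>() does not.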
|
|
...and also make the Process tick counters clock_t instead of u32.
It seems harmless to get interrupted in the middle of reading these
counters and reporting slightly fewer ticks in some category.
|
|
Expand the following types from 32-bit to 64-bit:
- blkcnt_t
- blksize_t
- dev_t
- nlink_t
- suseconds_t
- clock_t
This matches their size on other 64-bit systems.
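The widened definitions would look roughly like this; signedness
follows the usual POSIX conventions, and the exact SerenityOS headers
may differ:

    #include <cstdint>

    typedef int64_t  blkcnt_t;
    typedef int64_t  blksize_t;
    typedef uint64_t dev_t;
    typedef uint64_t nlink_t;
    typedef int64_t  suseconds_t;
    typedef int64_t  clock_t;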
|
|
The body of this syscall is already serialized by calling
with_mutable_protected_data().
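The pattern being referred to, sketched with std::mutex; sys_umask is
a made-up example body, not necessarily this commit's syscall:

    #include <mutex>

    struct ProtectedData {
        int umask { 022 };
    };

    std::mutex protected_data_lock;
    ProtectedData protected_data;

    template<typename F>
    auto with_mutable_protected_data(F f)
    {
        std::lock_guard guard(protected_data_lock);
        return f(protected_data);
    }

    int sys_umask(int mask)
    {
        // The entire syscall body runs under the lock, so no extra
        // serialization (like the big lock) is needed.
        return with_mutable_protected_data([&](ProtectedData& data) {
            int old_mask = data.umask;
            data.umask = mask & 0777;
            return old_mask;
        });
    }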
|
|
Yet another syscall that only messes with the current thread.
|
|
Another one that only touches the current thread.
|
|
Another one that only messes with the current thread.
|
|
This syscall is only concerned with the current thread.
|