path: root/Kernel/Arch/x86/common/Processor.cpp
Age  Commit message  Author
2022-04-01  Everywhere: Run clang-format  (Idan Horowitz)
2022-03-27  Kernel: Support all AMD-defined CPUID feature flags for EAX=80000001h  (Linus Groh)
We're now able to detect all the AMD-defined CPUID feature flags from ECX/EDX for EAX=80000001h :^)
2022-03-27  Kernel: Support all Intel-defined extended CPUID feature flags for EAX=7  (Linus Groh)
We're now able to detect all the extended CPUID feature flags from EBX/ECX/EDX for EAX=7 :^)
2022-03-27  Kernel: Support all Intel-defined CPUID feature flags for EAX=1  (Linus Groh)
We're now able to detect all the regular CPUID feature flags from ECX/EDX for EAX=1 :^) None of the new ones are being used for anything yet, but they will show up in /proc/cpuinfo and subsequently lscpu and SystemMonitor.

Note that I replaced the periods from the SSE 4.1 and 4.2 instructions with underscores, which matches the internal enum names, Linux's /proc/cpuinfo and the general pattern of replacing special characters with underscores to limit feature names to [a-z0-9_].

The enum member stringification has been moved to a new function for better re-usability and to avoid cluttering up Processor.cpp.
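As a rough illustration of what one of these checks looks like (a minimal sketch using the compiler's __get_cpuid helper from <cpuid.h> rather than the kernel's own CPUID wrapper; the bit position follows the Intel SDM):

    #include <cpuid.h>

    // Minimal sketch: query CPUID leaf EAX=1 and test ECX[20], the SSE4_2 bit.
    static bool has_sse4_2()
    {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return false; // leaf 1 not available
        return (ecx >> 20) & 1;
    }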
2022-03-27  Kernel: Implement CPUFeature as an ArbitrarySizedEnum  (Linus Groh)
This will make it possible to add many, many more CPU features - more than the current limit of 32, and more than the later limit of 64 we would hit if we stuck with an enum class, to be specific :^)
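To illustrate the underlying idea (this is a hand-rolled sketch, not AK::ArbitrarySizedEnum's actual interface): back each feature flag with one bit of a multi-limb integer, so the set is no longer capped by the width of the enum's underlying type.

    #include <stdint.h>

    // Sketch: two 64-bit limbs give room for up to 128 feature bits.
    struct CpuFeatureSet {
        uint64_t limbs[2] {};

        void set(unsigned bit) { limbs[bit / 64] |= 1ull << (bit % 64); }
        bool has(unsigned bit) const { return (limbs[bit / 64] >> (bit % 64)) & 1; }
    };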
2022-03-27  Kernel: Reorder code in Processor::cpu_detect() for readability  (Linus Groh)
Checks of ECX go before EDX, and the bit indices are now ordered properly. Additionally, handling of the EDX[11] bit has been moved into a lambda function to keep the series of if statements neatly together. All of this makes it *a lot* easier to follow along and compare the implementation to the tables in the Intel manual, e.g. to find missing checks.
2022-03-22  Kernel: Add and use bitwise operators to CPUFeature  (Hendiadyoin1)
2022-02-11  Kernel: Workaround QEMU hypervisor.framework CPUID max leaf bug  (Idan Horowitz)
This works around issue #10382 until it is fixed on QEMU's side. Patch from Anonymous.
2022-02-09  Kernel: Change static constexpr variables to constexpr where possible  (Lenny Maiorani)
Function-local `static constexpr` variables can be `constexpr`. This can reduce memory consumption and binary size, and enable additional compiler optimizations. These changes result in a stripped x86_64 kernel binary size reduction of 592 bytes.
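A minimal sketch of the pattern being changed (names are illustrative):

    // The constant is only consumed at compile time, so it does not need
    // static storage duration.
    int scaled(int x)
    {
        constexpr int factor = 4; // was: static constexpr int factor = 4;
        return x * factor;
    }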
2022-01-30  Kernel: Simplify x86 IOPL sanity check  (Andreas Kling)
Move this architecture-specific sanity check (IOPL must be 0) out of Scheduler and into the x86 enter_thread_context(). Also do this for every thread and not just userspace ones.
2022-01-30  Kernel: Make Thread::State an `enum class` and use it consistently  (Andreas Kling)
It was annoyingly hard to spot these when we were using them with different amounts of qualification everywhere. This patch uses Thread::State::Foo everywhere instead of Thread::Foo or just Foo.
2022-01-30  Kernel: Don't dispatch signals in Processor::enter_current()  (Andreas Kling)
Signal dispatch is already taken care of elsewhere, so there appears to be no need for the hack in enter_current(). This also allows us to remove the Thread::m_in_block flag, simplifying thread blocking logic somewhat. Verified with the original repro for #4336 which this was meant to fix.
2022-01-30  Kernel: Remove unnecessary includes from Thread.h  (Andreas Kling)
...and deal with the fallout by adding missing includes everywhere.
2022-01-26  Kernel: Implement Page Attribute Table (PAT) support and Write-Combine  (Tom)
This allows us to enable Write-Combine on e.g. framebuffers, significantly improving performance on bare metal. To keep things simple, we currently use only one of the up to three PAT bits (bit 7 in the PTE), which maps to the PA4 entry in the PAT MSR; we set that entry to the Write-Combine mode on each CPU at boot time.
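A hedged sketch of the boot-time MSR programming this describes (helper names are illustrative; the PA4 field occupies bits 39:32 of IA32_PAT, MSR 0x277, and 0x01 is the Write-Combine encoding per the Intel SDM):

    #include <stdint.h>

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t value)
    {
        asm volatile("wrmsr" ::"c"(msr), "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
    }

    void set_pa4_to_write_combine()
    {
        constexpr uint32_t IA32_PAT = 0x277;
        uint64_t pat = rdmsr(IA32_PAT);
        pat &= ~(0xffull << 32); // clear PA4 (bits 39:32)
        pat |= 0x01ull << 32;    // PA4 = Write-Combine
        wrmsr(IA32_PAT, pat);
    }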
2022-01-16  Kernel: Make Processor::capture_stack_trace fallible using ErrorOr  (Idan Horowitz)
2022-01-16  Kernel: Specify inline capacity of return type in capture_stack_trace  (Idan Horowitz)
Since the inline capacity of the Vector return type was not specified explicitly, the returned vector was implicitly converted to one with 0 inline capacity, essentially eliminating the optimization.
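A sketch of the pitfall (AK::Vector's inline capacity is a template parameter, so it is part of the type; the function body is illustrative):

    #include <AK/Types.h>
    #include <AK/Vector.h>

    // Declaring the return type as Vector<FlatPtr, 32> preserves the
    // 32-element inline (stack) storage; declaring it as plain
    // Vector<FlatPtr> would convert to 0 inline elements and lose it.
    Vector<FlatPtr, 32> make_trace()
    {
        Vector<FlatPtr, 32> trace;
        trace.append(0xdeadbeef);
        return trace;
    }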
2022-01-12  Kernel: Convert Processor::features_string() API to KString  (Brian Gianforcaro)
2022-01-04  Kernel: Replace incorrect loop condition in write_raw_gdt_entry  (Idan Horowitz)
Contrary to the comment above it, this while loop was actually clearing the selectors at or above the edited one (instead of the selectors that were skipped when the GDT was extended). This wasn't really an issue so far, as all calls to this function did extend the GDT, which meant this condition was always false, but future calls that try to edit an existing entry would fail.
2022-01-04  Kernel: Use enum instead of magic numbers for GDT descriptor types  (Idan Horowitz)
Some of the enum members were also renamed to reflect the fact that the segment sizes are not necessarily 32-bit (64-bit on x86_64).
2021-12-30  Kernel: Tighten String-related includes  (Daniel Bertalan)
2021-12-30  Kernel: Fix incorrect SFMASK MSR value clobbering reserved bits  (Owen Smith)
Also improve the comments around that initialisation code.
2021-12-28  Kernel: Implement and use the syscall/sysret instruction pair on x86_64  (Owen Smith)
2021-12-28  Kernel: Reorder the 64-bit GDT a bit  (Owen Smith)
Add a kernel data segment and make the user code segment come after the data segment. We need the GDT to be in a certain order to support the syscall and sysret instruction pair.
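The ordering constraint comes from how the STAR MSR encodes segment bases; the selector values below are illustrative, not necessarily the kernel's actual ones:

    #include <stdint.h>

    // syscall loads CS from STAR[47:32] and SS from STAR[47:32] + 8;
    // 64-bit sysret loads CS from STAR[63:48] + 16 and SS from STAR[63:48] + 8.
    // That forces the order: kernel code, kernel data, user data, user code.
    constexpr uint16_t GDT_KERNEL_CODE = 0x08;
    constexpr uint16_t GDT_KERNEL_DATA = 0x10; // kernel code + 8
    constexpr uint16_t GDT_USER_DATA   = 0x18; // sysret base + 8
    constexpr uint16_t GDT_USER_CODE   = 0x20; // sysret base + 16

    constexpr uint64_t STAR = ((uint64_t)(GDT_USER_DATA - 8) << 48)  // sysret base
                            | ((uint64_t)GDT_KERNEL_CODE << 32);     // syscall base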
2021-12-21  AK+Everywhere: Replace __builtin bit functions  (Nick Johnson)
In order to reduce our reliance on __builtin_{ffs, clz, ctz, popcount}, this commit removes all calls to these functions and replaces them with the equivalent functions in AK/BuiltinWrappers.h.
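A sketch of the substitution at a typical call site (the wrapper name is as found in AK/BuiltinWrappers.h around this era of the tree):

    #include <AK/BuiltinWrappers.h>

    unsigned first_set_bit_index(unsigned value)
    {
        // before: return __builtin_ctz(value);
        // Same semantics; value must be non-zero, as with the builtin.
        return count_trailing_zeroes(value);
    }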
2021-12-19  Kernel: Stop ProcFS stack walk on bogus userspace->kernel traversal  (Andreas Kling)
Unsurprisingly, the /proc/PID/stacks/TID stack walk had the same arbitrary memory read problem as the perf event stack walk. It would be nice if the kernel had a single stack walk implementation, but that's outside the scope of this commit.
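A hedged sketch of the kind of guard this implies (the kernel-space boundary below is an illustrative i686-style value, not the kernel's actual constant):

    #include <stdint.h>

    constexpr uintptr_t KERNEL_BASE = 0xc0000000; // illustrative split
    static bool is_kernel_address(uintptr_t addr) { return addr >= KERNEL_BASE; }

    // In the frame-pointer walk: once the walk has descended into
    // userspace frames, a parent frame that claims to live in kernel
    // memory is bogus and the walk should stop.
    static bool is_bogus_traversal(uintptr_t current_fp, uintptr_t next_fp)
    {
        return !is_kernel_address(current_fp) && is_kernel_address(next_fp);
    }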
2021-10-15  Kernel: Split ScopedCritical so header is platform independent  (James Mintram)
A new header file has been created in the Arch/ folder, while the implementation has been moved into a .cpp file living in the x86 folder.
2021-10-14  Kernel: Add per platform Processor.h headers  (James Mintram)
The platform-independent Processor.h file contains the shared processor code and includes the platform-specific header file. All references to the Arch/x86/Processor.h file have been replaced with a reference to Arch/Processor.h.
2021-10-14  Kernel: Remove unused includes  (James Mintram)
2021-10-14  Kernel: Add header includes closer to their use  (James Mintram)
2021-10-07  Kernel: Add Processor::time_spent_idle()  (Idan Horowitz)
2021-10-05  Kernel: Detect and store the virtual address bit width during CPU init  (Idan Horowitz)
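A sketch of how that detection can be done (leaf and bit layout per the Intel/AMD manuals; __get_cpuid stands in for the kernel's CPUID wrapper):

    #include <cpuid.h>
    #include <stdint.h>

    // CPUID 0x80000008: EAX[7:0]  = physical address bits,
    //                   EAX[15:8] = virtual (linear) address bits.
    static uint8_t detect_virtual_address_bit_width()
    {
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
            return 32; // leaf unsupported on very old CPUs
        return (eax >> 8) & 0xff;
    }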
2021-09-10  Kernel: Replace inline assembly for turning on IA32_EFER.NXE with MSR  (Idan Horowitz)
This fixes a triple fault that occurs when compiling serenity with the i686 clang toolchain. (The underlying issue is that the old inline assembly did not specify that it clobbered the eax/ecx/edx registers, and as such the compiler assumed they were not changed and used their values across it.)

Co-authored-by: Brian Gianforcaro <bgianf@serenityos.org>
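A minimal sketch of the MSR-based replacement (IA32_EFER is MSR 0xC0000080; NXE is bit 11). The register constraints make the eax/ecx/edx usage visible to the compiler, which is exactly what the old hand-written assembly failed to do:

    #include <stdint.h>

    void enable_nx()
    {
        constexpr uint32_t IA32_EFER = 0xC0000080;
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(IA32_EFER));
        lo |= 1u << 11; // IA32_EFER.NXE
        asm volatile("wrmsr" ::"c"(IA32_EFER), "a"(lo), "d"(hi));
    }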
2021-09-06  Kernel: Rename ProcessPagingScope => ScopedAddressSpaceSwitcher  (Andreas Kling)
2021-09-05  Kernel: Make copy_{from,to}_user() return KResult and use TRY()  (Andreas Kling)
This makes EFAULT propagation flow much more naturally. :^)
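A sketch of the resulting call-site pattern (the function name is illustrative; this era of the tree used KResult/KResultOr, later renamed ErrorOr, and Userspace<T> wraps an untrusted user pointer):

    KResultOr<int> read_user_int(Userspace<int const*> user_value)
    {
        int value;
        TRY(copy_from_user(&value, user_value)); // propagates EFAULT to the caller
        return value;
    }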
2021-09-04  Kernel: Add x2APIC support  (Tom)
This allows addressing all cores on more modern processors. For now, we still have a hardcoded limit of 64 due to s_processors being a static array.
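A hedged sketch of how x2APIC mode is switched on (MSR numbers per the Intel SDM; error handling and capability checks omitted). In x2APIC mode the APIC ID register becomes MSR 0x802 and is a full 32 bits, which is what allows addressing more cores than the xAPIC's 8-bit ID:

    #include <stdint.h>

    void enable_x2apic()
    {
        constexpr uint32_t IA32_APIC_BASE = 0x1b;
        uint32_t lo, hi;
        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(IA32_APIC_BASE));
        lo |= (1u << 11) | (1u << 10); // APIC global enable + x2APIC mode
        asm volatile("wrmsr" ::"c"(IA32_APIC_BASE), "a"(lo), "d"(hi));
    }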
2021-08-30  Kernel: Fix Clang not initializing `s_bsp_processor` correctly  (Daniel Bertalan)
Initializing the variable this way fixes a kernel panic in Clang where the object was zero-initialized, so the `m_in_scheduler` contained the wrong value. GCC got it right, but we're better off making this change, as leaving uninitialized fields in constant-initialized objects can cause other weird situations like this. Also, initializing only a single field to a non-zero value isn't worth the cost of no longer fitting in `.bss`.

Another two variables suffer from the same problem, even though their values are supposed to be zero. Removing these causes the `_GLOBAL_sub_I_` function to no longer be generated and the (not handled) `.init_array` section to be omitted.
2021-08-23  Kernel: Consolidate I386/X86_64 implementations of do_init_context()  (Andreas Kling)
We can use ThreadRegisters::set_flags() to avoid the #ifdef's here.
2021-08-23  Kernel: Fix some trivial clang-tidy warnings in x86/common/Processor.cpp  (Andreas Kling)
2021-08-23  Kernel: Rename Processor::id() => current_id()  (Andreas Kling)
And let id() be the non-static version that gives you the ID of a Processor object.
2021-08-22  Kernel: Rename ScopedSpinlock => SpinlockLocker  (Andreas Kling)
This matches MutexLocker, and doesn't sound like it's a lock itself.
2021-08-22  Kernel: Rename SpinLock => Spinlock  (Andreas Kling)
2021-08-19  Kernel: Make Process::current() return a Process& instead of Process*  (Idan Horowitz)
This has several benefits:
1) We no longer just blindly dereference a null pointer in various places
2) We will get nicer runtime error messages if the current process does turn out to be null in the call location
3) GCC no longer complains about possible nullptr dereferences when compiling without KUBSAN
2021-08-10  Kernel/SMP: Change critical sections to not disable interrupts  (Andreas Kling)
Leave interrupts enabled so that we can still process IRQs. Critical sections should only prevent preemption by another thread.

Co-authored-by: Tom <tomut@yahoo.com>
2021-08-10  Kernel/SMP: Make entering/leaving critical sections multi-processor safe  (Andreas Kling)
By making these functions static we close a window where we could get preempted after calling Processor::current() and move to another processor.

Co-authored-by: Tom <tomut@yahoo.com>
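A user-space analogue of the window being closed (a single-CPU stand-in for illustration; in the kernel the static helper performs the whole operation with a single GS-relative per-CPU access, so the lookup and the increment cannot be split by a migration):

    #include <atomic>

    struct Processor {
        std::atomic<int> in_critical { 0 };

        // Stand-in for the per-CPU lookup; a real kernel resolves this
        // relative to whichever CPU is currently executing.
        static Processor& current() { static Processor p; return p; }

        // Racy pattern (before): auto& proc = Processor::current();
        // ...thread migrates here... proc.in_critical++; // wrong CPU!
        static void enter_critical() { current().in_critical.fetch_add(1); }
        static void leave_critical() { current().in_critical.fetch_sub(1); }
    };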
2021-08-09  Kernel/SMP: Don't process SMP messages in non-SMP mode  (Andreas Kling)
Processing SMP messages in non-SMP mode is a waste of time, and now that we don't rely on the side effects of calling the message processing function, let's stop calling it entirely. :^)
2021-08-09  Kernel/SMP: Process the deferred call queue in exit_trap()  (Andreas Kling)
We were previously relying on a side effect of the critical section in smp_process_pending_messages(): when exiting that section, it would process any pending deferred calls.

Instead of relying on that, make the deferred invocations explicit by calling deferred_call_execute_pending() in exit_trap(). This ensures that deferred calls get processed before entering the scheduler at the end of exit_trap(). Since thread unblocking happens via deferred calls, the threads don't have to wait until the next scheduling opportunity when they could be ready *now*. :^)

This was the main reason Tom's SMP branch ran slowly in non-SMP mode.
2021-08-09  Kernel/SMP: Don't process SMP messages in exit_trap() in non-SMP mode  (Andreas Kling)
2021-08-09  Kernel/SMP: Don't enable interrupts in Processor::exit_trap  (Andreas Kling)
Enter a critical section in Processor::exit_trap so that processing SMP messages doesn't enable interrupts upon leaving. We need to delay enabling interrupts until the end, where we call into the Scheduler, and only do so if exiting the trap leaves us outside of a critical section and outside of an IRQ handler.

Co-authored-by: Tom <tomut@yahoo.com>
2021-08-09  Kernel/SMP: Mark s_smp_enabled READONLY_AFTER_INIT  (Andreas Kling)
We can't enter/leave SMP mode once the kernel is up and running.
2021-08-09  Kernel/SMP: Make SMP message queueing work correctly  (Andreas Kling)
- Use the receiver's per-CPU entry in the message, instead of the sender's. (Using the sender's entry wasn't safe for broadcast messages since the same entry ended up on multiple message queues.)
- Retry the CAS until it *succeeds* instead of *fails*. This closes a race window, and also ensures a correct return value. The return value is used by the caller to decide whether to broadcast an IPI. This was the main reason smp=on was so slow. We had CPUs busy-waiting until someone else triggered an IPI and moved things along. (See the sketch below.)
- Add a CPU pause hint to the spin loop. :^)
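A sketch of the corrected queueing pattern in portable C++ (names and memory orders are illustrative; the kernel's version is an intrusive lock-free list of per-CPU message entries):

    #include <atomic>

    struct Message {
        Message* next { nullptr };
    };

    // Push onto a lock-free list, retrying the CAS until it succeeds.
    // Returns true if the queue was previously empty, i.e. the caller
    // should send an IPI to wake the receiving CPU.
    bool enqueue(std::atomic<Message*>& head, Message& msg)
    {
        Message* old_head = head.load(std::memory_order_relaxed);
        do {
            // compare_exchange_weak refreshes old_head on failure.
            msg.next = old_head;
        } while (!head.compare_exchange_weak(old_head, &msg,
                                             std::memory_order_release,
                                             std::memory_order_relaxed));
        return old_head == nullptr;
    }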