While the "regular" quickmap (used to temporarily map a physical page
at a known address for quick access) has been per-CPU for a while, the
PD (page directory) and PT (page table) quickmaps used by the memory
management code to edit page tables have remained global, meaning that
SMP systems had to keep fighting over them.
This patch makes *all* quickmaps per-CPU. We reserve virtual addresses
for up to 64 CPUs' worth of quickmaps for now.
Note that all quickmaps are still protected by the MM lock; we'll have
to fix that too before seeing any real throughput improvements.
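A minimal sketch of the per-CPU reservation in C++, assuming a
hypothetical quickmap_base and one page-sized slot per CPU; the real
kernel derives these addresses from its own virtual memory layout:

    #include <stddef.h>
    #include <stdint.h>

    // Illustrative constants, not the kernel's actual layout.
    static constexpr size_t page_size = 4096;
    static constexpr size_t max_cpu_count = 64;
    static constexpr uintptr_t quickmap_base = 0xffffffffffc00000; // assumed

    // One page-sized slot per CPU, so cores no longer contend for a
    // single global quickmap mapping.
    constexpr uintptr_t quickmap_address_for_cpu(size_t cpu_index)
    {
        return quickmap_base + cpu_index * page_size;
    }

    static_assert(max_cpu_count * page_size <= 0x200000, "reserved window is large enough");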
Now that we reclaim the memory range that is created by KASLR before
the start of the kernel image, there's no need to be conservative with
the KASLR offset.

Now that the shared bottom 2 MiB virtual address mappings are gone,
userspace can use lower virtual addresses.

This enables further work on implementing KASLR by adding relocation
support to the pre-kernel and updating the kernel to be less dependent
on specific virtual memory layouts.

Instead of manually redeclaring those variables in various files, this
adds a header file for them.
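A sketch of the pattern with hypothetical file and symbol names, since
the commit doesn't say which variables are involved: the definitions
stay in one translation unit, and every other file includes the header
instead of redeclaring the symbols.

    // BootInfo.h (hypothetical file and symbol names)
    #pragma once
    #include <stdint.h>

    // Declared once here and defined in exactly one .cpp file; other
    // files include this header instead of redeclaring the symbols.
    extern "C" uintptr_t start_of_kernel_image;
    extern "C" uintptr_t end_of_kernel_image;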
This implements a simple bootloader capable of loading ELF64 kernel
images. It relies on QEMU/GRUB to load the kernel image from disk and
pass it to our bootloader as a Multiboot module.
The bootloader then parses the ELF image and sets it up appropriately.
The kernel's entry point is a C++ function with architecture-native
code.
Co-authored-by: Liav A <liavalb@gmail.com>
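A minimal sketch of the load step, assuming the Multiboot module is
already accessible at `image` and that each segment's physical
destination is safe to write to; the real bootloader does more
validation than this:

    #include <elf.h>
    #include <string.h>

    // Sketch only: parses the ELF64 header, copies each PT_LOAD segment
    // into place, and returns the entry point. Error handling is omitted.
    static void* load_elf64_image(unsigned char const* image)
    {
        auto const* ehdr = reinterpret_cast<Elf64_Ehdr const*>(image);
        if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG) != 0)
            return nullptr; // not an ELF image

        auto const* phdrs = reinterpret_cast<Elf64_Phdr const*>(image + ehdr->e_phoff);
        for (size_t i = 0; i < ehdr->e_phnum; ++i) {
            Elf64_Phdr const& ph = phdrs[i];
            if (ph.p_type != PT_LOAD)
                continue;
            // Copy the file-backed part of the segment, then zero the BSS tail.
            memcpy(reinterpret_cast<void*>(ph.p_paddr), image + ph.p_offset, ph.p_filesz);
            memset(reinterpret_cast<void*>(ph.p_paddr + ph.p_filesz), 0, ph.p_memsz - ph.p_filesz);
        }
        return reinterpret_cast<void*>(ehdr->e_entry);
    }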
Building the x86_64 kernel with ENABLE_EXTRA_KERNEL_DEBUG_SYMBOLS
results in an image that is larger than 0x2000000 bytes.

We had an inconsistency in valid user addresses: is_user_range() was
checking against the kernel base address, but previous changes caused
the maximum valid user-addressable range to end 32 MiB below that.
This patch stops mmap(MAP_FIXED) of a range between these two bounds
from panicking the kernel in RangeAllocator::allocate_specific.
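As a sketch of the invariant being restored: both the single-address
check and the range check should derive from one bound. The name and
value below are assumptions for illustration.

    #include <stddef.h>
    #include <stdint.h>

    // Hypothetical bound: 32 MiB below the 0xc0000000 kernel base.
    static constexpr uintptr_t user_range_ceiling = 0xc0000000 - 32 * 1024 * 1024;

    constexpr bool is_user_address(uintptr_t address)
    {
        return address < user_range_ceiling;
    }

    constexpr bool is_user_range(uintptr_t base, size_t size)
    {
        // Written so that base + size cannot overflow.
        return size <= user_range_ceiling && base <= user_range_ceiling - size;
    }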
By moving the PhysicalPage objects out of the kernel heap and into a
static array, one entry for each physical page, we avoid the heap
allocation overhead and can easily find them by indexing into the
array.
This also wraps each PhysicalPage in a PhysicalPageEntry, which allows
us to reuse each slot to store information about where to find the
next free page.
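A minimal sketch of the slot reuse, with illustrative names and field
widths rather than the kernel's exact layout: while a page is free,
its entry's storage holds a free-list link instead of page metadata.

    #include <stdint.h>

    struct PhysicalPage {
        uint32_t ref_count; // page metadata while the page is allocated
    };

    struct FreeListEntry {
        int64_t next_index; // index of the next free entry, or -1 for none
    };

    struct PhysicalPageEntry {
        union {
            PhysicalPage allocated; // slot in use: metadata lives here
            FreeListEntry freelist; // slot free: storage holds the link
        };
    };

    // One entry per physical page, placed outside the kernel heap and
    // sized from the physical memory map at boot.
    static PhysicalPageEntry* s_physical_page_entries;

Because the free-list link only exists while a slot is unused, the
free list costs no memory beyond the entries themselves.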
This makes 0xc0000000 less of a magic number, and will make it easier
to move the kernel around in the future.
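As a sketch, with a hypothetical name for the constant: the base
address is defined once and referenced by name everywhere else, so
relocating the kernel means changing a single line.

    #include <stdint.h>

    // Hypothetical name; previously the literal 0xc0000000 appeared at
    // each use site.
    static constexpr uintptr_t kernel_base = 0xc0000000;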
This also removes a lot of CPU.h includes in favor of Sections.h.