Mirror of https://gitlab.com/sortix/sortix.git, synced 2023-02-13 20:55:38 -05:00
51e3de971c
Pardon the big ass-commit; this took months to develop and debug, and the refactoring went so far that a clean merge became impossible. The good news is that this commit does quite a bit of cleaning up and generally improves the kernel quality.

This makes the kernel fully preemptive and multithreaded. This was done by rewriting the interrupt code, rewriting the scheduler, introducing new threading primitives, and rewriting large parts of the kernel. During the past few commits the kernel has had its device drivers thread secured; this commit thread secures large parts of the core kernel. There still remain some parts of the kernel that are _not_ thread secured, but this is not a problem at this point.

Each user-space thread has an associated kernel stack that it uses when it goes into kernel mode. This stack is by default 8 KiB, since that value works for me and is also used by Linux. Strange things tend to happen on x86 in the case of a stack overflow - there is no ideal way to catch such a situation right now.

The system call conventions were changed, too. The %edx register is now used to provide the errno value of the call, instead of the kernel writing it into a registered global variable. The system call code has also been updated to better reflect the native calling conventions: not all registers have to be preserved. This makes system calls faster and simplifies the assembly. In the kernel, there is no longer the event.h header or the hacky method of 'resuming system calls' that closely resembles cooperative multitasking. If a system call wants to block, it should just block.

The signal handling was also improved significantly. At this point, signals cannot interrupt kernel threads (but can always interrupt user-space threads if enabled), which introduces some problems with how a SIGINT could interrupt a blocking read, for instance. This commit introduces and uses a number of new primitives, such as kthread_lock_mutex_signal(), which attempts to get the lock but fails if a signal is pending. In this manner the kernel is safer, as kernel threads cannot be shut down at inconvenient times, but at the cost of complexity, as blocking operations must check whether they should fail.

Process exiting has also been refactored significantly. The _exit(2) system call sets the exit code and sends SIGKILL to all the threads in the process. Once all the threads have cleaned themselves up and exited, a worker thread calls the process's LastPrayer() method, which unmaps memory, deletes the address space, notifies the parent, and so on. This provides a very robust way to terminate processes, as even half-constructed processes (during a failing fork, for instance) can be gracefully terminated.

I have introduced a number of kernel threads to help avoid threading problems and simplify the kernel design. For instance, there is now a functional generic kernel worker thread that any kernel thread can schedule jobs for. Interrupt handlers run with interrupts off (hence they cannot call kthread_ functions, as that may deadlock the system if another thread holds the lock), therefore they cannot use the standard kernel worker threads. Instead, they use a special-purpose interrupt worker thread that works much like the generic one, except that interrupt handlers can safely queue work with interrupts off. Note that this also means that interrupt handlers cannot allocate memory or print to the kernel log/screen, as such mechanisms use locks. I'll introduce a lock-free algorithm for such cases later on.
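To illustrate the pattern, here is a minimal sketch of how a blocking operation might use the signal-aware locking described above. Only kthread_lock_mutex_signal() and its fail-if-a-signal-is-pending semantics come from this commit; the surrounding types and names are assumptions for illustration, not the literal Sortix API:

	#include <stddef.h>
	#include <stdint.h>
	#include <sys/types.h>

	// Hypothetical prototypes standing in for the kthread_ API.
	typedef unsigned long kthread_mutex_t;
	extern bool kthread_lock_mutex_signal(kthread_mutex_t* mutex);
	extern void kthread_unlock_mutex(kthread_mutex_t* mutex);

	kthread_mutex_t devicelock;

	ssize_t ExampleRead(uint8_t* dest, size_t count)
	{
		// Try to take the lock, but back off if a signal is pending, so
		// the thread can return to user-space and run its signal handler.
		if ( !kthread_lock_mutex_signal(&devicelock) )
			return -1; // The caller would report errno = EINTR here.
		ssize_t result = 0;
		// ... perform the actual device read into dest, up to count
		// bytes, updating result ...
		kthread_unlock_mutex(&devicelock);
		return result;
	}

This is exactly the "blocking operations must check whether they should fail" cost mentioned above: every blocking path needs an explicit bail-out branch.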
The boot process has also changed. The original kernel init thread in kernel.cpp creates a new bootstrap thread and becomes the system idle thread. Note that pid=0 now means the kernel, as there is no longer a system idle process. The bootstrap thread launches all the kernel worker threads, then creates a new process, loads /bin/init into it, and creates a thread in pid=1, which starts the system. The bootstrap thread then quietly waits for pid=1 to exit, after which it shuts down/reboots/panics the system.

In general, the introduction of race conditions and deadlocks has forced me to revise a lot of the design and make sure it is thread secure. Since early parts of the kernel were quite hacky, I had to refactor such code. So it seems that the risk of deadlocks forces me to write better code. Note that a real preemptive multithreaded kernel simplifies the construction of blocking system calls. My hope is that this will trigger a clean-up of the filesystem code, which currently is almost beyond repair.

Almost all of the kernel was modified during this refactoring. To the extent possible, these changes have been backported to the older non-multithreaded kernel, but many changes were tightly coupled and went into this commit. Of interest is the implementation of the kthread_ API, based on the design of pthreads; this library provides easy synchronization mechanisms and includes C++-style scoped locks. This commit also introduces new worker threads and tested mechanisms for interrupt handlers to schedule work in a kernel worker thread. A lot of code has been rewritten from scratch and has become a lot more stable and correct.

Share and enjoy!
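Since the commit highlights the C++-style scoped locks of the kthread_ API, here is a minimal sketch of what such a RAII wrapper could look like; the class shape and the kthread_lock_mutex()/kthread_unlock_mutex() names are assumptions for illustration, only the existence of scoped locks is stated above:

	// Hypothetical prototypes for the underlying mutex primitives.
	typedef unsigned long kthread_mutex_t;
	extern void kthread_lock_mutex(kthread_mutex_t* mutex);
	extern void kthread_unlock_mutex(kthread_mutex_t* mutex);

	// Locks on construction, unlocks on destruction: the mutex is
	// released on every path out of the scope, including early returns.
	class ScopedLock
	{
	public:
		ScopedLock(kthread_mutex_t* mutex) : mutex(mutex)
		{
			kthread_lock_mutex(mutex);
		}
		~ScopedLock()
		{
			kthread_unlock_mutex(mutex);
		}
	private:
		kthread_mutex_t* mutex;
	};

	kthread_mutex_t examplelock;

	int ExampleOperation()
	{
		ScopedLock lock(&examplelock);
		// Critical section: any return from here on releases examplelock.
		return 0;
	}

Such a wrapper makes it much harder to leak a lock in functions with many error paths, which matters in a kernel where a leaked mutex can deadlock the system.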
234 lines
7.4 KiB
C++
/*******************************************************************************

	Copyright(C) Jonas 'Sortie' Termansen 2011, 2012.

	This file is part of Sortix.

	Sortix is free software: you can redistribute it and/or modify it under the
	terms of the GNU General Public License as published by the Free Software
	Foundation, either version 3 of the License, or (at your option) any later
	version.

	Sortix is distributed in the hope that it will be useful, but WITHOUT ANY
	WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
	FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
	details.

	You should have received a copy of the GNU General Public License along with
	Sortix. If not, see <http://www.gnu.org/licenses/>.

	memorymanagement.cpp
	Handles memory for the x64 architecture.

*******************************************************************************/

#include <sortix/kernel/platform.h>
#include <sortix/kernel/memorymanagement.h>
#include <libmaxsi/memory.h>
#include "multiboot.h"
#include "x86-family/memorymanagement.h"
#include "interrupt.h"

namespace Sortix
{
	namespace Page
	{
		extern size_t stackused;
		extern size_t stacklength;
		void ExtendStack();
	}

	namespace Memory
	{
		extern addr_t currentdir;

		void InitCPU()
		{
			// The x64 boot code already set up virtual memory and identity
			// mapped the first 2 MiB. This code finishes the job such that
			// virtual memory is fully usable and manageable.

			// boot.s already initialized everything from 0x1000UL to
			// 0xE000UL to zeroes. Since these structures are already in
			// use, zeroing them again here would be very dangerous.

			PML* const BOOTPML4 = (PML* const) 0x21000UL;
			PML* const BOOTPML3 = (PML* const) 0x26000UL;
			PML* const BOOTPML2 = (PML* const) 0x27000UL;
			PML* const BOOTPML1 = (PML* const) 0x28000UL;

			// First order of business is to map the virtual memory
			// structures to the pre-defined locations in the virtual
			// address space.
			addr_t flags = PML_PRESENT | PML_WRITABLE;

			// Fractal map the PML1s.
			BOOTPML4->entry[511] = (addr_t) BOOTPML4 | flags;

			// Fractal map the PML2s.
			BOOTPML4->entry[510] = (addr_t) BOOTPML3 | flags | PML_FORK;
			BOOTPML3->entry[511] = (addr_t) BOOTPML4 | flags;

			// Fractal map the PML3s.
			BOOTPML3->entry[510] = (addr_t) BOOTPML2 | flags | PML_FORK;
			BOOTPML2->entry[511] = (addr_t) BOOTPML4 | flags;

			// Fractal map the PML4s.
			BOOTPML2->entry[510] = (addr_t) BOOTPML1 | flags | PML_FORK;
			BOOTPML1->entry[511] = (addr_t) BOOTPML4 | flags;
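
			// How this fractal scheme works: during translation the CPU
			// walks PML4 -> PML3 -> PML2 -> PML1 -> page. Entry 511 of the
			// PML4 points back at the PML4 itself, so an address with top
			// index 511 shifts the whole walk down one level and the PML1s
			// become accessible as ordinary data pages. Index 510 descends
			// the dedicated BOOTPML3/BOOTPML2/BOOTPML1 chain instead, and
			// taking entry 511 after one, two or three such steps exposes
			// the PML2s, the PML3s and the PML4 itself, respectively. The
			// PMLS[] window pointers used later in this file depend on this
			// layout.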

			// Add some predefined room for forking address spaces.
			PML* const FORKPML2 = (PML* const) 0x29000UL;
			PML* const FORKPML1 = (PML* const) 0x2A000UL;

			BOOTPML3->entry[0] = (addr_t) FORKPML2 | flags | PML_FORK;
			BOOTPML2->entry[0] = (addr_t) FORKPML1 | flags | PML_FORK;

			currentdir = (addr_t) BOOTPML4;

			// The virtual memory structures are now available at the
			// predefined locations. This means the virtual memory code is
			// bootstrapped. Of course, we still have no physical page
			// allocator, so that's the next step.

			PML* const PHYSPML3 = (PML* const) 0x2B000UL;
			PML* const PHYSPML2 = (PML* const) 0x2C000UL;
			PML* const PHYSPML1 = (PML* const) 0x2D000UL;
			PML* const PHYSPML0 = (PML* const) 0x2E000UL;

			BOOTPML4->entry[509] = (addr_t) PHYSPML3 | flags;
			PHYSPML3->entry[0] = (addr_t) PHYSPML2 | flags;
			PHYSPML2->entry[0] = (addr_t) PHYSPML1 | flags;
			PHYSPML1->entry[0] = (addr_t) PHYSPML0 | flags;

			Page::stackused = 0;
			Page::stacklength = 4096UL / sizeof(addr_t);
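
			// The physical page allocator is a stack of free page
			// addresses; the PHYSPML0 page mapped above backs its first
			// 4096 bytes, room for 4096 / sizeof(addr_t) = 512 entries on
			// x64. Page::ExtendStack() presumably grows the stack once more
			// free pages are pushed than it can hold.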

			// The physical memory allocator should now be ready for use.
			// Next up, the calling function will fill up the physical
			// allocator with plenty of nice physical pages. (See
			// Page::InitPushRegion.)
		}

		// Please note that even though this function exists, you should
		// still clean up the address space of a process _before_ calling
		// DestroyAddressSpace. This is just a hack because it currently is
		// impossible to clean up PML1s using the MM api!
		// ---
		// TODO: This function is duplicated in {x86,x64}/memorymanagement.cpp!
		// ---
		void RecursiveFreeUserspacePages(size_t level, size_t offset)
		{
			PML* pml = PMLS[level] + offset;
			for ( size_t i = 0; i < ENTRIES; i++ )
			{
				addr_t entry = pml->entry[i];
				if ( !(entry & PML_PRESENT) ) { continue; }
				if ( !(entry & PML_USERSPACE) ) { continue; }
				if ( !(entry & PML_FORK) ) { continue; }
				if ( level > 1 ) { RecursiveFreeUserspacePages(level-1, offset * ENTRIES + i); }
				addr_t addr = pml->entry[i] & PML_ADDRESS;
				// No need to unmap the page, we just need to mark it as
				// unused.
				Page::PutUnlocked(addr);
			}
		}
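
		// Note on RecursiveFreeUserspacePages above: at level > 1 an entry
		// refers to a lower paging structure, so the recursive call frees
		// the children first and Page::PutUnlocked then releases the
		// structure page itself; at level 1 it releases the user page.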

		void DestroyAddressSpace(addr_t fallback, void (*func)(addr_t, void*), void* user)
		{
			// Look up the last few entries used for the fractal mapping.
			// These cannot be unmapped, as that would destroy the world.
			// Instead, we will remember them, switch to another address
			// space, and safely mark them as unused. We also handle the
			// fork-related pages here.
			addr_t fractal3 = (PMLS[4] + 0)->entry[510UL];
			addr_t fork2 = (PMLS[3] + 510UL)->entry[0];
			addr_t fractal2 = (PMLS[3] + 510UL)->entry[510];
			addr_t fork1 = (PMLS[2] + 510UL * 512UL + 510UL)->entry[0];
			addr_t fractal1 = (PMLS[2] + 510UL * 512UL + 510UL)->entry[510];
			addr_t dir = currentdir;

			// We want to free the pages, but we are still using them
			// ourselves, so lock the page allocation structure until we are
			// done.
			Page::Lock();

			// In case any pages weren't cleaned up at this point.
			#warning Page::Put calls may internally Page::Get and then reuse pages we are not done with just yet
			RecursiveFreeUserspacePages(TOPPMLLEVEL, 0);

			// Switch to the address space from when the world was
			// originally created. It should contain the kernel, the whole
			// kernel, and nothing but the kernel.
			PML* const BOOTPML4 = (PML* const) 0x21000UL;
			if ( !fallback )
				fallback = (addr_t) BOOTPML4;

			if ( func )
				func(fallback, user);
			else
				SwitchAddressSpace(fallback);

			// Now that we have marked everything left behind as unused, we
			// can safely let another thread use the pages.
			Page::Unlock();

			// These are safe to free since we switched address space.
			Page::Put(fractal3 & PML_ADDRESS);
			Page::Put(fractal2 & PML_ADDRESS);
			Page::Put(fractal1 & PML_ADDRESS);
			Page::Put(fork2 & PML_ADDRESS);
			Page::Put(fork1 & PML_ADDRESS);
			Page::Put(dir & PML_ADDRESS);
		}

		const size_t KERNEL_STACK_SIZE = 256UL * 1024UL;
		const addr_t KERNEL_STACK_END = 0xFFFF800000001000UL;
		const addr_t KERNEL_STACK_START = KERNEL_STACK_END + KERNEL_STACK_SIZE;
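		// The stack grows downwards on x86, so KERNEL_STACK_START is the
		// highest address of the kernel stack and KERNEL_STACK_END is where
		// an overflowing stack would run off the bottom.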
		const addr_t VIDEO_MEMORY = KERNEL_STACK_START;
		const size_t VIDEO_MEMORY_MAX_SIZE = 4UL * 1024UL * 1024UL * 1024UL;
		const addr_t INITRD = VIDEO_MEMORY + VIDEO_MEMORY_MAX_SIZE;
		size_t initrdsize = 0;
		const addr_t HEAPUPPER = 0xFFFFFE8000000000UL;

		addr_t GetInitRD()
		{
			return INITRD;
		}

		size_t GetInitRDSize()
		{
			return initrdsize;
		}

		void RegisterInitRDSize(size_t size)
		{
			initrdsize = size;
		}

		addr_t GetHeapLower()
		{
			return Page::AlignUp(INITRD + initrdsize);
		}

		addr_t GetHeapUpper()
		{
			return HEAPUPPER;
		}

		addr_t GetKernelStack()
		{
			return KERNEL_STACK_START;
		}

		size_t GetKernelStackSize()
		{
			return KERNEL_STACK_SIZE;
		}

		addr_t GetVideoMemory()
		{
			return VIDEO_MEMORY;
		}

		size_t GetMaxVideoMemorySize()
		{
			return VIDEO_MEMORY_MAX_SIZE;
		}
	}
}