Writing an Operating System in Rust!
"If you can't explain it simply, you don't understand it well enough." — Albert Einstein
Hello there!1
This is a book on how to write a functioning operating system in Rust, from scratch.
I will not use ANY2 external libraries, and all of the thought process, code, and implementations will be explained and documented here, as well as in this repo, which contains the full implementation!
Base Knowledge
This book will be technical, and it assumes a little bit of programming background, though not necessarily in Rust.
If you are not coming from a low-level programming background, that's fine!
Just make sure you know the following (and probably similar things that I am forgetting). Also, if anywhere in this book I take something for granted, please open an issue here and let me know so I can explain it better.
The base things that I expect you to know are:
- Some assembly knowledge. (just understand simple movs and arithmetic operations, at a very basic level3)
- Some knowledge of memory. (what's a pointer, what's an address)
- Knowledge of Rust is not that important, but knowing at least one programming language is. I myself have some more learning to do on Rust, and in this book I will also explain some of the great features it has!
- A lot of motivation to learn and understand. Although this is a complex subject, in this book I break it down into simple blocks of knowledge that are logical and easier to understand.
Chapters Of This Book
- Compiling a standalone binary
- Boot loading, debugging, stages and some legacy stuff
- Important CPU modes and instructions
- Paging, writing our own malloc
- Utilizing the Interrupt Descriptor Table
- File systems and disk drivers
- Thinking in terms of processes
- Writing a shell
- Running our first program!
- To be continued (hopefully a virtualization section and loading a VM of another OS)
1. Definitely not a Star Wars reference ↩
2. Only libraries that remove boilerplate code will be used (and they will obviously be explained). ↩
3. This is only relevant to the starting stages and some optimizations; a day of learning will probably be enough ↩
Chapter 1
"The journey of a thousand miles begins with one step." — Lao Tzu
Let's start our Operating System journey! There is a lot to learn, but every journey has to start somewhere :)
In this chapter we discuss:
- How to make a standalone1 binary
- How to make our binary bootable
1. A binary that doesn't require an operating system. ↩
Making a Standalone Binary
"Machines take me by surprise with great frequency." — Alan Turing
The first step in making our operating system is to make a program that can be compiled and executed without any dependency.
This is not a straightforward task, because every program that we use in our daily life depends on at least one very important dependency: The Standard Library.
This library is sometimes provided by the operating system itself, for example libc on Linux or the winapi on Windows, and most of the time it is wrapped by our programming language.
Its name may vary per language, but here are some popular names:
- Rust -> std::*
- C++ -> std::*
- C -> stdlib.h, libc.so
- Python -> Modules like os, sys, math
- Java -> java.*, javax.*
- Go -> fmt, os
This library is linked1 to our code by default, and provides us with the ability to access our operating system.
Most of the time, programming languages tend to add additional functionality to their standard library. For example, the Rust standard library adds the println! macro for printing to the screen, smart collections like Vec or LinkedList, as well as Box for safe memory management, a lot of useful traits, very smart iterators and much, much more!
Unfortunately, we won't have the luxury of this library and we will have to implement it all ourselves!
But don't worry, Rust has an ace up its sleeve: it provides us with the fantastic Core library, which is a dependency-free base for the standard library. Moreover, it provides traits and structures that can be linked into our own OS. For example, once we write our memory allocator2, we could create a Vec (it actually lives in the alloc crate, which builds on top of core) and tell it to use our own allocator!
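To give a taste of what that will look like much later in the book, here is a rough sketch (my own, not final code) of how a custom allocator gets registered so that collections from the alloc crate use it. The BumpAllocator name and the empty method bodies are placeholders; the real implementation comes in the memory management chapter.

extern crate alloc;

use core::alloc::{GlobalAlloc, Layout};

// Hypothetical placeholder type; we will actually write an allocator much later.
pub struct BumpAllocator;

unsafe impl GlobalAlloc for BumpAllocator {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        // The real allocation logic will live here.
        core::ptr::null_mut()
    }

    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

// Registering it like this makes alloc::vec::Vec and friends use our allocator.
#[global_allocator]
static ALLOCATOR: BumpAllocator = BumpAllocator;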
So without further ado, Let's get started!
Making a Rust Project
First, make sure you have Rust installed; installation instructions can be found here.
Afterwards, you can create the project with the following commands
$ cargo init <project_name>
$ cd <project_name>
If you have done everything correctly, your project should look like this
<project_name>/
├── Cargo.toml
├── src/
│ └── main.rs
and the main file, should look something like this:
fn main() {
    println!("Hello World!");
}
This can easily be run on your computer with cargo run, but because you are running it on a regular computer with a functioning operating system, it uses the standard library.
Ignoring The Standard Library
As mentioned before, we don't want to depend on the standard library because it is meant for already existing operating systems. To ignore it, simply add #![no_std] at the top of our main file; this attribute tells the compiler that we don't want to use the standard library.
Now, if we try to compile our crate, we get this error message:
error: cannot find macro `println` in this scope
 --> src/main.rs:4:5
  |
4 |     println!("Hello, world!");
  |     ^^^^^^^

error: `#[panic_handler]` function required, but not found

error: unwinding panics are not supported without std
  |
  = help: using nightly cargo, use -Zbuild-std with panic="abort" to avoid unwinding
  = note: since the core library is usually precompiled with panic="unwind", rebuilding your crate with panic="abort" may not be enough to fix the problem
When breaking this error down, we see there are 3 main errors:
- Cannot find macro println
- #[panic_handler] function is required
- Unwinding panics are not supported without std.
The first error is the most obvious: because we don't have our standard library, the println! macro does not exist, so we simply need to remove the line that uses it. The other errors require their own sections.
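For reference, here is a minimal sketch of what src/main.rs looks like at this stage, with #![no_std] added and the println! line removed (it still won't compile until the remaining two errors are handled):

#![no_std]

fn main() {}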
Defining a Panic Handler
Rust doesn't offer standard exceptions like other languages; for example, in Python an exception could be raised like this
def failing_function(x: str):
if not isinstance(x, str):
raise TypeError("The type of x is not string!")
Instead, Rust provides us with the panic! macro, which calls the Panic Handler Function. This function is very important and it will be called every time the panic! macro is invoked, for example:
fn main() {
    panic!("This is a custom message");
}
Normally, the Standard Library provides us with an implementation of the Panic Handler Function, which will typically print the line number, and file in which the error occurred. But, because we are now not using the Standard Library, we need to define the implementation of the function ourselves.
This function can be any function, it just has to carry the attribute #[panic_handler]. This attribute lets the compiler know which function to call when the panic! macro is invoked, and it also enforces that only one function of this type exists and that it has the right input argument and output type.
If we create an empty function for the panic handler, we will get this error:
error[E0308]: `#[panic_handler]` function has wrong type
  --> src\main.rs:10:1
   |
10 | fn panic_handler() {}
   | ^^^^^^^^^^^^^^^^^^ incorrect number of function parameters
   |
   = note: expected signature `for<'a, 'b> fn(&'a PanicInfo<'b>) -> !`
              found signature `fn() -> ()`
This means that it wants our function to take a reference to a structure called PanicInfo and return the ! type.
But what is this struct? And what is this weird type?
The PanicInfo struct includes basic information about our panic, such as the location and message, and its definition can be found in the core library
pub struct PanicInfo<'a> {
    message: &'a fmt::Arguments<'a>,
    location: &'a Location<'a>,
    can_unwind: bool,
    force_no_backtrace: bool,
}
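The fields are private, but core exposes them through accessor methods. As a small peek at that API (the function name here is my own, and since we can't print anything yet this is only illustrative):

fn describe_panic(info: &core::panic::PanicInfo) {
    if let Some(location) = info.location() {
        let _file = location.file(); // e.g. "src/main.rs"
        let _line = location.line(); // the line number that panicked
    }
}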
The ! type is a very special type in Rust, called the never type. As the name suggests, a function that returns the ! type should never return, which means our program has come to an end.
In a normal operating system this is not a problem: just print the panic message plus the location and kill the process, so it never returns. But in our own OS, unfortunately, this is not possible because there is no process that we can exit. So, how can we prove to Rust that we are not returning? By endlessly looping!
So in the end, this is the definition of our handler, which results in the following code
#![no_std]

fn main() {}

#[panic_handler]
pub fn panic_handler(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
This code unfortunately still doesn't compile, because we haven't handled the last error yet.
What is Unwinding and How to Disable It
In a normal Rust execution environment, when a program panics, it means that it has encountered an unrecoverable error.
This means that all of its memory should be cleaned up, so a memory leak doesn't occur. This is where unwinding comes in.
When a Rust program panics and the panic strategy is to unwind, Rust walks up the stack of the program and cleans up the data from each function that it encounters. However, walking back and cleaning up is a lot of work. Rust therefore allows you to choose the alternative of immediately aborting, which ends the program without cleaning up. This alternative is also the right fit for our case, where there is no notion of "cleaning up", because we still don't have an operating system.
So, to switch the panic strategy to abort, we can simply add the following lines to our Cargo.toml file:
[profile.dev]
panic = "abort"
[profile.release]
panic = "abort"
After disabling unwinding, we can now, hopefully, try to compile our code!
But by running cargo run we get the following error
error: using `fn main` requires the standard library | = help: use `#![no_main]` to bypass the Rust generated entrypoint and declare a platform specific entrypoint yourself, usually with `#[no_mangle]`
As usual, the Rust compiler errors are pretty clear, and they tell us exactly what we need to do to fix the problem. In this case, we need to add the #![no_main] attribute to our crate, and declare a platform-specific entry point ourselves.
Defining an Entry Point
To define an entry point, we need to understand the linker.
The linker is a program that is responsible for structuring our code into segments, defining the entry point, defining the output format, and also linking other code into our program. This configuration is controlled by a linker script. For example, a very simple linker script may look like this
OUTPUT_FORMAT(binary)
ENTRY(main)
This will set our entry point to main, and our output to a raw binary, which means the binary header3 of the program will not be included.
Then, to make our linker use this script, we mainly have two options: one is to add some arguments to our build command, and the other is to create a build script. In this guide we use the following build script.
use std::path::Path;

fn main() {
    let local_path = Path::new(env!("CARGO_MANIFEST_DIR"));
    println!(
        "cargo:rustc-link-arg-bins=--script={}",
        local_path.join("linker.ld").display()
    )
}
This script tells cargo to add -C link-arg=--script=./linker.ld to our compile command. (Cargo runs a build.rs placed at the crate root automatically, and the script above expects linker.ld to sit next to Cargo.toml.)
But after we do all this and run cargo build again, we get the same error. At first this doesn't seem logical, because we did define a main function. But although it is true that we defined one, we didn't consider Rust's default name mangling.
This is a very clever mechanism in Rust, and without it, things like this wouldn't be possible
struct A(u32);

impl A {
    pub fn new(a: u32) -> A {
        A(a)
    }
}

struct B(u32);

impl B {
    pub fn new(b: u32) -> B {
        B(b)
    }
}
Because although the functions are defined on different structs, they have the same name. But because of mangling, the actual names of the functions will be something like
A::new -> _ZN7mycrate1A3new17h5f3a92c8e3b0a1a2E
B::new -> _ZN7mycrate1B3new17h1c2d3e4f5a6b7c8dE
A similar thing is happening to our main function, which makes its name not be exactly 'main', so the entry point is not recognized.
To fix it, we can add the #[unsafe(no_mangle)] attribute to our main function, which will make its name just 'main'.
Which makes this our final main.rs file!
#![no_std]
#![no_main]

#[unsafe(no_mangle)]
fn main() {}

#[panic_handler]
pub fn panic_handler(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
If you followed along, this binary should compile, but it is still not bootable, which is what I will cover in the next section.
1. Linking is the process of combining compiled software builds so they can use each other's functions ↩
2. This is a subsystem in our operating system that is responsible for managing memory ↩
3. Operating systems have their own binary headers, so they can understand how to treat a binary; some common ones are ELF and PE ↩
Booting Our Binary
"There is no elevator to success — you have to take the stairs." — Zig Ziglar
In the previous section, we created a standalone binary, which is not linked to any standard library. But if you looked closely and inspected the binary, you would see that although we defined our output format to be 'binary' in the linker script, we got a different format. Why is that?
Understanding Rust Targets
The Rust compiler, rustc, is a cross-compiler, which means it can compile the same source code for multiple architectures and operating systems.
This provides us with a lot of flexibility, but it is the core reason for our problem. You are probably compiling this code from a computer with a regular operating system (Linux, Windows or macOS) which rustc supports, which means that it is its default target. To see your default target, you can run rustc -vV and look at the host field.
The target contains information for the rustc compiler about which header the binary should have, what the pointer and int sizes are, what instruction set to use, and more information about the features of the CPU that it could utilize.
So, because we compiled our code just with cargo build, cargo, which under the hood uses rustc, compiled our code for our default target, which resulted in a binary that is operating-system specific and not truly standalone, even though we used #![no_std].
Note: if you want to see the full target specification of your computer, use the following command
rustc +nightly -Z unstable-options --print target-spec-json
Custom Rust Target
To boot our binary, we need to create a custom target that specifies that no vendor or operating system is used in our target triple, and that contains the right architecture. But what architecture do we need?
In this guide, the operating system that we build will be compatible with the x86_64 computer architecture (and maybe other architectures in the far, far future). For that, we need to understand what an x86_64 chip expects at boot time.
BIOS Boot Process
When our computer (or virtual machine) powers on, the first software that the CPU encounters is the BIOS, which is a piece of software that is responsible for performing hardware initialization during the computer start-up. It comes pre-installed on the motherboard and, as OS developers, we can't interfere with or modify the BIOS in any way.
The last thing the BIOS does before handing control of the computer over to us is to load one sector (512 bytes) from the boot device (which can be a hard disk, CD-ROM, floppy disk, etc.) to memory address 0x7c00, provided the sector is considered valid, which means that it has the BIOS Boot Signature at its end: the byte 0x55 followed by 0xAA at offset bytes 510 and 511 respectively.
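Just to make that layout concrete, here is how the check could be expressed in ordinary Rust (illustrative only; the BIOS performs it in firmware):

// A 512-byte sector is considered bootable if its last two bytes are 0x55, 0xAA.
fn has_boot_signature(sector: &[u8; 512]) -> bool {
    sector[510] == 0x55 && sector[511] == 0xAA
}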
At this point, for backward compatibility reasons, the computer starts with a reduced instruction set in a 16-bit mode called real mode, which provides direct access to the BIOS interface and access to all I/O and peripheral devices. This mode lacks support for memory protection, multitasking, or code privileges, and has only 1 MiB of address space. Because of these limitations we want to escape it as soon as possible, but that is a problem that we will solve later.
Building Our Target
With this information, we understand that we need to build a target that supports 16-bit real mode. Unfortunately, if we look at all of the available targets, we will see that there is no target that supports this unique need, but luckily, Rust allows us to create custom targets!
As a clue, we can peek at the built-in targets and check if there is something similar that we can borrow from. For example, my target, which is x86_64-unknown-linux-gnu, looks like this:
{
"arch": "x86_64",
"cpu": "x86-64",
"crt-static-respected": true,
"data-layout": "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
"dynamic-linking": true,
"env": "gnu",
"has-rpath": true,
"has-thread-local": true,
"link-self-contained": {
"components": [
"linker"
]
},
"linker-flavor": "gnu-lld-cc",
"llvm-target": "x86_64-unknown-linux-gnu",
"max-atomic-width": 64,
"metadata": {
"description": "64-bit Linux (kernel 3.2+, glibc 2.17+)",
"host_tools": true,
"std": true,
"tier": 1
},
"os": "linux",
"plt-by-default": false,
"position-independent-executables": true,
"pre-link-args": {
"gnu-cc": [
"-m64"
],
"gnu-lld-cc": [
"-m64"
]
},
"relro-level": "full",
"stack-probes": {
"kind": "inline"
},
"static-position-independent-executables": true,
"supported-sanitizers": [
"address",
"leak",
"memory",
"thread",
"cfi",
"kcfi",
"safestack",
"dataflow"
],
"supported-split-debuginfo": [
"packed",
"unpacked",
"off"
],
"supports-xray": true,
"target-family": [
"unix"
],
"target-pointer-width": "64"
}
This target has some useful info that we can use, such as the arch, linker-flavor and cpu keys, which we will use in our target, and even the data-layout, which we will copy almost entirely. Our final 16-bit target will look like this:
{
// The general architecture to compile to, x86 cpu architecture in our case
"arch": "x86",
// Specific cpu target - Intel i386 CPU Which is the original 32-bit cpu
// Which is compatible for 16-bit real mode instructions
"cpu": "i386",
// Describes how data is laid out in memory for the LLVM backend, split by '-':
// e -> Little endianness (E for big endianness)
// m:e -> ELF style name mangling
// p:32:32 -> The default pointer is 32-bit with 32-bit address space
// p270:32:32 -> Special pointer type ID-270 with 32-bit size and alignment
// p271:32:32 -> Special pointer type ID-271 with 32-bit size and alignment
// p272:64:64 -> Special pointer type ID-272 with 64-bit size and alignment
// i128:128 -> 128-bit integers are 128-bit aligned
// f64:32:64 -> 64-bit floats are 32-bit aligned, and can also be 64-bit aligned
// n8:16:32 -> Native integers are 8-bit, 16-bit, 32-bit
// S128 -> Stack is 128-bit aligned
"data-layout": "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128",
// No dynamic linking is supported, because there is no OS runtime loader.
"dynamic-linking": false,
// This target is allowed to produce executable binaries.
"executables": true,
// Use LLD's GNU compatible frontend (`ld.lld`) for linking.
"linker-flavor": "ld.lld",
// Use the Rust provided LLD linker binary (bundled with rustup)
// This makes that our binary can compiled on every machine that has rust.
"linker": "rust-lld",
// LLVM target triple, code16 indicates for 16bit code generation
"llvm-target": "i386-unknown-none-code16",
// The widest atomic operation is 64-bit (TODO! Check if this can be removed)
"max-atomic-width": 64,
// Disable position independent executables
// The position of this executable matters because it is loaded at address 0x7c00
"position-independent-executables": false,
// Disable the redzone optimization, which saves in advance memory
// on a functions stack without moving the stack pointer which saves some instructions
// because the prologue and epilogue of the function are removed
// this is a convention, which means that the guest OS
// won't overwrite this otherwise 'volatile' memory
"disable-redzone": true,
// The default int is 32-bit
"target-c-int-width": "32",
// The default pointer is 32-bit
"target-pointer-width": "32",
// The endianness, little or big
"target-endian": "little",
// panic strategy, also set on cargo.toml
// this aborts execution instead of unwinding
"panic-strategy": "abort",
// There is no target OS
"os": "none",
// There is not target vendor
"vendor": "unknown",
// Use static relocation (no dynamic symbol tables or relocation at runtime)
// Also means that the code is statically linked.
"relocation-model": "static"
}
Now, the only things left to do before we can run our code are to save this target specification as a JSON file (16bit_target.json, which we pass to cargo below) and to include the boot signature in our binary. The latter can be done in the linker script by adding the following lines:
SECTIONS {
/*
Make the start offset of the file 0x7c00. This is useful because,
if we jump to a function whose offset in the binary is 0x100,
it will actually be loaded at address 0x7d00 by the BIOS, and not 0x100,
so we need to account for this offset, and that's how we do it.
*/
. = 0x7c00;
/*
Currently, we have nothing in the binary;
if we write the signature now, it will be at the start of the binary.
Because we want the signature to start at offset 510 in our binary,
we pad it with zeros.
*/
.fill : {
FILL(0)
. = 0x7c00 + 510;
}
/* Write the boot signature to make the sector bootable */
.magic_number : { SHORT(0xaa55) }
}
To compile our code, we just need to run the following command:
cargo +nightly build --release --target .\16bit_target.json -Z build-std=core
The +nightly tells Rust to use the nightly toolchain, which includes a lot of features that we will use, including the -Z flag.
The build-std flag tells cargo to also compile the core library with the specified target, instead of using the precompiled default on our system.
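If Rust was installed through rustup and the nightly toolchain or the rust-src component (which -Z build-std needs in order to recompile core) are missing, they can be added like this:

rustup toolchain install nightly
rustup component add rust-src --toolchain nightly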
To verify that the boot signature is indeed in the correct place, we can use the Format-Hex command on Windows or the hexdump command on Linux or macOS to look at the hex of our file.
This should result in a lot of zeros and, at the end, this line, where we can see the boot signature at the right offset (0x1FE and 0x1FF, which are 510 and 511 in decimal)
000001F0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 AA
Note: If you are like me, and you don't like specifying a lot of configuration on the compile command line, these arguments can be specified in the following configuration files.
In rust-toolchain.toml:
[toolchain]
channel = "nightly"
This defines that the default toolchain is the nightly toolchain.
In .cargo/config.toml:
[unstable]
build-std = ["core"]
This adds the unstable build-std flag with the core parameter in it.
Running Our Code
Because our code is experimental, we do not want to run it on our own machine, because it could cause PERMANENT DAMAGE to it. This is because we don't monitor CPU temperature and other hardware sensors that help protect our PC. Instead, we will run our code in QEMU, which is a free and open-source full machine emulator and virtualizer. To download QEMU for your platform, follow the instructions here
To make a sanity check that QEMU indeed works on your machine with the architecture we want, run qemu-system-x86_64 in a terminal after you have downloaded it. This should open a window that prints some messages as it tries to boot from certain devices, and after it fails, it should say that it cannot find any bootable device. If that's what you are seeing, everything works as it should!
To provide our code, we need to add the -drive format=raw,file=<path-to-bin-file> flag to QEMU, which adds a disk drive containing our code to our virtual machine.
If you are following the walkthrough, this is the command you need to run.
qemu-system-x86_64 -drive format=raw,file=target/16bit_target/release/LearnixOS-Book-Walkthrough
At first glance, we might think our code still doesn't work, because all we see is a black screen. But if you look closely, we don't get any more messages of the BIOS trying other boot devices, and we don't get the "No bootable device." message.
So why do we see a black screen? It's because we didn't give the computer any code to run; our main function is empty. But now we have the platform to write any code that we like!
Hello, World!
To print "Hello, World!", we can utilize the BIOS video interrupt which can help us print ASCII characters to the screen.
For now, don't worry about the code implementation and just use and play with it. This code piece, and a lot more will be explained in the next chapter.
use core::arch::asm;

#[unsafe(no_mangle)]
fn main() {
    let msg = b"Hello, World!";
    for &ch in msg {
        unsafe {
            asm!(
                "mov ah, 0x0E",  // INT 10h function to print a char
                "mov al, {0}",   // The input ASCII char
                "int 0x10",      // Call the BIOS Interrupt Function
                // --- settings ---
                in(reg_byte) ch, // {0} Will become the register with the char
                out("ax") _,     // Lock the 'ax' as output reg, so it won't be used elsewhere
            );
        }
    }
    unsafe {
        asm!("hlt"); // Halt the system
    }
}
When we try to compile and run our code, we can see that it's indeed booting, but we don't see any message.
If you believe me that the code above is correct and indeed works, we can try to look at the binary file that the compiler emitted, with the hexdump command on Linux or macOS, or Format-Hex on Windows.
When we do that, we can notice that more code was indeed added, but at the end of the file and not at the start of it; moreover, it is located after the first sector, which means it isn't even loaded by the BIOS. To resolve this, we need to learn about the default segments rustc generates.
Default Segments In Rust
- .text - Includes the code of our program, which is the machine code that is generated for all of the functions
fn some_function(x: u32, y: u32) -> u32 {
    return x + y;
}
- .data - Includes the initialized data of our program, like static variables.
static VAR: u32 = 42;
- .bss - Includes the uninitialized data of our program
use core::mem::MaybeUninit;

// An uninitialized static ends up in .bss (the buffer size here is arbitrary)
static mut MESSAGE: MaybeUninit<[u8; 32]> = MaybeUninit::uninit();
- .rodata - Includes the read-only data of our program
static MESSAGE: &'static str = "Hello World!";
- .eh_frame & .eh_frame_hdr - Includes information that is relevant to exception handling and stack unwinding. These sections are not relevant for us because we use panic = "abort".
So, to make our linker put the segments in the right position, we need to change the SECTIONS part of our linker script to this.
SECTIONS {
. = 0x7c00;
/*
Rust also mangles segment names.
The "<segment_name>.*" syntax is used to also include all the mangles
*/
.text : { *(.text .text.*) }
.bss : { *(.bss .bss.*) }
.rodata : { *(.rodata .rodata.*) }
.data : { *(.data .data.*) }
/DISCARD/ : {
*(.eh_frame .eh_frame.*)
*(.eh_frame_hdr .eh_frame_hdr.*)
}
. = 0x7c00 + 510;
.magic_number : { SHORT(0xaa55) }
}
Now, when we compile and run our code, we can see our message!

Debugging Our Operating System
"If debugging is the process of removing bugs, then programming must be the process of putting them in." — Edsger W. Dijkstra
Debugging is a crucial part of building a good operating system. Especially at the start of development, when we still don't have good debugging methods like printing, we need to use other techniques.
By far the most annoying bug is the triple fault, which will be explained extensively in the Interrupts chapter. In short, this is an error that is not recoverable; the CPU resets itself, and it looks like this:

Although this state is not recoverable, there are some methods to debug it even without having the ability to print.
Reverse Engineering our Code
One of the most useful methods to debug our code is to use a disassembler. A disassembler is a tool that takes binary code and converts it back into assembly language, which can help us understand what the CPU is executing at any given time.
By analyzing the assembly code, we can gain insights into the control flow and data manipulation happening within our operating system. This can be especially helpful when trying to identify the root cause of a bug or unexpected behavior.
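As one concrete option (my suggestion; any disassembler that understands 16-bit code will do), ndisasm from the NASM toolkit can disassemble our raw binary in 16-bit mode:

ndisasm -b 16 <path-to-bin-file>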
Extracting Memory Dumps
Another useful debugging technique is to extract memory dumps. A memory dump is a snapshot of the contents of the system's memory at a specific point in time. By examining the memory dump, we can see the state of various variables, data structures, and the stack at the moment of failure.
This provides valuable information about the CPU structures that we load into the CPU, which might cause a triple fault if we don't initialize them correctly.
A memory dump can be obtained with the following commands.
First, run the QEMU virtual machine with the --monitor stdio option to enable the QEMU monitor interface in the terminal.
Then, in that terminal, run the following command:
(qemu) pmemsave <start_address> <size> <file name>
// For example
(qemu) pmemsave 0x1000 0x500 memory.dump
// This will create a dump of size 0x500 from 0x1000 - 0x1500.
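The resulting file is just raw bytes, so it can be inspected with the same tools as before, for example hexdump memory.dump on Linux or macOS, or Format-Hex on Windows.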
Minimal Printing
Once we write our kernel, the first thing that we will do is to write a print method with formatting, because it is one of the best ways to debug our code.
In the bootloader, in a time we didn't write our print yet, we will mostly debug with the methods above, but we can for debug purposes print characters, and even small strings using the BIOS like we did in our Hello, World! program.
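As a small sketch built from the Hello, World! code above (the function name is my own), a helper like this can be kept around in the bootloader for quick debug prints while we are still in 16-bit real mode:

use core::arch::asm;

pub fn debug_print_char(ch: u8) {
    unsafe {
        asm!(
            "mov ah, 0x0E",  // BIOS teletype output function
            "mov al, {0}",   // ASCII character to print
            "int 0x10",      // BIOS video services interrupt
            in(reg_byte) ch,
            out("ax") _,
        );
    }
}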
A Minimal Bootloader
"From a small spark may burst a mighty flame." — Dante Alighieri
In this chapter we will learn what a bootloader is and how we can create one.
We will make a
Reading From Disk
To Be Continued...
Latest Development is at LearnixOS
Entering Protected Mode
"With great power comes great responsibility." — Voltaire / Spider-Man
To Be Continued...
Latest Development is at LearnixOS
What is Memory Paging?
"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." — Edsger W. Dijkstra
To Be Continued...
Latest Development is at LearnixOS
Booting the Kernel
"A small thing. Yet it holds everything together." — J.R.R. Tolkien, paraphrased
To Be Continued...
Latest Development is at LearnixOS
Printing To Screen
"Any sufficiently advanced console output is indistinguishable from magic." — Arthur C. Clarke, adapted
To Be Continued...
Latest Development is at LearnixOS
Memory Management
"The art of programming is the art of organizing complexity." — Edsger W. Dijkstra
To Be Continued...
Latest Development is at LearnixOS
Implementing Our Own Malloc
"Controlling complexity is the essence of computer programming." — Brian W. Kernighan
To Be Continued...
Latest Development is at LearnixOS
Interrupts and Exceptions
"In the middle of difficulty lies opportunity." — Albert Einstein
To Be Continued...
Latest Development is at LearnixOS
Utilizing the Interrupt Descriptor Table
"What we call chaos is just patterns we haven’t recognized." — Chuck Palahniuk
To Be Continued...
Latest Development is at LearnixOS
Handling Exceptions
"It’s not the load that breaks you down, it’s the way you carry it." — Lou Holtz
To Be Continued...
Latest Development is at LearnixOS
File Systems and Disk Drivers
To Be Continued...
Latest Development is at LearnixOS
Disk Drivers
To Be Continued...
Latest Development is at LearnixOS
Implementing a File System
To Be Continued...
Latest Development is at LearnixOS
Processes and Scheduling
To Be Continued...
Latest Development is at LearnixOS
Thinking in Terms of Processes
To Be Continued...
Latest Development is at LearnixOS
Implementing a Process Scheduler
To Be Continued...
Latest Development is at LearnixOS
Writing a Shell
To Be Continued...
Latest Development is at LearnixOS