A Beginner's Guide to OS Architecture
The process (a running program) is at the center of everything.
The OS exists to let processes run on the CPU, give each process its own private memory, let processes read and write files, and switch between processes fairly. Every operation a process does flows outward through layers of the OS until it reaches hardware.
| Flow | What Happens | Ends At |
|---|---|---|
| Memory Access | Process uses an address → translated to physical location | RAM |
| File I/O | Process reads/writes file → goes through filesystem layers | Disk |
| Process Ops | Process creates child or loads new program | Ready Queue |
| Scheduling | Timer fires → OS picks which process runs next | Back to a Process |
| Blocking | Process waits for slow I/O → sleeps → wakes when ready | Back to a Process |
When your program accesses memory (like reading a variable), it uses a virtual address. The OS and hardware work together to translate this to a real physical location.
Why? So each process thinks it has its own private memory, even though all processes are really sharing the same RAM safely.
The translation path looks like this:
- Virtual address: the address your program sees (like 0x7fff1234)
- TLB: a hardware cache that remembers recent translations (very fast)
- Page table: a per-process lookup table: virtual page → physical frame
- Physical memory: the actual location in RAM
- What if the page isn't in RAM? → Page Fault → the OS loads it from disk and retries
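To make the translation concrete, here is a tiny self-contained sketch in C that mimics what the hardware does with a page table: split the address into a page number and an offset, look up the frame, and fault if the page isn't mapped. The page size, the table contents, and the `translate()` helper are made-up illustrations, not a real MMU or kernel API.

```c
#include <inttypes.h>
#include <stdio.h>

#define PAGE_SIZE   4096u          /* 4 KiB pages, a common choice */
#define NUM_PAGES   4u             /* tiny toy address space */
#define NOT_PRESENT UINT32_MAX     /* marks a page that is not in RAM */

/* Toy per-process page table: virtual page number -> physical frame number. */
static const uint32_t page_table[NUM_PAGES] = { 7, 2, NOT_PRESENT, 5 };

/* Translate a virtual address to a physical one, or report a page fault. */
static int translate(uint32_t vaddr, uint32_t *paddr) {
    uint32_t vpn    = vaddr / PAGE_SIZE;   /* which virtual page */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset within the page stays the same */

    if (vpn >= NUM_PAGES || page_table[vpn] == NOT_PRESENT)
        return -1;                         /* page fault: the OS would load the page */

    *paddr = page_table[vpn] * PAGE_SIZE + offset;
    return 0;
}

int main(void) {
    uint32_t vaddrs[] = { 0x0123, 0x1FF0, 0x2ABC };  /* pages 0, 1, and 2 */
    for (int i = 0; i < 3; i++) {
        uint32_t paddr;
        if (translate(vaddrs[i], &paddr) == 0)
            printf("virtual 0x%04" PRIx32 " -> physical 0x%05" PRIx32 "\n",
                   vaddrs[i], paddr);
        else
            printf("virtual 0x%04" PRIx32 " -> page fault\n", vaddrs[i]);
    }
    return 0;
}
```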
When your program calls read() or write(), the request travels down through several layers before reaching the disk.
From top to bottom:
- System call: how user programs ask the kernel to do privileged things
- VFS (virtual filesystem): lets all filesystems (ext4, FAT, etc.) look the same to programs
- Page cache: recently-used disk blocks kept in RAM (huge speedup!)
- Device driver: knows how to talk to the specific hardware
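Here is a minimal POSIX sketch of that path from the program's side: `open()`, `read()`, and `write()` are the system calls that start the journey through the VFS, the page cache, and the driver. The file path is just an example, and error handling is kept to the bare minimum.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* open() is a system call: the request enters the kernel, goes through
       the VFS, the concrete filesystem, the page cache, and (if needed)
       the disk driver before any data comes back. */
    int fd = open("/etc/hostname", O_RDONLY);   /* example path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf);      /* may be served from the page cache */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);   /* write to stdout, another file descriptor */

    close(fd);
    return 0;
}
```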
Processes can create new processes and replace themselves with new programs.
- After fork(): two processes exist. The parent gets the child's PID; the child gets 0.
- After exec(): same PID, but a completely different program is running.
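A small, standard POSIX sketch of that pattern: the parent calls `fork()`, the child calls an `exec`-family function (here `execlp()` running `echo`, chosen purely for illustration), and the parent waits for it to finish.

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* after this line, two processes exist */

    if (pid == 0) {
        /* Child: fork() returned 0. Replace ourselves with a new program.
           Same PID, completely different code and memory afterwards. */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        perror("execlp");               /* only reached if exec fails */
        return 1;
    } else if (pid > 0) {
        /* Parent: fork() returned the child's PID. Wait for it to exit. */
        printf("parent %d created child %d\n", (int)getpid(), (int)pid);
        waitpid(pid, NULL, 0);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```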
The scheduler decides which process gets the CPU. It runs whenever the timer interrupt fires or the running process blocks.
- Context switch: saving one process's state, loading another's
- Ready queue: the processes waiting for CPU time
- Time slice: how long a process runs before being preempted (~10ms)
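A toy round-robin simulation can make this concrete. This is not how a real scheduler is written; the task names, burst times, and the 10 ms slice are invented just to show the idea of running each ready task for one time slice and then switching.

```c
#include <stdio.h>

#define TIME_SLICE 10   /* ms of CPU before preemption (illustrative value) */
#define NUM_TASKS  3

/* A toy "process": just a name and how much CPU time it still needs. */
struct task { const char *name; int remaining_ms; };

int main(void) {
    struct task ready_queue[NUM_TASKS] = {
        { "A", 25 }, { "B", 10 }, { "C", 18 },
    };

    int unfinished = NUM_TASKS;
    int i = 0;
    while (unfinished > 0) {
        struct task *t = &ready_queue[i % NUM_TASKS];
        if (t->remaining_ms > 0) {
            /* "Timer fires": run for at most one time slice, then preempt. */
            int run = t->remaining_ms < TIME_SLICE ? t->remaining_ms : TIME_SLICE;
            t->remaining_ms -= run;
            printf("ran %s for %2d ms, %2d ms left\n", t->name, run, t->remaining_ms);
            if (t->remaining_ms == 0)
                unfinished--;
        }
        i++;                             /* context switch to the next ready task */
    }
    return 0;
}
```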
When a process needs data that isn't ready yet (disk read, network packet), it sleeps instead of wasting CPU cycles waiting.
Key Insight: The CPU is never idle waiting. When one process blocks, another runs.
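A small POSIX sketch of blocking, assuming a Unix-like system: the parent blocks in `read()` on a pipe while the child (standing in for a slow device) takes its time producing data. While the parent is blocked, the OS is free to run other processes.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* Child: pretend to be a slow device, e.g. a disk or the network. */
        close(fds[0]);
        sleep(2);                                    /* data isn't ready yet */
        const char *msg = "data is ready\n";
        write(fds[1], msg, strlen(msg));
        return 0;
    }

    /* Parent: read() blocks here. The OS marks this process as waiting and
       runs something else; no CPU cycles are burned while we sleep. */
    close(fds[1]);
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);       /* wakes when data arrives */
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);
    wait(NULL);
    return 0;
}
```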
Every process is in one of these states:
- Running: currently executing on the CPU
- Ready: waiting for CPU time
- Blocked: waiting for I/O or an event
The most important dividing line in the OS is the boundary between user mode and kernel mode: normal programs run in user mode, the kernel runs in kernel mode, and a system call is the only way across.
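A quick way to see that boundary from user space, assuming Linux with glibc: `getpid()` looks like an ordinary function call, but underneath it crosses into kernel mode. Calling `syscall(SYS_getpid)` spells the same crossing out explicitly.

```c
#define _GNU_SOURCE        /* for syscall() in glibc */
#include <stdio.h>
#include <sys/syscall.h>   /* SYS_getpid (Linux-specific) */
#include <unistd.h>

int main(void) {
    /* getpid() is a thin wrapper; underneath, the process executes a special
       instruction that switches the CPU from user mode to kernel mode,
       the kernel does the work, and control returns to user mode. */
    long pid_via_wrapper = (long)getpid();
    long pid_via_syscall = syscall(SYS_getpid);   /* the same crossing, made explicit */

    printf("getpid(): %ld, syscall(SYS_getpid): %ld\n",
           pid_via_wrapper, pid_via_syscall);
    return 0;
}
```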
Here's how all the pieces fit together in a complete system:
| OS Job | How It Does It |
|---|---|
| Run programs | Creates processes, loads code, allocates memory |
| Share the CPU | Timer interrupts + scheduler + context switching |
| Provide private memory | Virtual addresses + page tables + TLB |
| Access files | VFS + filesystems + page cache + drivers |
| Protect processes | User/kernel mode + address space isolation |
The key ideas to remember:
- Processes are central: everything exists to serve running programs.
- Virtual memory: each process thinks it has private memory.
- System calls: the only way into the kernel.
- Preemption: the timer forces fair sharing of the CPU.
- Caching everywhere: the TLB for addresses, the page cache for files.
- Transparency: processes don't know they're being paused and resumed.