How an Operating System Works

A Beginner's Guide to OS Architecture

[Layer diagram: User Space / Process → Kernel → Hardware]

The Big Picture

The process (a running program) is at the center of everything.

The OS exists to let processes run on the CPU, give each process its own private memory, let processes read and write files, and switch between processes fairly. Every operation a process does flows outward through layers of the OS until it reaches hardware.

The Five Main Flows

Flow           | What Happens                                                | Ends At
Memory Access  | Process uses an address → translated to physical location  | RAM
File I/O       | Process reads/writes file → goes through filesystem layers | Disk
Process Ops    | Process creates child or loads new program                  | Ready Queue
Scheduling     | Timer fires → OS picks which process runs next              | Back to a Process
Blocking       | Process waits for slow I/O → sleeps → wakes when ready      | Back to a Process

1. Memory Access Flow

When your program accesses memory (like reading a variable), it uses a virtual address. The OS and hardware work together to translate this to a real physical location.

Why? So each process thinks it has its own private memory, but they're actually sharing RAM safely.

Virtual Address

The address your program sees (like 0x7fff1234)

TLB

A hardware cache that remembers recent translations (very fast)

Page Table

A per-process lookup table: virtual page → physical frame

Physical Address

The actual location in RAM

What if the page isn't in RAM? Page Fault → the OS loads the page from disk, updates the page table, and retries the access.
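To make the translation concrete, here's a minimal user-space sketch (toy code, not real kernel or MMU logic), assuming 4 KiB pages; the page table array, its contents, and the translate() helper are invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE  4096u   /* assume 4 KiB pages */
#define PAGE_SHIFT 12

/* Hypothetical per-process page table: virtual page number -> physical frame number. */
static uint64_t page_table[16] = { [0] = 7, [1] = 3, [2] = 42 };

uint64_t translate(uint64_t vaddr) {
    uint64_t vpn    = vaddr >> PAGE_SHIFT;       /* virtual page number */
    uint64_t offset = vaddr & (PAGE_SIZE - 1);   /* offset within the page */
    uint64_t pfn    = page_table[vpn];           /* page-table lookup (real hardware checks the TLB first) */
    return (pfn << PAGE_SHIFT) | offset;         /* same offset inside the physical frame */
}

int main(void) {
    uint64_t vaddr = 0x2abc;                     /* virtual page 2, offset 0xabc */
    printf("virtual 0x%llx -> physical 0x%llx\n",
           (unsigned long long)vaddr, (unsigned long long)translate(vaddr));
    return 0;
}
```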

2. File I/O Flow

When your program calls read() or write(), the request travels down through several layers before reaching the disk.

System Call

How user programs ask the kernel to do privileged things

VFS

Lets all filesystems (ext4, FAT, etc.) look the same to programs

Page Cache

Recently-used disk blocks kept in RAM (huge speedup!)

Device Driver

Knows how to talk to specific hardware
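Here's a minimal sketch of what this looks like from the process's side: it only calls open(), read(), and write(); the VFS, page cache, and driver work all happen inside the kernel. The filename is just an example.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* open() is a system call: the request descends through the VFS,
       the filesystem, the page cache, and finally the driver and disk. */
    int fd = open("notes.txt", O_RDONLY);        /* example filename */
    if (fd < 0) { perror("open"); return 1; }

    char    buf[4096];
    ssize_t n;
    /* Each read() may be served from the page cache (fast) or require real
       disk I/O, in which case the process blocks until the data arrives. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);    /* copy the file to stdout */

    close(fd);
    return 0;
}
```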

3. Process Operations

Processes can create new processes and replace themselves with new programs.

Creating a New Process: fork()

After fork(): Two processes exist. Parent gets child's PID, child gets 0.

Loading a New Program: exec()

After exec(): Same PID, but completely different program running.
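A minimal sketch of the classic fork-then-exec pattern; the child here happens to run ls, but any program would do:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* after this, two processes exist */

    if (pid == 0) {
        /* Child: fork() returned 0. Replace ourselves with a new program:
           same PID, completely different code and memory image. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                   /* only reached if exec fails */
        return 1;
    } else if (pid > 0) {
        /* Parent: fork() returned the child's PID. Wait for the child. */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited\n", (int)pid);
    } else {
        perror("fork");                     /* fork itself failed */
        return 1;
    }
    return 0;
}
```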

4. Scheduling Flow

The scheduler decides which process gets the CPU. It runs when:

  • A timer interrupt fires (time slice expired)
  • A process blocks (waiting for I/O)
  • A process voluntarily yields

Context Switch

Saving one process's state, loading another's

Ready Queue

Processes waiting for CPU time

Time Slice

How long a process runs before being preempted (~10ms)
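The real scheduler lives inside the kernel, but a toy user-space simulation shows the round-robin idea: on every pass over the ready queue, each runnable task gets one time slice. The struct, the queue, and the numbers below are invented for illustration.

```c
#include <stdio.h>

#define TIME_SLICE 10   /* milliseconds; illustrative, real slice lengths vary */

typedef struct {
    int pid;            /* which "process" this is */
    int remaining_ms;   /* CPU time it still needs */
} Task;

int main(void) {
    /* A tiny ready queue of three runnable tasks (numbers are made up). */
    Task ready_queue[] = { {1, 25}, {2, 10}, {3, 35} };
    int n = 3, alive = 3;

    while (alive > 0) {
        for (int i = 0; i < n; i++) {
            if (ready_queue[i].remaining_ms <= 0)
                continue;                          /* finished: skip it */

            /* "Context switch" to this task and let it run for one slice. */
            int ran = ready_queue[i].remaining_ms < TIME_SLICE
                          ? ready_queue[i].remaining_ms : TIME_SLICE;
            ready_queue[i].remaining_ms -= ran;
            printf("pid %d ran %d ms, %d ms left\n",
                   ready_queue[i].pid, ran, ready_queue[i].remaining_ms);

            if (ready_queue[i].remaining_ms == 0)
                alive--;                           /* task is done */
        }
    }
    return 0;
}
```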

5. Blocking I/O Flow

When a process needs data that isn't ready yet (disk read, network packet), it sleeps instead of wasting CPU cycles waiting.

Key Insight: The CPU doesn't sit idle waiting. When one process blocks, the scheduler picks another ready process to run.
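A small sketch using a pipe to stand in for a slow device: the parent's read() blocks (the process sleeps) until the child finally produces data, and the kernel is free to run other processes in the meantime.

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {
        /* Child: pretend to be a slow device, then produce some data. */
        close(fds[0]);
        sleep(2);
        write(fds[1], "data ready\n", 11);
        _exit(0);
    }

    /* Parent: this read() blocks. The kernel marks the process Blocked,
       runs something else, and wakes us only when the pipe has data. */
    close(fds[1]);
    char buf[64];
    ssize_t n = read(fds[0], buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);
    return 0;
}
```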

Process States

Every process is in one of these states:

Running

Currently executing on CPU

Ready

Waiting for CPU time

Blocked

Waiting for I/O or event
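As a sketch of how a kernel might record these states, each process's control block could carry a state field; the names below are illustrative, not any real kernel's structures.

```c
/* Illustrative sketch of a process control block; not any real kernel's layout. */
enum proc_state {
    PROC_RUNNING,   /* currently executing on a CPU */
    PROC_READY,     /* runnable, waiting in the ready queue for CPU time */
    PROC_BLOCKED    /* sleeping until some I/O completes or an event occurs */
};

struct process {
    int             pid;     /* process ID */
    enum proc_state state;   /* one of the three states above */
    /* ... saved registers, page-table pointer, open files, ... */
};
```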

The Privilege Boundary

The most important dividing line in the OS: User Mode vs. Kernel Mode

Why two modes?

  • User programs can't directly access hardware or other processes' memory
  • They must ask the kernel via system calls (see the sketch after this list)
  • This protects the system from buggy or malicious programs
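On Linux you can watch a request cross that boundary: the friendly libc wrapper getpid() and the raw syscall(2) interface both end up at the same kernel entry point (syscall(2) and SYS_getpid are Linux-specific).

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    pid_t wrapped = getpid();              /* friendly library wrapper */
    long  raw     = syscall(SYS_getpid);   /* raw trap into kernel mode */
    printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)wrapped, raw);
    return 0;
}
```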

Putting It All Together

Here's how all the pieces fit together in a complete system: a process runs in user space, crosses into the kernel through system calls, and the kernel translates its addresses, schedules the CPU, caches its data, and drives the hardware on its behalf.

Summary: What the OS Does

OS Job                 | How It Does It
Run programs           | Creates processes, loads code, allocates memory
Share the CPU          | Timer interrupts + scheduler + context switching
Provide private memory | Virtual addresses + page tables + TLB
Access files           | VFS + filesystems + page cache + drivers
Protect processes      | User/kernel mode + address space isolation

Key Takeaways

🎯 Process is Central

Everything exists to serve running programs

🎭 Virtual Memory is an Illusion

Each process thinks it has private memory

🚪 System Calls are the Gate

The only way into the kernel

Interrupts Drive Scheduling

The timer forces fair sharing

Caching is Everywhere

TLB for addresses, page cache for files

🔄 Context Switching is Magic

Processes don't know they're being paused