# Rocker: A Simple Container Runtime Implemented in Rust

An educational implementation of Docker in Rust.
- Overview
- Features
- System Requirements
- Installation
- Quick Start
- Usage
- Architecture
- Development
- Documentation
## Overview

Rocker is a container runtime implemented in Rust. This project demonstrates how container runtimes work by implementing core Docker functionality from scratch.
The project implements container isolation using Linux kernel features:
- Namespaces: For process, filesystem, network, and IPC isolation
- Cgroups: For resource limiting (CPU, memory)
- `pivot_root`: For root filesystem isolation
## Features

### Container Commands

| Command | Description | Status |
|---|---|---|
| `rocker run` | Create and start containers | ✅ Implemented |
| `rocker ps` | List all containers | ✅ Implemented |
| `rocker logs` | View container logs | ✅ Implemented |
| `rocker stop` | Stop running containers | ✅ Implemented |
| `rocker rm` | Remove stopped containers | ✅ Implemented |
| `rocker exec` | Execute commands in running containers | ✅ Implemented |
| `rocker commit` | Save container as image | ✅ Implemented |
### Image Commands

| Command | Description | Status |
|---|---|---|
| `rocker import` | Import tar file as image | ✅ Implemented |
| `rocker images` | List all images | ✅ Implemented |
### Resource Limits

- Memory limits: Restrict container memory usage
- CPU shares: Control CPU time allocation
- CPU sets: Pin containers to specific CPU cores
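A minimal sketch of how a limit string such as `100m` (the format accepted by `rocker run -m`) could be parsed into bytes. `parse_mem_limit` is a hypothetical helper for illustration, not rocker's actual API:

```rust
// Hypothetical helper: parse a memory limit string such as "100m" or
// "1g" (the syntax accepted by `rocker run -m`) into a byte count.
fn parse_mem_limit(s: &str) -> Option<u64> {
    let s = s.trim().to_lowercase();
    let (num, mult) = match s.chars().last()? {
        'k' => (&s[..s.len() - 1], 1024u64),
        'm' => (&s[..s.len() - 1], 1024 * 1024),
        'g' => (&s[..s.len() - 1], 1024 * 1024 * 1024),
        _ => (&s[..], 1), // bare number means bytes
    };
    num.parse::<u64>().ok().map(|n| n * mult)
}

fn main() {
    assert_eq!(parse_mem_limit("100m"), Some(100 * 1024 * 1024));
    assert_eq!(parse_mem_limit("1g"), Some(1u64 << 30));
    assert_eq!(parse_mem_limit("abc"), None);
    println!("ok");
}
```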
### Namespace Isolation

- UTS namespace: Hostname isolation
- IPC namespace: Inter-process communication isolation
- PID namespace: Process ID isolation
- Mount namespace: Filesystem isolation
- User namespace: User and group ID mapping
- Network namespace: Network stack isolation
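Each namespace above corresponds to a `CLONE_NEW*` flag passed to `clone(2)` or `unshare(2)` when the container's init process is spawned. A dependency-free sketch of combining them, with flag values copied from the Linux `<linux/sched.h>` header (real code would take these from the `libc` crate):

```rust
// CLONE_NEW* flag values from <linux/sched.h>, hardcoded so this
// sketch needs no dependency on the `libc` crate.
const CLONE_NEWNS: u64 = 0x0002_0000;   // mount namespace
const CLONE_NEWUTS: u64 = 0x0400_0000;  // hostname / domain name
const CLONE_NEWIPC: u64 = 0x0800_0000;  // System V IPC, POSIX queues
const CLONE_NEWUSER: u64 = 0x1000_0000; // UID/GID mapping
const CLONE_NEWPID: u64 = 0x2000_0000;  // process IDs
const CLONE_NEWNET: u64 = 0x4000_0000;  // network stack

fn main() {
    // A container runtime ORs the flags together and passes the
    // result to clone(2)/unshare(2) when creating the container init.
    let flags = CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWIPC
        | CLONE_NEWUSER | CLONE_NEWPID | CLONE_NEWNET;
    assert_eq!(flags, 0x7C02_0000);
    println!("clone flags: {:#x}", flags);
}
```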
## System Requirements

- OS: Ubuntu 20.04+ / WSL2
- Kernel: Linux 5.10+ with namespace and cgroup support
- Architecture: x86_64
## Installation

Install the prerequisites:

```shell
# Install FUSE overlayfs (for layered filesystem support)
sudo apt install fuse-overlayfs

# Install Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

Ensure your kernel has the following features enabled:

```text
CONFIG_NAMESPACES=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_MEMCG=y
CONFIG_CPUSETS=y
CONFIG_NET_NS=y
CONFIG_PID_NS=y
CONFIG_IPC_NS=y
CONFIG_UTS_NS=y
```
Build and install:

```shell
git clone https://github.com/AlexiaChen/rocker.git
cd rocker

# Build in release mode
cargo build --release

# The binary will be at target/release/rocker
sudo cp target/release/rocker /usr/local/bin/

# Verify the installation
rocker --version
# Output: rocker 0.1.0
```

## Quick Start

First, import a container image from a tar file:
```shell
# Import busybox image
sudo rocker import base-image/busybox.tar busybox

# List available images
sudo rocker images
# Output:
# REPOSITORY   TAG      IMAGE ID   SIZE      CREATED
# busybox      latest   b38350bb   441.1MB   2026-01-14 17:07:17
```

Then run containers:

```shell
# Run an interactive shell
sudo rocker run --image busybox /bin/sh

# Run with TTY enabled
sudo rocker run --tty --image busybox /bin/sh

# Run a specific command
sudo rocker run --image busybox "ls -l"

# Run with memory limit
sudo rocker run --image busybox -m 100m /bin/sh

# Run with CPU shares
sudo rocker run --image busybox --cpushare 512 /bin/sh

# Run in background
sudo rocker run --image busybox /bin/sleep 1000
```

Manage containers:

```shell
# List all containers
sudo rocker ps

# View container logs
sudo rocker logs <CONTAINER_ID>

# Stop a container
sudo rocker stop <CONTAINER_ID>

# Remove a stopped container
sudo rocker rm <CONTAINER_ID>
```

## Usage

### `rocker import`

```text
rocker import <TAR_FILE> <IMAGE_NAME>[:TAG]
```
```shell
# Import busybox with default tag (latest)
sudo rocker import base-image/busybox.tar busybox

# Import with specific tag
sudo rocker import alpine.tar alpine:3.18
# Output:
# Imported busybox:latest (ID: b38350bb, Size: 441.1MB)
```

### `rocker images`

```shell
sudo rocker images
# Output format:
# REPOSITORY   TAG      IMAGE ID   SIZE      CREATED
# busybox      latest   b38350bb   441.1MB   2026-01-14 10:00:00
# alpine       3.18     a1b2c3d4   156.2MB   2026-01-14 11:30:00
```

### `rocker run`

```text
rocker run [OPTIONS] --image <IMAGE> <COMMAND>
```
```text
Options:
  --image <NAME>[:TAG]   Image to run (e.g., busybox, alpine:3.18)
  -t, --tty              Allocate pseudo-terminal
  -m, --memory <LIMIT>   Memory limit (e.g., 100m, 1g)
  --cpushare <SHARES>    CPU time weight (default: 1024)
  --cpuset <CORES>       CPU cores (e.g., 0-1, 0-2)
```
Examples:

```shell
# Interactive shell with image
sudo rocker run --image busybox /bin/sh

# With specific tag
sudo rocker run --image alpine:3.18 /bin/sh

# With TTY and memory limit
sudo rocker run --tty --image busybox -m 256m /bin/sh

# Background container
sudo rocker run --image busybox /bin/sleep 1000
```

### `rocker ps`

```shell
sudo rocker ps
# Output format:
# ID           NAME         PID     STATUS    COMMAND   CREATED
# 1234567890   1234567890   12345   running   /bin/sh   2026-01-14 10:00:00
```

### `rocker logs`

```shell
rocker logs <CONTAINER_NAME>

# Example:
sudo rocker logs 1234567890
```

### `rocker stop`

```shell
rocker stop <CONTAINER_NAME>

# Example:
sudo rocker stop 1234567890
```

### `rocker rm`

```shell
rocker rm <CONTAINER_NAME>
# Note: Container must be stopped first

# Example:
sudo rocker rm 1234567890
```

### `rocker exec`

```shell
rocker exec <CONTAINER_NAME> <COMMAND>

# Examples:
sudo rocker exec 1234567890 ps aux
sudo rocker exec 1234567890 ls /
sudo rocker exec 1234567890 cat /proc/1/status
```

### `rocker commit`

```shell
rocker commit <CONTAINER_NAME> <IMAGE_NAME>

# Example:
sudo rocker commit 1234567890 myimage
```

## Architecture

### Project Structure

```text
rocker/
├── src/
│   ├── rocker/      # CLI application
│   ├── container/   # Container runtime core
│   ├── image/       # Image management
│   ├── cgroups/     # Resource management
│   ├── network/     # Networking (to be implemented)
│   └── namespace/   # Namespace utilities (to be implemented)
├── doc/             # Documentation
├── base-image/      # BusyBox rootfs
└── Cargo.toml       # Workspace configuration
```
### `container/`: Runtime Core

Implements the fundamental container operations:

- Process creation with namespace isolation
- Root filesystem setup using `pivot_root`
- Mount operations for `/proc` and `/dev`
- Container metadata persistence
### `image/`: Image Management

Manages container images:

- Import tar files as images
- Image metadata storage and retrieval
- Root filesystem management
- Image tagging and versioning
### `cgroups/`: Resource Management

Manages system resources through Linux cgroups:

- Memory subsystem: Memory limiting
- CPU subsystem: CPU shares allocation
- Cpuset subsystem: CPU core assignment (stubbed)
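Under cgroup v1, a memory limit is applied by writing the byte count into a per-container `memory.limit_in_bytes` control file. A sketch of the path construction, assuming a `rocker` hierarchy directory (the directory name rocker actually uses is an assumption here):

```rust
use std::path::PathBuf;

// Sketch of how a cgroup v1 memory limit could be applied: build the
// per-container control file path. The "rocker" hierarchy name is an
// assumption, not necessarily what rocker itself uses.
fn memory_limit_file(container_id: &str) -> PathBuf {
    PathBuf::from("/sys/fs/cgroup/memory")
        .join("rocker")
        .join(container_id)
        .join("memory.limit_in_bytes")
}

fn main() {
    let path = memory_limit_file("1234567890");
    // In the real runtime the limit would be written with fs::write,
    // which requires root and a mounted cgroup v1 hierarchy.
    assert_eq!(
        path.to_str().unwrap(),
        "/sys/fs/cgroup/memory/rocker/1234567890/memory.limit_in_bytes"
    );
    println!("{}", path.display());
}
```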
### `rocker/`: CLI Application

Command-line interface using clap:

- Argument parsing
- Command dispatch
- User interaction
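The dispatch step can be sketched without clap as a plain match on the subcommand name. This hand-rolled version only illustrates the shape of the routing; the handler messages are illustrative, not rocker's actual output:

```rust
// Simplified sketch of subcommand dispatch. The real CLI uses the
// `clap` crate; this version only shows the routing shape.
fn dispatch(args: &[&str]) -> Result<&'static str, String> {
    match args.first() {
        Some(&"run") => Ok("run: create and start a container"),
        Some(&"ps") => Ok("ps: list containers"),
        Some(&"logs") => Ok("logs: show container output"),
        Some(&"stop") => Ok("stop: stop a running container"),
        Some(&"rm") => Ok("rm: remove a stopped container"),
        Some(other) => Err(format!("unknown command: {other}")),
        None => Err("no command given".to_string()),
    }
}

fn main() {
    assert!(dispatch(&["ps"]).is_ok());
    assert!(dispatch(&["frobnicate"]).is_err());
    println!("ok");
}
```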
### Container Lifecycle

```text
┌─────────────┐
│ rocker run  │──── Generate container ID
└──────┬──────┘
       │
       ├──── Create parent process with namespaces
       │
       ├──── Record container info
       │
       ├──── Apply cgroup limits
       │
       ├──── Execute user command
       │
       ├──── Wait for exit
       │
       └──── Cleanup (remove metadata, destroy cgroups)
```
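The states implied by this flow (a container is running after `run`, stopped after `stop` or exit, and removable only once stopped) can be sketched as a tiny state machine; the type and function names are illustrative, not rocker's actual API:

```rust
// Sketch of the container state machine implied by the lifecycle:
// Running after `run`, Stopped after `stop` or exit, and only a
// Stopped container may be removed with `rm`.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Status {
    Running,
    Stopped,
}

fn stop(s: Status) -> Status {
    match s {
        Status::Running => Status::Stopped,
        Status::Stopped => Status::Stopped, // stopping twice is a no-op
    }
}

fn can_remove(s: Status) -> bool {
    s == Status::Stopped // `rocker rm` requires a stopped container
}

fn main() {
    let s = Status::Running;
    assert!(!can_remove(s));
    let s = stop(s);
    assert!(can_remove(s));
    println!("ok");
}
```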
### Storage Layout

Container metadata is stored at:

```text
/var/run/rocker/{container_name}/
├── config.json      # Container metadata (PID, status, command, etc.)
└── container.log    # Container output logs (non-TTY containers)
```

Image data is stored at:

```text
/var/lib/rocker/images/{image_name}/{tag}/
├── image.json   # Image metadata (name, tag, size, created time)
└── rootfs/      # Extracted root filesystem
```
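As a sketch of this layout, the config path and a minimal metadata body can be built like so; the JSON field names are illustrative, since the real schema is defined by rocker's container module:

```rust
// Sketch of the metadata layout above: the config.json path for a
// container and a minimal hand-built JSON body. Field names are
// illustrative; the real schema is defined by rocker itself.
fn config_path(name: &str) -> String {
    format!("/var/run/rocker/{name}/config.json")
}

fn config_json(pid: u32, status: &str, command: &str) -> String {
    format!(
        "{{\"pid\":{pid},\"status\":\"{status}\",\"command\":\"{command}\"}}"
    )
}

fn main() {
    assert_eq!(
        config_path("1234567890"),
        "/var/run/rocker/1234567890/config.json"
    );
    let json = config_json(12345, "running", "/bin/sh");
    assert_eq!(
        json,
        "{\"pid\":12345,\"status\":\"running\",\"command\":\"/bin/sh\"}"
    );
    println!("{json}");
}
```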
## Development

Build:

```shell
# Debug build
cargo build

# Release build (optimized)
cargo build --release

# Run with debug logging
RUST_LOG=trace ./target/debug/rocker run --tty /bin/sh
```

Test:

```shell
# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run specific test
cargo test test_container_info_generation
```

The project follows these conventions:
- Rust 2024 Edition: Latest Rust features
- Trait-based design: Modular and extensible
- Comprehensive error handling: Using the `anyhow` crate
- English documentation: Code comments and docs
## Documentation

- Image Management - Image import, storage, and usage
- Container Lifecycle - Detailed container lifecycle management
- Container Images - Root filesystem and image concepts
- Linux Namespaces - Namespace isolation concepts
- Linux Cgroups - Resource management with cgroups
- Union Filesystem - Layered filesystem concepts
- Linux /proc - /proc filesystem overview
- Rocker Tests - Test examples and verification
## Status

Implemented:

- Container core with namespace isolation
- Cgroups management (memory, CPU)
- Container lifecycle commands (run, ps, logs, stop, rm, commit)
- Exec command for container interaction
- CLI with modern argument parser
- Image management (import, images)

Planned:

- Volume mounting (`-v` flag)
- Cpuset subsystem implementation
- Network module (bridge, IPAM, port mapping)
## Troubleshooting

Most rocker commands require root privileges:

```shell
# Always use sudo
sudo rocker run --tty /bin/sh
```

If you get "Container XXX not found":

```shell
# Check if container exists
rocker ps

# Verify the container name
ls -la /var/run/rocker/
```

If cgroup operations fail:

```shell
# Check cgroup filesystem mount
mount | grep cgroup

# Verify cgroup v2 or v1
cat /proc/filesystems | grep cgroup
```

If namespace operations fail:

```shell
# Check namespace support
ls -la /proc/self/ns/
```

## Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.
## License

This project is licensed under the MIT License.
Enjoy it, just for fun! 🚀