Benchmarking

This document describes how to run, interpret, and compare sandbox performance benchmarks for Greywall.

Quick Start

# Install dependencies
brew install hyperfine # macOS
# apt install hyperfine # Linux

go install golang.org/x/perf/cmd/benchstat@latest

# Run CLI benchmarks
./scripts/benchmark.sh

# Run Go microbenchmarks
go test -run=^$ -bench=. -benchmem ./internal/sandbox/...

Goals

  1. Quantify sandbox overhead on each platform (sandboxed / unsandboxed ratio)
  2. Compare macOS (Seatbelt) vs Linux (bwrap+Landlock) overhead fairly
  3. Attribute overhead to specific components (proxy startup, bridge setup, wrap generation)
  4. Track regressions over time

Benchmark Types

Layer 1: CLI Benchmarks (scripts/benchmark.sh)

What it measures: Real-world agent cost - full greywall invocation including proxy startup, socat bridges (Linux), and sandbox-exec/bwrap setup.

This is the most realistic benchmark for understanding the cost of running agent commands through Greywall.

# Full benchmark suite
./scripts/benchmark.sh

# Quick mode (fewer runs)
./scripts/benchmark.sh -q

# Custom output directory
./scripts/benchmark.sh -o ./my-results

# Include network benchmarks (requires local server)
./scripts/benchmark.sh --network

Options

| Option | Description |
| --- | --- |
| -b, --binary PATH | Path to greywall binary (default: ./greywall) |
| -o, --output DIR | Output directory (default: ./benchmarks) |
| -n, --runs N | Minimum runs per benchmark (default: 30) |
| -q, --quick | Quick mode: fewer runs, skip slow benchmarks |
| --network | Include network benchmarks |

Layer 2: Go Microbenchmarks (internal/sandbox/benchmark_test.go)

What it measures: Component-level overhead - isolates Manager initialization, WrapCommand generation, and execution.

# Run all benchmarks
go test -run=^$ -bench=. -benchmem ./internal/sandbox/...

# Run specific benchmark
go test -run=^$ -bench=BenchmarkWarmSandbox -benchmem ./internal/sandbox/...

# Multiple runs for statistical analysis
go test -run=^$ -bench=. -benchmem -count=10 ./internal/sandbox/... > bench.txt
benchstat bench.txt
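What benchstat reports is, roughly, a mean with a variation estimate over the repeated runs, plus a significance test. As a rough illustration of the summarization step (the timings below are made up):

```shell
# Mean and sample stddev over repeated run timings (values are made up, in ms)
printf '112\n115\n109\n118\n111\n' | awk '
  { sum += $1; sumsq += $1 * $1; n++ }
  END {
    mean = sum / n
    sd = sqrt((sumsq - sum * sum / n) / (n - 1))
    printf "mean %.1f ms, stddev %.1f ms\n", mean, sd
  }'
```

This is why -count=10 matters: benchstat needs multiple samples per benchmark to estimate variation and decide whether a difference is statistically meaningful.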

Available Benchmarks

| Benchmark | Description |
| --- | --- |
| BenchmarkBaseline_* | Unsandboxed command execution |
| BenchmarkManagerInitialize | Cold initialization (proxies + bridges) |
| BenchmarkWrapCommand | Command string construction only |
| BenchmarkColdSandbox_* | Full init + wrap + exec per iteration |
| BenchmarkWarmSandbox_* | Pre-initialized manager, just exec |
| BenchmarkOverhead | Grouped comparison of baseline vs sandbox |

Layer 3: OS-Level Profiling

What it measures: Kernel/system overhead - context switches, syscalls, page faults.

Linux

# Quick syscall cost breakdown
strace -f -c ./greywall -- true

# Context switches, page faults
perf stat -- ./greywall -- true

# Full profiling (flamegraph-ready)
perf record -F 99 -g -- ./greywall -- git status
perf report

macOS

# Time Profiler via Instruments
xcrun xctrace record --template 'Time Profiler' --launch -- ./greywall -- true

# Quick call-stack snapshot
./greywall -- sleep 5 &
sample $! 5 -file sample.txt

Interpreting Results

Key Metric: Overhead Factor

Overhead Factor = time(sandboxed) / time(unsandboxed)

Compare overhead factors across platforms, not absolute times, because hardware differences swamp absolute timings.
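As a quick worked example (the timing pair here is illustrative):

```shell
# Overhead factor for one illustrative measurement pair
awk -v sandboxed_ms=45 -v unsandboxed_ms=1.2 \
  'BEGIN { printf "%.1fx\n", sandboxed_ms / unsandboxed_ms }'
```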

Example Output

Benchmark                      Unsandboxed    Sandboxed    Overhead
true                           1.2 ms         45 ms        37.5x
git status                     15 ms          62 ms        4.1x
python -c 'pass'               25 ms          73 ms        2.9x

What to Expect

| Workload | Linux Overhead | macOS Overhead | Notes |
| --- | --- | --- | --- |
| true | 180-360x | 8-10x | Dominated by cold start |
| echo | 150-300x | 6-8x | Similar to true |
| python3 -c 'pass' | 10-12x | 2-3x | Interpreter startup dominates |
| git status | 50-60x | 4-5x | Real I/O helps amortize |
| rg | 40-50x | 3-4x | Search I/O helps amortize |

The overhead factor decreases as the actual workload increases (because sandbox setup is fixed cost). Linux overhead is significantly higher due to bwrap/socat setup.
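The fixed-cost relationship can be sanity-checked with a toy model (the 110 ms setup figure is an assumption chosen for illustration, not a measurement):

```shell
# Toy model: overhead factor = (setup + workload) / workload,
# assuming a fixed 110 ms sandbox setup cost
for workload_ms in 1 10 100 1000; do
  awk -v s=110 -v w="$workload_ms" \
    'BEGIN { printf "workload %4d ms -> %.1fx overhead\n", w, (s + w) / w }'
done
```

The factor collapses toward 1x as the workload grows, which matches the pattern in the table above.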

Cross-Platform Comparison

Fair Comparison Approach

  1. Run benchmarks on each platform independently
  2. Compare overhead factors, not absolute times
  3. Use the same greywall version and workloads
# On macOS
go test -run=^$ -bench=. -count=10 ./internal/sandbox/... > bench_macos.txt

# On Linux
go test -run=^$ -bench=. -count=10 ./internal/sandbox/... > bench_linux.txt

# Compare
benchstat bench_macos.txt bench_linux.txt

Caveats

  • macOS uses Seatbelt (sandbox-exec) - built-in, lightweight kernel sandbox
  • Linux uses bwrap + Landlock, which requires socat bridges for network access and incurs significant setup cost
  • Linux cold start is ~10x slower than macOS due to bwrap/socat bridge setup
  • Linux warm path is still ~5x slower than macOS - bwrap execution itself has overhead
  • For long-running agents, this difference is negligible (one-time startup cost)

[!TIP] Running Linux benchmarks inside a VM (Colima, Docker Desktop, etc.) inflates overhead due to virtualization. Use native Linux (bare metal or CI) for fair cross-platform comparison.

GitHub Actions

Benchmarks can be run in CI via the workflow at .github/workflows/benchmark.yml:

# Trigger manually from GitHub UI: Actions > Benchmarks > Run workflow

# Or via gh CLI
gh workflow run benchmark.yml

Results are uploaded as artifacts and summarized in the workflow summary.

Tips

Reducing Variance

  • Run benchmark.sh with -n 50 or higher
  • Close other applications
  • Pin CPU frequency if possible (Linux: cpupower frequency-set --governor performance)
  • Run multiple times and use benchstat for statistical analysis

Profiling Hotspots

# CPU profile
go test -run=^$ -bench=BenchmarkWarmSandbox -cpuprofile=cpu.out ./internal/sandbox/...
go tool pprof -http=:8080 cpu.out

# Memory profile
go test -run=^$ -bench=BenchmarkWarmSandbox -memprofile=mem.out ./internal/sandbox/...
go tool pprof -http=:8080 mem.out

Tracking Regressions

  1. Run benchmarks before and after changes
  2. Save results to files
  3. Compare with benchstat
# Before
go test -run=^$ -bench=. -count=10 ./internal/sandbox/... > before.txt

# Make changes...

# After
go test -run=^$ -bench=. -count=10 ./internal/sandbox/... > after.txt

# Compare
benchstat before.txt after.txt
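If you want a quick numeric sense of a change before reaching for benchstat, the percent change between two means is simple to compute (the values here are hypothetical):

```shell
# Percent change between before/after mean timings (hypothetical values, in ms)
awk -v before=112 -v after=121 \
  'BEGIN { printf "%+.1f%%\n", (after - before) / before * 100 }'
```

Unlike this one-liner, benchstat also tells you whether the difference is statistically significant, so prefer it for real comparisons.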

Workload Categories

| Category | Commands | What it Stresses |
| --- | --- | --- |
| Spawn-only | true, echo | Process spawn, wrapper overhead |
| Interpreter | python3 -c, node -e | Runtime startup under sandbox |
| FS-heavy | file creation, rg | Landlock/Seatbelt FS rules |
| Network (local) | curl localhost | Proxy forwarding overhead |
| Real tools | git status | Practical agent workloads |

Benchmark Findings (12/28/2025)

Results from GitHub Actions CI runners (Linux: AMD EPYC 7763, macOS: Apple M1 Virtual).

Manager Initialization

| Platform | Manager.Initialize() |
| --- | --- |
| Linux | 101.9 ms |
| macOS | 27.5 µs |

Linux initialization is ~3,700x slower because it must:

  • Start HTTP + SOCKS proxies
  • Create Unix socket bridges for socat
  • Set up bwrap namespace configuration

macOS only generates a Seatbelt profile string (very cheap).
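The ~3,700x figure follows directly from the table once the units are reconciled (ms vs µs):

```shell
# 101.9 ms vs 27.5 µs, converted to a common unit (µs)
awk 'BEGIN { printf "%.0fx\n", (101.9 * 1000) / 27.5 }'
```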

Cold Start Overhead (one greywall invocation per command)

| Workload | Linux | macOS |
| --- | --- | --- |
| true | 215 ms | 22 ms |
| Python | 124 ms | 33 ms |
| Git status | 114 ms | 25 ms |

This is the realistic cost for scripts running greywall -c "command" repeatedly.

Warm Path Overhead (pre-initialized manager)

| Workload | Linux | macOS |
| --- | --- | --- |
| true | 112 ms | 20 ms |
| Python | 124 ms | 33 ms |
| Git status | 114 ms | 25 ms |

Even with proxies already running, Linux bwrap execution adds ~110ms overhead per command.

Overhead Factors

| Workload | Linux Overhead | macOS Overhead |
| --- | --- | --- |
| true (cold) | ~360x | ~10x |
| true (warm) | ~187x | ~8x |
| Python (warm) | ~11x | ~2x |
| Git status (warm) | ~54x | ~4x |

Overhead decreases as the actual workload increases (sandbox setup is fixed cost).

Impact on Agent Usage

Long-Running Agents (greywall claude, greywall codex)

For agents that run as a child process under greywall:

| Phase | Cost |
| --- | --- |
| Startup (once) | Linux: ~215 ms, macOS: ~22 ms |
| Per tool call | Negligible (baseline fork+exec only) |

Child processes inherit the sandbox - no re-initialization, no WrapCommand overhead. The per-command cost is just normal process spawning:

| Command | Linux | macOS |
| --- | --- | --- |
| true | 0.6 ms | 2.3 ms |
| git status | 2.1 ms | 5.9 ms |
| Python script | 11 ms | 15 ms |

Bottom line: For greywall <agent> usage, sandbox overhead is a one-time startup cost. Tool calls inside the agent run at native speed.
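To see how quickly the startup cost amortizes, spread the ~215 ms Linux figure across a session's tool calls:

```shell
# Startup cost amortized across tool calls (Linux, ~215 ms cold start)
for calls in 10 100 1000; do
  awk -v c="$calls" \
    'BEGIN { printf "%d calls -> %.3f ms of startup each\n", c, 215 / c }'
done
```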

Per-Command Invocation (greywall -c "command")

For scripts or CI running greywall per command:

| Session | Linux Cost | macOS Cost |
| --- | --- | --- |
| 1 command | 215 ms | 22 ms |
| 10 commands | 2.15 s | 220 ms |
| 50 commands | 10.75 s | 1.1 s |

Consider keeping the manager alive (daemon mode) or batching commands to reduce overhead.
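The table above is just the per-invocation cost multiplied out; on Linux, for example:

```shell
# Session cost = n × per-invocation cost (~215 ms cold start on Linux)
for n in 1 10 50; do
  awk -v n="$n" -v c=215 \
    'BEGIN { printf "%d command(s) -> %.3f s\n", n, n * c / 1000 }'
done
```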

Additional Notes

  • Manager.Initialize() starts HTTP + SOCKS proxies; on Linux also creates socat bridges
  • Cold start includes all initialization; hot path is just WrapCommand + exec
  • -m (monitor mode) spawns additional monitoring processes; benchmark it separately
  • Keep workloads under the repo; avoid /tmp, since Linux bwrap mounts a tmpfs over it (--tmpfs /tmp)
  • Debug mode changes logging behavior, so always benchmark with debug off