High-performance SOCKS5 proxy server written in Zig. Designed for C1M concurrent connections using a multi-core, event-driven architecture powered by libxev.
- Multi-core scaling — Thread-per-core design with `SO_REUSEPORT` for kernel-level load balancing
- Zero-allocation relay — Pre-allocated O(1) buffer pool with `MemoryPool` for connection structs
- TCP backpressure — One-buffer-in-flight flow control; reads pause while a write is pending
- DNS offloading — Background thread pool for non-blocking domain resolution
- Authentication — No-auth and Username/Password (RFC 1929)
- Graceful shutdown — SIGINT/SIGTERM stops accepting, drains connections; SIGUSR1 dumps stats
- Observability — HTTP stats endpoint, latency histograms, error/auth/DNS metrics, access log, connection tracking
- Cross-platform — Linux (io_uring/epoll), macOS (kqueue), Windows (IOCP) via libxev
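The `SO_REUSEPORT` mechanism behind the multi-core scaling feature can be demonstrated outside Zig. A minimal Python sketch (Linux semantics; the listener helper is illustrative, not zocks code):

```python
import socket

# With SO_REUSEPORT, several sockets may bind the same address/port and
# the kernel distributes incoming connections among them. zocks uses
# this so each worker thread owns its own accept queue.
def reuseport_listener(port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(128)
    return s

a = reuseport_listener(0)        # port 0: kernel picks a free port
port = a.getsockname()[1]
b = reuseport_listener(port)     # second bind succeeds -- no EADDRINUSE
print("both listening on", port)
```

Without `SO_REUSEPORT` set on both sockets, the second `bind` would fail with `EADDRINUSE`.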
```sh
# Build
zig build

# Run (auto-detects CPU cores)
./zig-out/bin/zocks

# Custom port and bind address
./zig-out/bin/zocks --port 8080 --bind 127.0.0.1

# With authentication
./zig-out/bin/zocks --auth admin:secret

# Control worker count
./zig-out/bin/zocks --workers 4

# Tune buffer pool and DNS threads
./zig-out/bin/zocks --buffers 50000 --dns-threads 8

# Enable HTTP stats endpoint
./zig-out/bin/zocks --stats-port 9090
```

```
Zocks - High-Performance SOCKS5 Proxy Server

Usage: zocks [options]

Options:
  -p, --port <port>        Listening port (default: 1080)
  -b, --bind <addr>        Bind address (default: 0.0.0.0)
  -w, --workers <n>        Worker threads (default: CPU cores)
  --auth <user:pass>       Enable username/password auth
  --buffers <n>            Buffer pool size (default: 10000)
  --dns-threads <n>        DNS resolver threads (default: 4)
  --stats-port <port>      HTTP stats endpoint port (default: disabled)
  --access-log-size <n>    Access log entries per worker (default: 1024)
  -h, --help               Show this help
```
```sh
# HTTP through proxy (DNS resolved by curl)
curl -x socks5://127.0.0.1:1080 http://example.com

# HTTP through proxy (DNS resolved by proxy)
curl -x socks5h://127.0.0.1:1080 http://example.com

# HTTPS through proxy
curl -x socks5h://127.0.0.1:1080 https://example.com

# With authentication
curl -x socks5h://user:pass@127.0.0.1:1080 http://example.com

# IPv6 target
curl -x socks5h://127.0.0.1:1080 http://[::1]:8080/
```

```
┌───────────────────────────────────┐
│              Kernel               │
│    SO_REUSEPORT load balancing    │
└───┬────────┬────────┬────────┬───┘
    │        │        │        │
 ┌──┴───┐ ┌──┴───┐ ┌──┴───┐ ┌──┴───┐
 │ W-0  │ │ W-1  │ │ W-2  │ │ W-N  │
 │      │ │      │ │      │ │      │
 │ Loop │ │ Loop │ │ Loop │ │ Loop │
 │ Pool │ │ Pool │ │ Pool │ │ Pool │
 │ DNS  │ │ DNS  │ │ DNS  │ │ DNS  │
 └──────┘ └──────┘ └──────┘ └──────┘
```
Each worker runs an isolated event loop with its own:
- `xev.Loop` — Completion-based async I/O (kqueue/io_uring/epoll)
- `BufferPool` — Pre-allocated 8KB buffer slab with O(1) acquire/release
- `TimerWheel` — Shared idle timeout (13-bucket wheel, one timer per worker instead of per-client)
- `MemoryPool` — O(1) typed allocator for `Client` and `HandshakeCtx` structs
- `DnsPool` — Configurable background threads for blocking DNS resolution
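The `BufferPool` idea — one pre-allocated slab plus a stack of free indices — can be sketched in a few lines. This is a simplified Python model, not the Zig implementation; the real pool hands out fixed 8KB slices and never allocates after startup:

```python
# O(1) slab buffer pool with a stack-based free list: one contiguous
# allocation up front; acquire/release are a stack pop/push.
class BufferPool:
    def __init__(self, count: int, size: int = 8192):
        self.size = size
        self.slab = bytearray(count * size)   # single up-front allocation
        self.free = list(range(count))        # stack of free buffer indices

    def acquire(self):
        if not self.free:
            return None                       # pool exhausted -> caller backs off
        i = self.free.pop()                   # O(1)
        return i, memoryview(self.slab)[i * self.size:(i + 1) * self.size]

    def release(self, i: int):
        self.free.append(i)                   # O(1)

pool = BufferPool(count=2)
a = pool.acquire()
b = pool.acquire()
assert pool.acquire() is None     # only 2 buffers: exhausted
pool.release(a[0])
assert pool.acquire() is not None # reusable after release
```

Returning `None` on exhaustion (rather than falling back to the heap) is what keeps the relay path allocation-free and gives a natural admission-control point.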
| Feature | Status |
|---|---|
| SOCKS5 CONNECT (TCP) | Supported |
| IPv4 addresses | Supported |
| Domain names | Supported (async DNS) |
| No Authentication | Supported |
| Username/Password Auth | Supported (RFC 1929) |
| IPv6 addresses | Supported |
| UDP ASSOCIATE | Planned |
| SOCKS5 BIND | Not planned |
| TLS termination | Not planned (pass-through) |
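For reference, the CONNECT flow in the table looks like this on the wire (RFC 1928). The sketch below only builds and parses the byte strings for the no-auth, domain-name case; no real proxy is contacted:

```python
# Wire-level view of the SOCKS5 CONNECT exchange (RFC 1928).
def client_greeting() -> bytes:
    return bytes([0x05, 0x01, 0x00])  # VER=5, 1 method offered: no-auth

def connect_request(host: str, port: int) -> bytes:
    name = host.encode("ascii")
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain), len-prefixed name, port
    return bytes([0x05, 0x01, 0x00, 0x03, len(name)]) + name + port.to_bytes(2, "big")

def parse_reply(data: bytes) -> int:
    assert data[0] == 0x05            # version
    return data[1]                    # REP field: 0x00 = succeeded

req = connect_request("example.com", 80)
assert req[:5] == b"\x05\x01\x00\x03\x0b"  # 0x0b = 11-byte domain follows
assert req[5:16] == b"example.com"
assert req[16:] == b"\x00\x50"             # port 80, big-endian
assert parse_reply(b"\x05\x00\x00\x01" + bytes(6)) == 0
```

The `socks5h://` curl scheme above sends ATYP=3 (domain) so the proxy resolves DNS; plain `socks5://` resolves locally and sends ATYP=1 (IPv4) or ATYP=4 (IPv6).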
| Signal | Behavior |
|---|---|
| `SIGINT` / `SIGTERM` | Graceful shutdown — stops accepting, drains active connections |
| `SIGUSR1` | Prints per-worker stats (connections, bytes, errors, auth, DNS, histograms) |
| `SIGHUP` | Reload config file (auth credentials) |
```sh
# Dump stats while running
kill -USR1 $(pgrep zocks)
```

Five built-in observability areas, all zero-allocation in the hot path:
- Latency Histograms — Fixed 10-bucket histograms (<=1ms to >10s) for DNS resolution, SOCKS5 handshake, and total connection duration. Atomic counters, O(1) record.
- Enhanced Metrics — Per-worker atomic counters for errors (connect refused, DNS failed, timeout, buffer exhausted, protocol), auth (attempts/successes/failures), DNS (queries/successes/failures), and peak active connections.
- Connection Tracker — Intrusive linked list of active connections with atomic count. Zero allocation per tracking operation.
- Access Log — Per-worker ring buffer of completed connection records: target, bytes sent/received, duration, and close reason (success, DNS failed, timeout, idle, auth failed, etc.).
- HTTP Stats Endpoint — Optional HTTP server (`--stats-port`) with JSON metrics at `/stats`, liveness at `/health`, readiness at `/ready`. Runs on a separate thread. Suitable for Prometheus/Grafana scraping or k8s health probes.
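The fixed-bucket histogram design is easy to model. A Python sketch with assumed bucket boundaries (the exact bounds in `Histogram.zig` may differ):

```python
# Fixed 10-bucket latency histogram: nine upper bounds plus an overflow
# bucket; record() is a bounded scan over a constant-size array, so O(1).
BOUNDS_MS = [1, 5, 10, 50, 100, 500, 1000, 5000, 10000]

class Histogram:
    def __init__(self):
        self.counts = [0] * (len(BOUNDS_MS) + 1)  # 10 buckets total

    def record(self, ms: float):
        for i, bound in enumerate(BOUNDS_MS):
            if ms <= bound:
                self.counts[i] += 1
                return
        self.counts[-1] += 1  # the ">10s" overflow bucket

h = Histogram()
for sample_ms in (0.3, 7, 42, 12_000):
    h.record(sample_ms)
assert h.counts[0] == 1   # <=1ms
assert h.counts[2] == 1   # <=10ms (the 7ms sample)
assert h.counts[3] == 1   # <=50ms
assert h.counts[-1] == 1  # >10s
```

Fixed bounds mean no allocation or locking on record; in zocks the increments are atomic so each worker's histogram can be read while traffic flows.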
```sh
# Enable stats endpoint
./zig-out/bin/zocks --stats-port 9090

# Query stats
curl http://localhost:9090/stats | jq .

# Health check (k8s liveness probe)
curl http://localhost:9090/health

# Readiness check (returns 503 during shutdown)
curl http://localhost:9090/ready
```

Use `--config` / `-c` to load settings from a JSON file. CLI args override config file values.
```json
{
  "port": 1080,
  "bind": "0.0.0.0",
  "workers": 4,
  "buffers": 20000,
  "dns_threads": 8,
  "stats_port": 9090,
  "access_log_size": 2048,
  "auth": {
    "username": "admin",
    "password": "secret"
  }
}
```

```sh
# Start with config file
./zig-out/bin/zocks --config /etc/zocks.json

# CLI args override config file
./zig-out/bin/zocks --config /etc/zocks.json --port 8080

# Hot-reload auth credentials
kill -HUP $(pgrep zocks)
```

Hot-reloadable (via `SIGHUP`): auth credentials. Restart-required: `port`, `bind`, `workers`, `buffers`, `dns_threads`, `stats_port`.
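The hot-reload pattern can be mimicked in Python to show the mechanics (illustrative only — zocks implements this in `ConfigFile.zig`; the "only auth is reloadable" policy here mirrors the note above):

```python
import json, os, signal, tempfile

# A SIGHUP handler re-reads the JSON config and swaps in only the
# reloadable field (auth) -- what `kill -HUP $(pgrep zocks)` triggers.
state = {"auth": None}

cfg = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump({"port": 1080, "auth": {"username": "admin", "password": "secret"}}, cfg)
cfg.close()

def reload_auth(signum, frame):
    with open(cfg.name) as f:
        state["auth"] = json.load(f).get("auth")  # port etc. stay untouched

signal.signal(signal.SIGHUP, reload_auth)
os.kill(os.getpid(), signal.SIGHUP)
while state["auth"] is None:   # handler fires at the next bytecode boundary
    pass
os.unlink(cfg.name)
print(state["auth"]["username"])
```

Restart-required settings (ports, worker count, pool sizes) are bound at startup, which is why only credentials are swapped in place.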
- Memory: Connection lifecycle is split into two phases. During handshake, a temporary `HandshakeCtx` (920 bytes) holds protocol buffers. Once relay starts, it's freed. The relay-phase `Client` struct is 1,104 bytes (dominated by 4 xev completions at 256 bytes each). At 1M idle relay connections: ~1.1 GB.
- Backpressure: Each relay direction allows one buffer in flight. Reads pause after dispatching a write and resume when the write completes. Combined with TCP zero-window, this naturally throttles fast senders without accumulating kernel buffers.
- Timers: A shared timer wheel (13 buckets, 5s tick) replaces per-client timers. O(1) register/deregister/touch vs O(log N) heap operations. Saves 256 bytes per connection (one fewer `xev.Completion`).
- Scaling: Each CPU core runs its own event loop and accept queue. No user-space lock contention between workers. Note: `SO_REUSEPORT` on macOS distributes by source IP hash, so localhost connections go to a single worker. Remote traffic distributes across all cores.
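The timer-wheel scheme from the Timers note can be sketched as follows. This is a simplified model with an assumed interface, not the `TimerWheel.zig` API; the real wheel is intrusive and allocation-free:

```python
# 13-bucket timer wheel, 5 s tick: touching a connection moves it to the
# farthest bucket; one shared timer expires a single bucket per tick, so
# an untouched connection idles out after a full revolution (~65 s).
class TimerWheel:
    BUCKETS = 13
    TICK_SECS = 5  # wall-clock period of the single per-worker timer

    def __init__(self):
        self.buckets = [set() for _ in range(self.BUCKETS)]
        self.current = 0            # bucket the next tick will expire
        self.slot = {}              # client -> bucket index (O(1) remove)

    def touch(self, client):
        """(Re)arm a client's idle timeout in O(1)."""
        self.remove(client)
        idx = (self.current + self.BUCKETS - 1) % self.BUCKETS
        self.buckets[idx].add(client)
        self.slot[client] = idx

    def remove(self, client):
        idx = self.slot.pop(client, None)
        if idx is not None:
            self.buckets[idx].discard(client)

    def tick(self):
        """Expire one bucket; caller closes the returned idle clients."""
        expired, self.buckets[self.current] = self.buckets[self.current], set()
        for c in expired:
            del self.slot[c]
        self.current = (self.current + 1) % self.BUCKETS
        return expired

wheel = TimerWheel()
wheel.touch("conn-1")
for _ in range(TimerWheel.BUCKETS - 1):
    assert not wheel.tick()            # survives 12 ticks (~60 s)
assert wheel.tick() == {"conn-1"}      # idles out on the 13th
```

The trade-off versus a timer heap: expiry granularity is one tick (5s), but every operation is constant time and no per-connection `xev.Completion` is needed.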
Requires Zig 0.14+ (tested with 0.15.2).
```sh
# Debug build
zig build

# Release build (optimized)
zig build -Doptimize=ReleaseFast

# Run tests
zig build test
```

A curl-based load test harness is included:

```sh
# Start zocks first, then:
./bench/load_test.sh

# Customize
CONCURRENCY=200 PROXY_PORT=1080 TARGET_URL=http://httpbin.org/bytes/4096 ./bench/load_test.sh
```

The harness reports success/failure counts, requests/sec, bytes relayed, and RSS memory usage.
```
src/
  main.zig         Entry point, CLI args, worker spawning, signal handlers
  Worker.zig       Per-core worker: loop + pools + listener
  Client.zig       Connection state machine (handshake + relay phases)
  Socks5.zig       RFC 1928/1929 protocol parser and serializer
  BufferPool.zig   O(1) slab allocator with stack-based free list
  DnsPool.zig      Background DNS resolution with xev.Async notification
  TimerWheel.zig   Shared idle timeout wheel (replaces per-client timers)
  Histogram.zig    Fixed-bucket latency histogram (atomic, O(1))
  Metrics.zig      Per-worker error/auth/DNS counters + histograms
  ConnTracker.zig  Active connection tracking (intrusive linked list)
  AccessLog.zig    Ring buffer of completed connection records
  StatsServer.zig  HTTP stats/health/ready endpoint
  ConfigFile.zig   JSON config file parser with hot-reload
  config.zig       Compile-time + runtime configuration

bench/
  load_test.sh     Concurrent curl-based load test harness
  throughput.sh    Sequential throughput benchmark
```
MIT