This example demonstrates the distributed in-memory cache library in a read-heavy API scenario: sub-millisecond data propagation paired with high-throughput read operations.
- Sub-millisecond propagation from writer to all reader instances
- 1M+ req/s throughput with a zero-serialization architecture
- Real-time metrics via Prometheus and Grafana
- Automatic load balancing with APISIX gateway
- Bandwidth reduction using HTTP 304 caching
- No CPU cycles wasted on serialization
- No garbage collection pressure
- Lower read latency
```mermaid
graph TB
    Client[Client]
    Client --> APISIX[APISIX Gateway<br/>Port 9080<br/>Round-Robin Load Balancing]
    APISIX --> Reader1[Reader 1<br/>Port 80]
    APISIX --> Reader2[Reader 2<br/>Port 80]
    APISIX --> Reader3[Reader 3<br/>Port 80]
    Reader1 --> Redis[Redis Pub/Sub<br/>+ Cache Store<br/>Port 6379]
    Reader2 --> Redis
    Reader3 --> Redis
    Writer[Writer<br/>Port 8080] --> Redis

    style Client fill:#e1f5ff
    style APISIX fill:#fff4e6
    style Reader1 fill:#e8f5e9
    style Reader2 fill:#e8f5e9
    style Reader3 fill:#e8f5e9
    style Redis fill:#ffebee
    style Writer fill:#f3e5f5
```
- Reader services store pre-serialized `[]byte` data in memory
- No JSON marshal/unmarshal on the read path
- Direct byte copy from memory to the network socket
- Writer publishes posts via distributed cache
- Data propagates to all reader instances via Redis Pub/Sub
- Sub-millisecond propagation latency
- ETag-based caching using MD5 hashes
- Bandwidth optimization for repeat reads
- Client-side cache validation
- APISIX gateway with round-robin distribution
- Multiple reader instances for horizontal scaling
- Health checks and automatic failover
- Docker and Docker Compose
- Go 1.25+ (for running benchmark locally)
- 16GB+ RAM recommended
```shell
cd examples/heavy-read-api

# Start services
docker-compose up -d
# or
make start

# Wait for services to be ready (about 10 seconds)
sleep 10

# Run the interactive demo
./benchmark/run_benchmark.sh
```

Or simply:

```shell
cd examples/heavy-read-api
make start
```

This will start:
- Redis (port 6379)
- Writer service (port 8080)
- 3 Reader instances (load balanced)
- 1 Direct reader (port 8081 - for testing)
- APISIX Gateway (port 9080)
- Prometheus (port 9090)
- Grafana (port 3000)
```shell
# Check all services are running
docker-compose ps

# Or use the Makefile
make health
```

```shell
curl -X POST http://localhost:8080/create \
  -H "Content-Type: application/json" \
  -d '{
    "id": "post-1",
    "title": "My First Post",
    "content": "This is a test post demonstrating distributed cache propagation.",
    "author": "demo-user"
  }'
```
```shell
# Or use the Makefile
make test-write
```

```shell
# Via APISIX (load balanced across 3 readers)
curl "http://localhost:9080/post?id=post-1"

# Direct to a specific reader
curl "http://localhost:8081/post?id=post-1"

# With ETag support (needs to be uncommented in reader/main.go)
curl -H "If-None-Match: <hash-from-previous-response>" \
  "http://localhost:9080/post?id=post-1"
```

```shell
# Make sure services are running
make start

# Run the benchmark from the repository root
cd ./examples/heavy-read-api/benchmark && ./run_benchmark.sh
```

See your own results in `benchmark-report.md`.
- Open http://localhost:3000
- Login: `admin/admin`
- Add a Prometheus data source:
  - URL: `http://prometheus:9090`
- Import the APISIX dashboard (ID: 11719)
Access raw metrics at:
- APISIX: http://localhost:9091/apisix/prometheus/metrics
- Prometheus UI: http://localhost:9090
Edit `docker-compose.yaml`:

```yaml
reader:
  deploy:
    replicas: 5  # Increase from 3 to 5
```

Then restart:

```shell
docker-compose up -d --scale reader=5
```

Modify reader/writer initialization in their respective main.go files:
```go
cfg.LocalCacheConfig = dc.LocalCacheConfig{
	MaxSize:     100_000_000, // 100 MB
	NumCounters: 1_000_000,   // 1M counters
	BufferItems: 64,
}
```

- Accepts POST requests to create posts
- Serializes data once to `[]byte`
- Publishes to the distributed cache using the `Set()` method
- Data propagates to all readers via Redis Pub/Sub
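The writer's serialize-once flow can be sketched as below. The `publisher` interface and `fakeCache` are stand-ins invented for this sketch; only the `Set()` call mirrors the library's actual API, which in the real example also fans the bytes out over Redis Pub/Sub:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Post matches the payload accepted by the writer's /create endpoint.
type Post struct {
	ID      string `json:"id"`
	Title   string `json:"title"`
	Content string `json:"content"`
	Author  string `json:"author"`
}

// publisher abstracts the cache's write side for this sketch; the
// real library's Set() stores the value and publishes the update.
type publisher interface {
	Set(key string, value []byte) error
}

// publishPost serializes exactly once. Every reader then stores and
// serves these same bytes, so no instance re-marshals on reads.
func publishPost(p publisher, post Post) error {
	b, err := json.Marshal(post)
	if err != nil {
		return err
	}
	return p.Set(post.ID, b)
}

// fakeCache records what would be published, standing in for the
// real distributed cache.
type fakeCache map[string][]byte

func (f fakeCache) Set(key string, value []byte) error {
	f[key] = value
	return nil
}

func main() {
	c := fakeCache{}
	_ = publishPost(c, Post{ID: "post-1", Title: "My First Post", Author: "demo-user"})
	fmt.Println(string(c["post-1"]))
}
```

Serializing at write time shifts the CPU cost to the rare path (writes), which is what makes the 1000:1 read-to-write workloads below cheap.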
- Subscribe to distributed cache updates
- Store serialized bytes in local memory
- Serve requests with zero-copy writes
- Support HTTP 304 with ETag validation
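The reader side can be sketched as a local byte store fed by a subscription callback. The `store` type and `onUpdate` name are hypothetical, standing in for however the library delivers Pub/Sub updates; the sketch shows only the storage pattern:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a reader's local, in-memory copy of the pre-serialized
// posts. An RWMutex lets many concurrent reads proceed at once.
type store struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func newStore() *store { return &store{data: map[string][]byte{}} }

// onUpdate is what a subscription callback would do (the wiring to
// the library's actual subscription API is omitted): overwrite the
// local bytes whenever an update arrives from Redis Pub/Sub.
func (s *store) onUpdate(key string, value []byte) {
	s.mu.Lock()
	s.data[key] = value
	s.mu.Unlock()
}

// get returns cached bytes ready to be written to the socket.
func (s *store) get(key string) ([]byte, bool) {
	s.mu.RLock()
	b, ok := s.data[key]
	s.mu.RUnlock()
	return b, ok
}

func main() {
	s := newStore()
	s.onUpdate("post-1", []byte(`{"id":"post-1"}`)) // simulated Pub/Sub delivery
	if b, ok := s.get("post-1"); ok {
		fmt.Println(string(b)) // {"id":"post-1"}
	}
}
```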
- Standalone mode (no etcd dependency)
- Round-robin load balancing
- Request ID tracking
- Prometheus metrics export
```shell
docker-compose down -v
# or
make stop
```

```shell
# Check logs
docker-compose logs -f

# Restart a specific service
docker-compose restart reader
```

```shell
# Ensure Redis is healthy
docker-compose ps redis

# Check Redis connectivity
docker-compose exec redis redis-cli ping
```

```shell
# Ensure all services are up
docker-compose ps

# Wait for services to be ready
sleep 10

# Run the benchmark again
cd ./examples/heavy-read-api/benchmark && ./run_benchmark.sh
```

This architecture is ideal for:
- Social Media Feeds: High read-to-write ratio (1000:1+)
- Content Distribution: News articles, blog posts
- Product Catalogs: E-commerce product listings
- Configuration Services: Application settings, feature flags
- Leaderboards: Gaming scores, rankings
| Metric | Value |
|---|---|
| Propagation Latency (P99) | < 100ms |
| Read Latency (P99) | < 5ms |
| Throughput | 1M+ req/s |
| Memory per Reader | ~500MB (for 1m posts) |
| Network Bandwidth | 20% reduction with HTTP 304 |
This example is part of the distributed-cache library.