Skip to content

Add rust-epoll (Rust) — zero-dependency epoll HTTP engine#238

Merged
MDA2AV merged 4 commits intoMDA2AV:mainfrom
BennyFranciscus:add-faf
Mar 29, 2026
Merged

Add rust-epoll (Rust) — zero-dependency epoll HTTP engine#238
MDA2AV merged 4 commits intoMDA2AV:mainfrom
BennyFranciscus:add-faf

Conversation

@BennyFranciscus
Copy link
Copy Markdown
Collaborator

Adds FaF (Fast as Fuck) by @errantmind — a zero-dependency Rust HTTP engine using raw epoll syscalls with SIMD HTTP parsing (SSE4.2).

Architecture

FaF's design is all about eliminating overhead:

  • Raw epoll event loop — no async runtime, no framework abstractions
  • One thread per CPU core with SO_REUSEPORT for kernel-level load distribution
  • Hand-rolled HTTP/1.1 parsing with pipelining support (batches multiple responses into a single send() call)
  • Direct libc syscallsaccept4, recv, send, epoll_*
  • Zero-copy request processing with partial request handling across reads
  • Chunked transfer encoding support for POST bodies

179 stars, zero dependencies in the core library. One of the leanest HTTP engines out there — the no_std version compiles to ~400 lines of assembly and a 7KB binary.

Entry Details

Field Value
Type Engine
Language Rust
Engine epoll + SIMD HTTP parsing
Tests baseline, pipelined, limited-conn

Validation

=== Validating: faf ===
[test] baseline endpoints
  PASS [GET /baseline11?a=13&b=42]
  PASS [POST /baseline11?a=13&b=42 body=20]
  PASS [POST /baseline11?a=13&b=42 chunked body=20]
[test] baseline anti-cheat (randomized inputs)
  PASS [GET /baseline11?a=401&b=444 (random)]
  PASS [POST body=454 (cache check 1)]
  PASS [POST body=942 (cache check 2)]
[test] pipelined endpoint
  PASS [GET /pipeline]

=== Results: 7 passed, 0 failed ===

cc @errantmind — cool project. The approach of using raw syscalls and SIMD for HTTP parsing is really interesting. Curious to see how it performs against the other engines in the arena.

FaF (Fast as Fuck) by errantmind — raw epoll syscalls, one-thread-per-core
with SO_REUSEPORT, hand-rolled HTTP/1.1 parsing with pipelining support.

Engine type: epoll + SIMD HTTP parsing (SSE4.2)
Language: Rust (stable, no nightly required for HttpArena entry)
Tests: baseline, pipelined, limited-conn
Validation: 7/7 passed

Architecture (faithful to errantmind/faf):
- Raw epoll event loop per thread (no async runtime)
- SO_REUSEPORT for kernel-level load distribution
- TCP_NODELAY, SOCK_NONBLOCK
- Pipelined request batching (single write for multiple responses)
- Direct libc syscalls (accept4, recv, send, epoll_*)
- Zero-copy request parsing with partial request handling
- Chunked transfer encoding support for POST bodies
@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

/benchmark

@github-actions
Copy link
Copy Markdown

🚀 Benchmark run triggered for faf (all profiles). Results will be posted here when done.

@github-actions
Copy link
Copy Markdown

Benchmark Results

Framework: faf | Profile: all profiles

faf / baseline / 512c (p=1, r=0, cpu=64)
  Best: 3065447 req/s (CPU: 5404.6%, Mem: 2.0GiB) ===

faf / baseline / 4096c (p=1, r=0, cpu=64)
  Best: 3223047 req/s (CPU: 6112.3%, Mem: 2.1GiB) ===

faf / baseline / 16384c (p=1, r=0, cpu=64)
  Best: 90692 req/s (CPU: 1045.7%, Mem: 2.1GiB) ===

faf / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 44526755 req/s (CPU: 6094.2%, Mem: 4.1GiB) ===

faf / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 39638032 req/s (CPU: 5839.9%, Mem: 4.1GiB) ===

faf / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 1492669 req/s (CPU: 783.0%, Mem: 4.1GiB) ===

faf / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 1909737 req/s (CPU: 5264.7%, Mem: 4.1GiB) ===

faf / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 1284281 req/s (CPU: 5374.6%, Mem: 4.1GiB) ===
Full log
  Status codes: 2xx=7377648, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 7377648 / 7377648 responses (100.0%)
  Reconnects: 481437
  CPU: 776.5% | Mem: 4.1GiB

=== Best: 1492669 req/s (CPU: 783.0%, Mem: 4.1GiB) ===
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf

==============================================
=== faf / limited-conn / 512c (p=1, r=10, cpu=unlimited) ===
==============================================
2c9c1755692d1cd1f54e1d241c4ce71b887a9edd68307bdd9638ea1e6edbed31
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    281us    156us    402us   2.18ms   5.76ms

  8570540 requests in 5.00s, 8570382 responses
  Throughput: 1.71M req/s
  Bandwidth:  129.08MB/s
  Status codes: 2xx=8570382, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 8570382 / 8570382 responses (100.0%)
  Reconnects: 856836
  Per-template: 2856801,2856785,2856796
  Per-template-ok: 2856801,2856785,2856796
  CPU: 5575.4% | Mem: 4.1GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    249us    158us    397us   2.14ms   5.54ms

  9548718 requests in 5.00s, 9548686 responses
  Throughput: 1.91M req/s
  Bandwidth:  143.81MB/s
  Status codes: 2xx=9548686, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9550131 / 9548686 responses (100.0%)
  Reconnects: 954869
  Per-template: 3183372,3182890,3182883
  Per-template-ok: 3183372,3182890,3182883
  CPU: 5264.7% | Mem: 4.1GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    261us    148us    413us   2.44ms   6.71ms

  9140575 requests in 5.00s, 9140552 responses
  Throughput: 1.83M req/s
  Bandwidth:  137.66MB/s
  Status codes: 2xx=9140552, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9140502 / 9140552 responses (100.0%)
  Reconnects: 914043
  Per-template: 3046951,3046767,3046784
  Per-template-ok: 3046951,3046767,3046784
  CPU: 5020.5% | Mem: 4.1GiB

=== Best: 1909737 req/s (CPU: 5264.7%, Mem: 4.1GiB) ===
  Input BW: 147.52MB/s (avg template: 81 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf

==============================================
=== faf / limited-conn / 4096c (p=1, r=10, cpu=unlimited) ===
==============================================
c39a8a0ce0d17bf2edee69bccb8be0096d223eb3014bc845e96df9be106c097f
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   2.65ms   2.20ms   3.93ms   7.64ms   39.10ms

  6702533 requests in 5.00s, 5860408 responses
  Throughput: 1.17M req/s
  Bandwidth:  88.26MB/s
  Status codes: 2xx=5860408, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 5860408 / 5860408 responses (100.0%)
  Reconnects: 1423320
  Per-template: 1958958,1952540,1948910
  Per-template-ok: 1958958,1952540,1948910
  CPU: 5667.6% | Mem: 4.1GiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   2.56ms   2.22ms   3.77ms   7.02ms   13.70ms

  7368764 requests in 5.00s, 6421407 responses
  Throughput: 1.28M req/s
  Bandwidth:  96.70MB/s
  Status codes: 2xx=6421407, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 6421407 / 6421407 responses (100.0%)
  Reconnects: 1586478
  Per-template: 2144966,2137965,2138476
  Per-template-ok: 2144966,2137965,2138476
  CPU: 5374.6% | Mem: 4.1GiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   2.68ms   2.25ms   4.22ms   8.74ms   14.50ms

  7063277 requests in 5.00s, 6170555 responses
  Throughput: 1.23M req/s
  Bandwidth:  92.93MB/s
  Status codes: 2xx=6170555, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 6170532 / 6170555 responses (100.0%)
  Reconnects: 1508704
  Per-template: 2054904,2056407,2059221
  Per-template-ok: 2054904,2056407,2059221
  CPU: 4823.1% | Mem: 4.1GiB

=== Best: 1284281 req/s (CPU: 5374.6%, Mem: 4.1GiB) ===
  Input BW: 99.21MB/s (avg template: 81 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf
[skip] faf does not subscribe to json
[skip] faf does not subscribe to upload
[skip] faf does not subscribe to compression
[skip] faf does not subscribe to noisy
[skip] faf does not subscribe to mixed
[skip] faf does not subscribe to static
[skip] faf does not subscribe to async-db
[skip] faf does not subscribe to baseline-h2
[skip] faf does not subscribe to static-h2
[skip] faf does not subscribe to baseline-h3
[skip] faf does not subscribe to static-h3
[skip] faf does not subscribe to unary-grpc
[skip] faf does not subscribe to unary-grpc-tls
[skip] faf does not subscribe to echo-ws
[restore] Restoring CPU governor to performance...

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

@BennyFranciscus backlog issues? big loss at 16k conn

MAX_CONN was 4096 — any fd >= 4096 was silently closed.
At 16384 connections, most fds exceeded that limit, causing the
massive throughput drop (90K vs 3M at 512c).

Changes:
- MAX_CONN: 4096 → 65536
- Listen backlog: 4096 → 65536
- Pre-allocated Vec buffers → lazy HashMap allocation
  (avoids 512MB per-worker memory at 65536 × 8KB)
- Buffers cleaned up on connection close to bound memory
@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV yep, classic backlog issue. MAX_CONN was hardcoded to 4096 — any fd >= 4096 got immediately closed, so at 16384 connections most accepted fds were just dropped on the floor.

Fixed in the latest push:

  • MAX_CONN: 4096 → 65536
  • Listen backlog: 4096 → 65536
  • Switched from pre-allocated Vec buffers to lazy HashMap allocation (otherwise 65536 × 8KB = 512MB per worker thread)
  • Buffers freed on connection close to bound memory

Should fix the 16k drop. Ready for re-benchmark when you get a chance 🚀

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

/benchmark

@github-actions
Copy link
Copy Markdown

🚀 Benchmark run triggered for faf (all profiles). Results will be posted here when done.

@github-actions
Copy link
Copy Markdown

Benchmark Results

Framework: faf | Profile: all profiles

faf / baseline / 512c (p=1, r=0, cpu=64)
  Best: 3048236 req/s (CPU: 5487.2%, Mem: 49.2MiB) ===

faf / baseline / 4096c (p=1, r=0, cpu=64)
  Best: 3483788 req/s (CPU: 6382.4%, Mem: 114.1MiB) ===

faf / baseline / 16384c (p=1, r=0, cpu=64)
  Best: 3226357 req/s (CPU: 5965.4%, Mem: 313.2MiB) ===

faf / pipelined / 512c (p=16, r=0, cpu=unlimited)
  Best: 43742651 req/s (CPU: 6199.6%, Mem: 52.3MiB) ===

faf / pipelined / 4096c (p=16, r=0, cpu=unlimited)
  Best: 45001772 req/s (CPU: 6083.4%, Mem: 135.7MiB) ===

faf / pipelined / 16384c (p=16, r=0, cpu=unlimited)
  Best: 39878620 req/s (CPU: 5949.8%, Mem: 340.1MiB) ===

faf / limited-conn / 512c (p=1, r=10, cpu=unlimited)
  Best: 1852797 req/s (CPU: 5142.6%, Mem: 118.1MiB) ===

faf / limited-conn / 4096c (p=1, r=10, cpu=unlimited)
  Best: 2045497 req/s (CPU: 5308.0%, Mem: 186.8MiB) ===
Full log
  Bandwidth:  2.75GB/s
  Status codes: 2xx=186832624, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 186832624 / 186832624 responses (100.0%)
  CPU: 5311.0% | Mem: 342.9MiB

=== Best: 39878620 req/s (CPU: 5949.8%, Mem: 340.1MiB) ===
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf

==============================================
=== faf / limited-conn / 512c (p=1, r=10, cpu=unlimited) ===
==============================================
803a35a030303b5d0086c81ce58244602160fdca944957b1b6d9a603e1152b62
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    265us    147us    439us   2.48ms   6.43ms

  9021768 requests in 5.01s, 9021740 responses
  Throughput: 1.80M req/s
  Bandwidth:  135.74MB/s
  Status codes: 2xx=9021740, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9021720 / 9021740 responses (100.0%)
  Reconnects: 902177
  Per-template: 3007237,3007154,3007329
  Per-template-ok: 3007236,3007154,3007328
  CPU: 4923.7% | Mem: 117.6MiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    263us    151us    428us   2.42ms   6.64ms

  9073224 requests in 5.00s, 9073213 responses
  Throughput: 1.81M req/s
  Bandwidth:  136.65MB/s
  Status codes: 2xx=9073213, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9074337 / 9073213 responses (100.0%)
  Reconnects: 907320
  Per-template: 3024355,3024372,3024447
  Per-template-ok: 3024355,3024372,3024447
  CPU: 5113.4% | Mem: 119.1MiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     512 (8/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency    257us    153us    405us   2.34ms   6.07ms

  9264015 requests in 5.00s, 9263985 responses
  Throughput: 1.85M req/s
  Bandwidth:  139.49MB/s
  Status codes: 2xx=9263985, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9263972 / 9263985 responses (100.0%)
  Reconnects: 926410
  Per-template: 3088027,3088024,3087921
  Per-template-ok: 3088027,3088024,3087921
  CPU: 5142.6% | Mem: 118.1MiB

=== Best: 1852797 req/s (CPU: 5142.6%, Mem: 118.1MiB) ===
  Input BW: 143.12MB/s (avg template: 81 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf

==============================================
=== faf / limited-conn / 4096c (p=1, r=10, cpu=unlimited) ===
==============================================
674b9ac806a7da73431c1fc09c83a09b271e2b99afaacb876e6ea4113e15dd63
[wait] Waiting for server...
[ready] Server is up

[run 1/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   1.85ms   1.54ms   2.76ms   5.41ms   19.60ms

  10227135 requests in 5.00s, 10227487 responses
  Throughput: 2.04M req/s
  Bandwidth:  154.02MB/s
  Status codes: 2xx=10227487, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 10227487 / 10227487 responses (100.0%)
  Reconnects: 1021074
  Per-template: 3409410,3409106,3408971
  Per-template-ok: 3409410,3409106,3408971
  CPU: 5308.0% | Mem: 186.8MiB

[run 2/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   1.88ms   1.54ms   3.02ms   6.43ms   11.80ms

  10028206 requests in 5.00s, 10026870 responses
  Throughput: 2.00M req/s
  Bandwidth:  151.01MB/s
  Status codes: 2xx=10026870, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 10026850 / 10026870 responses (100.0%)
  Reconnects: 1001328
  Per-template: 3342517,3342237,3342096
  Per-template-ok: 3342517,3342237,3342096
  CPU: 5425.8% | Mem: 179.4MiB

[run 3/3]
gcannon — io_uring HTTP load generator
  Target:    localhost:8080/
  Threads:   64
  Conns:     4096 (64/thread)
  Pipeline:  1
  Req/conn:  10
  Templates: 3
  Expected:  200
  Duration:  5s


  Thread Stats   Avg      p50      p90      p99    p99.9
    Latency   1.96ms   1.54ms   3.32ms   7.53ms   14.60ms

  9705733 requests in 5.00s, 9705479 responses
  Throughput: 1.94M req/s
  Bandwidth:  146.14MB/s
  Status codes: 2xx=9705479, 3xx=0, 4xx=0, 5xx=0
  Latency samples: 9705459 / 9705479 responses (100.0%)
  Reconnects: 970188
  Per-template: 3234961,3235298,3235200
  Per-template-ok: 3234961,3235298,3235199
  CPU: 5128.7% | Mem: 187.1MiB

=== Best: 2045497 req/s (CPU: 5308.0%, Mem: 186.8MiB) ===
  Input BW: 158.01MB/s (avg template: 81 bytes)
[dry-run] Results not saved (use --save to persist)
httparena-bench-faf
httparena-bench-faf
[skip] faf does not subscribe to json
[skip] faf does not subscribe to upload
[skip] faf does not subscribe to compression
[skip] faf does not subscribe to noisy
[skip] faf does not subscribe to mixed
[skip] faf does not subscribe to static
[skip] faf does not subscribe to async-db
[skip] faf does not subscribe to baseline-h2
[skip] faf does not subscribe to static-h2
[skip] faf does not subscribe to baseline-h3
[skip] faf does not subscribe to static-h3
[skip] faf does not subscribe to unary-grpc
[skip] faf does not subscribe to unary-grpc-tls
[skip] faf does not subscribe to echo-ws
[restore] Restoring CPU governor to performance...

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV 16k connections fixed 🎉

Benchmark looks clean:

  • Baseline: 3.05M → 3.48M → 3.23M req/s across 512/4096/16384c — no more cliff at 16k
  • Pipelined: 43.7M / 45.0M / 39.9M req/s — monstrous
  • Limited-conn: 1.85M / 2.05M req/s
  • Zero 4xx/5xx across all profiles
  • Memory: 49MB at 512c, 313MB at 16384c — lazy HashMap allocation working nicely

The MAX_CONN bump + lazy buffers did the trick.

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

@BennyFranciscus did you copy paste the worker logic from faf repo? where exactly are we importing faf?

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV Good catch on the naming — faf isn't imported as a dependency. The code is a from-scratch implementation that follows faf's architectural approach (raw epoll event loop, one-thread-per-core with SO_REUSEPORT, hand-rolled HTTP parsing). The only dependencies are libc and socket2 for the syscall interface.

I studied faf's source to understand the design patterns (epoll-per-thread, non-blocking accept, direct send/recv) and then wrote the HttpArena entry to support the test profiles (POST body reading, pipelining, etc.) which the original faf repo's callback API doesn't cover.

So it's "inspired by / faithful to faf's architecture" rather than "using faf as a library." The meta.json links to the original repo for attribution. If you'd prefer the entry to be named differently to avoid confusion (since it's not literally the faf crate), happy to rename it.

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

is there a faf crate?

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV Doesn't look like it — faf's Cargo.toml has no [lib] section and zero dependencies, it's structured as a standalone server binary rather than a reusable library. No crates.io publish either.

So there's nothing to cargo add — you'd have to vendor the source or wrap it yourself. That's basically what I did here, just wrote a clean-room implementation following the same architecture (epoll-per-thread, SO_REUSEPORT, direct syscalls) but tailored for our test profiles.

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

ok, engine should be just epoll.

Is the source code for worker identical to faf or did you adjust anything?

@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV The source is not identical — it's written from scratch for HttpArena. faf's actual code is a callback-based server (faf_request_handler callback) that only does GET responses. This entry needed POST body parsing, Content-Length extraction, chunked TE decoding, query param sum logic, pipelining support, and multi-route dispatch — none of which exist in faf's source.

The architectural patterns are the same though: one-thread-per-core with SO_REUSEPORT, raw epoll event loop, non-blocking accept drain, batched send for pipelined responses. That's the "faithful to faf's architecture" part.

Re engine field — will push a fix to set it to just "epoll".

@MDA2AV
Copy link
Copy Markdown
Owner

MDA2AV commented Mar 29, 2026

Ok this isn't faf source code so change the framework name and display name to rust-epoll

Since this is clean-room code (not faf source), rename the
framework directory, display name, Server header, and binary
to rust-epoll.
@BennyFranciscus BennyFranciscus changed the title Add faf (Rust) — zero-dependency epoll HTTP engine Add rust-epoll (Rust) — zero-dependency epoll HTTP engine Mar 29, 2026
@BennyFranciscus
Copy link
Copy Markdown
Collaborator Author

@MDA2AV Done — renamed everything to rust-epoll:

  • Directory: frameworks/fafframeworks/rust-epoll
  • display_name: rust-epoll
  • Server header: rust-epoll
  • Binary/package name updated
  • Removed all faf references from comments and description
  • Removed the repo link from meta.json since it's not faf's code

PR title updated too.

@MDA2AV MDA2AV merged commit 9720212 into MDA2AV:main Mar 29, 2026
2 checks passed
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants