Open-source AI WAF for self-hosted apps
Gatewarden is an open-source AI WAF for self-hosted apps.
It sits in front of your services, consumes trusted identity headers from your auth layer, applies deterministic enforcement, and uses AI in a reviewable advisory lane for rule suggestions, event analysis, and operator workflows.
- Protects admin and login surfaces with deterministic checks
- Acts as an AI-assisted WAF for self-hosted and internal applications
- Works well with Caddy `forward_auth`
- Reuses identity context from TinyAuth, oauth2-proxy, or other OIDC-aware front layers
- Stores events, rules, approvals, and settings in SQLite or PostgreSQL
- Exposes a web console for events, rules, approvals, settings, status codes, and latency
- Keeps AI in an advisory role instead of letting it block requests directly
Most self-hosted teams already have:
- a reverse proxy
- an identity layer
- a few fragile path rules
- scattered logs
Gatewarden gives those teams one place to:
- enforce basic security decisions
- inspect what happened
- review rule changes
- add AI-assisted analysis without giving up deterministic control
Gatewarden is early, but already usable as a local or single-node OSS AI WAF deployment.
Current OSS scope:
- Caddy-first integration
- trusted-header identity mapping
- basic login rate limiting
- admin-path protection
- AI-assisted rule suggestions and operator review flows
- console pages for dashboard, events, rules, suggestions, approvals, and settings
- structured observability for status codes and response time from Caddy access logs
- SQLite for lightweight single-node setups and PostgreSQL for production-backed persistence
Not finished yet:
- full OIDC relying-party login for the console
- multi-node sync
- advanced enterprise audit and collaboration workflows
- richer AI rule generation pipeline
Main directories:
- `app/` - Rust HTTP service with `forward_auth`, console API, policy evaluation, and log ingest
- `crates/` - shared core, gateway, policy, rate-limit, and Caddy integration crates
- `web/` - Next.js admin console
- `gatewarden.yaml` - runtime configuration
- Create a working directory and place these files inside it:
  - `docker-compose.yaml`
  - `gatewarden.yaml`
- Start Gatewarden:
  `docker compose up -d`
- Open the web console in your browser: `http://127.0.0.1:3000`
- Point Caddy `forward_auth` to: `http://127.0.0.1:4000`
Port roles:
- `3000`: browser-facing web console
- `4000`: Gatewarden API and `forward_auth` endpoint for Caddy
Default mounts:
- `./gatewarden.yaml` -> `/config/gatewarden.yaml`
- `./docker-data` -> `/opt/gatewarden/app/data`
The default compose setup:
- pulls `ghcr.io/limitcool/gatewarden:latest`
- uses SQLite inside `./docker-data`
- does not require you to set `CONSOLE_API_BASE_URL`
It pulls the published image from `ghcr.io/limitcool/gatewarden:latest`.
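The quick start above expects a `docker-compose.yaml` next to `gatewarden.yaml`. The published compose file is authoritative; the sketch below is only an approximation, where the service name, internal container ports, and `restart` policy are assumptions built from the defaults listed above:

```yaml
services:
  gatewarden:
    image: ghcr.io/limitcool/gatewarden:latest
    ports:
      - "3000:3000"   # browser-facing web console (internal port assumed)
      - "4000:4000"   # API / forward_auth endpoint for Caddy (internal port assumed)
    volumes:
      - ./gatewarden.yaml:/config/gatewarden.yaml
      - ./docker-data:/opt/gatewarden/app/data
    restart: unless-stopped
```

If your copy of the compose file differs, prefer it over this sketch.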
Run the API service locally:

`cargo run -p gwaf`

Default address: `127.0.0.1:4000`
Run the web console locally:

```shell
pnpm install
pnpm --dir web run dev
```

Default address: `http://127.0.0.1:3010`
SQLite remains the default quick-start database (`mode=rwc` opens the file read-write and creates it if missing):
```yaml
database:
  url: "sqlite://app/data/ingress.db?mode=rwc"
```

For production or external persistence, use PostgreSQL:
```yaml
database:
  url: "postgres://gatewarden:change-me@127.0.0.1:5432/gatewarden"
```

Development checks:

```shell
cargo check
cargo test
pnpm --dir web run check
pnpm --dir web run build
```

Gatewarden is designed to work behind a real auth layer as an AI-assisted WAF with deterministic enforcement.
If you already have OIDC:
- keep your current OIDC provider or auth proxy
- let that layer authenticate the user first
- have it emit trusted headers such as `Remote-User`, `Remote-Email`, `Remote-Groups`, `X-Auth-Provider`, and `X-Authenticated`
- point `identity.trusted_headers.*` in `gatewarden.yaml` at those real header names
Gatewarden does not need to replace your existing OIDC flow for this model. It consumes trusted identity context after authentication.
Deployment model:
- browser -> `http://127.0.0.1:3000`
- Caddy `forward_auth` -> `http://127.0.0.1:4000/api/forward-auth`
- your upstream app stays behind Caddy as usual
This is why both 3000 and 4000 exist:
- `3000` is the user-facing console
- `4000` is the internal Gatewarden API surface that Caddy calls
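The split matters because Caddy's `forward_auth` contract is simple: a 2xx response from `/api/forward-auth` on port 4000 means allow, anything else means deny. The sketch below is a made-up stand-in that only illustrates that header-in / verdict-out shape; the real decisions come from Gatewarden's policy engine, not logic like this:

```shell
# Toy stand-in for the forward_auth verdict: 2xx allows, anything else denies.
# The /admin prefix mirrors security.admin_shadow_prefixes from the config.
verdict() {
  remote_user="$1"; path="$2"
  # An unauthenticated request to a shadowed admin prefix is denied.
  if [ -z "$remote_user" ] && [ "${path#/admin}" != "$path" ]; then
    echo "401"
  else
    echo "200"
  fi
}

verdict ""      "/admin/users"   # prints 401
verdict "alice" "/admin/users"   # prints 200
```

In the real deployment the identity comes from the trusted headers (`Remote-User` and friends), not from shell arguments.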
Reusable Caddyfile snippet:
```caddyfile
(gatewarden_forward_auth) {
    forward_auth http://127.0.0.1:4000 {
        uri /api/forward-auth
        copy_headers Remote-User Remote-Email Remote-Groups X-Auth-Provider X-Authenticated X-Request-Id
    }
}
```

Structured access log example for status code, host, request ID, user agent, and latency ingestion:
```caddyfile
{
    log {
        output file /var/log/caddy/access.jsonl
        format json
    }
}

app.example.com {
    log {
        output file /var/log/caddy/access.jsonl
        format json
    }

    import gatewarden_forward_auth

    reverse_proxy http://127.0.0.1:8080 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-Uri {uri}
    }
}
```

Then enable log ingestion in `gatewarden.yaml` and point it at the same JSON log file path:
```yaml
observability:
  caddy_access_log:
    enabled: true
    path: "app/data/caddy-access.jsonl"
    poll_interval_ms: 1000
```

Minimal usage example:
```caddyfile
app.example.com {
    import gatewarden_forward_auth

    reverse_proxy http://127.0.0.1:8080 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-Uri {uri}
    }
}
```

Example with a dedicated auth host and two protected apps:
```caddyfile
(gatewarden_forward_auth) {
    forward_auth http://127.0.0.1:4000 {
        uri /api/forward-auth
        copy_headers Remote-User Remote-Email Remote-Groups X-Auth-Provider X-Authenticated X-Request-Id
    }
}

auth.example.com {
    reverse_proxy http://127.0.0.1:9000
}

app.example.com {
    import gatewarden_forward_auth

    reverse_proxy http://127.0.0.1:8080 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-Uri {uri}
    }
}

accounts.example.com {
    import gatewarden_forward_auth

    reverse_proxy http://127.0.0.1:8081 {
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Proto {scheme}
        header_up X-Forwarded-Host {host}
        header_up X-Forwarded-Uri {uri}
    }
}
```

To expose status code and latency metrics, enable structured Caddy access logs and point Gatewarden at that file in `gatewarden.yaml`.
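Each line of that JSONL file is one JSON object. The field names below (`status`, `duration`, `request.host`, `request.uri`) follow Caddy's standard JSON access-log encoder; the values are invented for illustration. A quick way to eyeball status codes before enabling ingestion:

```shell
# One illustrative line of Caddy's JSON access log (values invented).
line='{"ts":1719000000.1,"status":200,"duration":0.0042,"request":{"host":"app.example.com","uri":"/api/login"}}'

# Pull the status code out of the line without extra tooling.
echo "$line" | grep -o '"status":[0-9]*'   # prints "status":200
```

Gatewarden's own ingestion does the parsing for you; this is only a sanity check that the log file contains what you expect.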
The project uses a YAML runtime config: `gatewarden.yaml`.
Port mapping before you edit the config:
- browser console: `http://127.0.0.1:3000`
- Gatewarden API and Caddy `forward_auth`: `http://127.0.0.1:4000`
- `server.listen_addr` in `gatewarden.yaml` refers to the Gatewarden API listener on `4000`, not the browser console port
- if you use the default Docker Compose setup, the console port is handled by the container image and compose file, not by `gatewarden.yaml`
Complete example:
```yaml
server:
  listen_addr: "127.0.0.1:4000"

database:
  url: "sqlite://app/data/ingress.db?mode=rwc"
  # Production example:
  # url: "postgres://gatewarden:change-me@127.0.0.1:5432/gatewarden"

identity:
  mode: "trusted_header"
  provider_hint: "external-oidc"
  trusted_headers:
    authenticated: "X-Authenticated"
    subject: "Remote-User"
    email: "Remote-Email"
    groups: "Remote-Groups"
    provider: "X-Auth-Provider"

security:
  admin_shadow_prefixes:
    - "/admin"
  login_ip_limit:
    rule_id: "protect-login-ip"
    path_prefix: "/api/login"
    rps: 5
    burst: 10
  login_user_limit:
    rule_id: "protect-login-user"
    path_prefix: "/api/login"
    rps: 3
    burst: 6
  console_admin_groups:
    - "admin"
  protected_hosts:
    - "app.example.com"
    - "accounts.example.com"

ai:
  enabled: false
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  # Optional for OpenAI-compatible gateways:
  # base_url: "https://api.openai.com/v1"
  timeout_ms: 15000
  system_prompt: "You are Gatewarden, an AI security analyst. Produce concise, evidence-based, operator-reviewable guidance."

observability:
  caddy_access_log:
    enabled: true
    path: "app/data/caddy-access.jsonl"
    poll_interval_ms: 1000
  geoip:
    enabled: false
    database_path: "app/data/GeoLite2-City.mmdb"
```

Important sections:
- `server.listen_addr`
- `database.url`
- `identity.trusted_headers.*`
- `security.admin_shadow_prefixes`
- `security.login_ip_limit.*`
- `security.login_user_limit.*`
- `security.console_admin_groups`
- `ai.enabled`
- `ai.provider`
- `ai.model`
- `ai.api_key_env`
- `ai.base_url`
- `ai.timeout_ms`
- `observability.caddy_access_log.*`
- `observability.geoip.*`
Gatewarden now supports real model-backed advisory workflows for:
- asynchronous AI rule suggestions
- per-request AI explanation in the events view
Supported provider values:
`openai`, `anthropic`, `gemini`, `groq`, `deepseek`, `xai`, `ollama`
The default path is:
- `provider: "openai"`
- `model: "gpt-4.1-mini"`
- `api_key_env: "GATEWARDEN_AI_API_KEY"`
To use the official OpenAI API:
```yaml
ai:
  enabled: true
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  base_url: "https://api.openai.com/v1"
  timeout_ms: 15000
```

To use an OpenAI-compatible gateway such as OpenRouter, a relay, or your own proxy:
```yaml
ai:
  enabled: true
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  base_url: "https://your-openai-compatible-endpoint/v1"
  timeout_ms: 15000
```

Environment example (PowerShell):

```powershell
$env:GATEWARDEN_AI_API_KEY="your-api-key"
```

Important boundary:
- AI stays in the advisory lane
- published enforcement still requires human approval
- realtime blocking and rate limiting remain deterministic
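The environment example above uses PowerShell's `$env:` syntax. On Linux/macOS shells, assuming the default `api_key_env` name from the config, the equivalent is a plain `export`:

```shell
# POSIX-shell equivalent of the PowerShell $env: line above.
export GATEWARDEN_AI_API_KEY="your-api-key"
```

Set it in the environment of the Gatewarden process (or the compose service) rather than committing the key to `gatewarden.yaml`.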
Gatewarden follows a few explicit rules:
- Caddy-first
- deterministic enforcement
- trusted identity headers
- AI advisory only
That means:
- identity should come from a trusted upstream auth layer
- blocking and rate limiting remain deterministic and auditable
- AI suggestions stay reviewable before becoming active policy
This repository is the OSS mainline.
- License: `AGPL-3.0-only`
- OSS repository: `gatewarden`
- Commercial add-ons: private `gatewarden-enterprise`
If you need closed-source deployment, OEM/white-label rights, commercial support, or enterprise-only features, see COMMERCIAL.md.
The repository includes:
- a reusable project mark for GitHub, docs, and product UI
- an application icon for the web console
- live screenshots from the current OSS dashboard


