Gatewarden

English | 简体中文


Gatewarden logo

Open-source AI WAF for self-hosted apps

Gatewarden dashboard overview

Overview

Gatewarden is an open-source AI WAF for self-hosted apps.

It sits in front of your services, consumes trusted identity headers from your auth layer, applies deterministic enforcement, and uses AI in a reviewable advisory lane for rule suggestions, event analysis, and operator workflows.

What This AI WAF Does

  • Protects admin and login surfaces with deterministic checks
  • Acts as an AI-assisted WAF for self-hosted and internal applications
  • Works well with Caddy forward_auth
  • Reuses identity context from TinyAuth, oauth2-proxy, or other OIDC-aware front layers
  • Stores events, rules, approvals, and settings in SQLite or PostgreSQL
  • Exposes a web console for events, rules, approvals, settings, status codes, and latency
  • Keeps AI in an advisory role instead of letting it block requests directly

Why Gatewarden

Most self-hosted teams already have:

  • a reverse proxy
  • an identity layer
  • a few fragile path rules
  • scattered logs

Gatewarden gives those teams one place to:

  • enforce basic security decisions
  • inspect what happened
  • review rule changes
  • add AI-assisted analysis without giving up deterministic control

Screenshots

Dashboard

Gatewarden Dashboard

Events

Gatewarden Events

Rules

Gatewarden Rules

Project Status

Gatewarden is early-stage, but already usable for local or single-node OSS AI WAF deployments.

Current OSS scope:

  • Caddy-first integration
  • trusted-header identity mapping
  • basic login rate limiting
  • admin-path protection
  • AI-assisted rule suggestions and operator review flows
  • console pages for dashboard, events, rules, suggestions, approvals, and settings
  • structured observability for status codes and response time from Caddy access logs
  • SQLite for lightweight single-node setups and PostgreSQL for production-backed persistence

Not finished yet:

  • full OIDC relying-party login for the console
  • multi-node sync
  • advanced enterprise audit and collaboration workflows
  • richer AI rule generation pipeline

Architecture

Main directories:

  • app/ - Rust HTTP service with forward_auth, console API, policy evaluation, and log ingest
  • crates/ - shared core, gateway, policy, rate-limit, and Caddy integration crates
  • web/ - Next.js admin console
  • gatewarden.yaml - runtime configuration

Quick Start

Docker Compose Setup

  1. Create a working directory and place these files inside it:
  • docker-compose.yaml
  • gatewarden.yaml
  2. Start Gatewarden:
docker compose up -d
  3. Open the web console in your browser:
http://127.0.0.1:3000
  4. Point Caddy forward_auth to:
http://127.0.0.1:4000

Port roles:

  • 3000: browser-facing web console
  • 4000: Gatewarden API and forward_auth endpoint for Caddy

Default mounts:

  • ./gatewarden.yaml -> /config/gatewarden.yaml
  • ./docker-data -> /opt/gatewarden/app/data

The default compose setup:

  • pulls ghcr.io/limitcool/gatewarden:latest
  • uses SQLite inside ./docker-data
  • does not require you to set CONSOLE_API_BASE_URL

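For orientation, a minimal docker-compose.yaml matching the defaults above might look like this. The service name and restart policy are assumptions; the image, ports, and mount paths come from the defaults listed earlier. Prefer the compose file shipped in the repository if it differs:

```yaml
services:
  gatewarden:
    image: ghcr.io/limitcool/gatewarden:latest
    ports:
      - "3000:3000"   # browser-facing web console
      - "4000:4000"   # Gatewarden API / forward_auth endpoint for Caddy
    volumes:
      - ./gatewarden.yaml:/config/gatewarden.yaml
      - ./docker-data:/opt/gatewarden/app/data
    restart: unless-stopped
```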

To run from source instead of Docker Compose:

1. Start the backend

cargo run -p gwaf

Default address:

127.0.0.1:4000

2. Start the web console

pnpm install
pnpm --dir web run dev

Default address:

http://127.0.0.1:3010

SQLite remains the default quick-start database:

database:
  url: "sqlite://app/data/ingress.db?mode=rwc"

For production or external persistence, use PostgreSQL:

database:
  url: "postgres://gatewarden:change-me@127.0.0.1:5432/gatewarden"

3. Validate the project

cargo check
cargo test
pnpm --dir web run check
pnpm --dir web run build

Caddy Integration

Gatewarden is designed to work behind a real auth layer as an AI-assisted WAF with deterministic enforcement.

If you already have OIDC:

  • keep your current OIDC provider or auth proxy
  • let that layer authenticate the user first
  • have it emit trusted headers such as Remote-User, Remote-Email, Remote-Groups, X-Auth-Provider, and X-Authenticated
  • point gatewarden.yaml identity.trusted_headers.* at those real header names

Gatewarden does not need to replace your existing OIDC flow for this model. It consumes trusted identity context after authentication.
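Concretely, by the time a request reaches Gatewarden's forward-auth endpoint, the identity context arrives as plain request headers. An illustrative subrequest is sketched below; the header values are hypothetical, the Remote-* names are the defaults from identity.trusted_headers, and the X-Forwarded-* headers assume Caddy's standard forward_auth/reverse_proxy behavior:

```
GET /api/forward-auth HTTP/1.1
Host: 127.0.0.1:4000
X-Forwarded-Method: GET
X-Forwarded-Uri: /admin/settings
X-Forwarded-Host: app.example.com
Remote-User: alice
Remote-Email: alice@example.com
Remote-Groups: admin
X-Auth-Provider: external-oidc
X-Authenticated: true
```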

Deployment model:

  • browser -> http://127.0.0.1:3000
  • Caddy forward_auth -> http://127.0.0.1:4000/api/forward-auth
  • your upstream app stays behind Caddy as usual

This is why both 3000 and 4000 exist:

  • 3000 is the user-facing console
  • 4000 is the internal Gatewarden API surface that Caddy calls

Reusable Caddyfile snippet:

(gatewarden_forward_auth) {
	forward_auth http://127.0.0.1:4000 {
		uri /api/forward-auth
		copy_headers Remote-User Remote-Email Remote-Groups X-Auth-Provider X-Authenticated X-Request-Id
	}
}

Structured access log example for status code, host, request ID, user agent, and latency ingestion:

{
	log {
		output file /var/log/caddy/access.jsonl
		format json
	}
}

app.example.com {
	log {
		output file /var/log/caddy/access.jsonl
		format json
	}

	import gatewarden_forward_auth

	reverse_proxy http://127.0.0.1:8080 {
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-For {remote_host}
		header_up X-Forwarded-Proto {scheme}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Uri {uri}
	}
}

Then enable log ingestion in gatewarden.yaml and point it at the same JSON log file path:

observability:
  caddy_access_log:
    enabled: true
    path: "app/data/caddy-access.jsonl"
    poll_interval_ms: 1000
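For reference, a single line in Caddy's JSON access-log format looks roughly like the following (abridged and pretty-printed here; real log lines are one JSON object per line, and exact fields depend on your Caddy version):

```json
{
  "level": "info",
  "ts": 1700000000.123,
  "logger": "http.log.access",
  "msg": "handled request",
  "request": {
    "remote_ip": "203.0.113.7",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "app.example.com",
    "uri": "/admin/settings",
    "headers": {
      "User-Agent": ["Mozilla/5.0"],
      "X-Request-Id": ["abc123"]
    }
  },
  "duration": 0.0042,
  "size": 1280,
  "status": 200
}
```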

Minimal usage example:

app.example.com {
	import gatewarden_forward_auth

	reverse_proxy http://127.0.0.1:8080 {
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-For {remote_host}
		header_up X-Forwarded-Proto {scheme}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Uri {uri}
	}
}

Example with a dedicated auth host and two protected apps:

(gatewarden_forward_auth) {
	forward_auth http://127.0.0.1:4000 {
		uri /api/forward-auth
		copy_headers Remote-User Remote-Email Remote-Groups X-Auth-Provider X-Authenticated X-Request-Id
	}
}

auth.example.com {
	reverse_proxy http://127.0.0.1:9000
}

app.example.com {
	import gatewarden_forward_auth

	reverse_proxy http://127.0.0.1:8080 {
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-For {remote_host}
		header_up X-Forwarded-Proto {scheme}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Uri {uri}
	}
}

accounts.example.com {
	import gatewarden_forward_auth

	reverse_proxy http://127.0.0.1:8081 {
		header_up X-Real-IP {remote_host}
		header_up X-Forwarded-For {remote_host}
		header_up X-Forwarded-Proto {scheme}
		header_up X-Forwarded-Host {host}
		header_up X-Forwarded-Uri {uri}
	}
}

To expose status code and latency metrics, enable structured Caddy access logs and point Gatewarden at that file in gatewarden.yaml.

Configuration

The project uses a YAML runtime config:

gatewarden.yaml

Port mapping before you edit the config:

  • browser console: http://127.0.0.1:3000
  • Gatewarden API and Caddy forward_auth: http://127.0.0.1:4000
  • server.listen_addr in gatewarden.yaml refers to the Gatewarden API listener on 4000, not the browser console port
  • if you use the default Docker Compose setup, the console port is handled by the container image and compose file, not by gatewarden.yaml

Complete example:

server:
  listen_addr: "127.0.0.1:4000"

database:
  url: "sqlite://app/data/ingress.db?mode=rwc"
  # Production example:
  # url: "postgres://gatewarden:change-me@127.0.0.1:5432/gatewarden"

identity:
  mode: "trusted_header"
  provider_hint: "external-oidc"
  trusted_headers:
    authenticated: "X-Authenticated"
    subject: "Remote-User"
    email: "Remote-Email"
    groups: "Remote-Groups"
    provider: "X-Auth-Provider"

security:
  admin_shadow_prefixes:
    - "/admin"
  login_ip_limit:
    rule_id: "protect-login-ip"
    path_prefix: "/api/login"
    rps: 5
    burst: 10
  login_user_limit:
    rule_id: "protect-login-user"
    path_prefix: "/api/login"
    rps: 3
    burst: 6
  console_admin_groups:
    - "admin"
  protected_hosts:
    - "app.example.com"
    - "accounts.example.com"

ai:
  enabled: false
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  # Optional for OpenAI-compatible gateways:
  # base_url: "https://api.openai.com/v1"
  timeout_ms: 15000
  system_prompt: "You are Gatewarden, an AI security analyst. Produce concise, evidence-based, operator-reviewable guidance."

observability:
  caddy_access_log:
    enabled: true
    path: "app/data/caddy-access.jsonl"
    poll_interval_ms: 1000
  geoip:
    enabled: false
    database_path: "app/data/GeoLite2-City.mmdb"
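The login_ip_limit and login_user_limit settings above describe token-bucket-style limits: rps is the sustained refill rate, burst is the short-term capacity. As a rough illustration of those semantics only (a generic sketch in Python, not Gatewarden's actual rate-limit crate):

```python
class TokenBucket:
    """Minimal token bucket: refills at `rps` tokens/sec, holds at most `burst`."""

    def __init__(self, rps: float, burst: int, now: float = 0.0):
        self.rps = rps
        self.burst = burst
        self.tokens = float(burst)  # bucket starts full
        self.last = now

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Mirrors login_ip_limit above: rps 5, burst 10.
bucket = TokenBucket(rps=5, burst=10)
allowed = sum(bucket.allow(0.0) for _ in range(12))
print(allowed)            # 10: the burst is spent, requests 11-12 are rejected
print(bucket.allow(1.0))  # True: one second later, tokens have refilled
```

A burst of 10 absorbs a short spike even though the sustained rate is only 5 requests per second; once the bucket drains, further requests are rejected until the refill catches up.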

Important sections:

  • server.listen_addr
  • database.url
  • identity.trusted_headers.*
  • security.admin_shadow_prefixes
  • security.login_ip_limit.*
  • security.login_user_limit.*
  • security.console_admin_groups
  • ai.enabled
  • ai.provider
  • ai.model
  • ai.api_key_env
  • ai.base_url
  • ai.timeout_ms
  • observability.caddy_access_log.*
  • observability.geoip.*

AI Configuration

Gatewarden supports model-backed advisory workflows for:

  • asynchronous AI rule suggestions
  • per-request AI explanation in the events view

Supported provider values:

  • openai
  • anthropic
  • gemini
  • groq
  • deepseek
  • xai
  • ollama

The default configuration is:

  • provider: "openai"
  • model: "gpt-4.1-mini"
  • api_key_env: "GATEWARDEN_AI_API_KEY"

To use the official OpenAI API:

ai:
  enabled: true
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  base_url: "https://api.openai.com/v1"
  timeout_ms: 15000

To use an OpenAI-compatible gateway such as OpenRouter, a relay, or your own proxy:

ai:
  enabled: true
  provider: "openai"
  model: "gpt-4.1-mini"
  api_key_env: "GATEWARDEN_AI_API_KEY"
  base_url: "https://your-openai-compatible-endpoint/v1"
  timeout_ms: 15000

Environment example (PowerShell):

$env:GATEWARDEN_AI_API_KEY = "your-api-key"

On POSIX shells:

export GATEWARDEN_AI_API_KEY="your-api-key"

Important boundary:

  • AI stays in the advisory lane
  • published enforcement still requires human approval
  • realtime blocking and rate limiting remain deterministic

AI WAF Model

Gatewarden follows a few explicit rules:

  • Caddy-first
  • deterministic enforcement
  • trusted identity headers
  • AI advisory only

That means:

  • identity should come from a trusted upstream auth layer
  • blocking and rate limiting remain deterministic and auditable
  • AI suggestions stay reviewable before becoming active policy

Open Source and Commercial

This repository is the OSS mainline.

  • License: AGPL-3.0-only
  • OSS repository: gatewarden
  • Commercial add-ons: private gatewarden-enterprise

If you need closed-source deployment, OEM/white-label rights, commercial support, or enterprise-only features, see COMMERCIAL.md.

Documentation

Branding Assets

The repository includes:

  • a reusable project mark for GitHub, docs, and product UI
  • an application icon for the web console
  • live screenshots from the current OSS dashboard