Ever felt the need to connect your AI agents to an actual database? Agentic Coop DB does exactly that — give every agent a secure SQL interface over HTTPS, backed by real Postgres, with nothing more than an API key.
Multiple AI agents working on the same project need a shared place to store and query structured data — but you can't expose Postgres on the public internet. Agentic Coop DB lets your agents collaborate on a remote Postgres + pgvector instance using nothing but an HTTPS URL and an API key. Share the results your agents generate, build on each other's work, and query it all with plain SQL — no new query language, no ORM lock-in.
Status: v0.3 — single-node, container-first, ARM64-friendly. License: Apache-2.0.
A thin auth gateway in front of PostgreSQL 16 + pgvector that does four jobs:
- Authenticate the caller via a workspace-scoped API key.
- Authorize by attaching a Postgres role to the request transaction (SET LOCAL ROLE) — Postgres itself decides what the key can run.
- Forward the SQL with parameterized binding and a statement timeout.
- Audit every call.
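The request path is small enough to sketch. The snippet below is an illustration only (the real gateway is Go and sits behind /v1/sql/execute); lookup_role, audit, and the 30 s timeout are assumptions of this sketch, but it shows how the four jobs compose around SET LOCAL ROLE:

```python
import asyncpg

def lookup_role(api_key: str) -> str:
    # Hypothetical key-store lookup; the real gateway resolves the key to a
    # workspace-scoped role.
    return "dbuser"

def audit(api_key: str, sql: str) -> None:
    # Hypothetical audit sink; the real gateway records every call.
    print(f"audit: {api_key[:12]} ran: {sql}")

async def handle(pool: asyncpg.Pool, api_key: str, sql: str, params: list):
    role = lookup_role(api_key)                                   # 1. authenticate the key
    async with pool.acquire() as conn:
        async with conn.transaction():
            await conn.execute(f"SET LOCAL ROLE {role}")          # 2. authorize: Postgres decides
            await conn.execute("SET LOCAL statement_timeout = '30s'")
            rows = await conn.fetch(sql, *params)                 # 3. forward with $N binding
    audit(api_key, sql)                                           # 4. audit the call
    return rows
```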
If you can write SQL, you can use it. SELECT, INSERT, UPDATE, DELETE,
CREATE TABLE, CREATE USER, GRANT, pgvector ops — all forwarded.
- Not a new query language, ORM, or schema migrator.
- Not realtime / websocket subscriptions.
- Not object storage.
- Not a serverless function runtime.
- Not multi-region or HA — single-node only in v1.
- Not a web UI — CLI + curl + your own app.
- Not SSO — API keys only.
git clone https://github.com/fheinfling/agentic-coop-db.git
cd agentic-coop-db
make up-local # postgres + api on localhost:8080
./scripts/gen-key.sh default dbadmin   # prints acd_dev_<id>_<secret>
Works with Coolify, Railway, Dokku, or plain docker compose.
1. Run the container. Point it at any PostgreSQL 16+ instance with pgvector installed. Set these environment variables:
| Variable | Purpose |
|---|---|
| `AGENTCOOPDB_DATABASE_URL` | Gateway pool URL (`postgres://agentcoopdb_gateway@host:5432/dbname`) |
| `AGENTCOOPDB_MIGRATIONS_DATABASE_URL` | Superuser URL for migrations — must be a Postgres superuser such as `postgres` (`postgres://postgres@host:5432/dbname`) |
| `AGENTCOOPDB_OWNER_PASSWORD` | Password for the superuser in the migrations URL |
| `AGENTCOOPDB_GATEWAY_PASSWORD` | Password to set on the `agentcoopdb_gateway` role |
| `AGENTCOOPDB_INSECURE_HTTP` | Set to `true` if your platform terminates TLS for you |
See deploy/compose.external-pg.yml for a
ready-made template with full comments.
2. Mint an admin key. Exec into the running container:
docker exec <container> /app/agentic-coop-db-server \
  -mint-key -mint-workspace default -mint-role dbadmin -mint-env dev
The dbadmin role can CREATE TABLE, ALTER, GRANT, and manage schema.
Copy the printed token — it is shown exactly once.
3. Create your tables using the admin key:
curl -X POST https://your-domain/v1/sql/execute \
-H "Authorization: Bearer acd_dev_..." \
-H "Content-Type: application/json" \
-d '{"sql": "CREATE TABLE IF NOT EXISTS notes (id uuid PRIMARY KEY, body text)"}'4. Mint a user key for your application:
docker exec <container> /app/agentic-coop-db-server \
  -mint-key -mint-workspace default -mint-role dbuser -mint-env dev
The dbuser role can SELECT, INSERT, UPDATE, DELETE — but cannot run DDL,
GRANT, or bypass row-level security. Use this key in your app.
5. Query from your app:
Python
from agentcoopdb import connect
db = connect("https://your-domain", api_key="acd_dev_...")
rows = db.select("SELECT * FROM notes WHERE body = $1", ["hi"])
curl
curl -X POST https://your-domain/v1/sql/execute \
-H "Authorization: Bearer acd_dev_..." \
-H "Content-Type: application/json" \
-d '{"sql": "SELECT * FROM notes WHERE body = $1", "params": ["hi"]}'JavaScript
await fetch("https://your-domain/v1/sql/execute", {
method: "POST",
headers: {
Authorization: "Bearer acd_dev_...",
"Content-Type": "application/json",
},
body: JSON.stringify({
sql: "SELECT * FROM notes WHERE body = $1",
params: ["hi"],
}),
});
Keys can be managed via CLI inside the container (recommended for initial
bootstrap) or via the HTTP API using a dbadmin key (an HTTP example follows the role table below):
# Mint a new key
docker exec <container> /app/agentic-coop-db-server \
-mint-key -mint-workspace <name> -mint-role <role> -mint-env <dev|live|test>
# List all keys (secrets are never shown)
docker exec <container> /app/agentic-coop-db-server -list-keys
# Revoke a key by ID (from -list-keys output)
docker exec <container> /app/agentic-coop-db-server -revoke-key <uuid>
| Role | Can do | Cannot do |
|---|---|---|
| `dbadmin` | CREATE TABLE, ALTER, GRANT, CRUD, bypass RLS, mint keys via API | ALTER SYSTEM, superuser ops, access control-plane tables |
| `dbuser` | SELECT, INSERT, UPDATE, DELETE | DDL, GRANT, bypass RLS, mint keys |
Use dbadmin for schema setup and administration. Use dbuser for
application-level access — it is the least-privilege default for agents.
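Minting over the HTTP API targets POST /v1/auth/keys with a dbadmin key. The sketch below assumes the JSON body simply mirrors the CLI flags; the exact field names are not verified here, so treat docs/api.md as the reference:

```python
import requests

GATEWAY = "https://your-domain"   # your deployment
ADMIN_KEY = "acd_dev_..."         # a dbadmin key

# Field names mirror -mint-workspace / -mint-role / -mint-env (assumed, not verified).
resp = requests.post(
    f"{GATEWAY}/v1/auth/keys",
    headers={"Authorization": f"Bearer {ADMIN_KEY}"},
    json={"workspace": "default", "role": "dbuser", "env": "dev"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # like the CLI, the new key's secret is shown exactly once
```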
The gateway is a natural fit for AI agent workloads:
- HTTP-only — no native Postgres driver required in the agent runtime.
- API-key auth — easy to inject via environment variable or secrets manager.
- Parameterized SQL — the gateway validates that the `$N` placeholder count matches the `params` array length and uses server-side binding — values are never interpolated into the SQL string, eliminating the most common injection vector in LLM-generated SQL.
- pgvector — store and search embeddings alongside structured data.
- Idempotency keys — retryable writes even over flaky networks.
1. Generate a key for the agent (least-privilege: dbuser, not dbadmin):
./scripts/gen-key.sh <workspace> dbuser
# prints: acd_dev_<id>_<secret>  ← store in your agent's secrets manager
2. Install the SDK:
pip install agentic-coop-db
# Or install directly from GitHub:
pip install "agentic-coop-db @ git+https://github.com/fheinfling/agentic-coop-db.git#subdirectory=clients/python"3. Connect:
from agentcoopdb import connect
db = connect("https://db.example.com", api_key="acd_live_...")
me = db.me()  # verify connectivity: {workspace, role, server}
The gateway enforces one statement per HTTP call (this prevents
multi-statement injection). When replaying a schema file, send each statement
as a separate db.execute() call. Use CREATE … IF NOT EXISTS / DO $$ … $$
guards so the sequence is fully idempotent — re-running from any point is safe.
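In practice that replay is a plain loop. The sketch below assumes the db handle from step 3 (minted with a dbadmin key, since these are DDL statements) and purely illustrative statements:

```python
# Replay a schema as individual, idempotent statements (one HTTP call each).
schema_statements = [
    "CREATE TABLE IF NOT EXISTS notes (id uuid PRIMARY KEY, body text)",
    "CREATE INDEX IF NOT EXISTS notes_body_idx ON notes (body)",
]

for stmt in schema_statements:
    db.execute(stmt)  # safe to re-run from any point
```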
For writes that must land together, use the CTE-wrapped transaction helper:
with db.transaction() as tx:
tx.execute("INSERT INTO events (id, type) VALUES ($1, $2)", [eid, "start"])
tx.execute("UPDATE jobs SET status=$1 WHERE id=$2", ["running", jid])
    # Both writes execute as a single CTE-wrapped statement
# Store embeddings
db.vector_upsert("documents", [
{"id": doc_id, "metadata": {"title": "…"}, "vector": embedding},
])
# Nearest-neighbour search
results = db.vector_search("documents", query_embedding, k=5)
Because the gateway forwards plain SQL, agents can discover the database schema at runtime — no special endpoint needed:
# List all user tables
tables = db.select("SELECT table_name FROM information_schema.tables WHERE table_schema = $1", ["public"])
# Inspect columns of a specific table
cols = db.select("""
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = $1 AND table_name = $2
ORDER BY ordinal_position
""", ["public", "notes"])
# Find vector columns (pgvector)
vectors = db.select("""
SELECT table_name, column_name
FROM information_schema.columns
WHERE data_type = 'USER-DEFINED' AND udt_name = 'vector'
""")This makes Agentic Coop DB a natural fit for agents that need to understand and adapt to the schema they're working with.
For MCP-compatible agents, a standalone MCP server binary is included. It proxies tool calls to the gateway over HTTPS — every call goes through the full auth/rate-limit/tenant/validator/audit chain.
Install: download the binary for your platform from the latest release, extract it, and make it executable:
# macOS (Apple Silicon)
curl -fsSL https://github.com/fheinfling/agentic-coop-db/releases/latest/download/agentic-coop-db-mcp-darwin-arm64.tar.gz \
  | tar xz && chmod +x agentic-coop-db-mcp && sudo mv agentic-coop-db-mcp /usr/local/bin/
Other platforms
| Platform | Archive |
|---|---|
| macOS (Intel) | agentic-coop-db-mcp-darwin-amd64.tar.gz |
| Linux (x86_64) | agentic-coop-db-mcp-linux-amd64.tar.gz |
| Linux (ARM64) | agentic-coop-db-mcp-linux-arm64.tar.gz |
| Windows (x86_64) | agentic-coop-db-mcp-windows-amd64.zip |
Alternative: go install / build from source
# go install (requires Go 1.26+)
go install github.com/fheinfling/agentic-coop-db/cmd/mcp@latest
# or build from a clone
make build-mcp  # produces bin/agentic-coop-db-mcp
Claude Code — one command:
claude mcp add agentic-coop-db \
-e AGENTCOOPDB_GATEWAY_URL=https://db.example.com \
-e AGENTCOOPDB_API_KEY=acd_live_<id>_<secret> \
  -- agentic-coop-db-mcp
Claude Desktop — add to claude_desktop_config.json:
{
"mcpServers": {
"agentic-coop-db": {
"command": "agentic-coop-db-mcp",
"env": {
"AGENTCOOPDB_GATEWAY_URL": "https://db.example.com",
"AGENTCOOPDB_API_KEY": "acd_live_<id>_<secret>"
}
}
}
}
Cursor — add to Settings > MCP Servers, or .cursor/mcp.json:
{
"mcpServers": {
"agentic-coop-db": {
"command": "agentic-coop-db-mcp",
"env": {
"AGENTCOOPDB_GATEWAY_URL": "https://db.example.com",
"AGENTCOOPDB_API_KEY": "acd_live_<id>_<secret>"
}
}
}
}
Verify: ask your agent to "use the whoami tool" — it should return your workspace, role, and environment.
Available tools: sql_execute, rpc_call, list_tables,
describe_table, vector_search, vector_upsert, whoami, health.
Full reference: docs/mcp.md.
PostgreSQL is the source of truth for what each key can do. The gateway only enforces the minimum that Postgres cannot enforce by itself:
- Parameterization is mandatory. The body is `{sql, params}`. The validator parses the SQL and counts `$N` placeholders; a mismatch returns HTTP 400.
- Single statement only. Stacked-statement injection is rejected at parse time.
- Statement size cap (default 256 KiB, tunable via `AGENTCOOPDB_MAX_STATEMENT_BYTES`) and parameter count cap (default 1 000, tunable via `AGENTCOOPDB_MAX_STATEMENT_PARAMS`).
- `SET LOCAL ROLE <key.role>` is issued before forwarding. The pool's login role is `agentcoopdb_gateway`, a low-privilege role with no privileges of its own — it is only a member of the role each key is bound to.
- Built-in roles: `dbadmin` (DDL/DCL, owner of `public`, `BYPASSRLS`, not superuser) and `dbuser` (CRUD, `NOBYPASSRLS`).
- Filesystem escape functions (`pg_read_file`, `lo_import`, `dblink_*`, `COPY ... FROM PROGRAM`) are revoked at the database level — even an admin key cannot read files off the host.
- RLS is the recommended pattern for tenant tables, and `dbuser` cannot bypass it (a sketch follows below).
- TLS is mandatory in any non-localhost deployment. The `cloud` profile uses Caddy auto-TLS.
Full threat model: docs/security.md.
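For the RLS point above, a standard Postgres policy issued through the gateway might look like the following. This is a generic illustration run with a dbadmin key: the tasks table, the tenant_id column, and the app.tenant_id setting are assumptions of this sketch, and the project's own multi-tenant pattern is documented in docs/rls.md.

```python
from agentcoopdb import connect

admin = connect("https://db.example.com", api_key="acd_dev_...")  # dbadmin key

admin.execute("""
    CREATE TABLE IF NOT EXISTS tasks (
        id uuid PRIMARY KEY,
        tenant_id text NOT NULL,
        title text
    )
""")
admin.execute("ALTER TABLE tasks ENABLE ROW LEVEL SECURITY")
admin.execute("""
    CREATE POLICY tasks_tenant_isolation ON tasks
    USING (tenant_id = current_setting('app.tenant_id', true))
""")
# dbuser keys cannot bypass RLS, so their reads and writes on tasks are
# filtered by this policy.
```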
Distributed research pipelines. Multiple agents — or human researchers — run experiments in different locations and push results into the same Postgres instance over HTTPS. A coordinator agent can then query across all results, compute aggregates, or trigger the next round. No shared filesystem, no VPN, no cloud vendor lock-in — just SQL over TLS.
RAG knowledge base.
Use pgvector to store embeddings alongside structured metadata in one database.
Agents ingest documents, chunk and embed them, then write vectors with
vector_upsert. At query time any agent can run a nearest-neighbour search
with vector_search — retrieval-augmented generation with nothing but
Postgres and an API key.
Replace Supabase / Neon / PlanetScale for prototypes. When you're building a proof-of-concept or MVP and don't want to sign up for (or pay for) a managed database-as-a-service, spin up Agentic Coop DB on a cheap VPS or a Raspberry Pi. You get auth, TLS, role-based access, and a standard SQL interface — enough to validate your idea before committing to a platform.
Multi-agent collaboration. Give each agent its own API key with the appropriate role. Agents can create tables, write intermediate results, and read each other's outputs — all governed by Postgres roles and RLS policies. The audit log tells you which agent wrote what and when.
LLM-maintained knowledge base (LLM Wiki). Instead of re-deriving answers from raw documents on every query (classic RAG), have your agents incrementally build a persistent, interlinked wiki backed by Postgres. Agents ingest sources, write structured pages, maintain cross-references in a link table, and keep summaries current — all via SQL. pgvector provides hybrid search (semantic + full-text) as the wiki grows, RLS isolates tenants in team settings, and idempotency keys let multiple agents ingest concurrently without conflicts. Inspired by Karpathy's LLM Wiki pattern.
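A possible starting schema for such a wiki, created once with a dbadmin key. The table and column names (and the 1536-dimension embedding) are illustrative choices for this sketch, not something the project prescribes:

```python
from agentcoopdb import connect

db = connect("https://db.example.com", api_key="acd_live_...")  # dbadmin key for DDL

db.execute("""
    CREATE TABLE IF NOT EXISTS wiki_pages (
        slug text PRIMARY KEY,
        title text NOT NULL,
        body text NOT NULL,
        embedding vector(1536),
        updated_at timestamptz DEFAULT now()
    )
""")
db.execute("""
    CREATE TABLE IF NOT EXISTS wiki_links (
        from_slug text REFERENCES wiki_pages(slug),
        to_slug text REFERENCES wiki_pages(slug),
        PRIMARY KEY (from_slug, to_slug)
    )
""")
```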
Edge / IoT data collection.
Run the gateway on a Raspberry Pi at the edge. Field devices or local scripts
POST sensor readings over HTTPS; a cloud-side agent periodically queries the
Pi's database for analysis. The ARM64-tuned pi-lite profile keeps resource
usage minimal.
Each profile bundles its own PostgreSQL + pgvector instance:
| Profile | File | Use case |
|---|---|---|
| `local` | `deploy/compose.local.yml` | Dev box, integration tests |
| `pi-lite` | `deploy/compose.pi-lite.yml` | Raspberry Pi 4/5, low-mem ARM64 |
| `cloud` | `deploy/compose.cloud.yml` | Hetzner / DO / AWS / bare metal + Caddy |
| `swarm` | `deploy/stack.swarm.yml` | Docker Swarm with external secrets |
make up-local # localhost:8080, no TLS
make up-pi # ARM64-tuned postgres, low mem
make up-cloud  # Caddy auto-TLS, backups, prometheus
Already have Postgres? Use
deploy/compose.external-pg.yml instead —
it runs only the API container and connects to your existing PostgreSQL 16+
instance (Coolify, Railway, RDS, Neon, etc.). Your platform provides TLS,
backups, and monitoring. The server automatically handles the PG 15+ CREATE
privilege change on the public schema — no manual grants required.
See docs/deploy-cloud.md for worked examples on
Hetzner, DigitalOcean, AWS Lightsail, and bare metal.
| Method | Path | Purpose |
|---|---|---|
| `POST` | `/v1/sql/execute` | Forward parameterized SQL |
| `POST` | `/v1/rpc/call` | Call a registered RPC (optional) |
| `POST` | `/v1/auth/keys` | Create a new API key (dbadmin only) |
| `POST` | `/v1/auth/keys/rotate` | Rotate the calling key |
| `GET` | `/v1/me` | `{workspace_id, key_id, role, env, server}` |
| `GET` | `/healthz` | Liveness |
| `GET` | `/readyz` | Readiness (DB + migrations) |
| `GET` | `/metrics` | Prometheus |
Full reference: docs/api.md.
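For callers that skip the Python SDK, a quick connectivity check against /v1/me needs nothing beyond a plain HTTP client:

```python
import requests

resp = requests.get(
    "https://db.example.com/v1/me",
    headers={"Authorization": "Bearer acd_live_..."},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # {workspace_id, key_id, role, env, server}
```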
- `cmd/server` — API server entrypoint
- `cmd/migrate` — standalone migrator (also embedded in the server)
- `cmd/mcp` — MCP server binary (proxy to the gateway)
- `internal/` — implementation (clean layered architecture)
- `migrations/` — numbered SQL migrations (golang-migrate)
- `clients/python` — Python SDK + CLI (`pip install agentic-coop-db`)
- `deploy/` — compose files for local, pi-lite, cloud, swarm
- `docs/` — architecture, deployment, security, ADRs, feature roadmap
- `test/integration` — testcontainers-go full-stack tests
- `test/security` — cross-tenant + SQL bypass tests
- `scripts/` — dev helpers and `verify-acs.sh`
- `docs/architecture.md` — clean layers and how requests flow
- `docs/api.md` — endpoint reference + curl examples
- `docs/security.md` — threat model + reporting
- `docs/rls.md` — multi-tenant pattern with row-level security
- `docs/mcp.md` — MCP server for Claude Desktop / Claude Code / Cursor
- `docs/rpc-authoring.md` — when to register an RPC
- `docs/deploy-local.md` — local dev
- `docs/deploy-pi-lite.md` — Raspberry Pi
- `docs/deploy-cloud.md` — single-node cloud
- `docs/faq.md` — frequently asked questions
- `docs/adr/` — architectural decision records
See CONTRIBUTING.md. All commits must be signed off
(git commit -s) under the Developer Certificate of Origin.
Good first issues are tracked under the good-first-issue label.
Report vulnerabilities privately via GitHub Security Advisories.