Tight-Line/creel

Creel


Self-hosted memory-as-a-service for AI agents.

v0.7.x: per-topic vector backend configs, a backend registry, Prometheus metrics, Helm hardening (PDB, NetworkPolicy, SecurityContext), Python and TypeScript SDKs, an MCP server, CompactionService, chunk linking, a managed ingestion pipeline, per-principal memory, an admin dashboard, and creel-chat.

Creel provides topic-scoped memory with principal-based RBAC, server-side document processing, per-principal memory (Mem0-style), and dual-mode retrieval (semantic search with citations + temporal context). Upload documents and Creel handles extraction, chunking, and embedding automatically. It runs in your own infrastructure via a Helm chart.
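To make the chunking step concrete, here is a toy sketch of fixed-size chunking with overlap, one of the two strategies Creel supports. This is illustrative only, not Creel's implementation; the `chunk_text` name and the size/overlap defaults are arbitrary.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks, each overlapping the previous one.

    Overlap preserves context across chunk boundaries so a sentence cut
    in half by one chunk is still intact in its neighbor.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap  # advance by less than a full chunk
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break  # last chunk reached the end of the text
    return chunks
```

LLM-based semantic chunking replaces the fixed `step` with boundaries chosen by a model (paragraph or topic breaks), but the output shape — an ordered list of overlapping-or-adjacent text spans to embed — is the same.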

Quick start

```shell
# Start PostgreSQL + Creel server
docker compose up -d

# Build the CLI tools
make build

# Source the dev environment (sets CREEL_GRPC_ENDPOINT and CREEL_API_KEY)
source .env

# Create a topic and upload a document
bin/creel-cli topic create --slug my-notes --name "My Notes"
bin/creel-cli upload --topic my-notes --file notes.txt --name "Meeting Notes"

# Check processing status
bin/creel-cli jobs list --topic my-notes

# Search (once processing completes)
bin/creel-cli search --topic my-notes --query "action items" --top-k 5
```

See Quickstart for the full walkthrough.
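The `search` command above ranks chunk embeddings by similarity to the query embedding. As a rough illustration of what top-k semantic ranking means (a toy sketch over in-memory vectors, not Creel's pgvector-backed implementation; the `top_k` helper is hypothetical):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: list[float], chunks: dict[str, list[float]], k: int = 5) -> list[str]:
    """Return the ids of the k chunks most similar to the query embedding."""
    ranked = sorted(chunks, key=lambda cid: cosine(query, chunks[cid]), reverse=True)
    return ranked[:k]
```

A vector backend like pgvector does the same ranking server-side with an index, so it scales past what a linear scan like this can handle.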

Key features

  • Upload and forget: upload PDFs, HTML, or plain text; Creel extracts text, chunks it (fixed-size or LLM-based semantic), and embeds in the background
  • Document citations: search results include document metadata (title, author, URL, date) for proper attribution
  • Per-principal memory: automatic fact extraction from conversations with Mem0-style conflict resolution (ADD/UPDATE/DELETE/NOOP)
  • Topic > Document > Chunk hierarchy with RBAC and Zettelkasten-style cross-topic chunk linking
  • Dual-mode retrieval: RAG (semantic search) and context (temporal ordering)
  • Server-driven compaction with link preservation
  • Pluggable vector backends: pgvector (reference) with per-topic backend configs and a lazy-initialized backend registry
  • 72 gRPC/REST RPCs across 10 services
  • Two ingestion paths: managed (upload and forget) and direct (pre-chunked, pre-embedded) for power users
  • Admin dashboard: Laravel-based web UI for config, topic, system account, and memory management
  • creel-chat: interactive demo agent with streaming, memory, cross-topic RAG, and citation display

Documentation

  • Quickstart: End-to-end setup in under 5 minutes
  • Fullstart: Hands-on walkthrough of every feature
  • Concepts: Data model, auth, search modes, memory, document processing
  • API Reference: All 63 RPCs with request/response details
  • Development: Dev environment, testing, adding RPCs and workers
  • Deployment: Helm chart, configuration, OIDC setup
  • creel-chat: Interactive demo agent with conversation memory
  • Architecture: Full design document and roadmap

Status

Active development (v0.7.x). Phases 1-9 complete: core storage, RAG, auth, managed pipeline, per-principal memory, compaction, integration layers (SDKs, MCP, tool schemas), and backend hardening (per-topic vector backends, Prometheus metrics, Helm production readiness). See docs/ARCHITECTURE.md for the roadmap and CHANGELOG.md for release history.

License

Apache 2.0. See LICENSE.
