Zero-trust governance for AI agents. One decorator. Full control.
You gave your AI agent access to real tools. Now it can:
- Transfer money
- Send emails
- Delete records
- Execute code
What could possibly go wrong?
Everything.
```python
from sentinel import protect, SentinelConfig
config = SentinelConfig(rules_path="rules.json")
@protect(config)
async def transfer_funds(amount: float, destination: str) -> str:
return f"Transferred ${amount} to {destination}"That's it. Three lines. Your agent now requires human approval for high-risk actions.
Agent: "I'll transfer $5,000 to vendor@example.com"
============================================================
π‘οΈ SENTINEL APPROVAL REQUIRED
============================================================
Agent: sales-agent
Function: transfer_funds
Amount: $5,000.00
Context:
current_balance: $10,000.00
daily_limit_remaining: $3,000.00
Reason: Amount exceeds $100 threshold
------------------------------------------------------------
Approve? [y/n]: _
You decide. Not the AI.
| Feature | Description |
|---|---|
| 🎯 Rule Engine | JSON-configurable policies (thresholds, blocks, approvals) |
| 🔔 Multi-channel Approval | Terminal, Webhook, or Dashboard UI |
| 📋 Context for Decisions | Show balance, limits, history to approvers |
| 📝 Audit Log | JSONL logs for compliance (GDPR, SOC2 ready) |
| 🧠 Anomaly Detection | Statistical analysis blocks unusual patterns |
| 🔗 LangChain Native | `protect_tools()` wraps any LangChain tool |
| 🖥️ Visual Dashboard | Streamlit UI with approve/deny buttons |
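Under the hood, a rule pairs a `function_pattern` with simple `param` / `operator` / `value` conditions (see the `rules.json` example below). A minimal sketch of how such a condition could be evaluated, illustrative only, not Sentinel's actual engine:

```python
import fnmatch

def condition_matches(params: dict, cond: dict) -> bool:
    """Evaluate one condition, e.g. {"param": "amount", "operator": "gt", "value": 100}."""
    actual = params.get(cond["param"])
    if cond["operator"] == "gt":
        return actual is not None and actual > cond["value"]
    raise ValueError(f"unsupported operator: {cond['operator']}")

def rule_applies(func_name: str, rule: dict, params: dict) -> bool:
    # function_pattern uses shell-style wildcards such as "transfer_*"
    if not fnmatch.fnmatch(func_name, rule["function_pattern"]):
        return False
    return all(condition_matches(params, c) for c in rule.get("conditions", []))

# rule_applies("transfer_funds", rule, {"amount": 5000}) -> True for the
# "financial_limit" rule shown below, so the action escalates to approval.
```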
```bash
# Install from PyPI (recommended)
pip install agentic-sentinel
# Or install from GitHub
pip install git+https://github.com/azdhril/Sentinel.git
# With dashboard support
pip install "agentic-sentinel[dashboard]"
# With LangChain support
pip install "agentic-sentinel[langchain]"
```

Configure Sentinel once and reuse the config everywhere:

```python
from sentinel import protect, SentinelConfig
config = SentinelConfig(
rules_path="rules.json",
approval_interface="terminal",
fail_mode="secure", # Block on errors, not allow
)
@protect(config)
async def delete_user(user_id: int) -> str:
return f"Deleted user {user_id}"{
"version": "1.0",
"default_action": "allow",
"rules": [
{
"id": "financial_limit",
"function_pattern": "transfer_*",
"conditions": [{"param": "amount", "operator": "gt", "value": 100}],
"action": "require_approval",
"message": "Transfers over $100 require approval"
},
{
"id": "block_deletes",
"function_pattern": "delete_*",
"action": "block",
"message": "Delete operations are disabled"
}
]
}from langchain.agents import create_openai_tools_agent
from sentinel.integrations.langchain import protect_tools
# Your existing tools
tools = [search_tool, email_tool, payment_tool]
# One line to protect them all
protected_tools = protect_tools(tools, sentinel_config)
# Use as normal
agent = create_openai_tools_agent(llm, protected_tools, prompt)
```

Start the visual command center:
```bash
pip install "agentic-sentinel[dashboard]"
python -m sentinel.dashboard
```

Open http://localhost:8501:
- See pending approvals in real-time
- Click to approve or deny
- View audit history and metrics
- Track "Value Protected" across your org
Track your protection metrics: the dashboard shows "Total Value Protected", the sum of all transactions that required approval. Use this metric to demonstrate ROI to stakeholders and justify governance investments.
Sentinel doesn't just check rules. It learns patterns.
```python
config = SentinelConfig(
rules_path="rules.json",
anomaly_detection=True,
anomaly_statistical=True,
)
```

```text
Normal behavior:   $50, $60, $70, $80, $90
Anomalous request: $5,000

Z-Score: 311.8 standard deviations
Risk:    CRITICAL (10.0)
Action:  BLOCKED AUTOMATICALLY
```
No rule needed. The math speaks for itself.
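The arithmetic is easy to verify yourself. A sketch using the sample standard deviation (Sentinel's exact estimator and thresholds are configuration details, so treat this as illustrative):

```python
from statistics import mean, stdev

history = [50, 60, 70, 80, 90]   # normal behavior
request = 5000                   # anomalous request

# z-score: how many standard deviations the request sits from the mean
z = (request - mean(history)) / stdev(history)
print(f"{z:.1f}")  # 311.8
```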
Most systems fail open: if something breaks, actions are allowed.
Sentinel fails secure: if something breaks, actions are blocked.
```python
config = SentinelConfig(
fail_mode="secure", # Default: block on any error
# fail_mode="safe", # Alternative: allow on error (not recommended)
)
```

A security product that fails open isn't a security product.
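The difference is where the exception handler lands. A sketch of the idea, not Sentinel's internals (`evaluate_rules` here is any async policy check you supply):

```python
async def guarded_call(func, evaluate_rules, **kwargs):
    """Fail-secure dispatch: any error in the governance layer means deny."""
    try:
        decision = await evaluate_rules(func.__name__, kwargs)
    except Exception:
        # fail_mode="secure": never fall through to the real action on error
        raise PermissionError("Sentinel check failed; action blocked") from None
    if decision != "allow":
        raise PermissionError("Action denied by policy")
    return await func(**kwargs)
```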
```text
┌─────────────────────────────────────────────────────────────┐
│                        YOUR AI AGENT                        │
│           (LangChain / CrewAI / AutoGPT / Custom)           │
└──────────────────────────────┬──────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       SENTINEL LAYER                        │
│  ┌───────────┐    ┌───────────┐    ┌─────────────────────┐  │
│  │ @protect  │───▶│   Rules   │───▶│  Anomaly Detection  │  │
│  │ Decorator │    │  Engine   │    │ (Z-Score Analysis)  │  │
│  └───────────┘    └───────────┘    └─────────────────────┘  │
│                              │                              │
│                              ▼                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                  Approval Interface                   │  │
│  │         Terminal | Webhook/API | Dashboard UI         │  │
│  └───────────────────────────────────────────────────────┘  │
│                              │                              │
│                              ▼                              │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                     Audit Logger                      │  │
│  │              (JSONL - Compliance Ready)               │  │
│  └───────────────────────────────────────────────────────┘  │
└──────────────────────────────┬──────────────────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                       EXTERNAL TOOLS                        │
│       (Payment APIs, Databases, Email Services, etc.)       │
└─────────────────────────────────────────────────────────────┘
```
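The Approval Interface box is pluggable. Terminal prompts are the default; routing approvals to a webhook instead might look like this (a sketch: the `"webhook"` value is an assumption based on the documented Webhook channel):

```python
from sentinel import SentinelConfig

config = SentinelConfig(
    rules_path="rules.json",
    approval_interface="webhook",  # assumption: mirrors the "Webhook/API" channel
    fail_mode="secure",
)
# The endpoint and auth token come from the environment:
# SENTINEL_WEBHOOK_URL and SENTINEL_WEBHOOK_TOKEN (see the table below).
```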
| Without Sentinel | With Sentinel |
|---|---|
| Agent transfers $50,000 by mistake | Agent asks permission first |
| You find out from your bank | You approve or deny in real-time |
| Logs show "function called" | Logs show who approved, when, why |
| "The AI did it" | "John approved it at 3:42 PM" |
- Fintech: Approve transactions over threshold
- HR Tech: Review before sending offer letters
- DevOps: Gate production deployments
- Healthcare: Verify before prescription changes
- Legal: Review before contract modifications
- SaaS: Reduce impulsive cancellations
Sentinel is being used to protect AI agents in:
- 🏦 Financial services automation
- 📧 Customer communication workflows
- 🔧 DevOps and infrastructure management
- 📊 Data pipeline operations
Want to be featured here? Open an issue and tell us your use case!
- [x] Core interception engine
- [x] JSON rule configuration
- [x] Terminal approval interface
- [x] Webhook/API approval
- [x] Streamlit Dashboard
- [x] Statistical anomaly detection
- [x] LangChain integration
- [x] Audit logging (JSONL)
- [ ] Slack/Teams approval
- [ ] LLM-based semantic analysis (optional)
- [ ] Cloud-hosted dashboard
- [ ] SOC2 compliance package
Sentinel can be configured via environment variables. Copy the example file:
```bash
cp .env.example .env
```

Then edit `.env` with your values. Key variables:
| Variable | Default | Description |
|---|---|---|
| `SENTINEL_LOG_DIR` | `./sentinel_logs` | Directory for audit logs |
| `SENTINEL_FAIL_MODE` | `secure` | `secure` (block on error) or `safe` (allow on error) |
| `SENTINEL_WEBHOOK_URL` | - | URL for webhook approval requests |
| `SENTINEL_WEBHOOK_TOKEN` | - | Auth token for webhook |
| `OPENAI_API_KEY` | - | For LLM anomaly detection (optional) |
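A minimal `.env` might look like this (values illustrative):

```bash
SENTINEL_LOG_DIR=./sentinel_logs
SENTINEL_FAIL_MODE=secure
SENTINEL_WEBHOOK_URL=https://hooks.example.com/sentinel
SENTINEL_WEBHOOK_TOKEN=replace-me
```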
See .env.example for all available options.
We welcome contributions! See CONTRIBUTING.md for guidelines.
```bash
# Clone and install dev dependencies
git clone https://github.com/azdhril/Sentinel.git
cd Sentinel
pip install -e ".[dev]"
# Run tests
pytest tests/ -v
# Run with coverage
pytest tests/ -v --cov=sentinel --cov-report=term-missing
```

MIT License. Use it, fork it, sell it. Just don't blame us if your AI still does something stupid.
Need custom integration, SLA, or compliance features?
Stop hoping your AI behaves. Start knowing.
Get Started • Documentation • Report Bug