Open source security for LLM inference
InferShield is a self-hosted security proxy that sits between your application and any LLM provider (OpenAI, Anthropic, Google, etc.), providing real-time threat detection, policy enforcement, and complete audit trails.
- Prompt injection attacks bypass traditional security tools
- Data exfiltration through LLM responses goes undetected
- Compliance requirements (SOC 2, HIPAA, GDPR) can't be met
- No visibility into what your LLMs are actually doing
InferShield provides enterprise-grade security for LLM integrations:
- ✅ Real-time threat detection - Block prompt injection, data exfiltration, jailbreaks
- ✅ Self-hosted - Your data never leaves your infrastructure
- ✅ Provider-agnostic - Works with OpenAI, Anthropic, Google, local models
- ✅ Zero code changes - Drop-in proxy, just change your API endpoint
- ✅ Complete audit logs - Every request tracked with risk scores
- ✅ Open source - MIT licensed, transparent, community-driven
```bash
# Pull the image
docker pull infershield/proxy:latest

# Run the proxy
docker run -p 8000:8000 \
  -e OPENAI_API_KEY=sk-your-key-here \
  infershield/proxy
```

```python
# Update your code (one line change)
# Before:
client = OpenAI(base_url="https://api.openai.com/v1")
# After:
client = OpenAI(base_url="http://localhost:8000/v1")
```

Or run the full stack from source:

```bash
git clone https://github.com/infershield/infershield.git
cd infershield
cp .env.example .env  # Add your API keys
docker-compose up -d
```

Now visit:
- Proxy: http://localhost:8000
- Dashboard: http://localhost:3000
- Backend API: http://localhost:5000
```
┌──────────────┐      ┌───────────────────┐      ┌──────────────┐
│   Your App   │ ───> │    InferShield    │ ───> │   Any LLM    │
│              │      │       Proxy       │      │   Provider   │
│    app.py    │      │  localhost:8000   │      │  OpenAI/etc  │
└──────────────┘      └───────────────────┘      └──────────────┘
                                │
                                │ logs/metrics
                                ▼
                      ┌───────────────────┐
                      │     Dashboard     │
                      │  localhost:3000   │
                      └───────────────────┘
```
- Prompt Injection - Detects attempts to override system instructions
- Data Exfiltration - Blocks requests trying to extract sensitive data
- Jailbreak Attempts - Identifies evasion techniques (encoding, obfuscation)
- SQL Injection - Catches database attack patterns
- PII Leakage - Detects personally identifiable information
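As a minimal sketch of how pattern-based PII detection can work (the patterns and names below are illustrative assumptions, not InferShield's actual rule set — production coverage needs far more rules):

```python
import re

# Illustrative PII patterns only; real coverage needs many more rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the PII categories detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```

Each match can then feed into the request's risk score rather than triggering an outright block, which keeps false positives manageable.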
- Multi-encoding detection - Base64, hex, URL, Unicode escaping
- Nested encoding - Handles chained obfuscation (Base64 of hex, etc.)
- Synonym expansion - Catches evasion via alternative phrasing
- Context-aware scoring - Reduces false positives with proximity analysis
- Custom policies - Define your own threat detection rules
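As a rough sketch of how nested-encoding detection can work (illustrative code with hypothetical helper names, not InferShield's implementation), a scanner can peel encoding layers until the text stabilizes, then match patterns against the plaintext:

```python
import base64
import binascii
import urllib.parse

# Markers that suggest an instruction-override attempt (illustrative).
SUSPICIOUS = ("ignore previous instructions", "disregard the system prompt")

def peel_encodings(text: str, max_layers: int = 4) -> str:
    """Iteratively undo hex/Base64/URL encoding until the text stabilizes."""
    for _ in range(max_layers):
        for decode in (
            lambda s: bytes.fromhex(s).decode("utf-8"),
            lambda s: base64.b64decode(s, validate=True).decode("utf-8"),
            lambda s: urllib.parse.unquote(s) if "%" in s else s,
        ):
            try:
                decoded = decode(text.strip())
            except (binascii.Error, ValueError, UnicodeDecodeError):
                continue
            if decoded != text:
                text = decoded
                break  # a layer came off; try to peel again
        else:
            break  # no decoder made progress; text is fully decoded
    return text

def is_suspicious(prompt: str) -> bool:
    """Check the fully decoded prompt against known injection markers."""
    plain = peel_encodings(prompt).lower()
    return any(marker in plain for marker in SUSPICIOUS)
```

This catches chained obfuscation such as Base64-of-hex, since each pass strips whichever layer is outermost.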
- Complete request logs - Every prompt and response recorded
- Risk scoring - 0-100 scale for every request
- Policy enforcement - Block high-risk requests automatically
- Export capabilities - JSON/CSV for compliance reporting
- Timestamped trails - Forensic-ready audit logs
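To illustrate how the 0-100 risk score and timestamped audit records can fit together (a hypothetical schema and hypothetical policy weights — not InferShield's actual format):

```python
import json
from datetime import datetime, timezone

# Hypothetical policy weights; the shipped engine has 12+ policies.
POLICY_WEIGHTS = {
    "prompt_injection": 60,
    "data_exfiltration": 50,
    "pii_leakage": 30,
}

def risk_score(triggered: list[str]) -> int:
    """Combine triggered policies into a capped 0-100 score."""
    return min(100, sum(POLICY_WEIGHTS.get(p, 10) for p in triggered))

def audit_record(prompt: str, triggered: list[str], threshold: int = 70) -> dict:
    """Build a timestamped audit entry, ready for JSON/CSV export."""
    score = risk_score(triggered)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "policies_triggered": triggered,
        "risk_score": score,
        "blocked": score >= threshold,
    }
```

Because each record is a flat dict, serializing a day's log to JSON or CSV for compliance reporting is a one-liner.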
OpenAI-compatible security proxy server.
- Drop-in replacement for any OpenAI SDK
- Forwards to configured LLM provider
- Real-time threat detection
- < 1ms latency overhead
Threat detection engine and API.
- 12+ detection policies
- Risk scoring algorithm
- Audit log storage
- REST API for dashboard
Real-time monitoring interface.
- Live request stream
- Threat analytics
- Risk score trends
- Audit log viewer
Create a .env file:
```bash
# LLM Provider API Keys
OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here

# InferShield Backend
BACKEND_URL=http://localhost:5000

# Security Settings
RISK_THRESHOLD=70
BLOCK_HIGH_RISK=true
```

See the Configuration Guide → for all options.
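A sketch of how `RISK_THRESHOLD` and `BLOCK_HIGH_RISK` might be applied to a scored request (a hypothetical function, assuming the 0-100 risk scale described above):

```python
import os

def should_block(risk_score: int) -> bool:
    """Apply RISK_THRESHOLD / BLOCK_HIGH_RISK to one scored request."""
    threshold = int(os.environ.get("RISK_THRESHOLD", "70"))
    enforce = os.environ.get("BLOCK_HIGH_RISK", "true").lower() == "true"
    return enforce and risk_score >= threshold
```

With `BLOCK_HIGH_RISK=false`, requests are still scored and logged but never rejected, which is useful for a monitor-only rollout.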
- Latency: < 1ms overhead per request
- Throughput: 1000+ requests/second (single instance)
- Memory: ~50MB base usage
- Storage: ~1KB per logged request
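At ~1KB per logged request, audit storage grows linearly with traffic; a quick back-of-the-envelope sizing helper (assuming 1024 bytes per record, per the figure above):

```python
def log_storage_gb(requests_per_day: int, retention_days: int,
                   bytes_per_request: int = 1024) -> float:
    """Estimate audit-log storage in GB for a given traffic level."""
    return requests_per_day * retention_days * bytes_per_request / 1024**3
```

For example, one million requests per day retained for 30 days works out to roughly 29 GB.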
InferShield has been red-team tested with 25+ attack vectors:
- ✅ 95%+ detection rate across all threat types
- ✅ < 5% false positive rate on legitimate queries
- ✅ 100% blocking of known bypass techniques (encoding, obfuscation)
See the Security Validation Report →
Looking for advanced capabilities?
InferShield Enterprise includes:
- ML-based detection - Advanced behavioral analysis
- Compliance packs - SOC 2, HIPAA, GDPR templates
- SSO/SAML - Enterprise authentication
- Custom dashboards - Tailored reporting
- Managed hosting - Fully managed cloud deployment
- 24/7 support - Dedicated security hotline

Learn more about Enterprise →
We welcome contributions! See CONTRIBUTING.md for guidelines.
Quick ways to contribute:
- Report bugs via GitHub Issues
- Suggest features in Discussions
- Submit pull requests (see the Development Guide)
- Improve documentation
- Add detection policies
- Installation Guide
- Configuration Reference
- API Documentation
- Custom Policies
- Security Validation
- Troubleshooting
InferShield is MIT licensed. See LICENSE for details.
Free forever. No strings attached.
- Website: infershield.io
- GitHub: github.com/infershield
- Discord: Coming soon
- Twitter: Coming soon
- Email: security@infershield.io
If InferShield helps secure your LLM infrastructure, consider giving us a star! ⭐
Built with inputs from security leaders in:
- Finance (banking, fintech)
- Healthcare (HIPAA-regulated orgs)
- Government (federal/state agencies)
Special thanks to the open source community for security research and feedback.
Built for security teams, by security engineers.
© 2026 InferShield · Secure every inference