Kavch is an AI-assisted digital safety platform designed to help users move from cyber harassment detection to evidence preservation and legal action in one workflow.
It combines threat analysis, deepfake/image misuse triage, tamper-evident vaulting, and legal draft generation with India-focused escalation guidance.
Victims of cyber harassment often face four problems:
- Abuse is hard to classify for urgency.
- Evidence is lost, edited, or not documented correctly.
- Legal filing language is complex and time-consuming.
- Support channels are fragmented.
Kavch addresses this by turning incidents into structured outputs: tiered analysis, hash-backed evidence records, and draft legal documents linked to those records.
- Endpoint: `POST /api/classify-threat`
- Returns:
  - severity tier (1 to 4)
  - confidence
  - key phrases
  - applicable legal sections
  - recommended actions
- Includes deterministic fallback logic when external AI/model confidence is low, so the UI still returns actionable results.
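The deterministic fallback can be pictured as a keyword-to-tier lookup. The sketch below is purely illustrative: the keyword lists, tier mapping, and fixed confidence value are invented for the example and are not the actual logic in `backend/classifier.py`.

```python
# Hypothetical keyword weights per severity tier; the real model and
# thresholds live in backend/classifier.py and are not reproduced here.
TIER_KEYWORDS = {
    4: ["kill", "acid", "address"],    # direct physical threat / doxxing
    3: ["leak", "expose", "photos"],   # sextortion / image-abuse signals
    2: ["loser", "ugly", "shut up"],   # targeted harassment
}

def fallback_tier(text: str) -> dict:
    """Deterministic fallback: return the highest tier whose keywords match."""
    lowered = text.lower()
    for tier in sorted(TIER_KEYWORDS, reverse=True):
        hits = [kw for kw in TIER_KEYWORDS[tier] if kw in lowered]
        if hits:
            return {"severity_tier": tier, "key_phrases": hits, "confidence": 0.5}
    # Nothing matched: lowest tier, so the UI still gets an actionable result.
    return {"severity_tier": 1, "key_phrases": [], "confidence": 0.5}
```

Because the lookup has no model dependency, it always returns a structured result even when the external AI path is unavailable.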
- Endpoint: `POST /api/scan-image`
- Produces:
  - `manipulation_score` (0-100)
  - `deepfake_detected` boolean
  - potential trace leads (`found_urls`)
  - forensic notes and a recommendation
- Current implementation is heuristic (fast and deterministic), intended for triage and escalation support rather than forensic finality.
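A heuristic triage score of this shape is typically a weighted combination of individual signals. The signal names, weights, and threshold below are all hypothetical, chosen only to show the pattern; the real heuristics live in `backend/deepfake.py`.

```python
def manipulation_score(signals: dict) -> dict:
    """Combine per-signal scores (each clamped to 0.0-1.0) into a 0-100 score."""
    weights = {
        "ela_noise": 40,          # error-level-analysis inconsistency
        "metadata_stripped": 20,  # EXIF removed or rewritten
        "resave_artifacts": 25,   # double-compression traces
        "face_warp": 15,          # facial landmark distortion
    }
    score = sum(
        weights[name] * min(max(signals.get(name, 0.0), 0.0), 1.0)
        for name in weights
    )
    return {
        "manipulation_score": round(score),
        "deepfake_detected": score >= 60,  # hypothetical triage threshold
    }
```

Because the combination is a fixed weighted sum, the same image always produces the same score, which is what makes the triage fast and deterministic.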
- Endpoints: `POST /api/lock-evidence`, `GET /api/vault`, `DELETE /api/vault/{evidence_id}`
- Uses SHA-256 hashing to fingerprint evidence payloads.
- Provides certificate rendering from vault items (platform/source, hash, timestamp).
- Supports API + local fallback continuity for frontend sessions.
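Hash-backed vaulting boils down to fingerprinting the payload bytes with SHA-256 at lock time, so any later edit is detectable by re-hashing. A minimal sketch, with field names invented for illustration (the actual vault record schema is defined in `backend/evidence.py` and `models.py`):

```python
import hashlib
from datetime import datetime, timezone

def lock_evidence(payload: bytes, platform: str) -> dict:
    """Fingerprint an evidence payload as a tamper-evident vault record."""
    digest = hashlib.sha256(payload).hexdigest()
    return {
        "evidence_id": digest,   # the hash doubles as a stable identifier
        "platform": platform,
        "sha256": digest,
        "locked_at": datetime.now(timezone.utc).isoformat(),
    }

# Re-hashing the same bytes later must reproduce the stored digest;
# any edit to the payload changes it, making tampering evident.
record = lock_evidence(b"screenshot bytes here", platform="instagram")
```

The certificate rendering described above is then just a formatted view over these fields (platform/source, hash, timestamp).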
- Endpoint: `POST /api/generate-legal-docs`
- Supported draft types:
- FIR Draft
- Platform Takedown
- Legal Notice
- NCW Complaint
- Multilingual support with backend and frontend fallback templates.
- Designed as assistive drafting; final filing should be reviewed by legal counsel.
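The fallback templates can be thought of as parameterized text filled from the incident record. The template wording and function signature below are invented for illustration; the real templates live under `ml/legal_templates/`.

```python
from string import Template

# Hypothetical fallback template, standing in for ml/legal_templates/.
TAKEDOWN = Template(
    "To: $platform Grievance Officer\n"
    "Subject: Takedown request for content at $content_url\n\n"
    "The content identified above violates your community guidelines and\n"
    "applicable Indian law. Evidence hash (SHA-256): $evidence_hash.\n"
    "Please remove it and confirm action within the statutory timeline."
)

def draft_takedown(platform: str, content_url: str, evidence_hash: str) -> str:
    """Fill the fallback template; substitute() raises if a field is missing."""
    return TAKEDOWN.substitute(
        platform=platform, content_url=content_url, evidence_hash=evidence_hash
    )
```

Linking the vault's SHA-256 hash into the draft is what ties each legal document back to a specific evidence record.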
- JWT auth with bcrypt-hashed passwords.
- Per-user history and stats:
- threats
- scans
- evidence items
- legal documents
```
Frontend (Vanilla JS + Tailwind CDN)
  -> API wrapper in components.js
    -> FastAPI backend
       -> classifier.py (text risk)
       -> deepfake.py (image risk)
       -> evidence.py (SHA-256 evidence IDs)
       -> legal_gen.py (legal draft generation)
       -> SQLite via SQLAlchemy models
```
Detailed architecture reference: docs/ARCHITECTURE.md.
- Backend: FastAPI, SQLAlchemy, SQLite
- Authentication: JWT, bcrypt
- Analysis: Scikit-learn model + deterministic fallback pipelines
- Frontend: Static HTML + Vanilla JS + Tailwind CSS (CDN)
- Optional AI integration: Anthropic Claude API
```
ShiledHer/
  backend/
    main.py
    auth.py
    classifier.py
    deepfake.py
    evidence.py
    legal_gen.py
    database.py
    models.py
    requirements.txt
  ml/
    train_threat.py
    data/
    legal_templates/
  frontend/
    public/
      index.html
      report.html
      threat-result.html
      image-scan.html
      vault.html
      legal.html
      safe-net.html
      dashboard.html
      login.html
      signup.html
      components.js
  docs/
    README.md
    ARCHITECTURE.md
```
- Python 3.10+
- Node.js 18+
```shell
cd backend
pip install -r requirements.txt
python -m uvicorn main:app --reload --port 8000
```

In a second terminal:

```shell
cd frontend/public
npx -y serve -p 3000 --no-clipboard
```

Open http://localhost:3000.
Create backend/.env:
```
SECRET_KEY=replace_with_long_random_secret
ANTHROPIC_API_KEY=optional_for_ai_generation
GOOGLE_VISION_API_KEY=optional
```

Notes:
- App runs without optional API keys using fallback logic.
- External keys improve generation quality and context, but are not mandatory for core flows.
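The "optional keys" behavior amounts to feature flags derived from the environment. A minimal sketch of the idea (the function name and returned fields are invented; `main.py`'s actual configuration handling may differ):

```python
import os

def runtime_config() -> dict:
    """Read environment config; missing optional keys flip fallback mode on."""
    return {
        "secret_key": os.environ["SECRET_KEY"],               # required
        "ai_enabled": bool(os.getenv("ANTHROPIC_API_KEY")),    # optional
        "vision_enabled": bool(os.getenv("GOOGLE_VISION_API_KEY")),
    }
```

Core flows consult the boolean flags and route to the deterministic fallback pipelines when an integration is disabled.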
- `GET /`, `GET /api/health`, `GET /api/support-contacts`
- `POST /api/auth/register`, `POST /api/auth/login`, `GET /api/auth/me`
- `POST /api/classify-threat`, `POST /api/scan-image`, `POST /api/detect-coordination`
- `POST /api/lock-evidence`, `GET /api/vault`, `DELETE /api/vault/{evidence_id}`
- `POST /api/generate-legal-docs`, `GET /api/legal-docs`
- `GET /api/history/threats`, `GET /api/history/scans`
- SHA-256 hashing is used for evidence integrity.
- Protected routes require JWT bearer auth.
- Passwords are stored with bcrypt hashes.
- User-scoped DB filtering is enforced on protected data routes.
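To make the JWT bearer requirement concrete, here is a minimal HS256 sign/verify sketch using only the standard library. This is an illustration of what token verification entails, not the production code path in `backend/auth.py`, which presumably uses a JWT library and includes expiry claims.

```python
import base64
import hashlib
import hmac
import json

def _b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def sign(claims: dict, secret: str) -> str:
    """Build an HS256 JWT: base64url(header).base64url(claims).signature."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(claims).encode())
    msg = f"{header}.{body}".encode()
    sig = _b64url_encode(hmac.new(secret.encode(), msg, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: str):
    """Return the claims if the signature checks out, else None."""
    header, body, sig = token.split(".")
    msg = f"{header}.{body}".encode()
    expected = _b64url_encode(hmac.new(secret.encode(), msg, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(_b64url_decode(body))
```

A protected route extracts the token from the `Authorization: Bearer <token>` header, verifies it, and then uses the claims (e.g. a user ID) to scope all database queries to that user.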
Kavch is an assistive safety and documentation tool. It does not replace legal representation, law enforcement processes, or certified forensic examination. Always verify details before formal filing.
- Deepfake scoring is currently heuristic and not a certified forensic model.
- Legal output quality depends on context completeness and optional AI availability.
- The repository does not currently expose a production hardening baseline (tests/migrations/CI) by default.
- Add unit + API integration tests.
- Add schema migration tooling (Alembic).
- Add request audit logging and rate limits.
- Add confidence calibration datasets for classifier/scan outputs.
- Add a root-level `LICENSE` and deployment templates.
- Yuvraj Singh
- Akshat Srivastava
- Adarsh Kumar Pandey
- Chigulla Eshita
Last updated: March 2026