This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a Retrieval-Augmented Generation (RAG) system that answers questions about course materials using semantic search and AI-powered responses. The application uses:
- ChromaDB for vector storage
- Anthropic's Claude, OpenAI's GPT, or Google's Gemini for AI generation
- FastAPI for the web backend
- A React frontend (in the frontend/ directory)
- Backend (`backend/`):
  - `app.py`: Main FastAPI application with endpoints
  - `rag_system.py`: Core orchestrator that coordinates document processing, vector storage, and AI generation
  - `document_processor.py`: Handles parsing and chunking of course documents
  - `vector_store.py`: Manages ChromaDB interactions for storing/retrieving document embeddings
  - `ai_generator.py`: Provider-agnostic AI generation interface supporting multiple LLM providers
  - `config.py`: Configuration management with environment variables
  - `search_tools.py`: Implements semantic search functionality using tools
  - `session_manager.py`: Manages conversation history
  - `models.py`: Data models for courses, lessons, and chunks
- Frontend (`frontend/`): React application for the user interface
- Documents (`docs/`): Course materials in PDF, DOCX, or TXT format
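The query path through these components can be sketched roughly as follows. This is a minimal illustration of the orchestration pattern, not the actual interface of `rag_system.py`; the class and method names below are assumptions:

```python
# Illustrative sketch of the RAG query flow (names are hypothetical,
# not the actual interfaces in backend/).
from dataclasses import dataclass, field


@dataclass
class RAGSystem:
    """Coordinates vector search, session history, and AI generation."""
    vector_store: dict = field(default_factory=dict)  # stands in for ChromaDB
    history: list = field(default_factory=list)       # stands in for session_manager

    def search(self, query: str, top_k: int = 3) -> list[str]:
        # Real code would embed the query and run a semantic similarity
        # search in ChromaDB; here we fake it with substring matching.
        hits = [text for text in self.vector_store.values()
                if query.lower() in text.lower()]
        return hits[:top_k]

    def answer(self, query: str) -> str:
        chunks = self.search(query)
        # Real code would send the chunks plus history to the configured LLM.
        self.history.append(query)
        return f"Answer based on {len(chunks)} retrieved chunk(s)."


rag = RAGSystem(vector_store={"c1": "Lesson 1 covers embeddings."})
print(rag.answer("embeddings"))
```

The point is the division of labor: retrieval, conversation state, and generation are separate concerns wired together by one orchestrator.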
```bash
# Install dependencies
uv sync

# Set up environment variables in .env:
# ANTHROPIC_API_KEY=your_key_here
# LLM_PROVIDER=anthropic  # or openai or gemini

# Quick start (recommended)
chmod +x run.sh
./run.sh

# Manual start
cd backend
uv run uvicorn app:app --reload --port 8000
```

Place PDF/DOCX/TXT files in the `docs/` folder. The system will automatically load them on startup.
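On startup, loading typically means splitting each document into overlapping chunks before embedding them; in this repository that is the job of `document_processor.py`. A minimal sketch of overlapping chunking, where the function name and parameter values are illustrative assumptions rather than the real implementation:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so content cut at a
    chunk boundary still appears whole in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]


# Each chunk is 4 characters long and starts 2 characters after the previous one.
chunks = chunk_text("abcdefghij", chunk_size=4, overlap=2)
print(chunks)
```

The overlap trades a little storage for retrieval quality: a sentence that straddles a boundary is still retrievable as a single unit.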
```bash
# Set up development environment with quality tools
uv sync --group dev

# Install pre-commit hooks (optional but recommended)
./scripts/setup-hooks.sh

# Format code with Black and isort
./scripts/format.sh

# Check code formatting and linting
./scripts/lint.sh

# Run type checking
./scripts/typecheck.sh

# Run all quality checks (format check + linting + type checking)
./scripts/quality-check.sh

# Run tests
./scripts/test.sh
```

The project includes pre-commit hooks that automatically run quality checks before each commit, ensuring consistent code quality and catching issues early.
- Black for code formatting (line length: 88)
- isort for import organization
- flake8 for linting
- mypy for type checking
- pytest for testing
- pre-commit for automated quality checks
- Web Interface: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
The system supports three LLM providers:
- Anthropic Claude (default)
- OpenAI GPT
- Google Gemini
Switch between them by setting the `LLM_PROVIDER` environment variable in `.env`.
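A provider-agnostic generator like `ai_generator.py` is commonly built as a registry keyed by the provider name read from the environment. The sketch below assumes hypothetical names and stub responses; real code would call the Anthropic, OpenAI, or Google SDKs:

```python
import os

# Hypothetical provider registry; real code would invoke each vendor's SDK.
PROVIDERS = {
    "anthropic": lambda prompt: f"[claude] {prompt}",
    "openai": lambda prompt: f"[gpt] {prompt}",
    "gemini": lambda prompt: f"[gemini] {prompt}",
}


def generate(prompt: str) -> str:
    """Dispatch to the provider selected via LLM_PROVIDER (default: anthropic)."""
    name = os.environ.get("LLM_PROVIDER", "anthropic").lower()
    try:
        return PROVIDERS[name](prompt)
    except KeyError:
        raise ValueError(f"Unknown LLM_PROVIDER: {name!r}") from None


print(generate("hello"))
```

Keeping the dispatch behind one function means the rest of the backend never branches on the provider name.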
- Always use `uv` for Python env management, installs, and execution; never use `pip` directly.
- Install/resolve deps: `uv sync`
- Run server (manual): `cd backend && uv run uvicorn app:app --reload --port 8000`
- Run tests: `uv run pytest -q`
- Add deps: `uv add <pkg>` (or edit `pyproject.toml`, then `uv sync`)
- Do not run `pip install` or `python -m pip` in this project.
- The run script already uses `uv`: `./run.sh` starts the API via `uv run uvicorn …`
- If a virtualenv appears broken, prefer `rm -rf .venv && uv sync` over pip.
- Use `uv` to manage all dependencies and to run Python files.