# Backend
Vivek Raman edited this page Feb 19, 2026 · 1 revision
A FastAPI-based backend for the LaTeX Chatbot Editor. This server handles LaTeX project management, compilation (via Tectonic), and AI-powered chat interactions.
## Features

- **Project Management**: Initialize, read, and update LaTeX projects.
- **Compilation**: Compile LaTeX files to PDF using Tectonic.
- **AI Chat**: Context-aware chat using LangGraph agents.
- **Configuration**: Manage user settings and LLM credentials.
## Architecture

```mermaid
graph TD
    API[FastAPI Server] --> PM[Project Management]
    API --> C[Compilation]
    API --> Chat[AI Chat]
    API --> Config[Configuration]
    PM --> FS[(File System)]
    C --> Tectonic{Tectonic Engine}
    Chat --> LG[LangGraph Agent]
    LG --> LLM((LLM Provider))
    Chat --> FS
```
## API Endpoints

### Project Management

- `POST /init`: Initialize a new project in a directory.
- `GET /files`: List files in a project directory.
- `GET /files/content`: Internal endpoint for reading a file's contents.
- `PUT /files/content`: Internal endpoint for updating a file's contents.
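To illustrate, these endpoints could be exercised from Python's standard library. This is a sketch only: the JSON field name (`directory`) is an assumption for the example, not a schema documented on this page.

```python
import json
from urllib import request

BASE = "http://127.0.0.1:8765"  # default server address

def api_request(method, path, payload=None):
    """Build a JSON request against the backend API."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    return request.Request(
        f"{BASE}{path}",
        data=data,
        method=method,
        headers={"Content-Type": "application/json"},
    )

# Hypothetical payload -- the exact field name is an assumption:
init_req = api_request("POST", "/init", {"directory": "/tmp/demo-project"})
list_req = api_request("GET", "/files")

# With the server running, send with: request.urlopen(init_req)
```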
### Compilation

- `POST /compile`: Trigger a compilation for a project.
- `GET /pdf`: Retrieve the compiled PDF.
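A minimal sketch of the compile-then-download flow, assuming a JSON body with a `directory` field (the real request schema may differ):

```python
import json
from urllib import request

BASE = "http://127.0.0.1:8765"  # default server address

def build_compile_request(project_dir):
    """Request for POST /compile; the `directory` field name is an assumption."""
    body = json.dumps({"directory": project_dir}).encode("utf-8")
    return request.Request(
        f"{BASE}/compile", data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )

def compile_and_fetch_pdf(project_dir, out_path="out.pdf"):
    """Trigger a compile, then save the document returned by GET /pdf."""
    with request.urlopen(build_compile_request(project_dir)) as resp:
        resp.read()  # compile status/log; response format not documented here
    with request.urlopen(f"{BASE}/pdf") as resp, open(out_path, "wb") as fh:
        fh.write(resp.read())
    return out_path
```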
### AI Chat

- `POST /chat`: Send a message to the AI agent.
- `POST /copilotkit`: Stream events for CopilotKit integration.
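A chat turn might be posted like this. The payload shape (a `message` plus prior `history` for context) is a guess at what a context-aware agent needs, not a documented schema:

```python
import json
from urllib import request

BASE = "http://127.0.0.1:8765"  # default server address

def build_chat_request(message, history=()):
    """Request for POST /chat; payload field names are assumptions."""
    payload = {"message": message, "history": list(history)}
    return request.Request(
        f"{BASE}/chat",
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Tighten the abstract", history=["Hi", "Hello!"])
```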
### Configuration

- `GET /config`: Retrieve the current configuration.
- `POST /config`: Update configuration (e.g., OpenAI keys).
- `POST /nuke`: Reset configuration to defaults.
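Updating configuration could look like the following sketch; the key name `openai_api_key` is a placeholder assumption, since the accepted keys are not listed on this page:

```python
import json
from urllib import request

BASE = "http://127.0.0.1:8765"  # default server address

def build_config_update(settings):
    """Request for POST /config; accepted setting keys are not documented here."""
    return request.Request(
        f"{BASE}/config",
        data=json.dumps(settings).encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

# Hypothetical key name -- check the server's config schema before relying on it.
req = build_config_update({"openai_api_key": "sk-..."})
```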
## Running the Server

The server is built with Python 3.10+ and managed by uv.

```shell
# From project root
make server
```

Or manually:

```shell
cd editor-cli
uv run latex-chatbot-server
```

The server runs on http://127.0.0.1:8765 by default.
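To verify that the server is listening on the default address, a plain TCP probe is enough (this is a generic connectivity check, not an endpoint of the backend):

```python
import socket

def server_up(host="127.0.0.1", port=8765, timeout=0.5):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```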