Itera is an AI-powered personalized learning roadmap generator. Fill in your profile — current role, target role, tech stack, experience — and Itera generates a structured, phase-by-phase learning roadmap with curated free and paid resources for every topic, time estimates split by learning path, and a semantic knowledge base used to validate your progress logs.
- Structured roadmaps — 3–4 phases, each broken into skill areas and topics, generated by a local LLM (Ollama) with Groq as automatic fallback
- Per-topic resources — YouTube playlists/videos, freeCodeCamp, Coursera, and Udemy links attached to every topic
- Resource cache — fetched resources are stored in the database by search query and reused across roadmaps, eliminating redundant API calls
- Free / Paid / Avg time estimates — each topic and the overall roadmap shows hours broken down by free path, paid path, and average
- Background knowledge base — a rich semantic KB (what it is, subtopics, validation keywords) is generated asynchronously after the roadmap is returned, so you're never waiting on it
- Semantic log validation — describe what you learned in plain text; the backend uses fuzzy matching against the topic's knowledge base to decide if it counts
- Enrollment & progress bar — enroll in a roadmap to start tracking; per-topic completion and an overall percentage shown in real time
- LLM call logging — every LLM call is logged to the database with latency, token counts, model used, and hallucination flags
- Typewriter reveal animation — when a roadmap loads, phases are revealed one by one with a letter-by-letter typewriter effect on each phase title
- Role dropdowns — current role and target role fields use a combobox with 27 preset roles and free-text input for anything not in the list
- Animated generating screen — 5-step progress animation with elapsed timer while the roadmap is being built (~1–2 minutes)
- Dark / Light theme
- JWT authentication
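The semantic log validation above can be sketched with a simple fuzzy matcher. This is a minimal illustration using Python's stdlib `difflib`; the real logic lives in `app/services/fuzzy_match.py` and may differ, and the keyword list and 0.8 threshold here are hypothetical:

```python
from difflib import SequenceMatcher

def validate_log(log_text: str, kb_keywords: list[str], threshold: float = 0.8) -> bool:
    """Return True if any word in the user's log fuzzily matches a KB validation keyword."""
    words = log_text.lower().split()
    for keyword in kb_keywords:
        for word in words:
            # ratio() is 1.0 for an exact match; 0.8 tolerates plurals and small typos.
            if SequenceMatcher(None, word, keyword.lower()).ratio() >= threshold:
                return True
    return False

# A log mentioning "closures" should match the KB keyword "closure".
print(validate_log("Learned about closures and scope", ["closure", "hoisting"]))
```

The threshold is the knob to tune: too low and unrelated logs count as progress, too high and minor spelling variations get rejected.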
| Layer | Technology |
|-------|------------|
| Backend | Python 3.12 + FastAPI (async) |
| LLM | Ollama (local, any model) — Groq (llama-3.3-70b) as fallback |
| Vector store | ChromaDB (semantic KB search) |
| Database | PostgreSQL 15 + Redis 7 |
| ORM | SQLAlchemy (async) + Alembic |
| Frontend | React 19 + Vite + Tailwind CSS |
| State | Zustand (with localStorage persistence) |
| Proxy | Nginx |
| Deployment | Docker Compose |
- Docker Desktop
- Ollama running locally with a model pulled (e.g. `ollama pull llama3.1`) — or a Groq API key as fallback (free tier available)
```bash
git clone https://github.com/xK3yx/itera.git
cd itera/itera-backend

# Copy the env file and configure your keys
cp .env.example .env

docker compose up --build -d
```
Open http://localhost in your browser.
Alembic migrations run automatically on container start — no manual step needed.
```env
# App
APP_NAME=Itera
SECRET_KEY=your-secret-key
DEBUG=false

# Database
POSTGRES_USER=itera_user
POSTGRES_PASSWORD=itera_password
POSTGRES_DB=itera_db
DATABASE_URL=postgresql+asyncpg://itera_user:itera_password@db:5432/itera_db

# Redis
REDIS_URL=redis://redis:6379/0

# LLM — Ollama (primary)
OLLAMA_BASE_URL=http://host.docker.internal:11434
OLLAMA_MODEL=llama3.1

# LLM — Groq (fallback when Ollama is unreachable)
GROQ_API_KEY=your-groq-api-key

# YouTube Data API v3 (optional — falls back to search URLs if not set or quota exceeded)
YOUTUBE_API_KEY=your-youtube-api-key

# JWT
JWT_SECRET_KEY=your-jwt-secret
JWT_ALGORITHM=HS256
JWT_EXPIRE_MINUTES=1440
```
When running with Docker, `DATABASE_URL` and `REDIS_URL` use the internal service hostnames (`db`, `redis`) automatically.
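As a sketch of how these variables might be consumed on the backend: the real `app/config.py` may well use Pydantic settings instead, so the loader below is illustrative only, with field names mirroring the `.env` keys above:

```python
import os
from dataclasses import dataclass, field

@dataclass
class Settings:
    """Illustrative settings loader; defaults mirror the .env example above."""
    app_name: str = field(default_factory=lambda: os.getenv("APP_NAME", "Itera"))
    database_url: str = field(default_factory=lambda: os.getenv(
        "DATABASE_URL",
        "postgresql+asyncpg://itera_user:itera_password@db:5432/itera_db"))
    ollama_base_url: str = field(default_factory=lambda: os.getenv(
        "OLLAMA_BASE_URL", "http://host.docker.internal:11434"))
    # Numeric env values arrive as strings and must be cast explicitly.
    jwt_expire_minutes: int = field(default_factory=lambda: int(os.getenv("JWT_EXPIRE_MINUTES", "1440")))

settings = Settings()
```

Reading every value through `os.getenv` with a default keeps the app bootable outside Docker, where none of the variables may be set.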
```
itera/
├── itera-backend/
│   ├── app/
│   │   ├── routers/
│   │   │   ├── auth.py                 # Register / login / profile
│   │   │   ├── generated_roadmaps.py   # Generate, list, get, delete roadmaps
│   │   │   ├── roadmap_progress.py     # Enroll, log progress, get enrollment
│   │   │   ├── knowledge_base.py       # Read KB for a roadmap
│   │   │   ├── admin.py                # LLM call log viewer
│   │   │   └── users.py                # User profile update
│   │   ├── models/
│   │   │   ├── user.py
│   │   │   ├── generated_roadmap.py    # GeneratedRoadmap + KnowledgeBase
│   │   │   ├── roadmap_enrollment.py
│   │   │   ├── llm_call_log.py         # Per-call LLM audit log
│   │   │   └── resource_cache.py       # Cached topic resources by search query
│   │   ├── services/
│   │   │   ├── roadmap_service.py      # Full generation pipeline
│   │   │   ├── llm_tracker.py          # Tracked LLM calls with hallucination detection
│   │   │   ├── fuzzy_match.py          # Semantic progress log matching
│   │   │   └── chroma_service.py       # ChromaDB vector store
│   │   └── config.py
│   ├── alembic/versions/               # Database migrations
│   └── docker-compose.yml
└── itera-frontend/
    ├── src/
    │   ├── pages/
    │   │   ├── Login.jsx / Register.jsx
    │   │   ├── ProfileSetup.jsx        # Profile form with role combobox
    │   │   ├── Recommendations.jsx     # Generate roadmap + history sidebar
    │   │   └── RoadmapView.jsx         # Roadmap with typewriter reveal + progress
    │   ├── services/
    │   │   └── api.js                  # Axios API client
    │   └── store/
    │       └── authStore.js            # Zustand auth state
    └── nginx.conf                      # Proxy + timeout config
```
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v1/auth/register` | Register a new user |
| POST | `/api/v1/auth/login` | Log in, returns a JWT |
| GET | `/api/v1/auth/me` | Get the current user |
| PUT | `/api/v1/users/profile` | Update profile |
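The token returned by the login endpoint is a standard HS256 JWT (per `JWT_ALGORITHM` in the env file). As a stdlib-only sketch of how such a token is signed, for understanding the format rather than reimplementing it (a real backend would use a library such as PyJWT or python-jose):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str, expire_minutes: int = 1440) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {**payload, "exp": int(time.time()) + expire_minutes * 60}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = sign_jwt({"sub": "user@example.com"}, "your-jwt-secret")
print(token.count("."))  # 2: header, payload, signature
```

The client then sends the token back on every request as `Authorization: Bearer <token>`.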
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v3/roadmaps/generate` | Generate a new roadmap |
| GET | `/api/v3/roadmaps/` | List all roadmaps for the current user |
| GET | `/api/v3/roadmaps/{id}` | Get a roadmap by ID |
| DELETE | `/api/v3/roadmaps/{id}` | Delete a roadmap |
| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/api/v3/roadmaps/{id}/enroll` | Enroll in a roadmap |
| GET | `/api/v3/roadmaps/{id}/enrollment` | Get enrollment + completed topics |
| POST | `/api/v3/roadmaps/{id}/progress/{topic_id}` | Submit a progress log entry |
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | `/api/v3/roadmaps/{id}/kb` | Get the knowledge base for a roadmap |
| GET | `/api/v3/admin/llm-logs` | View the LLM call audit log |
```bash
# Start everything (builds images, runs migrations automatically)
docker compose up --build -d

# View API logs
docker logs itera_api -f

# View background KB generation logs
docker logs itera_api 2>&1 | grep KB-BG

# Check resource cache
docker exec itera_postgres psql -U itera_user -d itera_db -P pager=off \
  -c "SELECT search_query, hit_count, jsonb_array_length(resources) AS resources FROM resource_cache ORDER BY hit_count DESC;"

# Stop
docker compose down

# Full reset (wipes database volume)
docker compose down -v
```
- Structure — one LLM call produces phases and skill areas
- Topics — one LLM call per skill area (run in parallel)
- Resources — for each topic, check `resource_cache` first; on a miss, fetch from the YouTube API (falling back to a search URL if the quota is exceeded), freeCodeCamp, Coursera, and Udemy, then store the result in the cache
- Hours — free path hours stamped on each topic; paid = free × 0.7; average shown in UI
- Persist — roadmap saved to database and returned to user (~1–2 minutes total)
- KB — knowledge base generated in background (asyncio task) after response is returned; used later for semantic progress matching
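Two of the steps above can be sketched in a few lines. This is an illustrative outline, not the real pipeline in `app/services/roadmap_service.py`: the hours split uses the stated paid = free × 0.7 rule, while the per-skill-area LLM calls (stubbed here) fan out concurrently with `asyncio.gather`:

```python
import asyncio

def topic_hours(free_hours: float) -> dict:
    """Per-topic estimate split: paid path assumed at 70% of the free path."""
    paid = round(free_hours * 0.7, 1)
    return {"free": free_hours, "paid": paid, "avg": round((free_hours + paid) / 2, 1)}

async def generate_topics(skill_areas: list[str]) -> list[list[str]]:
    """One LLM call per skill area, run concurrently (stubbed topic names)."""
    async def call_llm(area: str) -> list[str]:
        await asyncio.sleep(0)  # stand-in for the real LLM request
        return [f"{area} basics", f"{area} in practice"]
    return await asyncio.gather(*(call_llm(a) for a in skill_areas))

print(topic_hours(10))  # {'free': 10, 'paid': 7.0, 'avg': 8.5}
```

Running the per-area calls concurrently rather than sequentially is what keeps total generation time near the ~1–2 minute mark even for roadmaps with many skill areas.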