AI-powered activity tracking for macOS. Hindsight automatically captures screenshots at regular intervals and uses a local vision model running on LM Studio to summarize what you're working on.
- Automatic screenshot capture - Periodic screenshots using the macOS-native `screencapture` utility
- AI-powered summarization - Uses local vision models via LM Studio to describe your activity
- Web dashboard - Real-time activity stream and AI chat interface
- Worklog API - REST API to query your activity history
- Privacy-first - Everything runs locally, no data leaves your machine
- LM Studio plugin - Ask questions about your activity directly in LM Studio
Get a summary of what you worked on today using LM Studio
Important: Node.js 22.21.1 is required; installation will fail without it. Use `nvm install 22.21.1` or download it from nodejs.org.
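A quick pre-flight check for this requirement can be scripted; this is a sketch, and the `node_ok` variable and error message are illustrative, not part of the installer:

```shell
# Pre-flight sketch: verify the exact Node.js version before running install.sh
required="22.21.1"
current="$(node --version 2>/dev/null | sed 's/^v//')"   # strip the leading "v"
if [ "$current" = "$required" ]; then
  node_ok=yes
else
  node_ok=no
  echo "Node.js v$required required, found ${current:-none}" >&2
fi
```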
- macOS (required for `screencapture`)
- LM Studio with a vision-capable model (e.g., `qwen/qwen3-vl-8b`)
```
git clone https://github.com/your-username/hindsight.git
cd hindsight
./install.sh
```
The installer will:
- Check prerequisites (macOS, Node.js, Xcode CLT)
- Clone and build the lmstudio-js SDK
- Prompt for LM Studio API token
- Configure capture interval and other settings
- Install dependencies and build each package
If .env already exists, the installer skips configuration prompts and proceeds directly to installation. This allows re-running the installer after pulling updates.
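The skip logic can be pictured with a small sketch (assumed behavior; `install.sh` is the source of truth). The messages and the temp-dir setup are illustrative:

```shell
# Sketch of the installer's skip-if-configured behavior (assumed logic)
workdir="$(mktemp -d)"
cd "$workdir"
touch .env                               # simulate a prior installation
if [ -f .env ]; then
  msg="Existing .env found; skipping configuration prompts."
else
  msg="No .env; running configuration prompts."
fi
echo "$msg"
```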
Open LM Studio and load a vision-capable model trained for tool use. qwen/qwen3-vl-8b performed well during testing.
```
./hindsight.sh start
```
Navigate to http://localhost:5173 (or your configured `WEB_PORT`) to view the activity stream and chat with the AI about your recent activity.
```
./hindsight.sh start      # Start all services (API, capture, web, plugin)
./hindsight.sh start -v   # Start with verbose API request logging
./hindsight.sh stop       # Stop all services
./hindsight.sh status     # Show service status
./hindsight.sh logs       # Tail log files
```
The web dashboard provides:
- Activity Stream - Real-time feed of your recent activity summaries
- AI Chat - Ask questions about your activity using the loaded LM Studio model
Access it at http://localhost:5173 (default).
The worklog API runs on http://localhost:3000 by default.
```
# Get worklog counts
curl http://localhost:3000/worklogs/counts

# Get worklogs for a time range
curl "http://localhost:3000/worklogs?start=1706745600&end=1706832000"

# Get worklogs for the last hour
curl "http://localhost:3000/worklogs?start=$(($(date +%s) - 3600))&end=$(date +%s)"
```
Configuration is stored in `.env` at the project root:
```
# LM Studio API token (from LM Studio > Settings > Developer)
LM_API_TOKEN=your-token-here

# API server port
PORT=3000

# Screenshot capture interval in minutes
CAPTURE_INTERVAL=5

# Vision model for summarization
VISION_MODEL=qwen/qwen3-vl-8b

# Web dashboard port
WEB_PORT=5173
```
Use `./hindsight.sh start -v` to enable verbose API request logging.
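Since the worklog API's `start` and `end` parameters are Unix timestamps, a small helper can build a query for an arbitrary window. This is a sketch; the `hours` and `url` names are illustrative:

```shell
# Helper sketch: build a worklog query URL for the last N hours
hours=24
end=$(date +%s)                          # now, as a Unix timestamp
start=$((end - hours * 3600))            # N hours earlier
url="http://localhost:3000/worklogs?start=${start}&end=${end}"
echo "$url"
# Run it with:  curl -s "$url"
```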
```
hindsight/
├── install.sh            # Installation script
├── hindsight.sh          # Service manager (start/stop/status)
├── .env                  # Configuration (created by install.sh)
├── .env.example          # Configuration template
├── lmstudio-js/          # SDK (cloned by install.sh)
├── packages/
│   ├── api/              # Express/SQLite worklog server
│   ├── capture-daemon/   # Screenshot capture (bash)
│   ├── image-summarizer/ # LM Studio vision summarization
│   ├── plugin/           # LM Studio plugin
│   └── web/              # React web dashboard
├── data/                 # Runtime data
│   └── screenshots/      # Temporary screenshot storage
└── logs/                 # Service log files
```
Express.js REST API with SQLite database for storing and querying activity logs.
Bash script that captures screenshots at regular intervals and sends them to the image-summarizer.
Node.js CLI that sends screenshots to LM Studio's vision model and posts summaries to the API.
LM Studio plugin that provides tools to query activity logs from within LM Studio chat:
- `available_hindsight_logs` - Get counts of available worklog entries
- `get_hindsight_logs` - Retrieve worklog entries for a date range
React web dashboard with:
- Real-time activity stream (polls API every 30 seconds)
- AI chat interface connected to LM Studio
- Built with Vite for fast development
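The 30-second refresh can be pictured as a simple loop. This is a sketch based on the API section above, not the dashboard's actual code; `poll_once` is an illustrative name:

```shell
# Polling sketch: what the dashboard's 30-second refresh amounts to
interval=30
endpoint="http://localhost:3000/worklogs/counts"
poll_once() {
  # Fall back to a stub payload when the API is unreachable (illustrative)
  curl -s --max-time 5 "$endpoint" 2>/dev/null || echo '{"error":"api unreachable"}'
}
poll_once
# Loop it manually:  while true; do poll_once; sleep "$interval"; done
```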
Make sure LM Studio is running and has loaded a vision model. The capture daemon connects to `ws://127.0.0.1:1234`.
- Check the capture log: `./hindsight.sh logs`
- Verify the vision model is loaded in LM Studio
- Test the image-summarizer directly: `node packages/image-summarizer/dist/index.js /path/to/test.png qwen/qwen3-vl-8b`
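Since the capture pipeline depends on LM Studio's local server at `ws://127.0.0.1:1234`, a quick reachability check can narrow things down. A sketch using `nc` (BSD netcat ships with macOS); the messages are illustrative:

```shell
# Reachability sketch: is anything listening on LM Studio's default port?
if nc -z 127.0.0.1 1234 2>/dev/null; then
  status="LM Studio server reachable on 127.0.0.1:1234"
else
  status="Nothing listening on 127.0.0.1:1234 - start LM Studio's local server"
fi
echo "$status"
```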
- Check if it's running: `./hindsight.sh status`
- Check the API log: `tail -f logs/api.log`
- Verify port 3000 isn't in use: `lsof -i :3000`
- Check if it's running: `./hindsight.sh status`
- Check the web log: `tail -f logs/web.log`
- Verify the web port isn't in use: `lsof -i :5173`
- Make sure the web package was built: `cd packages/web && npm run build`
`better-sqlite3` requires native compilation. Ensure you have:
- Xcode Command Line Tools: `xcode-select --install`
- Python 3: `python3 --version`
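A rough pre-build check for these prerequisites can be scripted; the `missing` variable and messages here are illustrative, and the check simply accumulates whatever isn't found:

```shell
# Pre-build sketch: check the native-compilation prerequisites for better-sqlite3
missing=""
command -v python3 >/dev/null 2>&1 || missing="$missing python3"
xcode-select -p    >/dev/null 2>&1 || missing="$missing xcode-clt"
if [ -z "$missing" ]; then
  echo "native build toolchain looks ok"
else
  echo "missing:$missing" >&2
fi
```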
macOS requires screen recording permission. Go to: System Settings > Privacy & Security > Screen Recording > Enable for Terminal (or your terminal app)
Each package manages its own dependencies independently (no npm workspaces).
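Without workspaces, a full rebuild means looping over the packages yourself. This sketch only echoes what it would do; uncomment the subshell to actually build, and note that skipping `capture-daemon` (plain bash, no npm step) is an assumption:

```shell
# Sketch: install and build each Node package in turn (no npm workspaces)
built=""
for pkg in api image-summarizer plugin web; do
  built="$built $pkg"
  echo "would build packages/$pkg"
  # (cd "packages/$pkg" && npm install && npm run build)   # uncomment to build for real
done
```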
```
# Install and build a specific package
cd packages/api && npm install && npm run build

# Run API in development mode
cd packages/api && npm run dev

# Run web dashboard in development mode
cd packages/web && npm run dev

# Run plugin in development mode
cd packages/plugin && npm run dev

# Run capture daemon manually
bash packages/capture-daemon/capture.sh ./data/screenshots/ 1
```
MIT
