An internal AI-powered knowledge assistant built with Ruby on Rails, React, OpenAI SDK, pgvector, and Ollama (for local inference). It enables organizations to upload internal policies, guidelines, and documents — so employees can easily ask questions like:
“What’s our leave policy?” “How do I submit an expense report?” “What’s our remote work policy?”
Build an internal knowledge chatbot that:
- Learns from uploaded company documents or policies.
- Uses Retrieval-Augmented Generation (RAG) for precise, context-aware answers.
- Runs via OpenAI API or locally with Ollama.
- Integrates seamlessly into existing Rails-based internal tools.
| Layer | Technology | Purpose |
|---|---|---|
| Backend | Ruby on Rails | API and RAG pipeline |
| Frontend | React | Interactive chat interface |
| AI SDK | OpenAI SDK | Embeddings + LLM inference |
| Vector DB | PostgreSQL + pgvector | Semantic document storage |
| Local AI | Ollama | On-device LLM inference option |
```bash
git clone https://github.com/your-org/organization-chatbot.git
cd organization-chatbot
bundle install
yarn install
```

Create a `.env` file:

```
OPENAI_API_KEY=your_openai_api_key
DATABASE_URL=postgres://user:password@localhost:5432/organization_chatbot_development
OLLAMA_HOST=http://localhost:11434
```

Enable the pgvector extension, set up the database, and start the server:

```bash
psql -d postgres -c "CREATE EXTENSION IF NOT EXISTS vector;"
rails db:create db:migrate
rails s
```
Admins upload PDFs, DOCX files, or text documents. The app extracts and chunks text for embeddings.
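The chunking step can be sketched as a simple sliding character window with overlap. This is a naive sketch, not the app's actual implementation; real splitting would respect sentence or paragraph boundaries, and the size/overlap values are illustrative:

```ruby
# Split text into overlapping chunks so context is preserved across
# chunk boundaries. `size` and `overlap` are in characters.
def chunk_text(text, size: 800, overlap: 100)
  raise ArgumentError, "overlap must be smaller than size" if overlap >= size
  return [text] if text.length <= size

  step = size - overlap
  chunks = []
  (0...text.length).step(step) do |start|
    chunks << text[start, size]
    break if start + size >= text.length
  end
  chunks
end
```

Overlap matters because a sentence cut in half at a chunk boundary would otherwise be unrecoverable at retrieval time.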
Each text chunk is embedded using the OpenAI embeddings API (or a local Ollama model), and the resulting vectors are stored in pgvector for semantic search.
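A minimal version of this step, calling the OpenAI REST embeddings endpoint directly with Ruby's standard library (the real app goes through the SDK; the model name and the vector-literal helper are illustrative assumptions):

```ruby
require "net/http"
require "json"

# Fetch an embedding vector for one chunk of text from the OpenAI API.
def embed(text, api_key: ENV["OPENAI_API_KEY"])
  uri = URI("https://api.openai.com/v1/embeddings")
  req = Net::HTTP::Post.new(uri,
    "Content-Type"  => "application/json",
    "Authorization" => "Bearer #{api_key}")
  req.body = { model: "text-embedding-3-small", input: text }.to_json
  res = Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body).dig("data", 0, "embedding")
end

# pgvector accepts vectors as text literals like "[0.1,0.2,...]" on insert,
# which is how the embedding ends up in the vector column.
def to_vector_literal(embedding)
  "[#{embedding.join(',')}]"
end
```

With the pgvector Ruby gem or a vector-typed ActiveRecord attribute, the literal formatting is handled for you; the helper just shows what crosses the wire.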
When a user asks a question, relevant chunks are retrieved based on vector similarity.
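In SQL, pgvector's cosine-distance operator `<=>` does this ranking. The underlying math, plus an illustrative query (the table and column names are assumptions, not from the codebase), looks like:

```ruby
# Cosine similarity between two embedding vectors. pgvector's `<=>`
# operator returns cosine *distance*, i.e. 1 - this value.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

# Illustrative retrieval query: the question's embedding is bound as $1,
# and the closest chunks come back first.
NEAREST_CHUNKS_SQL = <<~SQL
  SELECT content
  FROM document_chunks
  ORDER BY embedding <=> $1
  LIMIT 5
SQL
```

Ordering by distance in the database keeps retrieval fast even with many documents, and an IVFFlat or HNSW index on the vector column speeds it up further.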
The chatbot sends the question + retrieved context to an LLM (OpenAI or Ollama) for a grounded answer.
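Assembling the grounded prompt can be sketched as below; the function name and prompt wording are illustrative, not taken from the codebase:

```ruby
# Combine the user's question with retrieved chunks into one prompt
# that instructs the model to answer only from the supplied context.
def build_rag_prompt(question, chunks)
  context = chunks.map.with_index(1) { |c, i| "[#{i}] #{c}" }.join("\n\n")
  <<~PROMPT
    Answer the question using only the context below. If the answer
    is not in the context, say you don't know.

    Context:
    #{context}

    Question: #{question}
  PROMPT
end
```

The same prompt string can be sent to either backend: the OpenAI chat completions API, or Ollama's local HTTP API at `OLLAMA_HOST`.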
🔜 Multi-user access control
🔜 Slack / Teams integration
🔜 Fine-tuned company-specific model support