# 🧠 LLM-Powered FAQ Assistant (RAG)

An AI-powered FAQ assistant that answers user questions from a custom document or knowledge base using Retrieval-Augmented Generation (RAG).
## 🚀 What It Does

The system:

- Ingests FAQ documents and structured text data
- Converts content into semantic embeddings using Hugging Face models
- Stores vectors in ChromaDB for efficient similarity search
- Retrieves relevant context for each user query
- Generates grounded answers with an open-source LLM
- Serves responses through a simple Streamlit web interface
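The retrieve-then-generate flow above can be sketched in plain Python. To keep the example self-contained, the embedding step is faked with a toy bag-of-words vector; a real build would call a Hugging Face sentence-embedding model instead, and the function names here are illustrative, not the project's actual API:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    # Stand-in for a real Hugging Face sentence-embedding model.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, context: list[str]) -> str:
    # Ground the LLM: it must answer from the retrieved context only.
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


faq = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday to Friday, 9am to 5pm.",
]
context = retrieve("How long do refunds take?", faq)
prompt = build_prompt("How long do refunds take?", context)
```

The grounded prompt is then sent to the LLM; only the embedding model and vector store change between this sketch and the real pipeline.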
## ⚙️ Tech Stack
- Python
- Hugging Face Transformers
- Open-source LLMs
- ChromaDB (Vector Database)
- Streamlit
## 🎯 Purpose

This assistant helps users quickly navigate and query large FAQ or documentation sets by combining semantic search with LLM-based response generation.