yrarjun59/FAQ-Assistant

🧠 LLM-Powered FAQ Assistant (RAG)

This project builds an AI-powered FAQ assistant that answers user questions over a custom document or knowledge base using Retrieval-Augmented Generation (RAG).

🚀 What It Does

The system:

  • Ingests FAQ documents and structured text data
  • Converts content into semantic embeddings using Hugging Face models
  • Stores vectors in ChromaDB for efficient similarity search
  • Retrieves relevant context based on user queries
  • Generates grounded answers using an open-source LLM
  • Serves responses through a simple Streamlit web interface

⚙️ Tech Stack

  • Python
  • Hugging Face Transformers
  • Open-source LLMs
  • ChromaDB (Vector Database)
  • Streamlit

🎯 Purpose

This assistant helps users quickly navigate and query large FAQ or documentation sets by combining semantic search with LLM-based answer generation.
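Grounding the LLM's answer in retrieved context typically comes down to prompt assembly: the retrieved passages are stitched into the prompt ahead of the user's question. The sketch below shows one common pattern; the prompt wording and the `build_prompt` name are assumptions for illustration, not code from the repository:

```python
# Assemble a grounded prompt: retrieved context first, then the
# question, with an instruction to answer only from that context.
def build_prompt(question: str, contexts: list[str]) -> str:
    context_block = "\n".join(f"- {c}" for c in contexts)
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "How do I reset my password?",
    ["Passwords can be reset from the account settings page."],
)
print(prompt)
```

The resulting string is what gets sent to the open-source LLM, so the model's answer stays tied to the FAQ content rather than its general training data.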

About

RAG-based FAQ assistant using open-source LLMs, semantic embeddings, and vector search for context-aware question answering
