vavilovnv/simple-ai-agent


🤖 A simple AI agent with web search

A simple AI agent with two AI assistants "under the hood": a regular one and a "thinking" one. Both assistants can perform web search via the Tavily API. The assistants are managed by a supervisor (judge) model, which decides which assistant a user request should be routed to.

To run the application, you need an LLM that supports the OpenAI API (the application uses LangChain's OpenAI integration).

You can use either a remote or a local model. For instance, for a local setup, you can use openai/gpt-oss-20b served by LM Studio.

💡 How does an AI agent work?

The LangGraph supervisor determines which AI assistant will answer the user's question and routes the question accordingly:

  • A regular AI assistant handles simple questions.
  • Questions that require analysis and reasoning are forwarded to a thinking AI assistant.

The AI assistant generates a response, calling the Tavily API to obtain missing data if necessary.

The generated result is returned to the supervisor, and from there to the user.
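The routing described above can be sketched in plain Python. This is an illustrative stand-in, not the repository's actual code: the real project presumably wires this up as a LangGraph supervisor graph, and the `judge`, `web_search`, and assistant functions below are hypothetical simplifications.

```python
# Minimal sketch of the supervisor pattern: a judge routes each question
# to either a fast "regular" assistant or a slower "thinking" assistant.
# All names here are illustrative stand-ins, not the repo's real code.

REASONING_MARKERS = ("why", "compare", "analyze", "prove", "step by step")

def judge(question: str) -> str:
    """Decide which assistant should handle the question."""
    q = question.lower()
    if any(marker in q for marker in REASONING_MARKERS):
        return "thinking"
    return "regular"

def web_search(query: str) -> str:
    # Stand-in for a Tavily API call (the real app queries Tavily here).
    return f"[search results for: {query}]"

def regular_assistant(question: str) -> str:
    return f"Quick answer using {web_search(question)}"

def thinking_assistant(question: str) -> str:
    return f"Reasoned answer using {web_search(question)}"

def supervisor(question: str) -> str:
    # Route the question, call the chosen assistant, return its answer.
    route = judge(question)
    assistant = thinking_assistant if route == "thinking" else regular_assistant
    return assistant(question)

print(judge("What is the capital of France?"))   # regular
print(judge("Why does ice float? Analyze it."))  # thinking
```

In the real application the judge is itself an LLM call rather than a keyword check, but the control flow is the same: classify, dispatch, and return the assistant's answer through the supervisor.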

Setting up and starting a chat with a model:

1. Install the `uv` package manager:

   ```shell
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Install dependencies:

   ```shell
   uv sync
   ```

3. Based on the `.env_example` file, create a `.env` file and fill in the variable values.

4. Run `main.py`:

   ```shell
   python main.py
   ```
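For reference, a filled-in `.env` might look roughly like this. The variable names below are assumptions based on typical LangChain + Tavily setups; the authoritative list lives in `.env_example`.

```shell
# Hypothetical example; check .env_example for the actual variable names.
OPENAI_API_KEY=sk-placeholder              # any value works for a local server
OPENAI_BASE_URL=http://localhost:1234/v1   # e.g. LM Studio's local endpoint
TAVILY_API_KEY=tvly-placeholder            # from https://tavily.com
```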
