A simple AI agent with two AI assistants "under the hood": a regular one and a "thinking" one. Both assistants can search the web via the Tavily API. A judge model supervises the assistants and decides which of them should handle the user's request.
To run the application, you need an LLM that supports the OpenAI API (the application uses LangChain's OpenAI integration).
You can use either an online or a local model. For instance, for a local setup, you can serve openai/gpt-oss-20b with the LM Studio server.
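Any OpenAI-compatible backend (hosted or local) accepts the same chat-completions request body. A minimal sketch of that request shape, assuming LM Studio's default local endpoint (the URL is an assumption; adjust it to your setup):

```python
import json

# LM Studio's default OpenAI-compatible endpoint (an assumption -- change
# the host/port if your server is configured differently).
LOCAL_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_chat_request(user_message: str, model: str = "openai/gpt-oss-20b") -> dict:
    """Build the chat-completions payload any OpenAI-compatible server expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# Serialized body that would be POSTed to LOCAL_ENDPOINT.
request_json = json.dumps(build_chat_request("Hello"))
```

Pointing LangChain's OpenAI integration at the same base URL (`http://localhost:1234/v1`) lets the agent talk to the local model without code changes.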
The LangGraph supervisor determines which AI assistant will answer the user's question and routes the question accordingly:
- A regular AI assistant handles simple questions.
- Questions that require analysis and reasoning are forwarded to a thinking AI assistant.
The AI assistant generates a response, calling the Tavily API to obtain missing data if necessary.
The generated result is returned to the supervisor, and from there to the user.
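The routing flow above can be sketched in plain Python. This is an illustrative approximation only: the real project uses an LLM judge and a LangGraph graph, not the keyword heuristic below, and the assistant names are assumptions:

```python
# Illustrative sketch of the supervisor's routing decision. The keyword
# heuristic stands in for the LLM judge; the real project asks a model
# to classify the question instead.
def route(question: str) -> str:
    """Pick which assistant should handle the question."""
    reasoning_markers = ("why", "compare", "analyze", "explain")
    if any(marker in question.lower() for marker in reasoning_markers):
        return "thinking_assistant"  # questions needing analysis/reasoning
    return "regular_assistant"       # simple questions

def handle(question: str) -> str:
    """Supervisor loop: route, let the assistant answer, return the result."""
    assistant = route(question)
    # The chosen assistant may call the Tavily API here for missing data
    # (omitted in this sketch); its answer flows back through the supervisor.
    return f"[{assistant}] answer to: {question}"
```

In the actual agent, `route` is replaced by a judge-model call and each assistant is an LLM node in the LangGraph graph.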
- Install the uv package manager:

curl -LsSf https://astral.sh/uv/install.sh | sh

- Install dependencies:

uv sync

- Based on the .env_example file, create a .env file and fill in the variable values.
- Run main.py:
python main.py
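For reference, the .env file will look roughly like the sketch below. The exact variable names come from .env_example in the repository; the names and values here are assumptions for illustration only:

```
# Hypothetical .env sketch -- check .env_example for the real variable names.
OPENAI_API_KEY=sk-...            # or any placeholder for a local LM Studio server
OPENAI_BASE_URL=http://localhost:1234/v1
TAVILY_API_KEY=tvly-...          # required for web search
```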