EAA is a Python toolkit for building experiment-facing agents around a shared set
of task-manager, tool, memory, and WebUI primitives. The repository is organized
as a workspace with multiple installable packages, currently eaa-core and
eaa-imaging.
- Core agent runtime: `packages/eaa-core/src/eaa_core/task_manager/base.py`
- Reusable built-in graphs: chat and feedback loop
- Concrete workflows: ROI search, feature tracking, parameter tuning, Bayesian
  optimization, and analytical task managers under
  `packages/eaa-*/src/eaa_*/task_manager/`
- Tool system: `BaseTool`, serialized execution, optional approval gates, MCP
  server/client helpers
- Long-term memory: optional chat-memory layer configured through
  `MemoryManagerConfig`
- WebUI: FastAPI + static frontend, connected to the agent through a shared
  SQLite database
```bash
uv sync
source .venv/bin/activate
which python
```

`which python` should resolve to `.venv/bin/python`.

This installs the workspace members `eaa-core` and `eaa-imaging` into the
repository-local environment as editable packages.
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e packages/eaa-core -e packages/eaa-imaging
```

For published packages, users would typically install:

```bash
pip install eaa-core eaa-imaging
```

The smallest useful setup is a task manager, an LLM config, and one or more
tools. This example starts a free-form chat with a simulated image-acquisition
tool:
```python
from skimage import data

from eaa_core.api.llm_config import OpenAIConfig
from eaa_core.task_manager.base import BaseTaskManager
from eaa_imaging.tool.imaging.acquisition import SimulatedAcquireImage

llm_config = OpenAIConfig(
    model="your-model-name",
    base_url="https://api.openai.com/v1",
    api_key="your-api-key",
)

acquisition_tool = SimulatedAcquireImage(
    whole_image=data.camera(),
    add_axis_ticks=True,
    show_image_in_real_time=False,
)

task_manager = BaseTaskManager(
    llm_config=llm_config,
    tools=[acquisition_tool],
    session_db_path="session.sqlite",
    use_webui=False,
)

task_manager.run_conversation()
```

For a workflow-oriented manager, see `examples/roi_search.py`.
- `LLMConfig` objects describe how the chat model is constructed. The shipped
  config classes are `OpenAIConfig`, `AskSageConfig`, and `ArgoConfig`.
- `BaseTaskManager` owns model invocation, tool registration, memory,
  persistence, and graph execution.
- `SerialToolExecutor` runs tool calls one at a time. This is intentional:
  many experiment tools are stateful, should not be driven concurrently, or
  are not thread-safe.
- `MemoryManager` adds optional long-term memory retrieval/saving on chat
  turns.
- The WebUI is a separate FastAPI process. It communicates with the agent
  through the same SQLite database used for WebUI relay state and
  checkpointing.
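The serialized execution model can be pictured as a single worker thread draining a queue. The sketch below is illustrative only; the class and method names are not the actual `SerialToolExecutor` API:

```python
import queue
import threading


class SerialExecutorSketch:
    """Runs submitted callables strictly one at a time on a single worker
    thread, so stateful instrument tools are never driven concurrently."""

    def __init__(self):
        self._queue = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            func, args, done = self._queue.get()
            try:
                done["result"] = func(*args)
            finally:
                done["event"].set()

    def submit(self, func, *args):
        # Block until this call finishes; later submissions wait their turn.
        done = {"event": threading.Event(), "result": None}
        self._queue.put((func, args, done))
        done["event"].wait()
        return done["result"]


executor = SerialExecutorSketch()
result = executor.submit(lambda x: x * 2, 21)
```

Because every `submit()` funnels through one queue and one worker, two tool calls can never overlap, which is the property the real executor relies on for non-thread-safe hardware drivers.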
The reusable graphs shipped in the base runtime are:
- `chat_graph` for interactive conversation
- `feedback_loop_graph` for iterative tool-driven workflows
`build_task_graph()` is available as a subclass hook for custom LangGraph
workflows, but the task managers currently in this repository mostly either:

- reuse `run_feedback_loop()` / `run_conversation()`, or
- implement an analytical workflow directly in Python while still updating
  message history and WebUI state through task-manager helpers
Set `use_webui=True` on the task manager and give it a `session_db_path`. Then
launch the standalone WebUI process against the same SQLite file:

```python
from eaa_core.gui.chat import run_webui, set_message_db_path

set_message_db_path("session.sqlite")
run_webui(host="127.0.0.1", port=8008)
```

Checkpointing and the WebUI relay share the same SQLite database by default.
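At bottom, the agent/WebUI handshake is two processes reading and writing the same SQLite file. A minimal sketch of that relay pattern follows; the table and column names here are illustrative, not EAA's actual schema:

```python
import sqlite3


def open_relay(db_path):
    # Both the agent process and the WebUI process open the same file path.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages "
        "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
    )
    conn.commit()
    return conn


def post(conn, role, content):
    # Writer side: append a message row.
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()


def read_since(conn, last_id):
    # Reader side: poll for rows newer than the last one seen.
    return conn.execute(
        "SELECT id, role, content FROM messages WHERE id > ? ORDER BY id",
        (last_id,),
    ).fetchall()


# Demo in one process; across processes you would use a real path
# such as "session.sqlite" instead of ":memory:".
conn = open_relay(":memory:")
post(conn, "assistant", "scan complete")
rows = read_since(conn, 0)
```

Polling a shared database avoids any direct socket between the agent and the UI, which is why the two can be started and stopped independently.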
Each resume entrypoint also accepts `checkpoint_db_path` if you need to load
checkpoints from a different SQLite file. The base task manager exposes:

- `run_conversation_from_checkpoint()`
- `run_feedback_loop_from_checkpoint()`
- `run_from_checkpoint()` for subclasses that implement `task_graph`
Long-term memory is configured with `MemoryManagerConfig`. In the current
codebase, the built-in memory manager creates a Chroma-backed vector store and
can:
- retrieve relevant memories on chat turns
- inject retrieved memories back into the model context
- save new memories when a user message contains trigger phrases such as
  "remember this" or "keep in mind"
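Trigger-phrase detection of this kind reduces to a case-insensitive substring check. A minimal sketch, assuming a simple phrase list (the exact phrases and matching logic inside `MemoryManager` may differ):

```python
# Illustrative subset; the real trigger list may be longer.
MEMORY_TRIGGERS = ("remember this", "keep in mind")


def should_save_memory(user_message: str) -> bool:
    """Return True if the message contains a save-memory trigger phrase."""
    text = user_message.lower()
    return any(phrase in text for phrase in MEMORY_TRIGGERS)
```

A lowercase substring match keeps the check cheap enough to run on every chat turn without touching the vector store.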
The `postgresql_vector_store` extra is present in `pyproject.toml`, but the
built-in `MemoryManager` path in this repository currently wires up Chroma.
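Conceptually, retrieval on a chat turn is nearest-neighbor search over embedded memories. A toy sketch with bag-of-words vectors and cosine similarity — the real path uses Chroma and a proper embedding model, so everything below is a stand-in:

```python
import math
from collections import Counter


def embed(text):
    # Toy embedding: bag-of-words token counts (stand-in for a real model).
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(query, memories, k=1):
    """Return the k stored memories most similar to the query."""
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]


memories = [
    "the detector gain is 2.5",
    "the sample stage drifts after 2 hours",
]
top = retrieve("what was the detector gain?", memories)
```

The retrieved text is what gets injected back into the model context before the next model call.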
EAA supports both directions of MCP integration.
Expose EAA tools as an MCP server:
```python
from eaa_core.tool.mcp_server import run_mcp_server_from_tools
from eaa_core.tool.example_calculator import CalculatorTool

run_mcp_server_from_tools(
    tools=CalculatorTool(),
    server_name="Calculator MCP Server",
)
```

Use an external MCP server as a normal EAA tool:
```python
from eaa_core.tool.mcp_client import MCPTool

mcp_tool = MCPTool(
    {
        "mcpServers": {
            "remote_tools": {
                "command": "python",
                "args": ["./path/to/server.py"],
            }
        }
    }
)
```

Use an MCP server over HTTP from another machine:
```python
from eaa_core.tool.mcp_client import MCPTool

mcp_tool = MCPTool(
    {
        "mcpServers": {
            "calculator": {
                "url": "http://SERVER_IP:8050/mcp",
                "transport": "http",
            }
        }
    }
)
```

For this remote HTTP setup, the server side must be started with
`run_mcp_server_from_tools(..., transport="http", host="0.0.0.0", port=8050, path="/mcp")`.
The client config must keep the server definition under `mcpServers`; passing
only `{"url": ..., "transport": "http"}` is not enough.
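Because a flat config fails silently only later, it can help to check the nesting before constructing the tool. This validator is a hypothetical helper, not part of EAA:

```python
def validate_mcp_config(config: dict) -> None:
    """Raise ValueError unless server definitions sit under 'mcpServers'."""
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError(
            "MCP config must nest server definitions under 'mcpServers'"
        )
    for name, spec in servers.items():
        # Each server needs either a local command or a remote URL.
        if "command" not in spec and "url" not in spec:
            raise ValueError(f"server {name!r} needs a 'command' or 'url' entry")


# The nested form from the examples above passes:
validate_mcp_config(
    {
        "mcpServers": {
            "calculator": {"url": "http://SERVER_IP:8050/mcp", "transport": "http"}
        }
    }
)
```

Passing the flat `{"url": ..., "transport": "http"}` dict to this check raises immediately, mirroring the constraint described above.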
Skills are reusable, markdown-first task packages that EAA can discover and
load at runtime. In the current implementation, each skill is a directory with
at least a `SKILL.md` file. Additional markdown files and referenced images can
live alongside it.
Bundled skills live under `packages/eaa-core/src/eaa/skills/` and
`packages/eaa-imaging/src/eaa/skills/`. A typical layout looks like:

```text
my-skill/
  SKILL.md
  references/
    api_reference.md
    figure.png
```
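Discovering skills in this layout amounts to a recursive scan for `SKILL.md` manifests. A minimal sketch, assuming the directory name doubles as the skill name (EAA's actual loader may differ in details):

```python
import tempfile
from pathlib import Path


def discover_skills(skill_dirs):
    """Map skill name (its directory name) to the path of its SKILL.md."""
    skills = {}
    for root in skill_dirs:
        root = Path(root).expanduser()
        if not root.is_dir():
            continue  # quietly skip missing directories
        for manifest in sorted(root.rglob("SKILL.md")):
            skills[manifest.parent.name] = manifest
    return skills


# Demo against a throwaway directory shaped like the layout above.
with tempfile.TemporaryDirectory() as tmp:
    skill = Path(tmp) / "my-skill"
    (skill / "references").mkdir(parents=True)
    (skill / "SKILL.md").write_text("# My skill")
    found = discover_skills([tmp])
```

Scanning with `rglob` means skills can be nested arbitrarily deep under each configured directory.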
To use skills, point a task manager at one or more skill directories:
```python
task_manager = BaseTaskManager(
    llm_config=llm_config,
    tools=[acquisition_tool],
    skill_dirs=["./packages/eaa-imaging/src/eaa/skills", "~/.eaa_skills"],
)
```

At build time, EAA scans those directories for `SKILL.md`, turns each skill
into a documentation-fetching tool, and makes the discovered skills available
inside the agent. In an interactive chat session you can:

- run `/skill` to list the loaded skills
- run `/subtask <task description>` to let the agent choose a skill, fetch its
  docs, and launch a skill-driven subtask flow
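Slash-command handling of this sort can be pictured as simple prefix dispatch on user input. This parser is illustrative only; the real chat loop lives inside the task manager:

```python
def parse_command(user_input: str):
    """Split a chat line into (command, argument), or (None, text) for an
    ordinary chat message."""
    text = user_input.strip()
    if not text.startswith("/"):
        return None, text
    command, _, argument = text[1:].partition(" ")
    return command, argument.strip()


# "/skill" carries no argument; "/subtask" carries a task description.
cmd, arg = parse_command("/subtask align the beam on the sample")
```

A command with no argument, like `/skill`, parses to an empty argument string, so the dispatcher can treat both forms uniformly.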
If you want to copy the bundled skills out of the package tree, use:
```bash
python -m eaa_core.cli install-skills --destination ~/.eaa_skills
```

Sphinx documentation lives under `docs/` and is configured for Read the Docs.
Build it locally with:

```bash
uv sync --extra docs
source .venv/bin/activate
cd docs
make html
```

The generated site will be in `docs/_build/html/`.