A provider-portable LLM toolkit with structured streaming, model registries, cross-provider message normalization, and an optional stateful agent runtime.
- agentic-kit — low-level portability layer with model descriptors, registries, structured event streams, and compatibility helpers
- @agentic-kit/agent — minimal stateful runtime with sequential tool execution and lifecycle events
- @agentic-kit/ollama — adapter for local Ollama inference
- @agentic-kit/anthropic — adapter for Anthropic Claude models
- @agentic-kit/openai — generalized adapter for OpenAI-compatible chat completion APIs
To build and test from source:

```bash
git clone git@github.com:constructive-io/agentic-kit.git
cd agentic-kit
pnpm install
pnpm build
pnpm test
```

A minimal completion call through the core package:

```ts
import { complete, getModel } from "agentic-kit";

const model = getModel("openai", "gpt-4o-mini");
const message = await complete(model!, {
  messages: [{ role: "user", content: "Hello", timestamp: Date.now() }],
});

console.log(message.content);
```

See the individual package READMEs for docs and local dev instructions.
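The same call shape is intended to stay portable across the adapter packages listed above: only the provider and model ids passed to `getModel` change. The sketch below is illustrative rather than taken from the package docs; the "anthropic" provider id and the Claude model id are assumptions, not confirmed registry keys.

```ts
import { complete, getModel } from "agentic-kit";

// Same request, different backend. "anthropic" / "claude-3-5-haiku-latest"
// are assumed ids here, not values confirmed by the package docs.
const request = {
  messages: [{ role: "user" as const, content: "Hello", timestamp: Date.now() }],
};

const openaiModel = getModel("openai", "gpt-4o-mini");
const claudeModel = getModel("anthropic", "claude-3-5-haiku-latest");

if (openaiModel) console.log((await complete(openaiModel, request)).content);
if (claudeModel) console.log((await complete(claudeModel, request)).content);
```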
Default tests stay deterministic and local:
```bash
pnpm test
```

There is also a local-only Ollama live lane that does not hit hosted providers. The default root command runs the fast smoke tier:

```bash
OLLAMA_LIVE_MODEL=qwen3.5:4b pnpm test:live:ollama
```

Run the broader lane explicitly when you want slower behavioral coverage:

```bash
OLLAMA_LIVE_MODEL=qwen3.5:4b pnpm test:live:ollama:extended
```

The Ollama live script performs a preflight against OLLAMA_BASE_URL and exits cleanly if the local server or requested model is unavailable. If nomic-embed-text:latest is installed, the lane also exercises local embedding generation.
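When the lane skips and you want to see why, the preflight is easy to reproduce by hand. The snippet below is a minimal sketch of such a check, not the script's actual implementation; it assumes Node 18+ for `fetch` and Ollama's standard `/api/tags` endpoint on the default port.

```ts
// Rough manual preflight: is the Ollama server reachable, and is the
// requested model pulled? The default base URL is an assumption.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";
const model = process.env.OLLAMA_LIVE_MODEL ?? "qwen3.5:4b";

const res = await fetch(`${baseUrl}/api/tags`).catch(() => undefined);
if (!res?.ok) {
  console.log(`Ollama not reachable at ${baseUrl}; the live lane would skip.`);
  process.exit(0);
}

const { models } = (await res.json()) as { models: { name: string }[] };
console.log(
  models.some((m) => m.name === model)
    ? `${model} is available.`
    : `${model} is not pulled; try \`ollama pull ${model}\`.`
);
```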