Milestone: v0.2.0 | Tier: Medium | Effort: Medium
## Problem

To use agent-kernel as middleware in an LLM pipeline, developers must manually translate between `Capability` objects and provider-specific tool schemas. This is tedious, error-prone, and blocks adoption in the two most popular LLM ecosystems.

- OpenAI uses `{type: "function", name, description, parameters}` with optional `strict` mode and namespaces.
- Anthropic uses `{name, description, input_schema}` with `cache_control` support.

Both formats are trivially derivable from `Capability` metadata, but nobody wants to write the boilerplate.
## Proposed Change

### 1. Schema adapters (`src/agent_kernel/adapters/`)

OpenAI adapter (`openai.py`):
```python
def capabilities_to_tools(caps: list[Capability]) -> list[dict]:
    """Convert Capabilities to OpenAI function tool definitions."""

def tool_call_to_request(tool_call: dict) -> CapabilityRequest:
    """Convert an OpenAI tool_call to a CapabilityRequest."""

def format_result(result: Any) -> dict:
    """Format kernel invoke() result as OpenAI function_call_output."""
```
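A minimal sketch of the conversion, assuming `Capability` carries `id`, `description`, and a JSON-schema `parameters` dict (the stand-in dataclass and its field names are assumptions, not the final model; the sketch also threads a `call_id` through `format_result`, which the final API may handle differently):

```python
import json
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Capability:
    """Stand-in for agent-kernel's Capability; field names are assumed."""
    id: str
    description: str
    parameters: dict[str, Any] = field(default_factory=dict)

def capabilities_to_tools(caps: list[Capability]) -> list[dict]:
    """Convert Capabilities to OpenAI function tool definitions."""
    return [
        {
            "type": "function",
            # OpenAI tool names may not contain dots; section 3's namespace
            # mapping is the richer alternative to this flat substitution.
            "name": cap.id.replace(".", "_"),
            "description": cap.description,
            "parameters": cap.parameters or {"type": "object", "properties": {}},
        }
        for cap in caps
    ]

def format_result(result: Any, call_id: str) -> dict:
    """Format a kernel invoke() result as a function_call_output item."""
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": result if isinstance(result, str) else json.dumps(result),
    }
```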
Anthropic adapter (`anthropic.py`):
```python
def capabilities_to_tools(caps: list[Capability]) -> list[dict]:
    """Convert Capabilities to Anthropic tool definitions."""

def tool_use_to_request(tool_use_block: dict) -> CapabilityRequest:
    """Convert an Anthropic tool_use block to a CapabilityRequest."""

def format_result(result: Any, tool_use_id: str) -> dict:
    """Format kernel invoke() result as Anthropic tool_result."""
```
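The Anthropic side is the same transform with different key names; a sketch under the same assumed `Capability` shape (the stand-in dataclass is illustrative):

```python
import json
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Capability:
    """Stand-in for agent-kernel's Capability; field names are assumed."""
    id: str
    description: str
    parameters: dict[str, Any] = field(default_factory=dict)

def capabilities_to_tools(caps: list[Capability]) -> list[dict]:
    """Convert Capabilities to Anthropic tool definitions."""
    return [
        {
            "name": cap.id.replace(".", "_"),  # Anthropic tool names also disallow dots
            "description": cap.description,
            "input_schema": cap.parameters or {"type": "object", "properties": {}},
        }
        for cap in caps
    ]

def format_result(result: Any, tool_use_id: str) -> dict:
    """Format a kernel invoke() result as an Anthropic tool_result block."""
    return {
        "type": "tool_result",
        "tool_use_id": tool_use_id,
        "content": result if isinstance(result, str) else json.dumps(result),
    }
```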
### 2. Full middleware pipeline hooks
```python
class OpenAIMiddleware:
    """Drop-in middleware for OpenAI Responses/Chat Completions API."""

    def __init__(self, kernel: Kernel, principal: Principal): ...

    def get_tools(self) -> list[dict]:
        """Get OpenAI-formatted tools from kernel registry."""

    async def handle_tool_calls(self, response_output: list[dict]) -> list[dict]:
        """Process tool calls through kernel pipeline, return function_call_outputs."""

    def intercept_tool_call(self, callback: Callable) -> None:
        """Register pre-invocation hook (logging, metrics, approval gates)."""

    def intercept_tool_result(self, callback: Callable) -> None:
        """Register post-invocation hook (result transformation, caching)."""
```
An equivalent `AnthropicMiddleware` class mirrors this interface.
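To make the hook ordering concrete, here is a runnable sketch of `handle_tool_calls`; the kernel is modeled as a plain dict of async callables keyed by tool name, which is an assumption standing in for the real `Kernel` pipeline (as is the flat Responses-style tool-call shape):

```python
import asyncio
import json
from typing import Callable

class OpenAIMiddleware:
    """Sketch only: the real class would wrap Kernel/Principal; here the
    'kernel' is just a dict mapping tool names to async callables."""

    def __init__(self, kernel: dict[str, Callable], principal: str) -> None:
        self.kernel = kernel
        self.principal = principal
        self._pre_hooks: list[Callable] = []
        self._post_hooks: list[Callable] = []

    def intercept_tool_call(self, callback: Callable) -> None:
        self._pre_hooks.append(callback)

    def intercept_tool_result(self, callback: Callable) -> None:
        self._post_hooks.append(callback)

    async def handle_tool_calls(self, response_output: list[dict]) -> list[dict]:
        outputs = []
        for call in (c for c in response_output if c.get("type") == "function_call"):
            for hook in self._pre_hooks:       # pre-invocation hooks run first
                hook(call)
            args = json.loads(call.get("arguments") or "{}")
            result = await self.kernel[call["name"]](**args)
            for hook in self._post_hooks:      # post hooks may transform the result
                result = hook(result)
            outputs.append({
                "type": "function_call_output",
                "call_id": call["call_id"],
                "output": json.dumps(result),
            })
        return outputs
```

The ordering guarantee in the acceptance criteria falls out directly: every pre-invocation hook fires before the kernel call, every post-invocation hook after it, per tool call.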
### 3. Namespace mapping (OpenAI)

Map dot-notation capability IDs to OpenAI's namespace format:

`billing.list_invoices` → namespace `billing`, function `list_invoices`
### 4. No SDK dependency

Adapters work with plain dicts — no dependency on the `openai` or `anthropic` packages. They transform data structures only.
## Acceptance Criteria

- `Capability` → OpenAI schema → `tool_call` → `CapabilityRequest` preserves all fields
- `Capability` → Anthropic schema → `tool_use` → `CapabilityRequest` preserves all fields
- `OpenAIMiddleware.handle_tool_calls()` processes tool calls through the full kernel pipeline
- `intercept_tool_call()` and `intercept_tool_result()` hooks execute in the correct order
- No dependency on the `openai` or `anthropic` packages
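The round-trip criterion hides a subtlety worth noting: replacing dots with underscores in tool names is not invertible (function names already contain underscores), so a lossless reverse direction likely needs a name-to-id map. A sketch of that idea, using stand-in `Capability`/`CapabilityRequest` types and a stateful adapter variant (both assumptions, since the proposal specifies free functions):

```python
import json
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Capability:
    """Stand-in for agent-kernel's Capability; field names are assumed."""
    id: str
    description: str
    parameters: dict[str, Any] = field(default_factory=dict)

@dataclass
class CapabilityRequest:
    """Stand-in for agent-kernel's CapabilityRequest; field names are assumed."""
    capability_id: str
    arguments: dict[str, Any]

class OpenAIAdapter:
    """Remembers the tool-name -> capability-id mapping so that
    tool_call_to_request can recover the original dotted id losslessly."""

    def __init__(self) -> None:
        self._name_to_id: dict[str, str] = {}

    def capabilities_to_tools(self, caps: list[Capability]) -> list[dict]:
        tools = []
        for cap in caps:
            name = cap.id.replace(".", "_")
            self._name_to_id[name] = cap.id
            tools.append({
                "type": "function",
                "name": name,
                "description": cap.description,
                "parameters": cap.parameters or {"type": "object", "properties": {}},
            })
        return tools

    def tool_call_to_request(self, tool_call: dict) -> CapabilityRequest:
        return CapabilityRequest(
            capability_id=self._name_to_id[tool_call["name"]],
            arguments=json.loads(tool_call.get("arguments") or "{}"),
        )
```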
## Affected Files

- `src/agent_kernel/adapters/__init__.py` (new)
- `src/agent_kernel/adapters/openai.py` (new)
- `src/agent_kernel/adapters/anthropic.py` (new)
- `tests/test_adapters.py` (new — round-trip and middleware tests)
- `docs/integrations.md` (usage examples)