# Agent Patterns 🎭

Practical, runnable implementations of constrained multi-agent reasoning patterns.

Most multi-agent frameworks focus on orchestration — how to route between agents. This repo focuses on cognition — how to structure agent thinking so the outputs are reliable, auditable, and resistant to the failure modes that make AI dangerous in high-stakes domains.

Each pattern constrains agents into specific cognitive roles, inspired by established human decision-making frameworks. The result: structured disagreement, forced perspective-taking, and explicit tradeoff analysis — things a single LLM call can't reliably do.


## Patterns

| Pattern | Inspired By | What It Does | Best For |
|---|---|---|---|
| **Six Thinking Hats** | Edward de Bono | 6 parallel agents evaluate from distinct cognitive angles (facts, emotions, risks, benefits, creativity, process) | Product decisions, feature evaluation, strategy review |
| **Pre-Mortem** | Gary Klein | Agents assume failure has already happened and work backward to identify causes | Launch readiness, risk assessment, project planning |
| **Dialectic** | Hegelian dialectic | Thesis → Antithesis → Synthesis agents force genuine intellectual tension before resolution | Complex tradeoffs, architecture decisions, policy evaluation |
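The Dialectic flow can be sketched as a three-step chain. This is an illustrative sketch, not the repo's actual implementation; `generate` stands in for any prompt-to-text callable such as a wrapped LLM client:

```python
def run_dialectic(prompt: str, generate) -> dict:
    """Chain thesis -> antithesis -> synthesis over one question.

    `generate` is assumed to be any callable mapping a prompt string
    to a text response (e.g. an LLM call); it is not an API from
    this repo.
    """
    thesis = generate(f"Argue the strongest case FOR: {prompt}")
    antithesis = generate(
        f"Argue the strongest case AGAINST this position:\n{thesis}"
    )
    synthesis = generate(
        "Resolve the genuine tension between these positions:\n"
        f"Thesis: {thesis}\nAntithesis: {antithesis}"
    )
    return {"thesis": thesis, "antithesis": antithesis, "synthesis": synthesis}
```

Because each stage only sees the prior stage's output, disagreement is built into the control flow rather than left to the model's discretion.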

## Why Constrained Agents?

The default approach to multi-agent systems is "give each agent a persona and let them chat." This fails in predictable ways:

  1. Convergence bias — agents agree too quickly because the underlying LLM optimizes for coherence
  2. Role collapse — the "devil's advocate" agent stops actually advocating after 2-3 turns
  3. Audit opacity — you can't trace why the system reached a conclusion

Constrained patterns fix this by making each agent's cognitive role structural, not just suggested. The Six Thinking Hats pattern doesn't simply ask an agent to consider emotions: only the Red Hat agent is allowed to process emotional/intuitive reactions, and its output schema enforces that constraint.
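A minimal sketch of what a schema-enforced role constraint can look like, using Pydantic. The field names here are assumptions for illustration, not the repo's actual schemas:

```python
from typing import List

from pydantic import BaseModel, ValidationError


class RedHatOutput(BaseModel):
    """Red Hat: emotional/intuitive reactions only."""

    gut_reaction: str
    emotional_concerns: List[str]

    class Config:
        # Reject any field outside this role, e.g. risk analysis,
        # which belongs to the Black Hat agent's schema.
        extra = "forbid"


# A valid, in-role output passes validation:
ok = RedHatOutput(gut_reaction="uneasy", emotional_concerns=["team fatigue"])

# An out-of-role field (risk analysis) is rejected outright:
try:
    RedHatOutput(gut_reaction="fine", emotional_concerns=[], risks=["churn"])
except ValidationError:
    print("off-role output rejected")
```

Rejecting off-role fields at the schema layer means role collapse shows up as a validation error instead of silently degrading the conversation.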

This is the same insight behind Active Inference in neuroscience: intelligent behavior emerges from constrained prediction, not unconstrained generation.

## Quick Start

```bash
# Clone
git clone https://github.com/ecarlsf/agent-patterns.git
cd agent-patterns

# Install dependencies
pip install -r requirements.txt

# Set your API key
export ANTHROPIC_API_KEY=your_key_here

# Run any pattern
python patterns/six_thinking_hats/run.py --prompt "Should we build a mobile app or focus on the web experience first?"
python patterns/pre_mortem/run.py --prompt "We're launching a new AI-powered feature next week"
python patterns/dialectic/run.py --prompt "Should startups use AI coding tools for their MVP?"
```

## Architecture

Each pattern follows the same structure:

```
patterns/<pattern_name>/
├── README.md          # Pattern explanation + when to use it
├── agents.py          # Agent definitions with constrained system prompts
├── run.py             # CLI entrypoint
├── schemas.py         # Pydantic output schemas (enforces cognitive constraints)
└── examples/          # Example inputs and outputs
```

Every agent's output is validated against a Pydantic schema, so you can see exactly what each cognitive role contributed and trace the reasoning chain.
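Concretely, the auditable reasoning chain can be assembled as an ordered list of (role, validated output) pairs. This is a hypothetical sketch of that trace structure, not code from the repo:

```python
import json


def build_trace(validated_outputs):
    """Assemble an auditable reasoning chain.

    `validated_outputs` is assumed to be an ordered list of
    (role_name, output_dict) pairs, each dict already checked
    against its role's schema.
    """
    return [
        {"step": i, "role": role, "contribution": output}
        for i, (role, output) in enumerate(validated_outputs, start=1)
    ]


trace = build_trace([
    ("white_hat", {"facts": ["80% of current traffic is mobile"]}),
    ("black_hat", {"risks": ["app-store review can delay launches"]}),
])
print(json.dumps(trace, indent=2))  # every conclusion traces back to a role
```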

## Requirements

- Python 3.10+
- Anthropic API key (uses Claude Sonnet)
- See `requirements.txt` for packages

## Background

I wrote about the thinking behind these patterns in my LinkedIn series on decision debt — the core thesis being that AI execution speed is now commoditized, and the bottleneck has shifted to judgment quality. These patterns are tools for that judgment layer.

## License

MIT
