Practical, runnable implementations of constrained multi-agent reasoning patterns.
Most multi-agent frameworks focus on orchestration — how to route between agents. This repo focuses on cognition — how to structure agent thinking so the outputs are reliable, auditable, and resistant to the failure modes that make AI dangerous in high-stakes domains.
Each pattern constrains agents into specific cognitive roles, inspired by established human decision-making frameworks. The result: structured disagreement, forced perspective-taking, and explicit tradeoff analysis — things a single LLM call can't reliably do.
| Pattern | Inspired By | What It Does | Best For |
|---|---|---|---|
| Six Thinking Hats | Edward de Bono | 6 parallel agents evaluate from distinct cognitive angles (facts, emotions, risks, benefits, creativity, process) | Product decisions, feature evaluation, strategy review |
| Pre-Mortem | Gary Klein | Agents assume failure has already happened and work backward to identify causes | Launch readiness, risk assessment, project planning |
| Dialectic | Hegelian dialectic | Thesis → Antithesis → Synthesis agents force genuine intellectual tension before resolution | Complex tradeoffs, architecture decisions, policy evaluation |
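The Dialectic row above reduces to a three-stage pipeline. The sketch below stubs out the LLM call (`call_agent` is a placeholder, not this repo's actual API) so only the control flow and role constraints are visible; prompts and role names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    role: str
    content: str

def call_agent(role: str, prompt: str) -> str:
    # Placeholder for an LLM call constrained to one cognitive role
    # (e.g. via the Anthropic SDK with a role-specific system prompt).
    return f"[{role}] position on: {prompt}"

def dialectic(question: str) -> list[Turn]:
    thesis = call_agent("thesis", question)
    # The antithesis agent sees only the thesis and must attack it.
    antithesis = call_agent("antithesis", thesis)
    # The synthesis agent must reconcile both positions, not pick a winner.
    synthesis = call_agent("synthesis", f"{thesis}\n{antithesis}")
    return [Turn("thesis", thesis),
            Turn("antithesis", antithesis),
            Turn("synthesis", synthesis)]
```

The key design choice is that each stage receives only the prior stage's output, so genuine opposition is forced before resolution.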
The default approach to multi-agent systems is "give each agent a persona and let them chat." This fails in predictable ways:
- Convergence bias — agents agree too quickly because the underlying LLM optimizes for coherence
- Role collapse — the "devil's advocate" agent stops actually advocating after 2-3 turns
- Audit opacity — you can't trace why the system reached a conclusion
Constrained patterns fix this by making each agent's cognitive role structural, not merely suggested. The Six Thinking Hats pattern doesn't ask one agent to weigh risks and feelings together: only the Black Hat agent may surface risks, and only the Red Hat agent may voice emotional/intuitive reactions, with each agent's output schema enforcing its constraint.
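As a concrete illustration of a schema-enforced role, here is a minimal Pydantic sketch for a Red Hat agent. The field names and constraints are hypothetical, not this repo's actual `schemas.py` definitions:

```python
from pydantic import BaseModel, Field

# Hypothetical Red Hat schema: the agent's JSON output is parsed into this
# model, so anything outside the emotional/intuitive lane (e.g. a "risks"
# field) is simply not expressible.
class RedHatOutput(BaseModel):
    gut_reaction: str = Field(..., description="Immediate emotional response")
    intuitions: list[str] = Field(..., description="Unreasoned hunches, stated as hunches")
    confidence: float = Field(..., ge=0.0, le=1.0)

raw = {
    "gut_reaction": "Feels rushed",
    "intuitions": ["The team seems hesitant about this"],
    "confidence": 0.7,
}
out = RedHatOutput(**raw)  # raises a ValidationError if the agent drifts off-schema
```

Because the schema has no field for facts or risks, the role constraint holds structurally even when the underlying model would happily wander.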
This is the same insight behind Active Inference in neuroscience: intelligent behavior emerges from constrained prediction, not unconstrained generation.
```bash
# Clone
git clone https://github.com/ecarlsf/agent-patterns.git
cd agent-patterns

# Install dependencies
pip install -r requirements.txt

# Set your API key
export ANTHROPIC_API_KEY=your_key_here

# Run any pattern
python patterns/six_thinking_hats/run.py --prompt "Should we build a mobile app or focus on the web experience first?"
python patterns/pre_mortem/run.py --prompt "We're launching a new AI-powered feature next week"
python patterns/dialectic/run.py --prompt "Should startups use AI coding tools for their MVP?"
```

Each pattern follows the same structure:
```
patterns/<pattern_name>/
├── README.md    # Pattern explanation + when to use it
├── agents.py    # Agent definitions with constrained system prompts
├── run.py       # CLI entrypoint
├── schemas.py   # Pydantic output schemas (enforces cognitive constraints)
└── examples/    # Example inputs and outputs
```
Every agent's output is validated against a Pydantic schema, so you can see exactly what each cognitive role contributed and trace the reasoning chain.
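One way such validated outputs could be assembled into an auditable trace; the structure below is illustrative, not the repo's actual output format:

```python
import json

# Hypothetical helper: collect each role's schema-validated output into a
# single JSON reasoning trace that can be stored and reviewed later.
def build_trace(outputs: dict[str, dict]) -> str:
    trace = [{"role": role, "contribution": out} for role, out in outputs.items()]
    return json.dumps(trace, indent=2)

print(build_trace({
    "white_hat": {"facts": ["Mobile traffic is 60% of sessions"]},
    "black_hat": {"risks": ["App store review adds release latency"]},
}))
```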
- Python 3.10+
- Anthropic API key (uses Claude Sonnet)
- See `requirements.txt` for packages
I wrote about the thinking behind these patterns in my LinkedIn series on decision debt — the core thesis being that AI execution speed is now commoditized, and the bottleneck has shifted to judgment quality. These patterns are tools for that judgment layer.
MIT