DOJ-3828: Migrate 12 agents from dojo-academy (PR-A of DOJ-3709)#18
Conversation
Migrate twelve agent definitions from dojo-academy/agents/ to instructional-design-toolkit/agents/ as PR-A of the DOJ-3709 chain. Each agent ships as-is per the DOJ-3708 chain precedent — no decomposition; prose, frontmatter, and Spanish accents preserved. dojo-academy-specific path framings (content/courses/, content/_templates/, nanobanana-output/, skills/academy-philosophy/) are wrapped as consumer conventions to keep the agents reusable across consumer plugins.

Migrated agents:

- challenge-designer
- content-architect
- content-reviewer
- framework-extractor
- proofreader
- quiz-generator
- repo-analyzer
- research-agent
- student-perspective
- text-class-writer
- translation-reviewer
- translator

After this PR merges, conceptual references like `text-class-writer`, `content-reviewer`, and `quiz-generator` from the DOJ-3708 chain commands resolve to real agents in the IDT repo. The DOJ-3774 CI workflow (frontmatter + agent reference resolution + JSON schemas) passes locally on all 19 agents, 36 commands, and 4 skills.

Out of scope: DOJ-3829 (PR-B skills migration), DOJ-3711 (delete from dojo-academy), DOJ-3710 (academy-philosophy + content-standards as overlays).

Created by Claude Code on behalf of @andres

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@greptileai review
Greptile Summary

This PR migrates 12 agent definitions from dojo-academy into instructional-design-toolkit.

Confidence Score: 4/5

Safe to merge — all 12 agents are additive, no existing files are modified, and CI lint checks pass. The migration is faithful and well-scoped. The main findings are a "5 questions" / 6-row mismatch in the content-reviewer rubric and a hint template that only scaffolds 2 entries while the spec text references "Hint 3." Both are copy issues from the source that would cause minor confusion in practice. The research-agent's undeclared Context7 MCP dependency is worth documenting explicitly for downstream consumers.

Files flagged: agents/content-reviewer.md (Check 4 question count), agents/challenge-designer.md (hint template vs spec text), agents/research-agent.md (Context7 plugin dependency)
| Filename | Overview |
|---|---|
| agents/content-reviewer.md | New agent for quality review; Check 4 header says "5 questions" but the rubric table contains 6, creating a confusing scoring discrepancy |
| agents/challenge-designer.md | New agent defining challenge/BUILD deliverable design; minor inconsistency: hint spec mentions 3 hints but template only scaffolds 2 |
| agents/research-agent.md | New deep-dive research agent; uses Context7 MCP tools (plugin-specific) and WebSearch/Agent tools; Context7 dependency will fail if not installed in consumer environment |
| agents/content-architect.md | New agent for course/module structure design; references ${CLAUDE_PLUGIN_ROOT} commands not yet in IDT (out of scope per PR, consumer-dependent) |
| agents/text-class-writer.md | New primary content-writing agent; correctly labels Claude Partner Network and gemini-3-pro-image-preview as dojo-academy-specific; anti-pattern list is comprehensive |
| agents/translator.md | New Spanish LATAM translation agent; mandatory accent enforcement and glossary self-check steps are strong operational guards |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    RA[research-agent] --> FE[framework-extractor]
    RA --> CA[content-architect]
    RA --> TCW[text-class-writer]
    REPO[repo-analyzer] --> FE
    REPO --> TCW
    CA --> TCW
    CA --> QG[quiz-generator]
    CA --> CD[challenge-designer]
    FE --> TCW
    FE --> QG
    TCW --> CR[content-reviewer]
    QG --> CR
    CD --> CR
    CR --> PRF[proofreader]
    PRF --> TR[translator]
    TR --> TRV[translation-reviewer]
    SP[student-perspective] --> CR
```
Prompt To Fix All With AI
Fix the following 3 code review issues. Work through them one at a time, proposing concise fixes.
---
### Issue 1 of 3
agents/content-reviewer.md:65
The header says "5 questions" but the rubric table contains 6 rows. An agent following these instructions will be confused about whether it has completed the evaluation after row 5. The `text-class-writer.md` peer agent (which uses the same rubric) correctly labels this as "six questions" — update the count here to match.
```suggestion
Score the text class on 6 questions (1-5 each):
```
### Issue 2 of 3
agents/challenge-designer.md:146-151
The spec text says "Hint 3 is more direct," implying up to 3 hints, but the template block only scaffolds 2 `<details>` entries. An agent reading the template will consistently produce 2-hint challenges even when 3 would be appropriate. Either add a third placeholder or remove the "Hint 3" reference from the spec text.
```suggestion
<details>
<summary>Hint 2</summary>
{Address a common stumbling point}
</details>
<details>
<summary>Hint 3</summary>
{More direct nudge toward the solution path}
</details>
## Submission
```
### Issue 3 of 3
agents/research-agent.md:59-61
**Context7 MCP dependency not declared in frontmatter**
Step 2 instructs the agent to call `mcp__plugin_context7_context7__resolve-library-id` and `mcp__plugin_context7_context7__query-docs`, but these MCP tools are not listed in the frontmatter `tools:` field and depend on the consumer having the Context7 plugin installed. In an IDT consumer without Context7, this step will fail at runtime with no graceful fallback. Consider adding a conditional note ("if Context7 is installed, use …; otherwise fall back to WebFetch") to make the dependency explicit.
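A hedged sketch of what that conditional note might look like in `research-agent.md` Step 2 (the wording below is illustrative, not the file's actual text):

```markdown
- If the Context7 plugin is installed, use its MCP tools
  (`mcp__plugin_context7_context7__resolve-library-id` then
  `mcp__plugin_context7_context7__query-docs`) to fetch library documentation.
- Otherwise, fall back to WebFetch against the library's official
  documentation site, using WebSearch to locate it if the URL isn't obvious.
```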
Reviews (1): Last reviewed commit: "DOJ-3828: Migrate 12 agents from dojo-ac..."
### Check 4: Text Class Quality Rubric

Score the text class on 5 questions (1-5 each):
The header says "5 questions" but the rubric table contains 6 rows. An agent following these instructions will be confused about whether it has completed the evaluation after row 5. The `text-class-writer.md` peer agent (which uses the same rubric) correctly labels this as "six questions" — update the count here to match.
```suggestion
Score the text class on 6 questions (1-5 each):
```
<details>
<summary>Hint 2</summary>
{Address a common stumbling point}
</details>

## Submission
The spec text says "Hint 3 is more direct," implying up to 3 hints, but the template block only scaffolds 2 `<details>` entries. An agent reading the template will consistently produce 2-hint challenges even when 3 would be appropriate. Either add a third placeholder or remove the "Hint 3" reference from the spec text.
```suggestion
<details>
<summary>Hint 2</summary>
{Address a common stumbling point}
</details>
<details>
<summary>Hint 3</summary>
{More direct nudge toward the solution path}
</details>
## Submission
```
- Use Context7 MCP tools (`mcp__plugin_context7_context7__resolve-library-id` then `mcp__plugin_context7_context7__query-docs`) to fetch library documentation
- Use WebFetch for official documentation pages, getting-started guides, and API references
- Use WebSearch to locate official docs if the URL isn't obvious
**Context7 MCP dependency not declared in frontmatter**

Step 2 instructs the agent to call `mcp__plugin_context7_context7__resolve-library-id` and `mcp__plugin_context7_context7__query-docs`, but these MCP tools are not listed in the frontmatter `tools:` field and depend on the consumer having the Context7 plugin installed. In an IDT consumer without Context7, this step will fail at runtime with no graceful fallback. Consider adding a conditional note ("if Context7 is installed, use …; otherwise fall back to WebFetch") to make the dependency explicit.
Three findings from Greptile's 4/5 review:
1. content-reviewer.md Check 4: header said "5 questions" but the rubric
table contained 6 rows. Fix the count to match the table — agents
following the rubric will no longer wonder if they're done after row 5.
The peer text-class-writer agent already labels this as "six questions",
so this aligns the two sibling agents.
2. challenge-designer.md: spec text says "Hint 3 is more direct" implying
up to 3 hints, but the template scaffold only included 2 <details>
blocks. Add the missing Hint 3 entry so designers don't have to invent
the third hint structure on their own.
3. research-agent.md Step 2: Context7 MCP tools were referenced without a
fallback path. Consumer environments without the Context7 plugin
installed would fail at runtime. Add a conditional ("if Context7 is
installed, use it; otherwise fall back to WebFetch") so the step is
resilient across consumer setups.
All three are inherited copy bugs from the dojo-academy source. Fixes
applied to the IDT version only — dojo-academy source is untouched per
the migration contract; cleanup of dojo-academy is DOJ-3711.
CI verification (local): all 3 lint checks still pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
After this PR merges, conceptual references like `text-class-writer`, `content-reviewer`, and `quiz-generator` from the DOJ-3708 chain commands resolve to real agents in IDT.

Files
12 new files in `agents/`.

No name collisions with the 7 existing IDT agents (`business-context-detector`, `changelog-generator`, `cmi5-metadata-writer`, `course-visualizer`, `learner-profile-builder`, `session-type-detector`, `slides-renderer`). Total IDT agent count post-merge: 19.

Consumer-convention framing
Per the DOJ-3708 chain precedent, dojo-academy-specific framings are wrapped so the agents stay reusable across consumer plugins:
- `content/courses/...` paths → described as consumer-specific (dojo-academy convention shown)
- `content/_templates/...` → described as consumer-shipped templates
- `nanobanana-output/` → described as consumer-specific holding folder
- `skills/academy-philosophy/...`, `skills/blooms-taxonomy/...`, `skills/ship-first-design/...` → described as consumer overlays (these skills migrate via DOJ-3829, PR-B of this chain)
- `Claude Partner Network` framing → scoped to dojo-academy as the example, not asserted as universal

Discrepancy with parent
Parent DOJ-3709 description title says "11 agents" but the table lists 12. PR-A migrates the 12 from the table (the source of truth).
CI verification (local)
All 3 DOJ-3774 lint checks pass on this branch's HEAD:
- `check_frontmatter.py`: 58 file(s) valid, 2 skipped (no frontmatter)
- `check_agent_references.py`: 19 agent(s) indexed, all IDT-internal references resolve
- `check_json_schemas.py`: 3 file(s) valid

Test plan
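As a rough illustration of what a frontmatter check like `check_frontmatter.py` verifies (a minimal sketch under assumed rules — the actual DOJ-3774 script and its required keys are not part of this PR), a frontmatter lint parses the `---`-delimited YAML block at the top of each agent file and confirms required keys are present:

```python
import re

# Assumed required fields, for illustration only; the real schema may differ.
REQUIRED_KEYS = {"name", "description"}

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a markdown file's YAML frontmatter."""
    # Frontmatter must be the very first thing in the file: ---\n...\n---\n
    match = re.match(r"\A---\n(.*?)\n---\n", text, re.DOTALL)
    if match is None:
        return ["no frontmatter block"]
    # Collect top-level keys (lines like "name: translator").
    keys = set()
    for line in match.group(1).splitlines():
        if ":" in line and not line.startswith((" ", "#")):
            keys.add(line.split(":", 1)[0].strip())
    missing = REQUIRED_KEYS - keys
    return [f"missing key: {k}" for k in sorted(missing)]

print(check_frontmatter("---\nname: translator\ndescription: Spanish LATAM translation\n---\nBody"))
# → [] (no problems)
```

Files without frontmatter would be reported (or skipped, as the "2 skipped" count above suggests the real script does for non-agent files).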
Out of scope
Closes DOJ-3828
Created by Claude Code on behalf of @andres