One prompt. One Claude call. Full engineering package from any transcript.
| File | Purpose |
|---|---|
| `master_dev_prompt.txt` | The system prompt — defines schema + Claude's role |
| `run_master_dev.sh` | Shell script to pipe any transcript through Claude Code or Codex (supports `--mock` or `MOCK_OUTPUT=1` to emit `outputs/sample.json` when a model CLI is unavailable) |
| `validate_output.py` | JSON validator for structure + enum checks |
| `master_dev_runtime.py` | Shared Python runtime for prompt loading and strict output validation |
| `batch_run_master_dev.sh` | Batch processor for all `transcripts/*.txt` files |
| `Makefile` | Local/CI shortcuts (`validate-file`, `validate-outputs`, `ci`) |
| `.github/workflows/validate-master-dev-prompt.yml` | GitHub Actions validation workflow |
| `sample_transcript.txt` | Test transcript (Dear Saigon SMS ordering system) |
| `app.py` | FastAPI HTTP server — `POST /process` calls Claude directly via SDK, `GET /` serves the web UI |
| `static/index.html` | Terminal-style web UI — paste a transcript, get rendered artifacts |
```bash
chmod +x run_master_dev.sh
./run_master_dev.sh your_transcript.txt
./run_master_dev.sh your_transcript.txt > output.json
./run_master_dev.sh sample_transcript.txt
```

The script auto-detects:

- `claude` CLI first
- falls back to `codex exec` if `claude` is not installed

Optional reliability env vars:

- `MAX_RETRIES` (default `3`)
- `RETRY_DELAY_SECONDS` (default `2`)
- `MASTER_DEV_TIMEOUT_SECONDS` (default `0`, disabled)
Example:
```bash
MAX_RETRIES=5 RETRY_DELAY_SECONDS=3 MASTER_DEV_TIMEOUT_SECONDS=90 \
  ./run_master_dev.sh sample_transcript.txt > output.json
```

Validate the result:

```bash
python3 validate_output.py output.json
```

Validation is strict:
- required object keys must match exactly
- `actions.items`, `implementation_plan.milestones`, `implementation_plan.tech_tasks`, and `code_suggestions.snippets` must each contain at least one item
- enum fields must use the exact allowed values
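The strictness rules above boil down to two small checks, sketched below with hypothetical helper names; the real logic lives in `validate_output.py` and `master_dev_runtime.py` and may differ in detail:

```python
def check_exact_keys(obj: dict, required: set) -> list:
    """Required object keys must match exactly: no missing keys, no extras."""
    errors = []
    missing = required - obj.keys()
    extra = obj.keys() - required
    if missing:
        errors.append(f"missing keys: {sorted(missing)}")
    if extra:
        errors.append(f"unexpected keys: {sorted(extra)}")
    return errors

def check_non_empty(items: list, path: str) -> list:
    """The listed arrays must each contain at least one item."""
    return [] if items else [f"{path} must contain at least one item"]
```

An exact-match key check (rather than a subset check) is what makes extra keys a validation failure, not just missing ones.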
Or with Make:
```bash
make validate-file FILE=output.json
```

Put `.txt` transcripts in `transcripts/`, then run:
```bash
chmod +x batch_run_master_dev.sh
./batch_run_master_dev.sh
```

Custom folders:

```bash
./batch_run_master_dev.sh ./my_transcripts ./my_outputs
```

Validate all generated outputs:

```bash
make validate-outputs
```

Batch behavior:
- successful runs write `outputs/<name>.json`
- schema failures write `outputs/<name>.invalid.json`
- run logs go to `outputs/<name>.log`
- `make validate-outputs` validates only `*.json` and ignores `*.invalid.json`
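The naming convention above amounts to a one-line routing decision. A sketch (hypothetical helper, not the actual batch script):

```python
from pathlib import Path

def batch_output_path(transcript: Path, outputs_dir: Path, valid: bool) -> Path:
    """Mirror the batch convention: <name>.json for valid results,
    <name>.invalid.json for schema failures. Illustrative only."""
    suffix = ".json" if valid else ".invalid.json"
    return outputs_dir / f"{transcript.stem}{suffix}"
```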
Local CI-equivalent check:
```bash
make ci
```

GitHub Actions runs the same checks on push and pull request.
Python test suite:
```bash
python3 -m pytest -q
```

To run the prompt manually, pipe it straight into `claude`:

```bash
{
  cat master_dev_prompt.txt
  cat your_transcript.txt
  echo ""
  echo '"""'
} | claude
```

Codex alternative:

```bash
{
  cat master_dev_prompt.txt
  cat your_transcript.txt
  echo ""
  echo '"""'
} | codex exec --skip-git-repo-check -
```

| Mode | What you get |
|---|---|
| `design_doc` | context, requirements, architecture, alternatives, decisions, open questions |
| `pm_summary` | plain-language overview, scope, timeline implications |
| `actions` | tasks with owner, priority (low/medium/high), type |
| `implementation_plan` | milestones with ETA + risks, tech tasks with complexity (S/M/L) |
| `code_suggestions` | real runnable snippets in detected language, stack context |
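Putting the modes together, a valid output is a single JSON object with the five mode keys at the top level. The sketch below is illustrative only: the authoritative schema is defined in `master_dev_prompt.txt`, and the nested field names are guesses based on the table above.

```python
# Guessed shape; only the top-level keys and the non-empty array paths
# (actions.items, implementation_plan.milestones, implementation_plan.tech_tasks,
# code_suggestions.snippets) are confirmed by the validation rules.
example_output = {
    "design_doc": {"context": "...", "requirements": [], "architecture": "...",
                   "alternatives": [], "decisions": [], "open_questions": []},
    "pm_summary": {"overview": "...", "scope": "...", "timeline": "..."},
    "actions": {"items": [{"task": "...", "owner": "...", "priority": "high", "type": "..."}]},
    "implementation_plan": {"milestones": [{"name": "...", "eta": "...", "risks": []}],
                            "tech_tasks": [{"task": "...", "complexity": "M"}]},
    "code_suggestions": {"snippets": [{"language": "python", "code": "..."}]},
}
```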
Run the included server for HTTP access and a browser UI:
```bash
export ANTHROPIC_API_KEY=sk-...
pip install fastapi uvicorn anthropic python-dotenv
uvicorn app:app --reload
```

Endpoints:

- `GET /` — opens the web UI (paste transcript, get rendered artifacts)
- `POST /process` — returns schema-validated structured JSON from any transcript
- `POST /process/stream` — streams partial text and emits a final schema-validated result or validation error event
- `GET /health` — liveness check
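A minimal programmatic client for `POST /process` might look like the sketch below; the JSON field name `transcript` and the default port are assumptions, so check `app.py` for the actual request model:

```python
import json
import urllib.request

def build_process_request(transcript, api_key=None):
    """Build a POST /process request. The body field name 'transcript' is an
    assumption about app.py's request model, not a confirmed API contract."""
    body = json.dumps({"transcript": transcript}).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed when SERVER_API_KEY is set on the server
        headers["X-Api-Key"] = api_key
    return urllib.request.Request(
        "http://127.0.0.1:8000/process", data=body, headers=headers, method="POST"
    )

# With the server running:
# req = build_process_request(open("sample_transcript.txt").read())
# with urllib.request.urlopen(req) as resp:
#     artifacts = json.load(resp)
```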
Optional: set `SERVER_API_KEY` to require an `X-Api-Key` header on all requests.
```python
import anthropic, json

client = anthropic.Anthropic(api_key=YOUR_KEY)

with open("master_dev_prompt.txt") as f:
    system_prompt = f.read()

def run_master_dev(transcript: str) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=8096,
        system=system_prompt,
        messages=[{"role": "user", "content": f'{transcript}\n"""'}]
    )
    return json.loads(response.content[0].text)
```

Run the watcher to automatically process any transcript dropped into `transcripts/`:
```bash
python3 watcher.py
```

Then drop any `.txt` file into `transcripts/` — validated JSON output appears in `outputs/` automatically.
Custom directories:
```bash
python3 watcher.py --transcripts ./my-transcripts --outputs ./my-outputs
```

Stop with Ctrl+C. Files already processed (with a matching `.json` in `outputs/`) are skipped automatically.
Schema failures are written to `outputs/<name>.invalid.json`, and validator details are appended to `outputs/<name>.log`.
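The watcher's skip logic can be sketched as a scan for `.txt` files without a matching `.json` (hypothetical helper; see `watcher.py` for the real implementation):

```python
from pathlib import Path

def pending_transcripts(transcripts_dir, outputs_dir):
    """A transcript counts as pending unless <name>.json already exists
    in the outputs directory. Sketch of the skip rule described above."""
    out = Path(outputs_dir)
    return sorted(
        p for p in Path(transcripts_dir).glob("*.txt")
        if not (out / f"{p.stem}.json").exists()
    )
```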
Three workflows run automatically on the main branch:
| Workflow | File | Trigger | Purpose |
|---|---|---|---|
| Validate Master Dev Prompt | `validate-master-dev-prompt.yml` | Push / PR to `main` | Installs Python deps, runs `make ci` (schema validation + pytest suite) |
| Copilot Auto-Fix | `copilot-autofix.yml` | Push / PR | GitHub Copilot automated code fix suggestions |
| Cleanup Old Workflow Runs | `cleanup-workflow-runs.yml` | Weekly (Sun 3am UTC) + manual | Deletes workflow run history older than 30 days via GitHub API |
Run the same checks locally:

```bash
make ci
```

To trigger the cleanup manually:

- Go to Actions → Cleanup Old Workflow Runs
- Click Run workflow → select branch `main` → Run workflow
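The 30-day retention rule is simple to sketch: compute a cutoff and collect the IDs of older runs, which the workflow then deletes via the GitHub API's `DELETE /repos/{owner}/{repo}/actions/runs/{run_id}`. Sketch only; see `cleanup-workflow-runs.yml` for the real logic.

```python
from datetime import datetime, timedelta, timezone

def stale_run_ids(runs, days=30, now=None):
    """Given workflow-run dicts shaped like the GitHub API response
    ({"id": ..., "created_at": "...Z"}), return IDs older than `days`."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [
        r["id"] for r in runs
        if datetime.fromisoformat(r["created_at"].replace("Z", "+00:00")) < cutoff
    ]
```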
- Default branch: `main`
- Copilot feature branches are ephemeral — they are deleted after their PR is merged or closed
- Do not force-push to `main`