# 📰 Tech Content Editorial Board

> For an overview of all available workflows, see the [main README](../README.md).

**Daily editorial-board review of the repository's technical rigor, wording, structure, and editorial quality**

The [Tech Content Editorial Board workflow](../workflows/tech-content-editorial-board.md?plain=1) is a [GitHub Agentic Workflow](https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/) for reviewing a technical content repository as if it were being examined by a demanding editorial board of principal engineers, technical writers, and domain specialists. It focuses on content quality first: clarity, rigor, structure, examples, caveats, flow, and reader trust.

Rather than producing a passive report, the workflow is biased toward action. When it finds a safe, focused content improvement, it prefers to ship one small content pull request in the same run. It can also create a single tracking issue for materially new editorial backlog that is not already covered by an open issue or pull request.

## Installation

```bash
# Install the gh-aw extension
gh extension install github/gh-aw

# Add the workflow to your repository
gh aw add-wizard githubnext/agentics/tech-content-editorial-board
```

The wizard walks you through adding the workflow to your repository.

## How It Works

```mermaid
graph LR
    A[Inspect repository and open work] --> B[Choose one review lens]
    B --> C[Assess content quality]
    C --> D{Safe focused content edit?}
    D -->|Yes| E[Create one editorial PR]
    D -->|No| F[Record issue-only findings]
    E --> G[Check for duplicate tracking]
    F --> G
    G --> H[Create at most one tracking issue]
```

Each run starts by inspecting the repository, recent work, and open issues or pull requests so it does not duplicate existing tracking. It then selects a review lens and evaluates the repository as a technical publishing asset, looking for weaknesses in:

- Technical rigor and accuracy
- Wording, clarity, and flow
- Structure and narrative coherence
- Examples, diagrams, and caveats
- Reader trust and practical usefulness

When a low-risk, article-level improvement is available, the workflow prefers to make that edit and open a focused pull request. Any broader or remaining backlog is summarized in at most one tracking issue.
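
The one-PR, one-issue policy above can be sketched as a small decision function. This is illustrative Python only; the names and data shapes are hypothetical, not the workflow's real internals:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A hypothetical editorial finding from one run (not a real gh-aw type)."""
    summary: str
    low_risk: bool
    already_tracked: bool = False

def plan_run(findings):
    """Return (pr_finding, issue_backlog): at most one safe, focused edit
    to ship as a pull request, and at most one batch of remaining backlog
    to record in a single tracking issue."""
    safe = [f for f in findings if f.low_risk]
    pr = safe[0] if safe else None          # ship one focused content PR, if any
    backlog = [f for f in findings
               if f is not pr and not f.already_tracked]
    issue_backlog = backlog if backlog else None   # one tracking issue, or none
    return pr, issue_backlog
```

The point of the sketch is the cap: however many findings a run produces, only one becomes a pull request and the rest collapse into a single issue or nothing.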

## Simulated Board Personas

The workflow simulates a board-style review using named personas with distinct areas of expertise:

- **The Editor** — wording, structure, flow, coherence, section ordering, rewrites, and whether the article's argument lands clearly for engineering readers
- **The Critic** — devil's-advocate skepticism, anti-hype pressure testing, second-order effects, hidden assumptions, and missing downsides
- **Martin Kleppmann** — consistency, correctness, ordering, edge cases
- **Martin Fowler** — architecture, patterns, trade-offs, diagrams
- **Robert C. Martin (Uncle Bob)** — clean architecture, separation of concerns
- **Katherine Rack** — systems thinking, scale, failure cascades
- **Ben Sigelman** — observability, tracing, debugging
- **Klaus Marquardt** — Kafka, partitioning, message keys
- **Greg Young** — DDD, event sourcing, CQRS
- **Tanya Janca** — security, resilience, secrets hygiene
- **Kelsey Hightower** — operations, deployment realism, maintainability
- **Charity Majors** — on-call pain, telemetry, failure clarity

In addition to those board voices, the workflow uses an **Orchestrator** role during synthesis. The Orchestrator does not act as another reviewer; it pulls together the strongest themes, conflicts, objections, and concrete next actions into actionable recommendations for humans to review.
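
The Orchestrator's synthesis step can be pictured as grouping the board's findings by theme rather than reviewing anything itself. The sketch below is illustrative Python with hypothetical data shapes, not the workflow's real internals:

```python
from collections import defaultdict

def synthesize(board_findings):
    """Group (persona, theme, note) tuples by theme and keep the themes
    raised by more than one persona as the strongest signals."""
    themes = defaultdict(list)
    for persona, theme, note in board_findings:
        themes[theme].append((persona, note))
    # A theme raised by several personas is a strong, cross-cutting signal;
    # a theme raised by exactly one persona may be an outlier or a conflict.
    return {t: notes for t, notes in themes.items() if len(notes) > 1}
```

Under this framing, a structural complaint echoed by both The Editor and Martin Fowler outranks a single persona's isolated objection.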

## Usage

This workflow runs on a weekday schedule and can also be started manually:

```bash
gh aw run tech-content-editorial-board
```

### Configuration

The workflow is designed to work out of the box for technical documentation repositories. By default it:

- Runs on weekdays
- Focuses on content-only improvements rather than infrastructure or code changes
- Creates at most one pull request and one issue per run
- Uses repository memory to keep editorial attention moving across different focus areas over time

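These defaults typically live in the workflow file's YAML frontmatter. The fragment below is an illustrative sketch only; the exact keys and values depend on your gh-aw version, so check the workflow file itself rather than copying this verbatim:

```yaml
# Illustrative frontmatter sketch; key names are assumptions,
# not a verbatim copy of the shipped workflow.
on:
  schedule:
    - cron: "0 9 * * 1-5"    # weekdays only
  workflow_dispatch:          # allow manual runs

safe-outputs:
  create-pull-request:
    max: 1                    # at most one editorial PR per run
  create-issue:
    max: 1                    # at most one tracking issue per run
```
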
After editing the workflow, run `gh aw compile` to recompile it, then commit all changes to the default branch.

### Human in the Loop

- Review the editorial pull request for tone, accuracy, and scope
- Confirm that suggested backlog items are worth tracking
- Merge only the focused content changes that match your publishing standards
- Adjust prompts or schedule if you want the board to be more aggressive or more selective