Home
Welcome to the Agentic Accelerator Framework wiki — a living record of building, extending, and operating the framework's four domains (Security, Accessibility, Code Quality, and FinOps) using custom GitHub Copilot agents, GitHub Advanced Security, and Microsoft Defender for Cloud.
This wiki captures real, non-deterministic agentic sessions where Copilot automode and subagents scaffold entire repositories, generate sample applications, build SARIF converters, produce Power BI PBIPs, create workshop labs, and automate screenshot capture — all from a single prompt invocation.
The DIY guide
walks through creating the Code Quality domain's two repositories —
code-quality-scan-demo-app (scanner platform with 5 sample apps in C#,
Python, Java, TypeScript, and Go) and code-quality-scan-workshop (10
hands-on labs). The following sections document a live Copilot automode session
executing this guide end-to-end.
With the Agentic Accelerator Framework workspace open in VS Code, the
/scaffold-domain domain=code-quality prompt is invoked. Copilot enters
automode and the DomainScaffolder agent begins orchestrating the build. The
Copilot Chat panel shows the agent's plan with a todo list tracking each
scaffolding phase:
Progress can be monitored in VS Code's Explorer panel or by watching newly created repositories appear in the GitHub organization:
Each run of the scaffolding agent produces a slightly different execution path — this is the nature of agentic workflows. The following screenshots document the first full journey from invocation to completion.
The agent analyzes the existing Accessibility and FinOps domain structures to determine the target layout for Code Quality. It reads the Domain Parity and Contribution Guide, the domain scaffolding skill, and the DIY documentation:
The agent formulates its execution plan, identifying all artifacts to generate — repository scaffolds, GitHub Actions workflows, SARIF converters, bootstrap scripts, and Copilot configuration files:
The workload is substantial — 5 sample apps, 10 workshop labs, SARIF converters, Power BI PBIP, bootstrap scripts, and CI/CD pipelines. The DomainScaffolder delegates to subagents (Explore for codebase research, Phase Implementor for file generation) to divide and parallelize the work:
With subagents active, files begin materializing. The agent creates directory structures, generates configuration files, and writes initial source code:
The scaffolding continues with GitHub Actions workflow files, Dockerfile templates, and Bicep infrastructure definitions:
Switching to the VS Code Explorer, the generated file tree becomes visible.
The demo-app repository takes shape with its expected structure — src/,
infra/, .github/workflows/, scripts/, and Copilot artifacts:
The workshop repository scaffolding has not started yet at this point — the agent completes the demo-app first before moving to the workshop:
Back in the Copilot Chat panel, the agent reports progress on the demo-app structure and prepares to generate the 5 sample applications:
The agent begins creating the 5 sample applications, each with intentional code quality violations to serve as scanner targets. The apps span C# (ASP.NET Minimal API), Python (Flask), Java (Spring Boot), TypeScript (Next.js), and Go (stdlib net/http):
The generation follows a deliberate order — C# first, then Python, then Java,
with each app including infra/main.bicep, Dockerfile,
start-local.ps1/stop-local.ps1, and an intentionally incomplete test
suite:
With sample apps generated, the agent moves to screenshot automation. It
creates capture-screenshots.ps1 and screenshot-manifest.json for
manifest-driven screenshot capture across all workshop labs:
The agent also generates the Power BI PBIP — a star schema semantic model with
Fact_CodeQualityFindings, dimension tables, and four report pages (Quality
Overview, Coverage by Repository, Complexity Analysis, Test Generation
Tracking):
The final major phase generates the 10 workshop labs (Labs 00–08 plus ADO variants for Labs 06 and 07). Each lab directory includes step-by-step instructions, expected outputs, and platform-specific sections:
The DomainScaffolder agent marks the task complete. The todo list shows all phases finished — repository structure, sample apps, SARIF converters, bootstrap scripts, GitHub Actions workflows, Power BI PBIP, screenshot automation, and workshop labs:
After the DomainScaffolder completes, manual verification is essential. Agentic scaffolding produces the bulk of the artifacts, but several configuration steps require human review before the repositories are production-ready. The checklist below documents each finding from the first iteration.
The code-quality-scan-workshop repository was created as a regular repo
instead of a GitHub template repository. The "Template repository" checkbox
in Settings → General must be enabled manually so that new workshop
instances can be created via Use this template:
GitHub Pages was not enabled on the workshop repository during scaffolding.
Navigate to Settings → Pages and select the deployment branch (typically
main or gh-pages) and folder (/docs or root) to activate the workshop
site:
Once Pages is enabled and the source branch is configured, GitHub deploys the Jekyll-based site. The Pages settings page confirms the deployment URL:
Enabling Pages should trigger the initial GitHub Actions deployment pipeline. Check the Actions tab to confirm the workflow runs successfully. The first run builds and deploys the Jekyll site:
The workshop labs render on GitHub Pages, but Mermaid diagrams embedded in the
Markdown do not display correctly. GitHub Pages' default Jekyll renderer does
not support Mermaid natively — a custom plugin, JavaScript include, or
pre-rendered SVG approach is needed. Screenshots are also missing at this
stage since capture-screenshots.ps1 has not been executed yet:
Known issues from first iteration:
- Mermaid diagrams render as raw code blocks — need a Mermaid JS include or pre-rendered SVGs
- Screenshot placeholders are empty — run `capture-screenshots.ps1` after demo apps are deployed
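One common workaround for the Mermaid issue is a small script include in the Jekyll layout. The sketch below is illustrative, not the fix actually applied: the CDN URL and the assumption that Jekyll emits Mermaid sources as `<pre><code class="language-mermaid">` blocks should be verified against the generated site markup.

```html
<script type="module">
  import mermaid from "https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs";

  // Jekyll's default renderer leaves Mermaid sources as highlighted code blocks;
  // convert them to <pre class="mermaid"> so mermaid.run() can find and render them.
  document.querySelectorAll("pre > code.language-mermaid").forEach((code) => {
    const pre = document.createElement("pre");
    pre.className = "mermaid";
    pre.textContent = code.textContent;
    code.parentElement.replaceWith(pre);
  });
  await mermaid.run();
</script>
```

Pre-rendering to SVG at build time avoids the client-side dependency entirely, at the cost of an extra build step.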
Both the code-quality-scan-demo-app and code-quality-scan-workshop
repositories are missing several metadata items that the existing
Accessibility and FinOps domains have:
- No topics/tags — add relevant GitHub topics (`code-quality`, `sarif`, `github-copilot`, `agentic-ai`, `workshop`, etc.)
- No description — set the repository description in Settings → General
- No website link — add the GitHub Pages URL to the About section (top-right of the repo page) so users can discover the workshop site
The following actions must be completed manually before the Code Quality domain reaches full parity with the existing Accessibility and FinOps domains:
- Enable the template flag on `code-quality-scan-workshop`
- Add repository topics, description, and website URL to both repos
- Fix Mermaid rendering — add a Mermaid JS include to the Jekyll layout or pre-render diagrams as SVGs
- Run `deploy-all.yml` — deploy all 5 demo apps to Azure so the scanner targets are live
- Run `capture-screenshots.ps1` — capture all workshop screenshots after demo apps are deployed
- Run bootstrap scripts (`setup-oidc.ps1`, `bootstrap-demo-apps.ps1`) if Azure AD federation has not been configured
The deploy-all workflow is available in the demo-app repository's Actions tab. Trigger it manually to provision all 5 sample applications to Azure App Service:
Before deploy-all.yml can succeed, the Azure AD federation and GitHub
secrets infrastructure must be in place. This section documents the
bootstrap process and the issues encountered during the first iteration.
The correct execution order is:
1. `setup-oidc.ps1` — creates Azure AD app registrations and federated credentials for workload identity federation (OIDC)
2. `bootstrap-demo-apps.ps1` — creates the 5 demo-app repositories, pushes content, and configures GitHub secrets and environments
Attempting to deploy before running these scripts results in authentication failures. The bootstrap script output confirms the initial setup:
The first run of bootstrap-demo-apps.ps1 encountered errors. The script
created the first two demo-app repositories successfully but failed midway
through the third:
The GitHub organization confirms that only two of the five expected repositories were created before the script halted:
The fix was applied directly in VS Code by editing the bootstrap script to handle the error condition:
Copilot can assist with identifying the next step when the bootstrap process
stalls. Asking Copilot for guidance surfaces that setup-oidc.ps1 must
run before the deployment workflows can authenticate:
Running setup-oidc.ps1 creates the Azure AD app registrations and
configures federated credentials for each demo-app repository:
Additional issues surface during the OIDC setup — typically around permissions, subscription scoping, or pre-existing registrations. Each issue is resolved iteratively:
After several iterations of fixing and re-running, both bootstrap scripts stabilize. A critical design principle emerged: both scripts must be idempotent — safe to re-run without duplicating resources or failing on already-existing artifacts:
The idempotency pattern uses az ad app list --filter and
gh repo view --json checks before creating resources, ensuring safe
re-execution:
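In script form, the check-before-create pattern might look like the following sketch. The app and repo names are illustrative, not the actual resources from the bootstrap run:

```powershell
# Hypothetical idempotency sketch: query first, create only on a miss
$appName = "cq-demo-app-001-deploy"
$appId = az ad app list --filter "displayName eq '$appName'" --query "[0].appId" -o tsv
if (-not $appId) {
    # No existing registration with this display name: create one
    $appId = az ad app create --display-name $appName --query appId -o tsv
}

gh repo view "contoso-org/cq-demo-app-001" --json name *> $null
if ($LASTEXITCODE -ne 0) {
    # Repo is missing: create it; otherwise re-use the existing one
    gh repo create "contoso-org/cq-demo-app-001" --private
}
```

Because every mutation is guarded by a lookup, a re-run after a midway failure simply skips what already exists and resumes where the previous run halted.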
A notable finding: the OIDC app registration required subscription-level scope because the target resource group did not yet exist. The deployment workflow itself creates the resource group, creating a circular dependency. For this proof of concept, subscription-level scope was used, though a production deployment should use a more restrictive RBAC approach:
Security note: Subscription-level Contributor is acceptable for a POC but should be scoped to a dedicated resource group in production. Consider pre-creating the resource group in a separate Bicep module or using a two-stage deployment (resource group first, then app deployment with scoped RBAC).
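The scoped-RBAC alternative described above might be sketched as follows. The resource group name and the `$appId`/`$subscriptionId` variables are placeholders, not values from the actual environment:

```powershell
# Hypothetical two-stage setup: pre-create the resource group first,
# then scope the role assignment to it instead of the whole subscription
az group create --name rg-code-quality-demo --location eastus2

az role assignment create `
    --assignee $appId `
    --role "Contributor" `
    --scope "/subscriptions/$subscriptionId/resourceGroups/rg-code-quality-demo"
```

Pre-creating the resource group breaks the circular dependency: the deployment workflow no longer needs subscription-level rights just to create the group it deploys into.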
With OIDC configured and bootstrap complete, the deploy-all.yml workflow
can finally run. The first attempt surfaces deployment-specific issues that
need resolution:
The workflow presents multiple configuration options for resolving the deployment issue. The error details and proposed solutions are visible in the Actions log:
Option A requires creating a client secret for the Azure AD app registration. The following steps walk through the Azure portal configuration:
Navigate to Azure AD → App registrations → Certificates & secrets and create a new client secret:
Configure the secret description and expiration period:
After clicking Add, copy the generated secret value immediately (it is shown only once) and set it as a GitHub repository secret:
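Setting the secret can also be done from the CLI. The secret name and org below are illustrative; use whatever name the workflow actually references:

```powershell
# Hypothetical: store the copied client secret as a repository secret
gh secret set AZURE_CLIENT_SECRET `
    --repo contoso-org/code-quality-scan-demo-app `
    --body $secretValue
```

The CLI route avoids pasting the value into the browser a second time and is easier to script into the bootstrap process.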
A key learning from the first iteration: the individual sample app
repositories (code-quality-demo-app-001 through demo-app-005) were
scaffolded without their own deploy.yml workflow files. The deploy-all.yml
in the parent repository expects each sample app to have a callable deployment
workflow:
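A minimal callable workflow for each sample-app repository might look like the sketch below. This is an assumption about the expected shape, not the file actually generated; the build and deploy steps are elided:

```yaml
# Hypothetical deploy.yml for a sample-app repo, callable from deploy-all.yml
name: deploy
on:
  workflow_call:
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC federation via azure/login
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # ...app-specific build and deploy steps...
```

With this file present in each repo, the parent `deploy-all.yml` can invoke it via `uses: <org>/<repo>/.github/workflows/deploy.yml@main`.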
After adding the missing deploy workflows, the bootstrap script is re-run (leveraging its idempotent design) to push the updated content to all repositories:
With all fixes in place, the deploy-all.yml workflow runs successfully.
However, the current implementation deploys all 5 apps sequentially rather
than in parallel, resulting in longer execution times:
Critical observations from the first iteration deploy-all:
- Sequential deployment — All 5 app deployments run sequentially. These should execute in parallel using a matrix strategy or separate concurrent jobs to reduce total deployment time.
- Silent failure masking — The workflow reports overall success even when individual app deployments fail. It should fail the entire run if any single deployment fails.
- Missing workflow summary — The completed workflow does not publish the deployed application URLs to the GitHub Actions job summary. The Accessibility and FinOps domains include summary links — Code Quality should match.
- Missing per-app documentation — Each demo-app repository should include a wiki page and screenshots showing the running application, matching the pattern established by the Accessibility and FinOps domains.
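The first three observations can be addressed together with a matrix strategy. A hedged sketch follows — the deploy script path and step layout are illustrative, only the app names come from the demo-app repository:

```yaml
# Hypothetical parallel deploy job for deploy-all.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false   # let every app attempt its deploy; any failed leg still fails the run
      matrix:
        app: [cq-demo-app-001, cq-demo-app-002, cq-demo-app-003, cq-demo-app-004, cq-demo-app-005]
    steps:
      - uses: actions/checkout@v4
      - name: Deploy ${{ matrix.app }}
        shell: pwsh
        run: ./scripts/deploy-app.ps1 -App ${{ matrix.app }}   # deploy-app.ps1 is a placeholder name
      - name: Publish deployed URL to the job summary
        run: echo "Deployed ${{ matrix.app }}" >> "$GITHUB_STEP_SUMMARY"
```

The matrix runs the five legs concurrently, `fail-fast: false` surfaces every individual failure instead of masking them, and `$GITHUB_STEP_SUMMARY` gives each leg a summary entry matching the Accessibility and FinOps pattern.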
After the initial deploy-all run, further iteration is needed to resolve remaining deployment failures and refine the infrastructure approach.
The deploy-all workflow completes, but individual app deployments surface errors related to Azure resource provisioning. The Actions log reveals the specific failure points:
Returning to VS Code, Copilot assists with diagnosing the deployment errors and recommending fixes based on the error output:
A key architectural decision emerges: should all 5 demo apps share a single resource group, or should each app get its own? The decision is to use separate resource groups for each of the 5 demo apps. This approach provides cleaner resource isolation, independent lifecycle management, and easier cost tracking per application:
After multiple iterations of fixing Bicep templates, adjusting OIDC scopes, resolving resource naming conflicts, and updating the deploy workflows, all 5 demo apps finally deploy successfully. The deploy-all workflow run confirms green status across all jobs:
Improvement opportunity: Compare the diff between the initial scaffolded commit and the final working commit of `code-quality-scan-demo-app` to catalog every fix that was needed. This delta reveals exactly where the `DomainScaffolder` agent (and the `domain-scaffolding` skill) should be improved to reduce the number of manual iterations required in future scaffold runs.
This section documents the agentic process of fixing the Code Quality workshop labs to be executable by students. A Copilot CLI agent (claude-opus-4.6) was used to systematically identify, fix, and verify all issues — focusing on the GitHub lab track first (ADO labs deferred).
The agent followed a structured process:
- Explore — Read all lab files, manifest, scripts, and demo-app structure
- Identify issues — Catalog every blocker preventing student execution
- Fix infrastructure — Update `capture-screenshots.ps1` and `screenshot-manifest.json`
- Capture & verify — Run the script, OCR-verify each screenshot
- Fix lab content — Update markdown for correctness and add screenshots
- Document — Record all findings for scaffolder improvement
Problem: The `screenshot-manifest.json` commands used Unix-only syntax:
- `head -40`, `tail -10`, `head -30` (not available in PowerShell)
- `cat file` (alias exists but inconsistent)
- `2>/dev/null` (PowerShell uses `2>$null`)
- `cd dir && cmd` (works in PS7 but fragile)
- `/tmp/` paths (Windows uses `$env:TEMP`)
Fix: Replaced all commands with PowerShell equivalents:
- `head -N` → `Select-Object -First N`
- `tail -N` → `Select-Object -Last N`
- `cat` → `Get-Content`
- `cd dir && cmd` → `Set-Location dir; cmd`
- `/tmp/` → `$env:TEMP`
Scaffolder improvement: The domain-scaffolding skill should detect the target OS and generate platform-appropriate commands. Consider using PowerShell Core (cross-platform) as the default shell for all manifest commands.
Problem: Commands like `cd cq-demo-app-004 && npx eslint src/` assumed the screenshot script runs from the demo-app repo root. But `capture-screenshots.ps1` lives in the workshop repo, and the `cq-demo-app-*` directories are in the separate `code-quality-scan-demo-app` repository.
Fix:
- Added a `workingDir: "demo-app"` field to manifest entries that need the demo-app CWD
- Updated `capture-screenshots.ps1` to accept a `-DemoAppDir` parameter
- The script auto-detects the demo-app repo as a sibling directory using the `scannerRepo` manifest field
- `Invoke-FreezeScreenshot` prepends `Set-Location` to the temp script when `workingDir` is set
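A manifest entry using these fields might look like the following. The `workingDir` and `scannerRepo` fields are the ones described above; the `id` and `command` values are illustrative, since the actual manifest schema is not reproduced here:

```json
{
  "id": "lab02-eslint-run",
  "command": "npx eslint src/ --format stylish",
  "workingDir": "demo-app",
  "scannerRepo": "code-quality-scan-demo-app"
}
```

With the working directory declared per entry, the capture script no longer depends on where it happens to be launched from.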
Scaffolder improvement: The skill should understand the two-repo architecture (workshop + demo-app) and generate manifest commands with explicit working directory context. The scannerRepo field exists but was not used by the capture script.
Problem: The `capture-screenshots.ps1` script spawns child `pwsh` processes via `freeze --execute`. These child processes don't inherit the parent's modified PATH. The script attempted to resolve the Python Scripts directory using `site.getuserbase()`, but on Windows Store Python the actual Scripts dir is at `<userbase>/Python313/Scripts`, not `<userbase>/Scripts`.
Fix: Changed PATH detection to derive the Scripts directory from pip show ruff output:
```powershell
$ruffLocation = pip show ruff | Select-String "Location:" | ...
$pyVerDir  = Split-Path $ruffLocation -Parent   # e.g., .../Python313
$candidate = Join-Path $pyVerDir "Scripts"      # e.g., .../Python313/Scripts
```

The preamble in `Invoke-FreezeScreenshot` now injects this PATH into child processes.
Scaffolder improvement: The capture script should not assume ruff/lizard are globally on PATH. Use python -m ruff and python -m lizard as fallbacks, or document the PATH requirement in Lab 00.
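The suggested fallback might be sketched as follows (the command layout is illustrative; `python -m ruff` assumes the ruff pip package is installed in the active interpreter):

```powershell
# Prefer the ruff executable when it is on PATH; otherwise fall back to
# module invocation, which works even when the pip Scripts dir is not on PATH
if (Get-Command ruff -ErrorAction SilentlyContinue) {
    ruff check src/
}
else {
    python -m ruff check src/
}
```

The same guard applies to `lizard` and any other pip-installed scanner the manifest invokes.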
Problem: The TypeScript Next.js demo app was scaffolded without an `eslint.config.mjs`. ESLint v9+ requires the flat config format, so running `npx eslint src/` produced:

> ESLint couldn't find an eslint.config.{js|mjs|cjs} file

The `package.json` had `eslint` and `eslint-config-next` as devDependencies, but no config file to use them.
Fix: Created `cq-demo-app-004/eslint.config.mjs`:

```js
import { FlatCompat } from "@eslint/eslintrc";

// FlatCompat lets flat config consume legacy "extends"-style shareable configs
const compat = new FlatCompat({ baseDirectory: import.meta.dirname });

const eslintConfig = [
  ...compat.extends("next/core-web-vitals", "plugin:@typescript-eslint/recommended"),
  { rules: { "prefer-const": "warn" } },
];

export default eslintConfig;
```

After the fix, ESLint correctly reports 4 intentional violations (3× `no-explicit-any`, 1× `no-unused-vars`).
Scaffolder improvement: The DomainScaffolder must generate ESLint config files for all JavaScript/TypeScript demo apps. For ESLint v9+, always use flat config format (eslint.config.mjs). Include @typescript-eslint/recommended rules to surface the intentional violations.
Problem: The `github-auth.json` file contains only a comment note, not actual Playwright storage state:

```json
{ "_note": "Run: npx playwright codegen --save-storage=github-auth.json github.com" }
```

This causes all playwright-auth screenshots to fail (Lab 06 Security tab, Lab 07 Actions pages).
Partial fix: Captured public-facing pages (Actions tab) without auth. The Security/code-scanning tab shows a 404/login page — requires authenticated Playwright state.
Scaffolder improvement: The skill should either:
- Generate a bootstrap script that automates `playwright codegen` with interactive auth
- Document the auth setup step prominently in Lab 00
- Pre-capture authenticated screenshots during the scaffolding session, when the developer is already authenticated
Problem: Java JDK and Gradle are not installed on the development machine. Lab 04 screenshots show "gradlew not found" instead of actual build output.
Fix: Added graceful fallback messaging in the manifest commands:
```powershell
if (Test-Path gradlew.bat) { .\gradlew.bat build ... }
else { Write-Host 'gradlew not found - Java/Gradle must be installed' }
```

Scaffolder improvement: The capture script could check prerequisites before attempting capture and skip or warn rather than produce misleading screenshots.
Problem: The SARIF upload command in Lab 06 Step 2 used bash-only syntax:
```bash
SARIF_CONTENT=$(gzip -c reports/complexity-001.sarif | base64 -w0)
```

Fix: Replaced with a PowerShell equivalent using `[System.IO.Compression.GZipStream]` and `[Convert]::ToBase64String()`.
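A hedged sketch of that PowerShell equivalent follows (the SARIF path comes from the lab; the exact code committed to the lab may differ):

```powershell
# Gzip-compress the SARIF file in memory, then base64-encode the result
$bytes = [System.IO.File]::ReadAllBytes("reports/complexity-001.sarif")

$buffer = [System.IO.MemoryStream]::new()
$gzip = [System.IO.Compression.GZipStream]::new(
    $buffer, [System.IO.Compression.CompressionMode]::Compress)
$gzip.Write($bytes, 0, $bytes.Length)
$gzip.Close()   # flush the compressor before reading the buffer

$SARIF_CONTENT = [Convert]::ToBase64String($buffer.ToArray())
```

Closing the `GZipStream` before calling `ToArray()` matters: the compressor buffers its final block until the stream is flushed or closed.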
Scaffolder improvement: All lab commands should use PowerShell syntax since the workshop targets Windows/PowerShell as the primary shell. The skill should never generate bash-only commands in lab steps.
Problem: Labs 01–06 tell students to `cd cq-demo-app-NNN` without explaining that these directories live inside the `code-quality-scan-demo-app` repository. A student who hasn't cloned that repo would be confused.
Fix: Added a callout box at the start of each lab's Steps section:
> Working Directory: These commands run from the root of your `code-quality-scan-demo-app` clone.
Scaffolder improvement: The skill should include context-setting callouts in every lab that references resources from outside the workshop repo.
Problem: `./gradlew build` is Unix syntax. On Windows, students need `.\gradlew.bat build`.
Fix: Added cross-platform commands showing both Windows and macOS/Linux variants.
Scaffolder improvement: Generate platform-aware commands with both variants shown.
Problem: The lab markdown files had no ![image]() references. All 28 screenshots existed only as empty image directories with README.md placeholders.
Fix: Added screenshot references at relevant steps in each lab file (labs 00–07).
Scaffolder improvement: The skill should auto-generate  references in lab markdown files, corresponding to entries in screenshot-manifest.json.
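That auto-generation step might be sketched as follows. The manifest schema assumed here (a `screenshots` array with `lab` and `id` fields) is hypothetical, since the real `screenshot-manifest.json` structure is not reproduced in this page:

```powershell
# Hypothetical sketch: emit a markdown image line for each Lab 02 manifest entry
$manifest = Get-Content screenshot-manifest.json -Raw | ConvertFrom-Json
$manifest.screenshots |
    Where-Object { $_.lab -eq "lab-02" } |
    ForEach-Object { '![{0}](images/{0}.png)' -f $_.id }
```

Generating the references from the manifest keeps the lab markdown and the capture script in lockstep: a screenshot cannot exist in one without appearing in the other.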
| Lab | Screenshots | Status | Notes |
|---|---|---|---|
| Lab 00 | 7 | ✅ All correct | Tool versions showing correctly |
| Lab 01 | 3 | ✅ All correct | Demo app matrix, tree, violation summary |
| Lab 02 | 3 | ✅ All correct | ESLint violations detected (after config fix) |
| Lab 03 | 3 | ✅ All correct | Ruff violations, pytest coverage (24%), SARIF output |
| Lab 04 | 2 | ⚠️ Degraded | Java not installed — shows fallback message |
| Lab 05 | 3 | ✅ All correct | dotnet build warnings, lizard scan, SARIF structure |
| Lab 06 | 3 | ⚠️ Partial | SARIF structure ✅; Security tab shows login page (no auth) |
| Lab 07 | 4 | ✅ Mostly correct | Actions pages captured; scan workflow shows "no runs yet" |
| Total | 28 | 23 good, 5 degraded | |
- `code-quality-scan-demo-app` `a0fed4a` — Add ESLint flat config for cq-demo-app-004
- `code-quality-scan-workshop` `a95d803` — Fix GitHub lab markdown (working dir notes, screenshots, cross-platform commands)
- `code-quality-scan-workshop` `57740b3` — Capture 28 GitHub lab screenshots and fix capture infrastructure
The following improvements to the domain-scaffolding skill would eliminate the need for these manual fixes in future scaffold runs:
| Priority | Improvement | Issues Prevented |
|---|---|---|
| P0 | Generate ESLint config for JS/TS apps | #4 |
| P0 | Use PowerShell syntax in all manifest commands | #1, #7 |
| P0 | Set working directory context in manifest entries | #2 |
| P1 | Auto-generate screenshot references in lab markdown | #10 |
| P1 | Include working directory callouts in labs | #8 |
| P1 | Handle Python Scripts PATH in capture script | #3 |
| P1 | Generate Playwright auth bootstrap script | #5 |
| P2 | Show cross-platform command variants | #9 |
| P2 | Pre-check tool prerequisites before capture | #6 |