Project: DotPilot
Stack: .NET 10 Uno Platform desktop app with central package management, NUnit unit tests, and Uno.UITest smoke coverage
Follows MCAF
This file defines how AI agents work in this solution.
- Root `AGENTS.md` holds the global workflow, shared commands, cross-cutting rules, and the global skill catalog.
- In multi-project solutions, each project or module root MUST have its own local `AGENTS.md`.
- Local `AGENTS.md` files add project-specific entry points, boundaries, commands, risks, and applicable skills.
- Solution root: `.` (`DotPilot.slnx`)
- Projects or modules with local `AGENTS.md` files: `DotPilot`, `DotPilot.Tests`, `DotPilot.UITests`
- Shared solution artifacts: `.editorconfig`, `Directory.Build.props`, `Directory.Packages.props`, `global.json`, `docs/Architecture.md`, `.codex/skills/mcaf-*`
- Read the solution-root `AGENTS.md` first.
- Read the nearest local `AGENTS.md` for the area you will edit.
- Apply the stricter rule when both files speak to the same topic.
- Local `AGENTS.md` files may refine or tighten root rules, but they must not silently weaken them.
- If a local rule needs an exception, document it explicitly in the nearest local `AGENTS.md`, ADR, or feature doc.
Learn the user's stable habits, preferences, and corrections. Record durable rules here instead of relying on chat history.
Before doing any non-trivial task, evaluate the latest user message.
If it contains a durable rule, correction, preference, or workflow change, update AGENTS.md first.
If it is only task-local scope, do not turn it into a lasting rule.
Update this file when the user gives:
- a repeated correction
- a permanent requirement
- a lasting preference
- a workflow change
- a high-signal frustration that indicates a rule was missed
Extract rules aggressively when the user says things equivalent to:
- "never", "don't", "stop", "avoid"
- "always", "must", "make sure", "should"
- "remember", "keep in mind", "note that"
- "from now on", "going forward"
- "the workflow is", "we do it like this"
Preferences belong in the `## Preferences` section:
- positive preferences go under `Likes`
- negative preferences go under `Dislikes`
- comparisons should become explicit rules or preferences
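The rules above imply a section shape like this sketch; the heading levels are illustrative, and only the `Likes` and `Dislikes` names come from the rules:

```markdown
## Preferences

### Likes
- (durable positive preferences, one per bullet)

### Dislikes
- (durable negative preferences, one per bullet)
```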
Corrections should update an existing rule when possible instead of creating duplicates.
Treat these as strong signals and record them immediately:
- anger, swearing, sarcasm, or explicit frustration
- ALL CAPS, repeated punctuation, or "don't do this again"
- the same mistake happening twice
- the user manually undoing or rejecting a recurring pattern
Do not record:
- one-off instructions for the current task
- temporary exceptions
- requirements that are already captured elsewhere without change
Rule format:
- one instruction per bullet
- place it in the right section
- capture the why, not only the literal wording
- remove obsolete rules when a better one replaces them
List only the skills this solution actually uses.
- `mcaf-dotnet` — primary entry skill for normal C# and .NET work in this solution.
- `mcaf-dotnet-features` — decide which modern C# and .NET 10 features are safe for the active project.
- `mcaf-solution-governance` — create or refine the root and project-local `AGENTS.md` files.
- `mcaf-testing` — plan test scope, layering, and regression coverage.
- `mcaf-dotnet-quality-ci` — align `.editorconfig`, analyzers, formatting, and CI quality gates.
- `mcaf-dotnet-complexity` — review or tighten complexity limits and complexity tooling.
- `mcaf-solid-maintainability` — enforce SOLID, SRP, maintainability limits, and exception handling.
- `mcaf-architecture-overview` — create or maintain `docs/Architecture.md`.
- `mcaf-ci-cd` — design or review CI and deployment gates.
- `mcaf-ui-ux` — handle UI architecture, accessibility, and design-handoff rules for the agent-facing UI.
- `figma-implement-design` — translate Figma handoff into Uno Platform desktop XAML without drifting into web-specific implementation patterns.
Skill-management rules for this .NET solution:
- `mcaf-dotnet` is the entry skill and routes to specialized .NET skills.
- Route test planning through `mcaf-testing`; the current repo uses NUnit on the VSTest runner, so do not apply TUnit or Microsoft.Testing.Platform assumptions.
- Add tool-specific .NET skills only when the repository actually uses those tools in CI or local verification.
- Keep only `mcaf-*` skills in agent skill directories.
- When upgrading skills, recheck the `build`, `test`, `format`, `analyze`, and `coverage` commands against the repo toolchain.
- build: `dotnet build DotPilot.slnx`
- test: `dotnet test DotPilot.slnx`
- format: `dotnet format DotPilot.slnx --verify-no-changes`
- analyze: `dotnet build DotPilot.slnx -warnaserror`
- coverage: `dotnet test DotPilot.Tests/DotPilot.Tests.csproj --collect:"XPlat Code Coverage"`
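Using the command list above, a full local quality pass could look like this sketch; the ordering mirrors the layered verification rules later in this file, and `set -e` stops the pass at the first failing gate:

```shell
#!/usr/bin/env sh
# Sketch of a full local quality pass using the repo commands above.
set -e

dotnet build DotPilot.slnx -warnaserror          # build + analyze gate
dotnet test DotPilot.slnx                        # unit and UI smoke suites
dotnet format DotPilot.slnx --verify-no-changes  # formatting gate
dotnet test DotPilot.Tests/DotPilot.Tests.csproj \
  --collect:"XPlat Code Coverage"                # coverage report
```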
For this app:
- unit tests currently use NUnit through the default VSTest runner
- UI smoke tests live in `DotPilot.UITests` and are a mandatory part of normal verification; the harness must provision or resolve browser-driver prerequisites automatically instead of skipping when local setup is missing
- `format` uses `dotnet format --verify-no-changes`
- coverage uses the `coverlet.collector` integration on `DotPilot.Tests`
- `LangVersion` is pinned to `latest` at the root
- the repo-root lowercase `.editorconfig` is the source of truth for formatting, naming, style, and analyzer severity
- `Directory.Build.props` owns the shared analyzer and warning policy for future projects
- `Directory.Packages.props` owns centrally managed package versions
- `global.json` pins the .NET SDK and Uno SDK versions used by the app and tests
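For central package management, a `Directory.Packages.props` file typically has the shape below. This is an illustrative sketch; the package versions shown are placeholders, not the repo's pinned versions:

```xml
<!-- Illustrative Directory.Packages.props fragment.
     Versions are placeholders, not the repo's actual pins. -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="NUnit" Version="4.0.0" />
    <PackageVersion Include="coverlet.collector" Version="6.0.0" />
  </ItemGroup>
</Project>
```

Project files then reference packages with `<PackageReference Include="NUnit" />` and no `Version` attribute.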
- Multi-project solutions MUST keep one root `AGENTS.md` plus one local `AGENTS.md` in each project or module root.
- Do not add `Owned by:` metadata to root or local `AGENTS.md` files.
- Each local `AGENTS.md` MUST document:
  - project purpose
  - entry points
  - boundaries
  - project-local commands
  - applicable skills
  - local risks or protected areas
- If a project grows enough that the root file becomes vague, add or tighten the local `AGENTS.md` before continuing implementation.
These limits are repo-configured policy values. They live here so the solution can tune them over time.
- `file_max_loc`: 400
- `type_max_loc`: 200
- `function_max_loc`: 50
- `max_nesting_depth`: 3
- `exception_policy`: document any justified exception in the nearest ADR, feature doc, or local AGENTS.md with the reason, scope, and removal/refactor plan.
Local AGENTS.md files may tighten these values, but they must not loosen them without an explicit root-level exception.
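A hypothetical enforcement sketch for these limits. The function name and the metrics dictionary are illustrative, not part of any real MCAF tooling; a real gate would compute the metrics from source files first:

```python
# Repo-configured complexity limits, mirrored from the policy above.
LIMITS = {
    "file_max_loc": 400,
    "type_max_loc": 200,
    "function_max_loc": 50,
    "max_nesting_depth": 3,
}


def check_complexity(metrics: dict) -> list:
    """Return one violation message per metric that exceeds its limit."""
    violations = []
    for name, limit in LIMITS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            violations.append(f"{name}: {value} exceeds limit {limit}")
    return violations
```

A local `AGENTS.md` that tightens a value would shrink the corresponding limit before the check runs, never grow it.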
- Start from `docs/Architecture.md` and the nearest local `AGENTS.md`.
- Treat `docs/Architecture.md` as the architecture map for every non-trivial task.
docs/Architecture.mdas the architecture map for every non-trivial task. - If the overview is missing, stale, or diagram-free, update it before implementation.
- Define scope before coding:
  - in scope
  - out of scope
- Keep context tight. Do not read the whole repo if the architecture map and local docs are enough.
- If the task matches a skill, use the skill instead of improvising.
- Analyze first:
  - current state
  - required change
  - constraints and risks
- For non-trivial work, create a root-level `<slug>.plan.md` file before making code or doc changes.
- Keep the `<slug>.plan.md` file as the working plan for the task until completion.
- The plan file MUST contain:
  - task goal and scope
  - a detailed implementation plan with ordered steps
  - constraints and risks
  - explicit test steps as part of the ordered plan, not as a later add-on
  - the test and verification strategy for each planned step
  - the testing methodology for the task: which flows will be tested, how they will be tested, and what quality bar the tests must meet
  - an explicit full-test baseline step after the plan is prepared
  - a tracked list of already-failing tests, with one checklist item per failing test
  - root-cause notes and an intended fix path for each failing test that must be addressed
  - a checklist with explicit done criteria for each step
  - ordered final validation skills and commands, with a reason for each
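Taken together, the required contents suggest a `<slug>.plan.md` skeleton like this; the headings are a suggested shape, not a mandated template:

```markdown
# <slug> — plan

## Goal and scope
## Implementation plan (ordered steps, each with its verification strategy)
## Constraints and risks
## Testing methodology (flows covered, how tested, quality bar)
## Full-test baseline (run after the plan is drafted)
## Already-failing tests (one checklist item each: symptom, root cause, fix path)
## Checklist (explicit done criteria per step)
## Final validation (ordered skills and commands, with a reason for each)
```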
- Use the Ralph Loop for every non-trivial task:
  - plan in detail in `<slug>.plan.md` before coding or document edits
  - include test creation, test updates, and verification work in the ordered steps from the start
  - once the initial plan is ready, run the full relevant test suite to establish the real baseline
  - if tests are already failing, add each failing test back into `<slug>.plan.md` as a tracked item with its failure symptom, suspected cause, and fix status
  - work through failing tests one by one: reproduce, find the root cause, apply the fix, rerun, and update the plan file
  - include ordered final validation skills in the plan file, with a reason for each skill
  - require each selected skill to produce a concrete action, artifact, or verification outcome
  - execute one planned step at a time
  - mark checklist items in `<slug>.plan.md` as work progresses
  - review findings, apply fixes, and rerun relevant verification
  - update the plan file and repeat until the done criteria are met or an explicit exception is documented
- Implement code and tests together.
- Run verification in layers:
  - changed tests
  - related suite
  - broader required regressions
- If `build` is separate from `test`, run `build` before `test`.
- After tests pass, run `format`, then the final required verification commands.
- The task is complete only when every planned checklist item is done and all relevant tests are green.
- Summarize the change, risks, and verification before marking the task complete.
- All durable docs live in `docs/`.
- `docs/Architecture.md` is the required global map and the first stop for agents.
- `docs/Architecture.md` MUST contain Mermaid diagrams for:
  - system or module boundaries
  - interfaces or contracts between boundaries
  - key classes or types for the changed area
- Keep one canonical source for each important fact. Link instead of duplicating.
- Public bootstrap templates are limited to root-level agent files. Authoring scaffolds for architecture, features, ADRs, and other workflows live in skills.
- Update feature docs when behaviour changes.
- Update ADRs when architecture, boundaries, or standards change.
- For non-trivial work, the plan file, feature doc, or ADR MUST document the testing methodology:
  - what flows are covered
  - how they are tested
  - which commands prove them
  - what quality and coverage requirements must hold
- Every feature doc under `docs/Features/` MUST contain at least one Mermaid diagram for the main behaviour or flow.
- Every ADR under `docs/ADR/` MUST contain at least one Mermaid diagram for the decision, boundaries, or interactions.
- Mermaid diagrams are mandatory in architecture docs, feature docs, and ADRs.
- Mermaid diagrams must render. Simplify them until they do.
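As a sanity check for the render rule, here is a minimal boundary diagram in this solution's terms; the module names come from the project list above, but the arrows are illustrative, not a statement of the real dependency graph:

```mermaid
flowchart LR
    UI[DotPilot desktop UI] --> Core[DotPilot core services]
    Tests[DotPilot.Tests] --> Core
    UITests[DotPilot.UITests] --> UI
```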
- TDD is the default for new behaviour and bug fixes: write the failing test first, make it pass, then refactor.
- Bug fixes start with a failing regression test that reproduces the issue.
- Every behaviour change needs new or updated automated tests with meaningful assertions. New tests are mandatory for new behaviour and bug fixes.
- Tests must prove the real user flow or caller-visible system flow, not only internal implementation details.
- Tests should be as realistic as possible and exercise the system through real flows, contracts, and dependencies.
- Tests must cover positive flows, negative flows, edge cases, and unexpected paths from multiple relevant angles when the behaviour can fail in different ways.
- Prefer integration, API, and UI tests over isolated unit tests when behaviour crosses boundaries.
- Do not use mocks, fakes, stubs, or service doubles in verification.
- Exercise internal and external dependencies through real containers, test instances, or sandbox environments that match the real contract.
- Flaky tests are failures. Fix the cause.
- Changed production code MUST reach at least 80% line coverage, and at least 70% branch coverage where branch coverage is available.
- Critical flows and public contracts MUST reach at least 90% line coverage with explicit success and failure assertions.
- Repository or module coverage must not decrease without an explicit written exception. Coverage after the change must stay at least at the previous baseline or improve.
- Coverage is for finding gaps, not gaming a number. Coverage numbers do not replace scenario coverage or user-flow verification.
- The task is not done until the full relevant test suite is green, not only the newly added tests.
- UI smoke tests are mandatory for this repository and must run in normal agent verification; missing local browser-driver setup is a harness bug to fix, not a reason to skip the suite.
- For .NET, keep the active framework and runner model explicit so agents do not mix `TUnit`, `Microsoft.Testing.Platform`, and legacy `VSTest` assumptions.
- After changing production code, run the repo-defined quality pass: format, build, analyze, focused tests, broader tests, coverage, and any configured extra gates.
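The coverage thresholds above could be enforced by a small gate script along these lines. The function and parameter names are hypothetical; real enforcement would first parse the Cobertura XML that `coverlet.collector` emits:

```python
# Hypothetical coverage gate mirroring the thresholds above:
# 80% line / 70% branch for changed code, 90% line for critical flows,
# and no regression below the previous baseline.
from typing import List, Optional


def coverage_gate(line_pct: float,
                  branch_pct: Optional[float],
                  baseline_line_pct: float,
                  critical_flow: bool = False) -> List[str]:
    """Return the list of gate failures for one changed module."""
    failures = []
    required_line = 90.0 if critical_flow else 80.0
    if line_pct < required_line:
        failures.append(f"line coverage {line_pct}% below required {required_line}%")
    # Branch coverage is only checked where the collector reports it.
    if branch_pct is not None and branch_pct < 70.0:
        failures.append(f"branch coverage {branch_pct}% below required 70%")
    if line_pct < baseline_line_pct:
        failures.append(f"coverage regressed from baseline {baseline_line_pct}%")
    return failures
```

An empty list means the gate passes; any entries should block completion or require an explicit written exception.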
- Everything in this solution MUST follow SOLID principles by default.
- Every class, object, module, and service MUST have a clear single responsibility and explicit boundaries.
- SOLID is mandatory.
- SRP and strong cohesion are mandatory for files, types, and functions.
- Prefer composition over inheritance unless inheritance is explicitly justified.
- Large files, types, functions, and deep nesting are design smells. Split them or document a justified exception under `exception_policy`.
- Hardcoded values and magic literals are forbidden. Declare string literals and other shared values once as named constants, enums, configuration entries, or dedicated value objects, then reuse those symbols.
- Design boundaries so real behaviour can be tested through public interfaces.
- For .NET, the repo-root `.editorconfig` is the source of truth for formatting, naming, style, and analyzer severity.
- Use nested `.editorconfig` files only when they serve a clear subtree-specific purpose. Do not let IDE defaults, pipeline flags, and repo config disagree.
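An `.editorconfig` fragment of the kind described above might look like this; the specific rule IDs and severities are illustrative examples, not the repo's actual settings:

```ini
# Illustrative root .editorconfig fragment; rules and severities are examples.
root = true

[*.cs]
indent_style = space
indent_size = 4
dotnet_diagnostic.CA1062.severity = warning
csharp_style_var_when_type_is_apparent = true:suggestion
```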
- Never commit secrets, keys, or connection strings.
- Never skip tests to make a branch green.
- Never weaken a test or analyzer without explicit justification.
- Never introduce mocks, fakes, stubs, or service doubles to hide real behaviour in tests or local flows.
- Never introduce a non-SOLID design unless the exception is explicitly documented under `exception_policy`.
- Never force-push to `main`.
- Never approve or merge on behalf of a human maintainer.
Always:
- Read root and local `AGENTS.md` files before editing code.
- Read the relevant docs before changing behaviour or architecture.
- Run the required verification commands yourself.
Ask first:
- changing public API contracts
- adding new dependencies
- modifying database schema
- deleting code files
- Follow the canonical MCAF tutorial when bootstrapping or upgrading the agent workflow.
- Keep the root `AGENTS.md` at the repository root.
- Keep the repo-local agent skill directory limited to current `mcaf-*` skills.
- Keep the solution file name cased as `DotPilot.slnx`.
- Treat `DotPilot` UI implementation as Uno Platform desktop XAML work, especially for Figma handoff, instead of translating designs into web stacks.
- Use central package management for shared test and tooling packages.
- Keep one .NET test framework active in the solution at a time unless a documented migration is in progress.
- Validate UI changes through runnable `DotPilot.UITests` on every relevant verification pass, instead of relying only on manual browser inspection or conditional local setup.
- Installing stale, non-canonical, or non-`mcaf-*` skills into the repo-local agent skill directory.
- Moving root governance out of the repository root.
- Mixing multiple .NET test frameworks in the active solution without a documented migration plan.
- Switching desktop Uno pages into stacked or mobile-style responsive layouts during resize work unless the user explicitly asks for a different composition; desktop pages must stay desktop-first and protect geometry through sizing constraints instead.