
AGENTS.md

Project: DotPilot
Stack: .NET 10 Uno Platform desktop app with central package management, NUnit unit tests, and Uno.UITest smoke coverage

Follows MCAF


Purpose

This file defines how AI agents work in this solution.

  • Root AGENTS.md holds the global workflow, shared commands, cross-cutting rules, and global skill catalog.
  • In multi-project solutions, each project or module root MUST have its own local AGENTS.md.
  • Local AGENTS.md files add project-specific entry points, boundaries, commands, risks, and applicable skills.

Solution Topology

  • Solution root: . (DotPilot.slnx)
  • Projects or modules with local AGENTS.md files:
    • DotPilot
    • DotPilot.Tests
    • DotPilot.UITests
  • Shared solution artifacts:
    • .editorconfig
    • Directory.Build.props
    • Directory.Packages.props
    • global.json
    • docs/Architecture.md
    • .codex/skills/mcaf-*
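
For orientation, a global.json in this setup typically follows the sketch below. The version numbers are placeholders, not the repo's actual pins; check the real file before relying on them:

```json
{
  "sdk": {
    "version": "10.0.100",
    "rollForward": "latestPatch"
  },
  "msbuild-sdks": {
    "Uno.Sdk": "6.0.0"
  }
}
```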

Rule Precedence

  1. Read the solution-root AGENTS.md first.
  2. Read the nearest local AGENTS.md for the area you will edit.
  3. Apply the stricter rule when both files speak to the same topic.
  4. Local AGENTS.md files may refine or tighten root rules, but they must not silently weaken them.
  5. If a local rule needs an exception, document it explicitly in the nearest local AGENTS.md, ADR, or feature doc.

Conversations (Self-Learning)

Learn the user's stable habits, preferences, and corrections. Record durable rules here instead of relying on chat history.

Before doing any non-trivial task, evaluate the latest user message. If it contains a durable rule, correction, preference, or workflow change, update AGENTS.md first. If it is only task-local scope, do not turn it into a lasting rule.

Update this file when the user gives:

  • a repeated correction
  • a permanent requirement
  • a lasting preference
  • a workflow change
  • a high-signal frustration that indicates a rule was missed

Extract rules aggressively when the user says things equivalent to:

  • "never", "don't", "stop", "avoid"
  • "always", "must", "make sure", "should"
  • "remember", "keep in mind", "note that"
  • "from now on", "going forward"
  • "the workflow is", "we do it like this"

Preferences belong in the ## Preferences section:

  • positive preferences go under Likes
  • negative preferences go under Dislikes
  • comparisons should become explicit rules or preferences

Corrections should update an existing rule when possible instead of creating duplicates.

Treat these as strong signals and record them immediately:

  • anger, swearing, sarcasm, or explicit frustration
  • ALL CAPS, repeated punctuation, or "don't do this again"
  • the same mistake happening twice
  • the user manually undoing or rejecting a recurring pattern

Do not record:

  • one-off instructions for the current task
  • temporary exceptions
  • requirements that are already captured elsewhere without change

Rule format:

  • one instruction per bullet
  • place it in the right section
  • capture the why, not only the literal wording
  • remove obsolete rules when a better one replaces them

Global Skills

List only the skills this solution actually uses.

  • mcaf-dotnet — primary entry skill for normal C# and .NET work in this solution.
  • mcaf-dotnet-features — decide which modern C# and .NET 10 features are safe for the active project.
  • mcaf-solution-governance — create or refine the root and project-local AGENTS.md files.
  • mcaf-testing — plan test scope, layering, and regression coverage.
  • mcaf-dotnet-quality-ci — align .editorconfig, analyzers, formatting, and CI-quality gates.
  • mcaf-dotnet-complexity — review or tighten complexity limits and complexity tooling.
  • mcaf-solid-maintainability — enforce SOLID, SRP, maintainability limits, and exception handling.
  • mcaf-architecture-overview — create or maintain docs/Architecture.md.
  • mcaf-ci-cd — design or review CI and deployment gates.
  • mcaf-ui-ux — handle UI architecture, accessibility, and design-handoff rules for the agent-facing UI.
  • figma-implement-design — translate Figma handoff into Uno Platform desktop XAML without drifting into web-specific implementation patterns.

Skill-management rules for this .NET solution:

  • mcaf-dotnet is the entry skill and routes to specialized .NET skills.
  • Route test planning through mcaf-testing; the current repo uses NUnit on the VSTest runner, so do not apply TUnit or Microsoft.Testing.Platform assumptions.
  • Add tool-specific .NET skills only when the repository actually uses those tools in CI or local verification.
  • Keep only mcaf-* skills in agent skill directories.
  • When upgrading skills, recheck build, test, format, analyze, and coverage commands against the repo toolchain.

Rules to Follow (Mandatory)

Commands

  • build: dotnet build DotPilot.slnx
  • test: dotnet test DotPilot.slnx
  • format: dotnet format DotPilot.slnx --verify-no-changes
  • analyze: dotnet build DotPilot.slnx -warnaserror
  • coverage: dotnet test DotPilot.Tests/DotPilot.Tests.csproj --collect:"XPlat Code Coverage"
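
The gate order above can be sketched as a small driver. This is illustrative, not repo tooling; `dry_run=True` is an assumed convenience so the order can be inspected without the .NET SDK installed:

```python
# Dry-runnable sketch of the root verification commands listed above.
# Assumption: executed from the solution root.
import subprocess

VERIFY_COMMANDS = [
    ("build", ["dotnet", "build", "DotPilot.slnx"]),
    ("test", ["dotnet", "test", "DotPilot.slnx"]),
    ("format", ["dotnet", "format", "DotPilot.slnx", "--verify-no-changes"]),
    ("analyze", ["dotnet", "build", "DotPilot.slnx", "-warnaserror"]),
]

def run_verification(dry_run=False):
    """Run each gate in order; a failing gate stops the run."""
    executed = []
    for name, cmd in VERIFY_COMMANDS:
        executed.append(name)
        if not dry_run:
            subprocess.run(cmd, check=True)  # check=True raises on failure
    return executed
```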

For this app:

  • unit tests currently use NUnit through the default VSTest runner
  • UI smoke tests live in DotPilot.UITests and are a mandatory part of normal verification; the harness must provision or resolve browser-driver prerequisites automatically instead of skipping when local setup is missing
  • format uses dotnet format --verify-no-changes
  • coverage uses the coverlet.collector integration on DotPilot.Tests
  • LangVersion is pinned to latest at the root
  • the repo-root lowercase .editorconfig is the source of truth for formatting, naming, style, and analyzer severity
  • Directory.Build.props owns the shared analyzer and warning policy for future projects
  • Directory.Packages.props owns centrally managed package versions
  • global.json pins the .NET SDK and Uno SDK version used by the app and tests
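
Central package management in Directory.Packages.props follows the standard CPM shape; the package names and versions below are illustrative, not the repo's actual pins:

```xml
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- versions live here; project files reference packages without a Version attribute -->
    <PackageVersion Include="NUnit" Version="4.2.2" />
    <PackageVersion Include="coverlet.collector" Version="6.0.2" />
  </ItemGroup>
</Project>
```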

Project AGENTS Policy

  • Multi-project solutions MUST keep one root AGENTS.md plus one local AGENTS.md in each project or module root.
  • Do not add Owned by: metadata to root or local AGENTS.md files.
  • Each local AGENTS.md MUST document:
    • project purpose
    • entry points
    • boundaries
    • project-local commands
    • applicable skills
    • local risks or protected areas
  • If a project grows enough that the root file becomes vague, add or tighten the local AGENTS.md before continuing implementation.

Maintainability Limits

These limits are repo-configured policy values. They live here so the solution can tune them over time.

  • file_max_loc: 400
  • type_max_loc: 200
  • function_max_loc: 50
  • max_nesting_depth: 3
  • exception_policy: Document any justified exception in the nearest ADR, feature doc, or local AGENTS.md with the reason, scope, and removal/refactor plan.

Local AGENTS.md files may tighten these values, but they must not loosen them without an explicit root-level exception.
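
A hypothetical checker for the file_max_loc limit might look like this. The limit value mirrors the policy table above; the helper itself is a sketch, not repo tooling:

```python
# Illustrative LOC check against the repo-configured policy value above.
from pathlib import Path

FILE_MAX_LOC = 400  # file_max_loc from the Maintainability Limits section

def file_loc_violation(path):
    """Return a violation message when non-blank LOC exceeds the budget."""
    loc = sum(1 for line in Path(path).read_text().splitlines() if line.strip())
    if loc > FILE_MAX_LOC:
        return f"{path}: {loc} LOC exceeds file_max_loc={FILE_MAX_LOC}"
    return None
```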

Task Delivery

  • Start from docs/Architecture.md and the nearest local AGENTS.md.
  • Treat docs/Architecture.md as the architecture map for every non-trivial task.
  • If the overview is missing, stale, or diagram-free, update it before implementation.
  • Define scope before coding:
    • in scope
    • out of scope
  • Keep context tight. Do not read the whole repo if the architecture map and local docs are enough.
  • If the task matches a skill, use the skill instead of improvising.
  • Analyze first:
    • current state
    • required change
    • constraints and risks
  • For non-trivial work, create a root-level <slug>.plan.md file before making code or doc changes.
  • Keep the <slug>.plan.md file as the working plan for the task until completion.
  • The plan file MUST contain:
    • task goal and scope
    • a detailed implementation plan with ordered steps
    • constraints and risks
    • explicit test steps as part of the ordered plan, not as a later add-on
    • the test and verification strategy for each planned step
    • the testing methodology for the task: what flows will be tested, how they will be tested, and what quality bar the tests must meet
    • an explicit full-test baseline step after the plan is prepared
    • a tracked list of already failing tests, with one checklist item per failing test
    • root-cause notes and intended fix path for each failing test that must be addressed
    • a checklist with explicit done criteria for each step
    • ordered final validation skills and commands, with reason for each
  • Use the Ralph Loop for every non-trivial task:
    • plan in detail in <slug>.plan.md before coding or document edits
    • include test creation, test updates, and verification work in the ordered steps from the start
    • once the initial plan is ready, run the full relevant test suite to establish the real baseline
    • if tests are already failing, add each failing test back into <slug>.plan.md as a tracked item with its failure symptom, suspected cause, and fix status
    • work through failing tests one by one: reproduce, find the root cause, apply the fix, rerun, and update the plan file
    • include ordered final validation skills in the plan file, with reason for each skill
    • require each selected skill to produce a concrete action, artifact, or verification outcome
    • execute one planned step at a time
    • mark checklist items in <slug>.plan.md as work progresses
    • review findings, apply fixes, and rerun relevant verification
    • update the plan file and repeat until done criteria are met or an explicit exception is documented
  • Implement code and tests together.
  • Run verification in layers:
    • changed tests
    • related suite
    • broader required regressions
  • If build is separate from test, run build before test.
  • After tests pass, run format, then the final required verification commands.
  • The task is complete only when every planned checklist item is done and all relevant tests are green.
  • Summarize the change, risks, and verification before marking the task complete.
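
The Ralph Loop requirements above imply a plan-file skeleton roughly like this; the section names are a suggested shape, not a mandated template:

```markdown
# <slug> plan

## Goal and scope
In scope: ... / Out of scope: ...

## Constraints and risks

## Ordered steps
- [ ] 1. <step> — test and verification strategy: ...
- [ ] 2. ...

## Full-test baseline
- [ ] failing test: <name> — symptom, suspected cause, fix status

## Final validation
- [ ] <skill or command> — reason
```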

Documentation

  • All durable docs live in docs/.
  • docs/Architecture.md is the required global map and the first stop for agents.
  • docs/Architecture.md MUST contain Mermaid diagrams for:
    • system or module boundaries
    • interfaces or contracts between boundaries
    • key classes or types for the changed area
  • Keep one canonical source for each important fact. Link instead of duplicating.
  • Public bootstrap templates are limited to root-level agent files. Authoring scaffolds for architecture, features, ADRs, and other workflows live in skills.
  • Update feature docs when behaviour changes.
  • Update ADRs when architecture, boundaries, or standards change.
  • For non-trivial work, the plan file, feature doc, or ADR MUST document the testing methodology:
    • what flows are covered
    • how they are tested
    • which commands prove them
    • what quality and coverage requirements must hold
  • Every feature doc under docs/Features/ MUST contain at least one Mermaid diagram for the main behaviour or flow.
  • Every ADR under docs/ADR/ MUST contain at least one Mermaid diagram for the decision, boundaries, or interactions.
  • Mermaid diagrams are mandatory in architecture docs, feature docs, and ADRs.
  • Mermaid diagrams must render. Simplify them until they do.
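
A minimal diagram that satisfies the boundary requirement might look like this; the module names are illustrative, not the repo's real module map:

```mermaid
flowchart LR
    UI[DotPilot UI] --> Core[Core services]
    Tests[DotPilot.Tests] --> Core
    UITests[DotPilot.UITests] --> UI
```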

Testing

  • TDD is the default for new behaviour and bug fixes: write the failing test first, make it pass, then refactor.
  • Bug fixes start with a failing regression test that reproduces the issue.
  • Every behaviour change needs new or updated automated tests with meaningful assertions. New tests are mandatory for new behaviour and bug fixes.
  • Tests must prove the real user flow or caller-visible system flow, not only internal implementation details.
  • Tests should be as realistic as possible and exercise the system through real flows, contracts, and dependencies.
  • Tests must cover positive flows, negative flows, edge cases, and unexpected paths from multiple relevant angles when the behaviour can fail in different ways.
  • Prefer integration, API, and UI tests over isolated unit tests when behaviour crosses boundaries.
  • Do not use mocks, fakes, stubs, or service doubles in verification.
  • Exercise internal and external dependencies through real containers, test instances, or sandbox environments that match the real contract.
  • Flaky tests are failures. Fix the cause.
  • Changed production code MUST reach at least 80% line coverage, and at least 70% branch coverage where branch coverage is available.
  • Critical flows and public contracts MUST reach at least 90% line coverage with explicit success and failure assertions.
  • Repository or module coverage must not decrease without an explicit written exception. Coverage after the change must stay at least at the previous baseline or improve.
  • Coverage is for finding gaps, not gaming a number. Coverage numbers do not replace scenario coverage or user-flow verification.
  • The task is not done until the full relevant test suite is green, not only the newly added tests.
  • UI smoke tests are mandatory for this repository and must run in normal agent verification; missing local browser-driver setup is a harness bug to fix, not a reason to skip the suite.
  • For .NET, keep the active framework and runner model explicit so agents do not mix TUnit, Microsoft.Testing.Platform, and legacy VSTest assumptions.
  • After changing production code, run the repo-defined quality pass: format, build, analyze, focused tests, broader tests, coverage, and any configured extra gates.
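
The coverage thresholds above can be applied to a Cobertura report, the format coverlet.collector emits. The thresholds are copied from this section; the parsing sketch assumes the standard Cobertura root element and is not repo tooling:

```python
# Illustrative coverage gate over a Cobertura XML report.
import xml.etree.ElementTree as ET

LINE_MIN = 0.80    # changed production code: at least 80% line coverage
BRANCH_MIN = 0.70  # at least 70% branch coverage where available

def coverage_gate(cobertura_xml):
    """Return True when the report meets both thresholds."""
    root = ET.fromstring(cobertura_xml)
    line_rate = float(root.get("line-rate"))
    branch_rate = float(root.get("branch-rate"))
    return line_rate >= LINE_MIN and branch_rate >= BRANCH_MIN
```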

Code And Design

  • Everything in this solution MUST follow SOLID principles by default.
  • Every class, object, module, and service MUST have a clear single responsibility and explicit boundaries.
  • SOLID is mandatory.
  • SRP and strong cohesion are mandatory for files, types, and functions.
  • Prefer composition over inheritance unless inheritance is explicitly justified.
  • Large files, types, functions, and deep nesting are design smells. Split them or document a justified exception under exception_policy.
  • Hardcoded values are forbidden.
  • String literals are forbidden in implementation code. Declare them once as named constants, enums, configuration entries, or dedicated value objects, then reuse those symbols.
  • Avoid magic literals. Extract shared values into constants, enums, configuration, or dedicated types.
  • Design boundaries so real behaviour can be tested through public interfaces.
  • For .NET, the repo-root .editorconfig is the source of truth for formatting, naming, style, and analyzer severity.
  • Use nested .editorconfig files when they serve a clear subtree-specific purpose. Do not let IDE defaults, pipeline flags, and repo config disagree.

Critical

  • Never commit secrets, keys, or connection strings.
  • Never skip tests to make a branch green.
  • Never weaken a test or analyzer without explicit justification.
  • Never introduce mocks, fakes, stubs, or service doubles to hide real behaviour in tests or local flows.
  • Never introduce a non-SOLID design unless the exception is explicitly documented under exception_policy.
  • Never force-push to main.
  • Never approve or merge on behalf of a human maintainer.

Boundaries

Always:

  • Read root and local AGENTS.md files before editing code.
  • Read the relevant docs before changing behaviour or architecture.
  • Run the required verification commands yourself.

Ask first:

  • changing public API contracts
  • adding new dependencies
  • modifying database schema
  • deleting code files

Preferences

Likes

  • Follow the canonical MCAF tutorial when bootstrapping or upgrading the agent workflow.
  • Keep the root AGENTS.md at the repository root.
  • Keep the repo-local agent skill directory limited to current mcaf-* skills.
  • Keep the solution file name cased as DotPilot.slnx.
  • Treat DotPilot UI implementation as Uno Platform desktop XAML work, especially for Figma handoff, instead of translating designs into web stacks.
  • Use central package management for shared test and tooling packages.
  • Keep one .NET test framework active in the solution at a time unless a documented migration is in progress.
  • Validate UI changes through runnable DotPilot.UITests on every relevant verification pass, instead of relying only on manual browser inspection or conditional local setup.

Dislikes

  • Installing stale, non-canonical, or non-mcaf-* skills into the repo-local agent skill directory.
  • Moving root governance out of the repository root.
  • Mixing multiple .NET test frameworks in the active solution without a documented migration plan.
  • Switching desktop Uno pages into stacked or mobile-style responsive layouts during resize work unless the user explicitly asks for a different composition; desktop pages must stay desktop-first and protect geometry through sizing constraints instead.