diff --git a/docs/adr/26148-deterministic-audit-metrics-via-run-summary-cache.md b/docs/adr/26148-deterministic-audit-metrics-via-run-summary-cache.md new file mode 100644 index 0000000000..3c1b097b5a --- /dev/null +++ b/docs/adr/26148-deterministic-audit-metrics-via-run-summary-cache.md @@ -0,0 +1,82 @@ +# ADR-26148: Deterministic Audit Metrics via run_summary.json Cache and workflow-logs/ Exclusion + +**Date**: 2026-04-14 +**Status**: Draft +**Deciders**: pelikhan, Copilot + +--- + +## Part 1 — Narrative (Human-Friendly) + +### Context + +The `audit` command reported wildly inconsistent `token_usage` and `turns` on repeated invocations for the same workflow run (observed: 9 turns / 381k tokens on one call, 22 turns / 4.7M tokens on another). Two compounding bugs caused this: (1) `AuditWorkflowRun` unconditionally re-processed all local log files on every call, even when a fully-computed `run_summary.json` was already on disk; and (2) the log-file walk in `extractLogMetrics` did not exclude the `workflow-logs/` directory, which `downloadWorkflowRunLogs` populates with GitHub Actions step-output — files that capture the same agent stdout already present in the agent artifact logs, inflating token counts by approximately 12×. + +### Decision + +We will adopt a **cache-first strategy** for `AuditWorkflowRun`: before performing any API calls or log processing, check whether a valid `run_summary.json` exists on disk (validated by CLI version). If a cache hit is found, reconstruct `ProcessedRun` from the cached summary and return immediately via a shared `renderAuditReport` helper, bypassing all re-download and re-parse logic. We will additionally **exclude the `workflow-logs/` directory** from the `extractLogMetrics` log walk by returning `filepath.SkipDir` whenever the walk visits a directory named `workflow-logs`, preventing GitHub Actions runner captures from being counted as agent artifact data. 
Together, these two changes ensure that repeated `audit` calls for the same run produce identical metrics. + +### Alternatives Considered + +#### Alternative 1: Invalidate and Overwrite the Cache on Every Call + +Rather than treating the cached `run_summary.json` as authoritative, re-process logs on every call and overwrite the cache. This would keep the cache "fresh" but would perpetuate the inconsistency problem: log re-processing can produce different values depending on which files are present at the time (e.g., if `workflow-logs/` has been populated between calls). This was rejected because consistency of audit metrics across repeated calls is the primary requirement. + +#### Alternative 2: Exclude `workflow-logs/` Files by Name Pattern Instead of Directory Skip + +Rather than skipping the entire `workflow-logs/` directory with `filepath.SkipDir`, selectively exclude individual files whose names match known GitHub Actions runner-log patterns (e.g., `*_Run log step.txt`). This would be fragile: GitHub Actions file naming conventions can change, and any unrecognized file would silently inflate metrics again. Skipping the entire directory by name is simpler, robust, and aligns with how `downloadWorkflowRunLogs` places its output. + +#### Alternative 3: Store Canonical Metrics in a Separate Lock File + +Record only the metrics (token usage, turns) in a dedicated lock file separate from `run_summary.json`, and read that lock file on subsequent calls. This adds file-system complexity without meaningful benefit over reusing the existing `run_summary.json` structure. The current `loadRunSummary` already performs CLI-version validation, providing a clean automatic invalidation mechanism. + +### Consequences + +#### Positive +- Repeated `audit` calls for the same run are now deterministic and produce identical output. +- The cache-hit path avoids all API calls and file re-parsing, making subsequent audits significantly faster. 
+- The `renderAuditReport` helper function eliminates the duplicated render + finalization logic that previously existed in both the fresh-download and (now) cache-hit code paths. +- Cache invalidation on CLI upgrade is automatic via the existing `CLIVersion` check in `loadRunSummary`. + +#### Negative +- The first successful `audit` call becomes the canonical source of truth. If log files were incomplete on the first run (e.g., partial download), the cached metrics will be wrong until the cache is manually cleared or the CLI is upgraded. +- The `workflow-logs/` exclusion is a directory-name-based heuristic. If `downloadWorkflowRunLogs` ever changes the output directory name, the exclusion silently stops working. +- Adding a new top-level helper (`renderAuditReport`) increases the surface area of the package's internal API. + +#### Neutral +- The `run_summary.json` format is unchanged; only the read/write ordering is adjusted (save-before-render in the fresh-download path). +- Existing tests for `loadRunSummary` and `saveRunSummary` remain valid; new regression tests were added for the cache-hit path and the `workflow-logs/` exclusion. + +--- + +## Part 2 — Normative Specification (RFC 2119) + +> The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHALL**, **SHALL NOT**, **SHOULD**, **SHOULD NOT**, **RECOMMENDED**, **MAY**, and **OPTIONAL** in this section are to be interpreted as described in [RFC 2119](https://www.rfc-editor.org/rfc/rfc2119). + +### Cache-First Audit Strategy + +1. Implementations **MUST** check for a valid `run_summary.json` on disk before initiating any API calls or log-file processing in `AuditWorkflowRun`. +2. Implementations **MUST** treat a cache hit (valid `run_summary.json` with matching `CLIVersion`) as the authoritative source of metrics and return immediately without re-processing logs. +3. Implementations **MUST NOT** overwrite an existing `run_summary.json` when serving a cache hit; the cached file **MUST** remain unmodified. +4. 
Implementations **MUST** persist `run_summary.json` to disk before calling the render step in the fresh-download path, so that a render failure does not prevent future cache hits. +5. Implementations **SHOULD** log a message (at the appropriate verbosity level) indicating that a cached summary is being used, including the original `ProcessedAt` timestamp. + +### Log Metric Extraction + +1. Implementations **MUST** skip the `workflow-logs/` directory (and its contents) when walking the run output directory in `extractLogMetrics`. +2. Implementations **MUST** use `filepath.SkipDir` (or equivalent) to exclude the entire `workflow-logs/` subtree, not individual files within it. +3. Implementations **MUST NOT** include token-usage data found in `workflow-logs/` in the `LogMetrics.TokenUsage` or `LogMetrics.Turns` totals. +4. Implementations **MAY** log a debug message when skipping the `workflow-logs/` directory to aid in future diagnostics. + +### Shared Render Path + +1. Implementations **MUST** use a single shared function (currently `renderAuditReport`) to build and emit the audit report, invoked by both the cache-hit path and the fresh-download path. +2. The shared render function **MUST NOT** re-extract metrics from log files; it **MUST** use only the metrics passed to it by the caller. + +### Conformance + +An implementation is considered conformant with this ADR if it satisfies all **MUST** and **MUST NOT** requirements above. Failure to meet any **MUST** or **MUST NOT** requirement constitutes non-conformance. + +--- + +*This is a DRAFT ADR generated by the [Design Decision Gate](https://github.com/github/gh-aw/actions/runs/24396807146) workflow. 
The PR author must review, complete, and finalize this document before the PR can merge.* diff --git a/pkg/cli/audit.go b/pkg/cli/audit.go index f8ea5b0282..9d3d5f1947 100644 --- a/pkg/cli/audit.go +++ b/pkg/cli/audit.go @@ -189,6 +189,41 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st return auditJobRun(runID, jobID, stepNumber, owner, repo, hostname, runOutputDir, verbose, jsonOutput) } + // Use cached run summary when available to ensure deterministic metrics across repeated calls. + // Re-processing the same log files can produce different results (e.g. when GitHub's API + // returns aggregated data that differs from the locally-stored firewall logs), so we always + // prefer the first fully-processed summary written to disk. The cache is automatically + // invalidated whenever the CLI version changes (see loadRunSummary). + if summary, ok := loadRunSummary(runOutputDir, verbose); ok { + auditLog.Printf("Using cached run summary for run %d (processed at %s)", runID, summary.ProcessedAt.Format(time.RFC3339)) + if verbose { + fmt.Fprintln(os.Stderr, console.FormatInfoMessage(fmt.Sprintf("Using cached run summary for run %d (processed at %s)", runID, summary.ProcessedAt.Format(time.RFC3339)))) + } + processedRun := ProcessedRun{ + Run: summary.Run, + AwContext: summary.AwContext, + TaskDomain: summary.TaskDomain, + BehaviorFingerprint: summary.BehaviorFingerprint, + AgenticAssessments: summary.AgenticAssessments, + AccessAnalysis: summary.AccessAnalysis, + FirewallAnalysis: summary.FirewallAnalysis, + PolicyAnalysis: summary.PolicyAnalysis, + RedactedDomainsAnalysis: summary.RedactedDomainsAnalysis, + MissingTools: summary.MissingTools, + MissingData: summary.MissingData, + Noops: summary.Noops, + MCPFailures: summary.MCPFailures, + TokenUsage: summary.TokenUsage, + GitHubRateLimitUsage: summary.GitHubRateLimitUsage, + JobDetails: summary.JobDetails, + } + // Override the cached LogsPath with the current runOutputDir so that 
downstream + // file reads (created items, aw_info, etc.) resolve correctly even if the run + // directory has been moved or copied since the summary was first written. + processedRun.Run.LogsPath = runOutputDir + return renderAuditReport(processedRun, summary.Metrics, summary.MCPToolUsage, runOutputDir, owner, repo, hostname, verbose, parse, jsonOutput) + } + // Check if we have locally cached artifacts first hasLocalCache := fileutil.DirExists(runOutputDir) && !fileutil.IsDirEmpty(runOutputDir) @@ -416,6 +451,50 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st processedRun.BehaviorFingerprint = behaviorFingerprint processedRun.AgenticAssessments = agenticAssessments + // Save run summary for caching future audit runs + summary := &RunSummary{ + CLIVersion: GetVersion(), + RunID: run.DatabaseID, + ProcessedAt: time.Now(), + Run: run, + Metrics: metrics, + AwContext: processedRun.AwContext, + TaskDomain: processedRun.TaskDomain, + BehaviorFingerprint: processedRun.BehaviorFingerprint, + AgenticAssessments: processedRun.AgenticAssessments, + AccessAnalysis: accessAnalysis, + FirewallAnalysis: firewallAnalysis, + PolicyAnalysis: policyAnalysis, + RedactedDomainsAnalysis: redactedDomainsAnalysis, + MissingTools: missingTools, + MissingData: missingData, + Noops: noops, + MCPFailures: mcpFailures, + MCPToolUsage: mcpToolUsage, + TokenUsage: tokenUsageSummary, + GitHubRateLimitUsage: rateLimitUsage, + ArtifactsList: artifacts, + JobDetails: jobDetails, + } + + if err := saveRunSummary(runOutputDir, summary, verbose); err != nil { + if verbose { + fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save run summary: %v", err))) + } + } + + return renderAuditReport(processedRun, metrics, mcpToolUsage, runOutputDir, owner, repo, hostname, verbose, parse, jsonOutput) +} + +// renderAuditReport builds and renders the audit report from a fully-populated processedRun. 
+// It is called both when serving from a cached run summary and after a fresh processing pass, +// ensuring that the two paths produce identical output. +func renderAuditReport(processedRun ProcessedRun, metrics LogMetrics, mcpToolUsage *MCPToolUsageData, runOutputDir string, owner, repo, hostname string, verbose bool, parse bool, jsonOutput bool) error { + runID := processedRun.Run.DatabaseID + + currentCreatedItems := extractCreatedItemsFromManifest(runOutputDir) + processedRun.Run.SafeItemsCount = len(currentCreatedItems) + currentSnapshot := buildAuditComparisonSnapshot(processedRun, currentCreatedItems) comparison := buildAuditComparisonForRun(processedRun, currentSnapshot, runOutputDir, owner, repo, hostname, verbose) @@ -474,38 +553,6 @@ func AuditWorkflowRun(ctx context.Context, runID int64, owner, repo, hostname st } } - // Save run summary for caching future audit runs - summary := &RunSummary{ - CLIVersion: GetVersion(), - RunID: run.DatabaseID, - ProcessedAt: time.Now(), - Run: run, - Metrics: metrics, - AwContext: processedRun.AwContext, - TaskDomain: processedRun.TaskDomain, - BehaviorFingerprint: processedRun.BehaviorFingerprint, - AgenticAssessments: processedRun.AgenticAssessments, - AccessAnalysis: accessAnalysis, - FirewallAnalysis: firewallAnalysis, - PolicyAnalysis: policyAnalysis, - RedactedDomainsAnalysis: redactedDomainsAnalysis, - MissingTools: missingTools, - MissingData: missingData, - Noops: noops, - MCPFailures: mcpFailures, - MCPToolUsage: mcpToolUsage, - TokenUsage: tokenUsageSummary, - GitHubRateLimitUsage: rateLimitUsage, - ArtifactsList: artifacts, - JobDetails: jobDetails, - } - - if err := saveRunSummary(runOutputDir, summary, verbose); err != nil { - if verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save run summary: %v", err))) - } - } - // Display logs location (only for console output) if !jsonOutput { absOutputDir, _ := filepath.Abs(runOutputDir) diff --git a/pkg/cli/audit_test.go 
b/pkg/cli/audit_test.go index d6203bd9a3..dab9aaed9d 100644 --- a/pkg/cli/audit_test.go +++ b/pkg/cli/audit_test.go @@ -5,6 +5,7 @@ package cli import ( "encoding/json" "errors" + "fmt" "io" "os" "path/filepath" @@ -428,6 +429,162 @@ func TestAuditCachingBehavior(t *testing.T) { } } +// TestAuditUsesRunSummaryCache verifies that when a valid run_summary.json exists on disk, +// AuditWorkflowRun returns successfully using only cached data — without calling +// fetchWorkflowRunMetadata (which would require a live GitHub API) and without +// re-processing local log files. +// +// The test is structured so that, if the early-return cache path is removed, the function +// would call fetchWorkflowRunMetadata → gh api → fail in the test environment (no credentials), +// causing the test to fail. Only the cache path can satisfy the call without network access. +func TestAuditUsesRunSummaryCache(t *testing.T) { + tempDir := testutil.TempDir(t, "test-audit-cache-*") + // AuditWorkflowRun derives runOutputDir as <outputDir>/run-<runID>, so use tempDir as + // the outputDir and let the function build the subdirectory path. + const runID int64 = 99999 + runOutputDir := filepath.Join(tempDir, fmt.Sprintf("run-%d", runID)) + if err := os.MkdirAll(runOutputDir, 0755); err != nil { + t.Fatalf("Failed to create run directory: %v", err) + } + + // Write a stub aw_info.json so the directory is non-empty + awInfoContent := `{"engine_id": "copilot", "workflow_name": "test-workflow"}` + if err := os.WriteFile(filepath.Join(runOutputDir, "aw_info.json"), []byte(awInfoContent), 0644); err != nil { + t.Fatalf("Failed to write aw_info.json: %v", err) + } + + // Write a "poison" log file with a grossly inflated token count. If the cache path is + // bypassed and log files are re-processed, this value would be counted and would + // overwrite the summary — but the test verifies that never happens. 
+ poisonLog := `{"type":"agent_turn","usage":{"total_tokens":9999999}}` + "\n" + if err := os.WriteFile(filepath.Join(runOutputDir, "agent-stdio.log"), []byte(poisonLog), 0644); err != nil { + t.Fatalf("Failed to write poison log: %v", err) + } + + // Ground-truth metrics that were captured on the first (correct) audit pass + cachedRun := WorkflowRun{ + DatabaseID: runID, + WorkflowName: "GPL Dependency Cleaner", + Status: "completed", + Conclusion: "success", + TokenUsage: 381270, + Turns: 9, + LogsPath: runOutputDir, + } + cachedMetrics := LogMetrics{ + TokenUsage: 381270, + Turns: 9, + } + + cachedSummary := &RunSummary{ + CLIVersion: GetVersion(), + RunID: runID, + ProcessedAt: time.Now().Add(-time.Hour), // processed one hour ago + Run: cachedRun, + Metrics: cachedMetrics, + MissingTools: []MissingToolReport{}, + MCPFailures: []MCPFailureReport{}, + JobDetails: []JobInfoWithDuration{}, + } + + if err := saveRunSummary(runOutputDir, cachedSummary, false); err != nil { + t.Fatalf("Failed to save initial run summary: %v", err) + } + + summaryPath := filepath.Join(runOutputDir, runSummaryFileName) + initialInfo, err := os.Stat(summaryPath) + if err != nil { + t.Fatalf("Could not stat run_summary.json: %v", err) + } + initialModTime := initialInfo.ModTime() + + // Call AuditWorkflowRun — the only way this can succeed in a test environment (no GitHub + // credentials) is if the early-return cache path is taken, skipping fetchWorkflowRunMetadata. + // WorkflowPath is empty in the cached summary, so renderAuditReport will not attempt any + // GitHub API calls for baseline comparison either. 
+ ctx := t.Context() + if err := AuditWorkflowRun( + ctx, + runID, + "", // owner — empty: no explicit repo context, relies on gh CLI defaults + "", // repo + "", // hostname — empty: uses github.com + tempDir, + false, // verbose + false, // parse + false, // jsonOutput + 0, // jobID — 0: full-run audit (not job-specific) + 0, // stepNumber + nil, // artifactSets + ); err != nil { + t.Fatalf("AuditWorkflowRun failed — cache path not taken (fetchWorkflowRunMetadata was probably called): %v", err) + } + + // The run_summary.json must NOT have been modified — the poison log must not have been parsed + currentInfo, err := os.Stat(summaryPath) + if err != nil { + t.Fatalf("Could not stat run_summary.json after AuditWorkflowRun: %v", err) + } + if !currentInfo.ModTime().Equal(initialModTime) { + t.Errorf("run_summary.json was modified (mtime changed from %v to %v): "+ + "the audit must not overwrite the cache on repeated calls", + initialModTime, currentInfo.ModTime()) + } + + // Verify cached metrics are untouched — the poison log would have inflated these if parsed + loadedSummary, ok := loadRunSummary(runOutputDir, false) + if !ok { + t.Fatalf("loadRunSummary should still find a valid cached summary") + } + if loadedSummary.Metrics.TokenUsage != cachedMetrics.TokenUsage { + t.Errorf("Token usage mismatch: expected cached=%d, got=%d (poison log was parsed)", + cachedMetrics.TokenUsage, loadedSummary.Metrics.TokenUsage) + } + if loadedSummary.Metrics.Turns != cachedMetrics.Turns { + t.Errorf("Turns mismatch: expected cached=%d, got=%d", + cachedMetrics.Turns, loadedSummary.Metrics.Turns) + } +} + +// TestRenderAuditReportUsesProvidedMetrics verifies that renderAuditReport renders the report +// using the metrics supplied by the caller rather than re-extracting them from log files. +// This is the key property that ensures cache-path and fresh-path produce identical output. 
+func TestRenderAuditReportUsesProvidedMetrics(t *testing.T) { + tempDir := testutil.TempDir(t, "test-render-audit-*") + runOutputDir := filepath.Join(tempDir, "run-11111") + if err := os.MkdirAll(runOutputDir, 0755); err != nil { + t.Fatalf("Failed to create run directory: %v", err) + } + + run := WorkflowRun{ + DatabaseID: 11111, + WorkflowName: "Test Workflow", + Status: "completed", + Conclusion: "success", + TokenUsage: 12345, + Turns: 7, + LogsPath: runOutputDir, + } + metrics := LogMetrics{ + TokenUsage: 12345, + Turns: 7, + } + processedRun := ProcessedRun{ + Run: run, + MissingTools: []MissingToolReport{}, + MCPFailures: []MCPFailureReport{}, + JobDetails: []JobInfoWithDuration{}, + } + + // renderAuditReport should complete without error even without GitHub API access. + // No GitHub calls are made because WorkflowPath is empty, causing findPreviousSuccessfulWorkflowRuns + // to return early with an error before any network requests are issued. + err := renderAuditReport(processedRun, metrics, nil, runOutputDir, "", "", "", false, false, false) + if err != nil { + t.Errorf("renderAuditReport returned unexpected error: %v", err) + } +} + func TestBuildAuditDataWithFirewall(t *testing.T) { // Create test data with firewall analysis run := WorkflowRun{ diff --git a/pkg/cli/copilot_metrics_fix_test.go b/pkg/cli/copilot_metrics_fix_test.go index 871f5ff4c0..b7043c552c 100644 --- a/pkg/cli/copilot_metrics_fix_test.go +++ b/pkg/cli/copilot_metrics_fix_test.go @@ -11,6 +11,60 @@ import ( "github.com/stretchr/testify/require" ) +// TestExtractLogMetricsExcludesWorkflowLogsDir is a regression test for the +// double-counting issue reported in the "Audit shows inconsistent metrics on +// repeated calls for same run" issue. +// +// Background: +// downloadWorkflowRunLogs (called during artifact download) places GitHub Actions +// step-output files under workflow-logs/. 
These files capture the runner's combined +// stdout/stderr for each step, which means they contain a copy of everything the +// agent wrote to stdout — including the same token-usage JSON blocks that are already +// in agent-stdio.log / agent.log from the dedicated agent artifact. +// +// Because the log-file walk in extractLogMetrics previously did NOT skip +// workflow-logs/, any .log or *log*.txt file found there was parsed and its +// TokenUsage was ADDED to metrics.TokenUsage (the walk uses +=). With ~12 such +// copies the total ballooned to ≈4.7M instead of the correct ≈381k. +// +// The fix adds an explicit filepath.SkipDir return when the walk visits a directory +// named "workflow-logs", so only the agent artifact files are counted. +func TestExtractLogMetricsExcludesWorkflowLogsDir(t *testing.T) { + tempDir := t.TempDir() + + // Simulate a Copilot-CLI run directory + require.NoError(t, os.WriteFile(filepath.Join(tempDir, "aw_info.json"), []byte(`{"engine_id":"copilot"}`), 0600)) + + // The single JSON data block that represents one LLM API call with 1000 tokens. + oneTurn := `2025-09-26T11:00:00Z [DEBUG] data: +2025-09-26T11:00:00Z [DEBUG] { +2025-09-26T11:00:00Z [DEBUG] "choices": [{"message": {"role": "assistant", "tool_calls": []}}], +2025-09-26T11:00:00Z [DEBUG] "usage": {"prompt_tokens": 900, "completion_tokens": 100, "total_tokens": 1000} +2025-09-26T11:00:00Z [DEBUG] } +2025-09-26T11:00:01Z [DEBUG] Workflow done` + + // Primary agent log — the "source of truth" artifact + require.NoError(t, os.WriteFile(filepath.Join(tempDir, "agent.log"), []byte(oneTurn), 0600)) + + // Simulate workflow-logs/ as produced by downloadWorkflowRunLogs. + // Two step-output files: one .log and one *log*.txt (both would have matched + // the old filter), both containing identical token data. 
+ wfLogsDir := filepath.Join(tempDir, "workflow-logs", "agent") + require.NoError(t, os.MkdirAll(wfLogsDir, 0755)) + require.NoError(t, os.WriteFile(filepath.Join(wfLogsDir, "runner.log"), []byte(oneTurn), 0644)) + require.NoError(t, os.WriteFile(filepath.Join(wfLogsDir, "2_Run log step.txt"), []byte(oneTurn), 0644)) + + metrics, err := extractLogMetrics(tempDir, false) + require.NoError(t, err) + + // Without the fix, metrics.TokenUsage would be 3000 (1000 * 3 files). + // With the fix, workflow-logs/ is skipped and only agent.log is counted. + assert.Equal(t, 1000, metrics.TokenUsage, + "TokenUsage must not include workflow-logs/ files (expected 1000, not %d)", metrics.TokenUsage) + assert.Equal(t, 1, metrics.Turns, + "Turns must not be inflated by workflow-logs/ copies (expected 1, not %d)", metrics.Turns) +} + // TestCopilotDebugLogTurnsExtraction verifies that Turns are correctly counted from // [DEBUG] data: blocks in the Copilot CLI debug log format. // diff --git a/pkg/cli/logs_metrics.go b/pkg/cli/logs_metrics.go index 8496cd2cca..e36b9dc56d 100644 --- a/pkg/cli/logs_metrics.go +++ b/pkg/cli/logs_metrics.go @@ -169,8 +169,16 @@ func extractLogMetrics(logDir string, verbose bool, workflowPath ...string) (Log return err } - // Skip directories + // Skip directories. workflow-logs/ is explicitly excluded because it contains + // GitHub Actions runner captures of each job/step's stdout rather than the agent + // artifact data. Parsing those files would double-count token usage and turns; + // the same agent session output appears in both the agent artifact + // (e.g. agent-stdio.log) and the workflow run logs (workflow-logs/). if info.IsDir() { + if info.Name() == "workflow-logs" { + logsMetricsLog.Printf("Skipping workflow-logs directory (GHA runner logs, not agent metrics): %s", path) + return filepath.SkipDir + } return nil }