@btucker commented on Jan 10, 2026

No description provided.

… code

Explores the concept of a "mental model" diagram that:
- Represents system semantics (not file structure)
- Updates via AI interpretation of code changes + intent
- Supports time travel through version history
- Enables interactive prompting by clicking diagram elements

Key insight: When working with AI coding agents, understanding the
conceptual structure matters more than knowing exact file layouts.
The diagram becomes an interface for directing the agent.

Files:
- mental_model.py: Core data structures (nodes, edges, Mermaid export)
- living_model.py: Integration with agentgit watcher for live updates
- interactive_model.py: Time travel + focused prompt generation
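
For orientation, a minimal sketch of the shapes involved. The class names (ModelElement, ModelRelation, MentalModel) come from this PR; the specific fields and the Mermaid output format are assumptions for illustration, not the code in mental_model.py.

```python
# Sketch only: fields beyond the class names are assumptions.
from dataclasses import dataclass, field


@dataclass
class ModelElement:
    """A concept in the system, not a file."""
    id: str
    name: str
    description: str = ""
    files: list[str] = field(default_factory=list)  # source files backing the concept


@dataclass
class ModelRelation:
    source: str  # element id
    target: str  # element id
    label: str = ""


@dataclass
class MentalModel:
    elements: dict[str, ModelElement] = field(default_factory=dict)
    relations: list[ModelRelation] = field(default_factory=list)

    def to_mermaid(self) -> str:
        """Render the model as a Mermaid flowchart."""
        lines = ["graph TD"]
        for el in self.elements.values():
            lines.append(f'    {el.id}["{el.name}"]')
        for rel in self.relations:
            arrow = f"-->|{rel.label}|" if rel.label else "-->"
            lines.append(f"    {rel.source} {arrow} {rel.target}")
        return "\n".join(lines)
```
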
Refines the spike with two key improvements from feedback:

1. AI decides structure - No prescribed node types. The AI observes
   the codebase and decides what abstraction level / structure best
   captures how the system works.

2. Selection + instruction - "Draw a box and describe" interaction.
   Select part of the diagram, describe how you think about it, and
   the model adapts to your mental model.
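
The selection + instruction flow can be pictured as a prompt builder along these lines; build_instruction_prompt and its argument names are hypothetical, not the code in this PR.

```python
# Hypothetical sketch of the "draw a box and describe" interaction.
def build_instruction_prompt(model, selected_ids, human_description):
    """Turn a diagram selection plus the human's framing into an LLM prompt."""
    selected = [model.elements[i] for i in selected_ids]
    listing = "\n".join(f"- {el.name}: {el.description}" for el in selected)
    return (
        "The human selected these elements of the mental model:\n"
        f"{listing}\n\n"
        f'They describe this part of the system as: "{human_description}"\n\n'
        "Restructure the selected elements (rename, merge, split, relabel "
        "relations) so the model matches how the human thinks about it. "
        "Return the updated model as JSON."
    )
```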

The enhancer integrates with existing agentgit infrastructure:
- Uses the same LLM pattern as llm.py enhancer
- Plugs into the enhancer hook system
- Persists model state to JSON
- Supports time travel via snapshots
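
A sketch of what that integration could look like. The hook name (after_build) and the LLM call are placeholders for agentgit's actual enhancer interface; only the JSON persistence and the snapshot idea come from this PR.

```python
import json
from pathlib import Path


class MentalModelEnhancer:
    def __init__(self, output_repo: Path):
        self.state_path = output_repo / ".agentgit" / "mental_model.json"
        self.model = self._load()

    def _load(self) -> dict:
        if self.state_path.exists():
            return json.loads(self.state_path.read_text())
        return {"elements": {}, "relations": [], "snapshots": []}

    def after_build(self, diff: str, intent: str) -> None:
        """Hypothetical hook: called with each change's diff and stated intent."""
        # Snapshot the current state first so history supports time travel.
        self.model["snapshots"].append(
            {k: v for k, v in self.model.items() if k != "snapshots"}
        )
        # updated = call_llm(observation_prompt(self.model, diff, intent))
        # self.model.update(updated)
        self.state_path.parent.mkdir(parents=True, exist_ok=True)
        self.state_path.write_text(json.dumps(self.model, indent=2))
```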

Files:
- enhancers/mental_model.py: Integrated enhancer with hooks
- adaptive_model.py: Standalone exploration of the concept

The mental model is now persisted at:
  <output_repo>/.agentgit/mental_model.json
  <output_repo>/.agentgit/mental_model.md (human-readable)

This keeps it alongside the git history that agentgit produces,
making it part of the same artifact.

Added:
- update_mental_model_after_build() - main integration point
- load_mental_model() - load from existing repo
- save_mermaid() - generates markdown with diagram + element table
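
save_mermaid() presumably assembles the human-readable file from the diagram plus an element table; a rough sketch against the dataclasses above, with the output layout assumed:

```python
# Sketch only: the real markdown layout may differ.
def save_mermaid(model, output_repo):
    """Write .agentgit/mental_model.md: Mermaid diagram plus element table."""
    path = output_repo / ".agentgit" / "mental_model.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    fence = "`" * 3  # keeps a literal code fence out of this example
    rows = "\n".join(
        f"| {el.name} | {el.description} | {', '.join(el.files)} |"
        for el in model.elements.values()
    )
    path.write_text(
        "# Mental Model\n\n"
        f"{fence}mermaid\n{model.to_mermaid()}\n{fence}\n\n"
        "| Element | Description | Files |\n"
        "|---|---|---|\n"
        f"{rows}\n"
    )
```
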
The .agentgit/mental_model.md is now an accumulating insights document
where both AI and human contribute understanding over time.

On each code change:
  AI reads:  JSON (structure) + diff + insights.md (accumulated context)
  AI writes: Updated JSON + new insight appended to .md

This creates a feedback loop where human corrections and AI observations
build up a richer shared understanding of the system.

Added:
- save_insights() - append timestamped insights
- load_insights() - read accumulated context for AI
- add_human_insight() - human can add corrections/refinements
- Updated observe_changes() to use insights as context
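
The loop is simple to picture as file operations. A sketch using the function names above; the entry format (timestamped bullets) and the author tagging are assumptions:

```python
from datetime import datetime, timezone
from pathlib import Path


def insights_path(output_repo: Path) -> Path:
    return output_repo / ".agentgit" / "mental_model.md"


def save_insights(output_repo: Path, insight: str, author: str = "ai") -> None:
    """Append a timestamped insight entry (format assumed)."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with insights_path(output_repo).open("a") as f:
        f.write(f"\n- [{stamp}] ({author}) {insight}")


def load_insights(output_repo: Path) -> str:
    """Read accumulated insights back in as context for the AI."""
    path = insights_path(output_repo)
    return path.read_text() if path.exists() else ""


def add_human_insight(output_repo: Path, insight: str) -> None:
    """Humans contribute corrections through the same channel."""
    save_insights(output_repo, insight, author="human")
```
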
"Reframe" - to see something through a different frame/lens.
From cognitive psychology, where reframing means shifting your mental model.

- reframe() is now the main entry point (with verbose "Reframing..." output)
- Updated docs and insights file header
- Works as a verb: "Reframing the model..."

Future CLI commands:
  reframe watch          # Watch and evolve
  reframe insight "..."  # Add human insight
  reframe show           # Display diagram
  reframe history        # Time travel
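
Since these commands don't exist yet, here is only a hypothetical argparse skeleton for the surface described above:

```python
# Hypothetical CLI skeleton; nothing here is implemented in this PR.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(prog="reframe")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("watch", help="Watch the repo and evolve the model")
    insight = sub.add_parser("insight", help="Add a human insight")
    insight.add_argument("text", help="The insight to record")
    sub.add_parser("show", help="Display the current diagram")
    sub.add_parser("history", help="Time travel through snapshots")
    args = parser.parse_args()
    print(f"would run: {args.command}")  # dispatch goes here


if __name__ == "__main__":
    main()
```
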
56 tests covering:
- ModelElement and ModelRelation dataclasses
- MentalModel (elements, relations, snapshots, mermaid export, JSON)
- MentalModelEnhancer (save, load, insights persistence)
- Prompt building (observation, instruction, focused)
- Helper functions and global state management
- Integration test for full model evolution flow

All tests pass.
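
One illustrative case in the spirit of that suite, building on the data-structure sketch earlier; the actual assertions may differ.

```python
# Illustrative test only; assumes the sketched to_mermaid() output format.
from mental_model import MentalModel, ModelElement, ModelRelation


def test_mermaid_export():
    model = MentalModel()
    model.elements["watcher"] = ModelElement("watcher", "Watcher", "observes builds")
    model.elements["model"] = ModelElement("model", "Mental Model")
    model.relations.append(ModelRelation("watcher", "model", "updates"))

    mermaid = model.to_mermaid()
    assert mermaid.startswith("graph TD")
    assert 'watcher["Watcher"]' in mermaid
    assert "watcher -->|updates| model" in mermaid
```
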
- Add SequenceStep and SequenceDiagram classes for modeling interaction flows
- Add to_full_mermaid() to export both component and sequence diagrams
- Create static_analysis.py with Python and JavaScript code analysis
  - Regex-based fallback when pyan/ctags unavailable
  - Extracts classes, functions, inheritance, and imports
  - Properly handles node_modules exclusion using path parts
- Fix Python function detection regex to use [ \t]* instead of \s*
  to avoid matching newlines as indentation (see the repro sketch below)
- Add comprehensive tests (100 tests total, all passing)
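
The regex fix is worth spelling out: with re.MULTILINE, \s* also matches newlines, so a pattern anchored at a blank line can swallow the newline and report it as indentation. A self-contained repro (the actual pattern in static_analysis.py may differ):

```python
import re

src = "class Foo:\n\n    def method(self):\n        pass\n"

# \s* treats the blank line's newline as "indentation" before the def.
loose = re.compile(r"^(\s*)def\s+(\w+)", re.MULTILINE)
# [ \t]* matches only horizontal whitespace on the def's own line.
strict = re.compile(r"^([ \t]*)def\s+(\w+)", re.MULTILINE)

print(repr(loose.search(src).group(1)))   # '\n    ' -- newline counted as indentation
print(repr(strict.search(src).group(1)))  # '    '   -- true indentation only
```
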
- Add to_force_graph() method to MentalModel for react-force-graph-2d/3d (sketched after this list)
  - Exports nodes with id, name, group, color, val, description, files
  - Exports links with source, target, label, style, description
  - Omits empty optional fields for cleaner output
- Add ReframeViewer.tsx React component with:
  - Force-directed graph visualization
  - Node click/hover interactions
  - Info panel showing node details and associated files
  - Color coding by group
- Add standalone index.html viewer:
  - Works without build step (uses CDN)
  - Drag-and-drop JSON file loading
  - URL param support (?model=path/to/model.json)
  - Legend showing groups
- Add 5 tests for to_force_graph (75 total)
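
The export itself is mostly a reshaping pass. A sketch of to_force_graph() against the dataclasses sketched earlier; group, color, and val would come from extra element fields not shown there, and the field mapping is assumed.

```python
# Sketch only: mirrors react-force-graph's {nodes, links} input shape.
def to_force_graph(self) -> dict:
    nodes, links = [], []
    for el in self.elements.values():
        node = {"id": el.id, "name": el.name}
        # Omit empty optional fields so the JSON stays clean.
        if el.description:
            node["description"] = el.description
        if el.files:
            node["files"] = el.files
        nodes.append(node)
    for rel in self.relations:
        link = {"source": rel.source, "target": rel.target}
        if rel.label:
            link["label"] = rel.label
        links.append(link)
    return {"nodes": nodes, "links": links}
```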
