
CodeStat · AI Code Metrics

Quantify how much AI actually contributes to your codebase.


CodeStat is a local metrics tool that analyzes how you use AI coding assistants: how many lines are generated by AI, how many are kept, and how this evolves over time.

For the Chinese documentation, see README.zh-CN.md.


Features

  • Global dashboard for all data

    • AI‑generated lines, adopted lines, adoption & generation rates (see the rate sketch after this list)
    • File count, session count, quick bar chart overview
  • Multi‑dimensional queries

    • By file: see how much of a file comes from AI and how much you kept
    • By session: analyze one coding session with detailed diff lines
    • By project: aggregate metrics for an entire repository
  • Agent / model comparison

    • Compare multiple sessions (agents / models / settings) side‑by‑side
    • See which one actually produces more adopted code instead of just more tokens
  • Local‑first & privacy‑friendly

    • All metrics are computed locally from your own diffs
    • No source code or prompts are sent to any remote service
  • Nice CLI UX

    • Rich‑based tables & colors, arrow‑key navigation
    • Minimal but informative header (MCP status + repo info)
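
The adoption and generation rates above boil down to two ratios over diff line counts. The minimal Python sketch below illustrates the idea; the exact definitions and the numbers are illustrative assumptions, not CodeStat's actual implementation:

# Sketch of the two rates, assuming:
#   adoption rate   = adopted AI lines / AI-generated lines
#   generation rate = AI-generated lines / total changed lines
# The counts below are hypothetical; CodeStat derives them from your local diffs.

def adoption_rate(ai_generated: int, ai_adopted: int) -> float:
    """Fraction of AI-generated lines that survived into the final diff."""
    return ai_adopted / ai_generated if ai_generated else 0.0

def generation_rate(ai_generated: int, total_changed: int) -> float:
    """Fraction of all changed lines that were produced by AI."""
    return ai_generated / total_changed if total_changed else 0.0

# Example: a session with 420 AI-generated lines, 310 of them kept,
# and 500 lines changed overall.
print(f"adoption:   {adoption_rate(420, 310):.0%}")    # 74%
print(f"generation: {generation_rate(420, 500):.0%}")  # 84%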

Demo

TODO: add real screenshots / GIFs from your terminal

  • Global dashboard
  • Session metrics with diff lines


Quickstart

Install

From PyPI (recommended):

pip install aicodestat

From source:

git clone https://github.com/2hangchen/CodeStat.git
cd CodeStat
pip install -r requirements.txt

Start the CLI

python cli/main.py

Use ↑/↓ to move, Enter to confirm.
Choose “📈 Global Dashboard (All Data)” to see an overview of your local metrics.


Typical Workflows

  • Measure your own AI usage

    • Record one or more coding sessions with your IDE + MCP server
    • Run CodeStat and inspect:
      • AI generated vs adopted lines
      • Which files receive the most AI help
  • Compare agents / models / prompts

    • Map different sessions to different agents / models
    • Use Compare Agents to get a per‑session comparison table (see the sketch after this list)
  • Project‑level health check

    • For a given repo, run project metrics to see:
      • Where AI contributes the most
      • Whether AI‑generated code is actually being kept
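
As a companion to the Compare Agents workflow above, here is a hypothetical sketch of the underlying comparison: ranking sessions by adopted lines rather than raw generated volume. The record structure, field names, and numbers are illustrative assumptions, not CodeStat's actual schema:

# Hypothetical per-session comparison, sorted by adopted lines.
# Field names and values are illustrative assumptions only.
sessions = [
    {"agent": "agent-a", "generated": 420, "adopted": 310},
    {"agent": "agent-b", "generated": 610, "adopted": 295},
]

# A session can generate more lines yet contribute less kept code;
# sorting by adopted lines surfaces the more useful agent.
for s in sorted(sessions, key=lambda s: s["adopted"], reverse=True):
    rate = s["adopted"] / s["generated"]
    print(f'{s["agent"]}: {s["adopted"]} adopted / {s["generated"]} generated ({rate:.0%})')

Here agent-b generates more lines overall, but agent-a ends up with more adopted code and a higher adoption rate, which is the distinction the comparison table is meant to surface.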