A Model Context Protocol (MCP) server that gives your AI Coding Agent the ability to autonomously analyze application performance bottlenecks across multiple languages (Go, Python, and Java).
Instead of blindly guessing why an application is slow or leaking memory, this server allows an LLM (like Claude or Copilot via an MCP client) to ingest actual profiling data, read the hot paths, and even spin up interactive Flamegraph UIs for you to inspect locally.
- 🐹 Go (`pprof`): Parses `cpu.prof` and `mem.prof`. Supports `-sample_index` filtering (`alloc_space`, `inuse_objects`) and can launch background interactive Flamegraphs (see the generation sketch after this list).
- 🐍 Python (`cProfile`): Feed it a `.py` script (e.g. `main.py`). The server will automatically execute it with `cProfile` and return the cumulative-time bottlenecks to the AI.
- ☕ Java (Flight Recorder): Feed it a `.jar` file. The server will execute it with Java Flight Recorder (JFR) enabled, profiling it for 60 seconds and saving an `mcp-recording.jfr` for you to analyze in Java Mission Control.
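If you don't yet have profiles to feed it, the snippet below is a minimal sketch (not taken from this repo) of how a Go program can produce the `cpu.prof` and `mem.prof` files the Go analyzer expects, using only the standard library's `runtime/pprof`; `doWork` is a hypothetical placeholder for your real workload:

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// doWork stands in for the real workload you want to profile.
func doWork() {
	sum := 0
	for i := 0; i < 100_000_000; i++ {
		sum += i
	}
	_ = sum
}

func main() {
	// CPU profile: sample the CPU while the workload runs.
	cpuFile, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	if err := pprof.StartCPUProfile(cpuFile); err != nil {
		log.Fatal(err)
	}
	doWork()
	pprof.StopCPUProfile()
	cpuFile.Close()

	// Heap profile: snapshot allocation data after the workload.
	memFile, err := os.Create("mem.prof")
	if err != nil {
		log.Fatal(err)
	}
	runtime.GC() // flush recent allocations so the heap profile is current
	if err := pprof.WriteHeapProfile(memFile); err != nil {
		log.Fatal(err)
	}
	memFile.Close()
}
```

The resulting files are in the standard pprof format that `go tool pprof` (and therefore the `analyze_profile` tool) understands.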
Prerequisites:

- Go (1.20+ recommended)
- Graphviz (optional, needed only to render visual graphs)
To build from source:

- Clone this repository:

  ```bash
  git clone https://github.com/Sarthak160/Profiler-MCP.git
  cd Profiler-MCP
  ```

- Build the binary:

  ```bash
  go build -o pprof-mcp-server main.go
  ```

- Note the absolute path to the compiled `pprof-mcp-server` binary.
Because this is an MCP server, it runs entirely locally on your machine, communicating over standard input/output (stdio). You just need to tell your AI client where the binary is located.
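Before wiring up a client, you can optionally sanity-check the binary by hand. Assuming the server implements the standard MCP stdio transport (newline-delimited JSON-RPC 2.0), piping an `initialize` request like the one below into the binary's stdin should produce a JSON response on stdout; the `clientInfo` values here are placeholders:

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "smoke-test", "version": "0.0.1"}}}
```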
Add the following to your Claude Desktop configuration file (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):
```json
{
  "mcpServers": {
    "pprof-inspector": {
      "command": "/absolute/path/to/Profiler-MCP/pprof-mcp-server"
    }
  }
}
```

For Cline or Roo Code, open the MCP Servers configuration inside the extension settings (VS Code -> Command Palette -> Cline: Open MCP Settings) and add:
```json
{
  "mcpServers": {
    "pprof-inspector": {
      "command": "/absolute/path/to/Profiler-MCP/pprof-mcp-server"
    }
  }
}
```

Once connected, your AI assistant will automatically have access to these tools:
- `analyze_profile`
  - Arguments: `profile_path_or_url` (required), `sample_index` (optional)
  - Description: Analyzes a pprof profile (CPU or heap) to identify performance bottlenecks.
- `open_interactive_ui`
  - Arguments: `profile_path_or_url` (required)
  - Description: Launches a background pprof web server and returns the local HTTP URL to the LLM, allowing it to provide you a link for visual Flamegraph inspection.
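Under the hood, a client invokes these tools with MCP `tools/call` requests. Purely as an illustration (assuming the standard MCP request shape and the tool names and arguments listed above), asking the server to analyze a heap profile by allocated bytes would look roughly like this; you never write this yourself, since Claude Desktop, Cline, or Roo Code issues these calls based on your prompts:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "analyze_profile",
    "arguments": {
      "profile_path_or_url": "./app/mem.prof",
      "sample_index": "alloc_space"
    }
  }
}
```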
- "I generated a CPU profile at
./app/cpu.prof. Can you use your pprof tools to tell me which function is consuming the most time?" - "We have a memory leak. I just dumped
./app/mem.prof. Parse it and look specifically at-alloc_spaceto tell me where the leak is originating." - "Open an interactive flamegraph UI for
./app/cpu.profand give me the localhost link."
If you want to test the server immediately, this repo contains a dummy app that intentionally spins the CPU and leaks memory.
```bash
cd sample
go run main.go
# This will generate both a cpu.prof and mem.prof locally!
```

Now you can point your configured AI agent at those newly generated `.prof` files.