This tutorial teaches you how to record your interactive JFR Shell commands into reusable scripts.
Recording is perfect for:
- 📝 Capturing ad-hoc analysis - Turn exploratory work into reusable scripts
- 🔄 Reproducing issues - Record exact steps to reproduce a problem
- 👥 Sharing workflows - Share your analysis procedure with teammates
- 📚 Creating templates - Build script templates through example
- 🎓 Learning - Record and review your analysis patterns
Prerequisites:
- JFR Shell installed and running
- At least one JFR recording file for practice
- Basic familiarity with JFR Shell commands
What this tutorial covers:
- Basic Recording
- Viewing and Replaying Recordings
- Converting to Parameterized Scripts
- Advanced Workflows
- Best Practices
Start the shell:
$ jfr-shell
╔═══════════════════════════════════════╗
║ JFR Shell (CLI) ║
║ Interactive JFR exploration ║
╚═══════════════════════════════════════╝
Type 'help' for commands, 'exit' to quit
jfr> record start
Recording started: /Users/you/.jfr-shell/recordings/session-20251226143022.jfrs
By default, recordings go to ~/.jfr-shell/recordings/ with a timestamped filename.
Or specify a custom path:
jfr> record start /tmp/my-analysis.jfrs
Recording started: /tmp/my-analysis.jfrs
Now perform your analysis as usual. Every command is recorded:
jfr> open /tmp/recording.jfr
Session 1 opened: /tmp/recording.jfr
jfr> info
Session 1: /tmp/recording.jfr
Duration: 60.5s
Events: 12,453
Types: 47
jfr> events/jdk.ExecutionSample | count()
Total: 8,234
jfr> events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(5, by=count)
┌───────────────────────┬───────┐
│ Thread │ Count │
├───────────────────────┼───────┤
│ main │ 3,451 │
│ http-worker-1 │ 2,103 │
│ gc-thread │ 1,234 │
│ async-processor │ 876 │
│ background-task │ 570 │
└───────────────────────┴───────┘
jfr> record status
Recording to: /tmp/my-analysis.jfrs
jfr> record stop
Recording stopped: /tmp/my-analysis.jfrs
jfr> exit
Goodbye!
Note: If you forget to stop, the recording auto-saves when you exit!
$ cat /tmp/my-analysis.jfrs
Output:
# JFR Shell Recording
# Started: 2025-12-26T14:30:22Z
# Session: my-analysis.jfrs
# [14:30:35]
open /tmp/recording.jfr
# [14:30:40]
info
# [14:30:45]
events/jdk.ExecutionSample | count()
# [14:31:05]
events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(5, by=count)
# Recording stopped: 2025-12-26T14:32:00Z
Notice:
- ✅ Header with start timestamp
- ✅ Comments with per-command timestamps (# [HH:mm:ss])
- ✅ All commands exactly as you typed them
- ✅ Footer with stop timestamp
The recorded file is immediately executable:
$ jfr-shell script /tmp/my-analysis.jfrs
The exact same commands execute in sequence!
Just replace the path in the recorded script:
$ cat /tmp/my-analysis.jfrs | sed 's|/tmp/recording.jfr|/tmp/other-recording.jfr|' | jfr-shell script -
Or manually edit the file and change the path.
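If you prefer to keep the original recording untouched, a small sketch that writes a retargeted copy instead of piping it (the file contents here are a stand-in, since no real recording is assumed):

```shell
# Stand-in recording: a real .jfrs would come from 'record start/stop'.
printf 'open /tmp/recording.jfr\ninfo\n' > /tmp/my-analysis.jfrs

# Rewrite the hardcoded path into a copy, leaving the original intact.
sed 's|/tmp/recording.jfr|/tmp/other-recording.jfr|' \
  /tmp/my-analysis.jfrs > /tmp/other-analysis.jfrs

cat /tmp/other-analysis.jfrs
# → open /tmp/other-recording.jfr
# → info
```

The copy can then be replayed with `jfr-shell script /tmp/other-analysis.jfrs` while the original stays available for other targets.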
Recorded scripts have hardcoded values. Let's make them reusable!
# JFR Shell Recording
# Started: 2025-12-26T14:30:22Z
# [14:30:35]
open /tmp/prod-recording-20251226.jfr
# [14:30:45]
events/jdk.ExecutionSample | count()
# [14:31:05]
events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(5, by=count)
# [14:31:20]
events/jdk.FileRead[bytes>=1000] --limit 10
# Recording stopped: 2025-12-26T14:32:00Z
Edit the file to use variables:
#!/usr/bin/env -S jbang jfr-shell@btraceio script -
# Thread Analysis Script
# Converted from recorded session on 2025-12-26
#
# Usage:
# ./thread-analysis.jfrs recording=/path/to/file.jfr top_n=5 min_bytes=1000
#
# Arguments:
# recording - Path to JFR recording file
# top_n - Number of top threads to show
# min_bytes - Minimum bytes for file read filter
# Open recording
open ${recording}
# Count execution samples
events/jdk.ExecutionSample | count()
# Top threads by sample count
events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(${top_n}, by=count)
# Large file reads
events/jdk.FileRead[bytes>=${min_bytes}] --limit 10
Make the script executable:
chmod +x thread-analysis.jfrs
# Basic usage
./thread-analysis.jfrs recording=/tmp/app.jfr top_n=5 min_bytes=1000
# Different parameters
./thread-analysis.jfrs recording=/tmp/prod.jfr top_n=20 min_bytes=5000
# Analyze multiple recordings
for f in /tmp/recordings/*.jfr; do
echo "=== $f ==="
./thread-analysis.jfrs recording=$f top_n=10 min_bytes=2000
done
Record, review, edit, and re-record to build the perfect script.
Iteration 1: Initial Exploration
jfr> record start /tmp/explore.jfrs
jfr> open /tmp/app.jfr
jfr> events/jdk.ExecutionSample | count()
jfr> events/jdk.GarbageCollection | count()
jfr> record stop
Iteration 2: Add More Analysis
jfr> record start /tmp/explore.jfrs # Overwrites previous
jfr> open /tmp/app.jfr
jfr> info
jfr> events/jdk.ExecutionSample | count()
jfr> events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(10, by=count)
jfr> events/jdk.GarbageCollection | stats(duration)
jfr> events/jdk.ObjectAllocationInNewTLAB | groupBy(objectClass/name) | top(20, by=sum(allocationSize))
jfr> close
jfr> record stop
Now you have a refined, complete analysis script!
Create recordings for different analysis types:
# CPU profiling
jfr> record start ~/jfr-scripts/cpu-profiling.jfrs
jfr> open /tmp/sample.jfr
jfr> events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(10, by=count)
jfr> events/jdk.ExecutionSample | groupBy(stackTrace) | top(10, by=count)
jfr> record stop
# Memory profiling
jfr> record start ~/jfr-scripts/memory-profiling.jfrs
jfr> open /tmp/sample.jfr
jfr> events/jdk.ObjectAllocationInNewTLAB | sum(allocationSize)
jfr> events/jdk.ObjectAllocationInNewTLAB | groupBy(objectClass/name) | top(20, by=sum(allocationSize))
jfr> events/jdk.GarbageCollection | stats(duration)
jfr> record stop
# I/O profiling
jfr> record start ~/jfr-scripts/io-profiling.jfrs
jfr> open /tmp/sample.jfr
jfr> events/jdk.FileRead | sum(bytes)
jfr> events/jdk.FileWrite | sum(bytes)
jfr> events/jdk.FileRead[bytes>=10000] --limit 20
jfr> record stop
Now you have a library:
~/jfr-scripts/
├── cpu-profiling.jfrs
├── memory-profiling.jfrs
└── io-profiling.jfrs
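One way to drive the whole library from a shell loop, sketched with a stand-in directory and `echo` in place of actual execution (drop the `echo` to run for real once `jfr-shell` is on your PATH):

```shell
# Stand-in for the ~/jfr-scripts/ library built above.
mkdir -p /tmp/jfr-scripts
touch /tmp/jfr-scripts/cpu-profiling.jfrs \
      /tmp/jfr-scripts/memory-profiling.jfrs \
      /tmp/jfr-scripts/io-profiling.jfrs

# Print the command that would run for each library script.
for script in /tmp/jfr-scripts/*.jfrs; do
  echo "jfr-shell script $script"
done
```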
Share exact reproduction steps with teammates.
Scenario: You found an issue and want to share your diagnostic steps.
jfr> record start /tmp/issue-123-diagnosis.jfrs
jfr> open /tmp/prod-snapshot.jfr
jfr> info
jfr> events/jdk.JavaMonitorEnter | groupBy(monitorClass/name) | top(10, by=count)
jfr> events/jdk.ThreadPark | groupBy(parkedClass/name) | top(10, by=count)
jfr> events/jdk.JavaMonitorWait | stats(duration)
jfr> record stop
Send /tmp/issue-123-diagnosis.jfrs to your teammate. They can:
# Run your exact analysis on their recording
jfr-shell script issue-123-diagnosis.jfrs
Or convert to a parameterized version for their recordings:
# Edit to use $1 variable
vim issue-123-diagnosis.jfrs
# Run on their recording
./issue-123-diagnosis.jfrs recording=/tmp/their-recording.jfr
Record your analysis as a teaching aid.
Example: Onboarding New Team Member
jfr> record start /tmp/onboarding-demo.jfrs
jfr> # Let me show you how to analyze thread contention
jfr> open /tmp/example-recording.jfr
jfr> # First, check overall recording info
jfr> info
jfr> # Look for monitor wait events
jfr> events/jdk.JavaMonitorEnter | count()
jfr> # Find which monitors have most contention
jfr> events/jdk.JavaMonitorEnter | groupBy(monitorClass/name) | top(10, by=count)
jfr> # Check wait times
jfr> events/jdk.JavaMonitorWait | stats(duration)
jfr> record stop
Share /tmp/onboarding-demo.jfrs:
- Comments explain the thinking
- New team members see the exact commands
- They can replay on their own recordings
# Good
jfr> record start /tmp/thread-deadlock-analysis.jfrs
jfr> record start ~/scripts/performance-regression-check.jfrs
jfr> record start ./diagnostics/memory-leak-investigation.jfrs
# Bad
jfr> record start /tmp/test.jfrs
jfr> record start /tmp/script1.jfrs
jfr> record start /tmp/temp.jfrs
While recording doesn't capture your typed comments (lines starting with #), you can add them afterward when converting to a parameterized script.
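One way to annotate after the fact: prepend a header comment to the saved file. A sketch with stand-in file contents and a hypothetical investigation title:

```shell
# Stand-in for a freshly recorded script.
printf 'open /tmp/app.jfr\ninfo\n' > /tmp/analysis.jfrs

# Prepend an explanatory header before sharing.
{ printf '# Thread contention investigation (hypothetical title)\n'
  cat /tmp/analysis.jfrs; } > /tmp/analysis-annotated.jfrs

head -n 1 /tmp/analysis-annotated.jfrs
# → # Thread contention investigation (hypothetical title)
```

Since # lines are comments in a .jfrs file, the annotated copy replays exactly like the original.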
If you make a mistake during recording:
Option A: Stop and restart:
jfr> record stop
jfr> record start /tmp/my-analysis.jfrs # Start fresh
Option B: Edit the recording after stopping:
jfr> record stop
# Edit /tmp/my-analysis.jfrs to remove incorrect commands
Always test recorded scripts before sharing:
# Create recording
jfr> record start /tmp/new-script.jfrs
jfr> ...commands...
jfr> record stop
# Test immediately
$ jfr-shell script /tmp/new-script.jfrs
# Verify output is correct
Store recordings in version control:
project/
├── jfr-scripts/
│ ├── daily-health-check.jfrs
│ ├── performance-analysis.jfrs
│ └── memory-diagnostic.jfrs
└── README.md
Update the README with usage instructions.
Create a directory structure:
jfr-scripts/
├── diagnostics/
│ ├── thread-deadlock.jfrs
│ ├── memory-leak.jfrs
│ └── cpu-spike.jfrs
├── performance/
│ ├── baseline-profile.jfrs
│ ├── regression-check.jfrs
│ └── load-test-analysis.jfrs
└── monitoring/
├── daily-summary.jfrs
└── weekly-report.jfrs
Edit recorded scripts to add comprehensive headers:
#!/usr/bin/env -S jbang jfr-shell@btraceio script -
# Thread Contention Analysis
#
# Purpose:
# Identifies threads with high contention and blocking operations
#
# Created: 2025-12-26 (from recorded session)
# Author: Your Team
#
# Usage:
# ./thread-contention.jfrs recording=/path/to/file.jfr top_n=10
#
# Arguments:
# recording - JFR recording file path
# top_n - Number of top results to show
...rest of script...
Maintain a log of your recordings:
# JFR Script Journal
## 2025-12-26: Thread Deadlock Analysis
- File: `diagnostics/thread-deadlock-check.jfrs`
- Created from investigation of PROD-123
- Useful for checking lock acquisition patterns
## 2025-12-25: GC Pause Investigation
- File: `performance/gc-pause-analysis.jfrs`
- Created during performance tuning session
- Parameters: recording, max_acceptable_pause_ms
Scenario: Production outage, need to analyze JFR snapshots.
# Start investigation recording
jfr> record start /tmp/prod-outage-20251226-analysis.jfrs
# Load production snapshot
jfr> open /tmp/prod-snapshot-14-30.jfr
# Check basic stats
jfr> info
# Look for thread issues
jfr> events/jdk.JavaMonitorEnter | groupBy(monitorClass/name) | top(20, by=count)
jfr> events/jdk.ThreadPark | count()
# Check GC pressure
jfr> events/jdk.GarbageCollection | stats(duration)
jfr> events/jdk.ObjectAllocationInNewTLAB | sum(allocationSize)
# Examine CPU usage
jfr> events/jdk.ExecutionSample | groupBy(sampledThread/javaName) | top(15, by=count)
jfr> record stop
Recording stopped: /tmp/prod-outage-20251226-analysis.jfrs
# Now parameterize and run on all snapshots
Convert to a parameterized script and analyze all snapshots:
for snapshot in /tmp/prod-snapshot-*.jfr; do
echo "=== Analyzing $snapshot ==="
./outage-analysis.jfrs recording=$snapshot
done
Scenario: Create a baseline performance profile.
jfr> record start ~/baselines/app-v1.0-baseline.jfrs
jfr> open /tmp/v1.0-reference.jfr
jfr> info
jfr> events/jdk.ExecutionSample | count()
jfr> events/jdk.GarbageCollection | stats(duration)
jfr> events/jdk.FileRead | sum(bytes)
jfr> events/jdk.FileWrite | sum(bytes)
jfr> record stop
Use this baseline for future comparison:
# Compare new version against baseline
jfr-shell script app-v1.0-baseline.jfrs # Run baseline
# ... compare output with new version metrics
Problem: Recording doesn't seem to save.
Solutions:
- Ensure you ran record stop or exited cleanly
- Check the recording path with record status
- Verify write permissions on the target directory
Problem: Can't locate the recorded file.
Solution: Check the default location:
ls -lt ~/.jfr-shell/recordings/
The most recent recordings appear first.
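To pick up the newest recording programmatically, `ls -t` plus `head` works; sketched here against a stand-in directory rather than ~/.jfr-shell/recordings/:

```shell
# Stand-in directory; substitute ~/.jfr-shell/recordings/ in real use.
dir=/tmp/jfrs-demo
mkdir -p "$dir"
touch -t 202501010000 "$dir/old-session.jfrs"   # backdated file
touch "$dir/new-session.jfrs"                   # created just now

# ls -t sorts newest-first; head -n 1 keeps the most recent.
basename "$(ls -t "$dir"/*.jfrs | head -n 1)"
# → new-session.jfrs
```

Captured in a variable, the result can feed straight into `jfr-shell script`.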
Problem: Replaying a recorded script fails.
Possible Causes:
- Hardcoded paths no longer exist
  - Fix: Parameterize the script with variables
- Recording file moved
  - Fix: Update the path in the script
- Different JFR event types
  - Fix: Use the --continue-on-error flag
Now that you can record commands:
- Practice: Record your next analysis session
- Build Library: Create a collection of common analysis scripts
- Share: Collaborate with teammates using recorded scripts
- Parameterize: Convert recordings to reusable templates
- Automate: Integrate scripts into CI/CD pipelines
You've learned to:
- ✅ Start and stop command recording
- ✅ View recorded scripts
- ✅ Replay recorded commands
- ✅ Convert recordings to parameterized scripts
- ✅ Use recordings for collaboration and documentation
- ✅ Organize and maintain a script library
Recording transforms exploratory analysis into reusable, shareable, and documentable workflows!
Happy recording! 🎬