An educational web application that intentionally creates performance problems in Python/FastAPI applications for practicing Azure App Service diagnostics. Built for Azure support engineers and developers learning to diagnose real-world performance issues.
This application is for educational purposes ONLY. It intentionally creates:
- High CPU usage via multiprocessing workers
- Memory pressure through byte array allocations
- Thread pool starvation via synchronous blocking
- Event loop blocking (async anti-pattern demonstration)
- Slow responses and application crashes
Do NOT deploy to production environments or shared infrastructure.
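The event-loop-blocking anti-pattern listed above can be reproduced in isolation. The sketch below is illustrative, not the app's actual implementation: a heartbeat task records timestamps, and a synchronous `time.sleep()` inside a coroutine shows up as a large gap between ticks, while `asyncio.sleep()` does not.

```python
import asyncio
import time

async def heartbeat(ticks: list, period: float = 0.02) -> None:
    # Records a timestamp each iteration; a large gap between
    # consecutive ticks means the event loop was blocked.
    while True:
        ticks.append(time.monotonic())
        await asyncio.sleep(period)

async def blocking_handler() -> None:
    time.sleep(0.2)  # anti-pattern: synchronous sleep freezes the whole loop

async def non_blocking_handler() -> None:
    await asyncio.sleep(0.2)  # correct: yields control back to the loop

async def measure(handler) -> float:
    ticks: list = []
    hb = asyncio.create_task(heartbeat(ticks))
    await asyncio.sleep(0.05)  # let the heartbeat settle
    await handler()
    await asyncio.sleep(0.05)  # give the heartbeat one more turn
    hb.cancel()
    try:
        await hb
    except asyncio.CancelledError:
        pass
    gaps = [b - a for a, b in zip(ticks, ticks[1:])]
    return max(gaps)

blocked_gap = asyncio.run(measure(blocking_handler))
healthy_gap = asyncio.run(measure(non_blocking_handler))
print(f"blocked: {blocked_gap:.3f}s, healthy: {healthy_gap:.3f}s")
```

This heartbeat-gap technique is essentially what profilers and Live Metrics reveal when the simulator's async-blocking endpoint is triggered.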
- Python 3.14 or higher
- pip (Python package manager)
```bash
# Clone the repository
git clone https://github.com/azure-support/perf-simulator-python.git
cd perf-simulator-python

# Create and activate virtual environment
python -m venv .venv
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run the application
uvicorn src.main:app --reload
# Open http://localhost:8000 in your browser
```

```bash
# Build the image
docker build -t perf-simulator-python .

# Run the container
docker run -p 8000:8000 perf-simulator-python
# Open http://localhost:8000
```

The real-time dashboard provides:
- Live CPU and memory metrics via WebSocket (500ms updates)
- Interactive charts using Chart.js
- Simulation control panel
- Event log for tracking operations
| Simulation | API Endpoint | Diagnostic Signature |
|---|---|---|
| CPU Stress | POST /api/cpu/start | High CPU %, visible in py-spy |
| Memory Pressure | POST /api/memory/allocate | Growing memory, tracemalloc |
| Sync Blocking | POST /api/blocking/sync | Thread pool exhaustion |
| Async Blocking | POST /api/blocking/async | Event loop freeze (anti-pattern) |
| Slow Requests | GET /api/slow?delay_seconds=5 | Latency spikes |
| Crashes | POST /api/crash | Process termination |
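The endpoints in the table can be exercised from the command line against a locally running instance. These bare invocations are a sketch: whether an endpoint expects a JSON body or extra parameters is not documented here, so check the Swagger UI at /docs for each schema before relying on them.

```shell
# Trigger simulations against a local instance (started with uvicorn)
curl -X POST http://localhost:8000/api/cpu/start
curl -X POST http://localhost:8000/api/memory/allocate
curl -X POST http://localhost:8000/api/blocking/sync

# Slow-request simulation takes its delay as a query parameter
curl "http://localhost:8000/api/slow?delay_seconds=5"
```

Trigger one simulation at a time while watching the dashboard, so the diagnostic signature you observe maps cleanly to a single cause.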
- /docs.html - Complete API reference
- /azure-diagnostics.html - Azure diagnostic tools guide
- /azure-deployment.html - Deployment instructions
```
├── src/
│   ├── config/          # Settings with env var support
│   ├── models/          # Pydantic models and entities
│   ├── routers/         # API endpoint handlers
│   ├── services/        # Business logic and simulations
│   ├── middleware/      # Error handling, logging
│   ├── websocket/       # Real-time metrics broadcast
│   ├── static/          # Dashboard HTML/CSS/JS
│   ├── app.py           # FastAPI application factory
│   └── main.py          # ASGI entry point
├── tests/
│   ├── unit/            # Unit tests
│   └── integration/     # Integration tests
├── docs/                # Documentation files
│   └── simulations/     # Per-simulation guides
├── .github/workflows/   # CI/CD pipelines
├── Dockerfile           # Container configuration
├── pyproject.toml       # Project metadata
└── requirements.txt     # Dependencies
```
Environment variables (can be set in .env file or as Azure App Service Application Settings):
| Variable | Description | Default |
|---|---|---|
| HEALTH_PROBE_RATE | Health probe interval in ms (min 100). Lower values = more granular latency data but increased overhead. | 200 |
| IDLE_TIMEOUT_MINUTES | Minutes of inactivity before suspending health probes (0 = disabled). Reduces network traffic and Application Insights telemetry when idle. | 20 |
| PAGE_FOOTER | Custom HTML footer text displayed at the bottom of the dashboard. Supports HTML links for attribution. | None |
Controls how frequently the dashboard probes the application for latency measurement. Lower values provide more granular data but can cause probe overlaps during profiling.
- Default: 200ms (5 probes/second)
- Minimum: 100ms (values below this are clamped)
- Chart updates: The latency chart always updates at 100ms using interpolation
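The clamping behavior can be sketched as a small helper. The function name and fallback handling here are illustrative, not the app's actual settings code; only the 200 ms default and 100 ms minimum come from the documentation above.

```python
import os

def health_probe_rate_ms(default: int = 200, minimum: int = 100) -> int:
    # Read HEALTH_PROBE_RATE from the environment and clamp it to the
    # documented minimum of 100 ms; non-numeric values fall back to 200.
    raw = os.environ.get("HEALTH_PROBE_RATE", str(default))
    try:
        value = int(raw)
    except ValueError:
        return default
    return max(value, minimum)

os.environ["HEALTH_PROBE_RATE"] = "50"   # below the minimum, so it is clamped
print(health_probe_rate_ms())            # 100
```

Clamping (rather than rejecting) out-of-range values keeps a misconfigured App Service setting from taking the dashboard down.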
```bash
# Set via Azure CLI
az webapp config appsettings set \
  --resource-group rg-perfsimpython \
  --name perfsimpython \
  --settings HEALTH_PROBE_RATE=400
```

When the application is idle (no dashboard connections or load test requests), health probes are automatically suspended to reduce unnecessary network traffic to Azure's frontend, AppLens, and Application Insights.
- Default: 20 minutes
- Wake-up: Simply reload the dashboard or send any API request
- Activity sources: Dashboard page loads, API calls (load test traffic counts as activity)
- Disable: Set to 0 to disable idle mode completely
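The suspension decision described above reduces to a simple predicate. This sketch is illustrative (the app's real implementation tracks activity internally); only the 20-minute default and the 0-disables rule come from the documentation.

```python
from datetime import datetime, timedelta

def probes_suspended(last_activity: datetime, now: datetime,
                     idle_timeout_minutes: int = 20) -> bool:
    # Probes suspend once the idle window elapses with no dashboard
    # connections or API traffic; a timeout of 0 disables idle mode.
    if idle_timeout_minutes == 0:
        return False
    return now - last_activity >= timedelta(minutes=idle_timeout_minutes)

now = datetime(2024, 1, 1, 12, 0)
print(probes_suspended(now - timedelta(minutes=25), now))     # True
print(probes_suspended(now - timedelta(minutes=5), now))      # False
print(probes_suspended(now - timedelta(minutes=25), now, 0))  # False
```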
```bash
# Set via Azure CLI
az webapp config appsettings set \
  --resource-group rg-perfsimpython \
  --name perfsimpython \
  --settings IDLE_TIMEOUT_MINUTES=30
```

The PAGE_FOOTER environment variable allows you to customize the footer credits displayed on the dashboard. This is useful for attributing tools, teams, or linking to relevant resources.
```bash
# Example value
export PAGE_FOOTER='Created by <a href="https://speckit.org/" target="_blank">SpecKit</a>'

# Set via Azure CLI
az webapp config appsettings set \
  --resource-group rg-perfsimpython \
  --name perfsimpython \
  --settings PAGE_FOOTER='Created by <a href="https://speckit.org/" target="_blank">SpecKit</a>'
```

The footer is retrieved via the /api/footer endpoint and rendered in the dashboard's footer section. If PAGE_FOOTER is not set, the default credits are shown.
```bash
# All tests
pytest

# With coverage
pytest --cov=src --cov-report=html --cov-report=term

# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/
```

```bash
# Format code
black src tests

# Lint
ruff check src tests

# Type check
mypy src
```

This application is designed for deployment to Azure App Service for educational diagnostics practice.
The repository includes CI/CD workflows for automated deployment using Azure OIDC (no secrets required):
- Create an Azure App Service
- Set up Workload Identity Federation
- Configure repository secrets (Azure client, tenant, and subscription IDs for OIDC)
- Push to trigger deployment
See Deployment Guide for detailed instructions.
```bash
# Deploy using Azure CLI
az webapp up \
  --runtime "PYTHON:3.14" \
  --name your-app-name \
  --resource-group your-rg
```

- Dashboard: http://localhost:8000/
- API Docs (Swagger): http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- Documentation: http://localhost:8000/docs.html
- Project Overview
- Azure Diagnostics Guide
- Linux Tools Guide
- Simulation Guides:
- Deploy to Azure App Service with Application Insights enabled
- Establish baseline - Record normal metrics
- Trigger a simulation - Use the dashboard or API
- Investigate in Azure Portal:
- App Service Diagnostics
- Application Insights Performance
- Azure Monitor Metrics
- Correlate findings with the simulation
- Document the diagnostic path for team training
- App Service Diagnostics - Built-in problem detection
- Application Insights - APM, distributed tracing, Live Metrics
- Azure Monitor - Metrics, alerts, Log Analytics
- Kudu - SSH access, process explorer, log streaming
- py-spy - Python profiler for production use
- tracemalloc - Memory allocation tracking
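tracemalloc, listed above, ships with the standard library. A minimal session for spotting a growing allocation looks like this; the byte-array allocation stands in for the simulator's memory-pressure behavior rather than reproducing its actual code.

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Simulate memory pressure: hold on to ~8 MiB of byte arrays
leaked = [bytearray(1024 * 1024) for _ in range(8)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")

# The allocating line appears at the top of the snapshot diff
for stat in top[:3]:
    print(stat)

current, peak = tracemalloc.get_traced_memory()
print(f"current traced: {current / 1024 / 1024:.1f} MiB")
tracemalloc.stop()
```

On App Service, the same snapshot-diff approach can be driven from a Kudu SSH session against a live worker to confirm which code path is accumulating memory.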
This is an educational tool for Azure support engineers. Contributions that improve diagnostic scenarios or educational content are welcome.
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests and linting
- Submit a pull request
MIT License - See LICENSE for details.
Note: This application is part of the Azure Support engineering training materials. For questions or issues, contact the Azure Support Tools team.