
# Performance Problem Simulator - Python Edition


An educational web application that intentionally creates performance problems in Python/FastAPI applications for practicing Azure App Service diagnostics. Built for Azure support engineers and developers learning to diagnose real-world performance issues.

## ⚠️ Warning

This application is for educational purposes ONLY. It intentionally creates:

- High CPU usage via multiprocessing workers
- Memory pressure through byte-array allocations
- Thread-pool starvation via synchronous blocking
- Event-loop blocking (async anti-pattern demonstration)
- Slow responses and application crashes

**Do NOT deploy to production environments or shared infrastructure.**

## 🚀 Quick Start

### Prerequisites

- Python 3.14 or higher
- pip (Python package manager)

### Local Development

```bash
# Clone the repository
git clone https://github.com/azure-support/perf-simulator-python.git
cd perf-simulator-python

# Create and activate a virtual environment
python -m venv .venv
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Run the application
uvicorn src.main:app --reload

# Open http://localhost:8000 in your browser
```

### Using Docker

```bash
# Build the image
docker build -t perf-simulator-python .

# Run the container
docker run -p 8000:8000 perf-simulator-python

# Open http://localhost:8000
```

## 📊 Features

### Dashboard

The real-time dashboard provides:

- Live CPU and memory metrics via WebSocket (500ms updates)
- Interactive charts using Chart.js
- Simulation control panel
- Event log for tracking operations

### Performance Simulations

| Simulation | API Endpoint | Diagnostic Signature |
|------------|--------------|----------------------|
| CPU Stress | `POST /api/cpu/start` | High CPU %, visible in py-spy |
| Memory Pressure | `POST /api/memory/allocate` | Growing memory, tracemalloc |
| Sync Blocking | `POST /api/blocking/sync` | Thread pool exhaustion |
| Async Blocking | `POST /api/blocking/async` | Event loop freeze (anti-pattern) |
| Slow Requests | `GET /api/slow?delay_seconds=5` | Latency spikes |
| Crashes | `POST /api/crash` | Process termination |
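
The Sync vs. Async Blocking distinction above can be sketched with plain `asyncio`. This is a standalone illustration, not the app's actual handler code: a handler that calls `time.sleep` freezes the whole event loop, while offloading the same work with `asyncio.to_thread` lets requests overlap.

```python
import asyncio
import time

async def blocking_handler() -> None:
    # Anti-pattern: a synchronous sleep inside an async function
    # blocks the entire event loop, not just this request.
    time.sleep(0.1)

async def offloaded_handler() -> None:
    # Correct pattern: the blocking work runs on a worker thread,
    # so the event loop stays free to serve other requests.
    await asyncio.to_thread(time.sleep, 0.1)

async def main() -> tuple[float, float]:
    # Five "concurrent" blocking handlers serialize: ~0.5s total.
    t0 = time.perf_counter()
    await asyncio.gather(*(blocking_handler() for _ in range(5)))
    blocked = time.perf_counter() - t0

    # Five offloaded handlers overlap in threads: ~0.1s total.
    t0 = time.perf_counter()
    await asyncio.gather(*(offloaded_handler() for _ in range(5)))
    offloaded = time.perf_counter() - t0
    return blocked, offloaded

if __name__ == "__main__":
    blocked, offloaded = asyncio.run(main())
    print(f"blocking: {blocked:.2f}s, offloaded: {offloaded:.2f}s")
```

This is the diagnostic signature listed in the table: under the async anti-pattern, every request's latency stacks up behind the blocked loop.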

### Documentation

- `/docs.html` - Complete API reference
- `/azure-diagnostics.html` - Azure diagnostic tools guide
- `/azure-deployment.html` - Deployment instructions

## 🏗️ Project Structure

```
├── src/
│   ├── config/          # Settings with env var support
│   ├── models/          # Pydantic models and entities
│   ├── routers/         # API endpoint handlers
│   ├── services/        # Business logic and simulations
│   ├── middleware/      # Error handling, logging
│   ├── websocket/       # Real-time metrics broadcast
│   ├── static/          # Dashboard HTML/CSS/JS
│   ├── app.py           # FastAPI application factory
│   └── main.py          # ASGI entry point
├── tests/
│   ├── unit/            # Unit tests
│   └── integration/     # Integration tests
├── docs/                # Documentation files
│   └── simulations/     # Per-simulation guides
├── .github/workflows/   # CI/CD pipelines
├── Dockerfile           # Container configuration
├── pyproject.toml       # Project metadata
└── requirements.txt     # Dependencies
```

## ⚙️ Configuration

Environment variables (set in a `.env` file or as Azure App Service Application Settings):

| Variable | Description | Default |
|----------|-------------|---------|
| `HEALTH_PROBE_RATE` | Health probe interval in ms (min 100). Lower values give more granular latency data but increase overhead. | `200` |
| `IDLE_TIMEOUT_MINUTES` | Minutes of inactivity before suspending health probes (`0` = disabled). Reduces network traffic and Application Insights telemetry when idle. | `20` |
| `PAGE_FOOTER` | Custom HTML footer text displayed at the bottom of the dashboard. Supports HTML links for attribution. | None |

### HEALTH_PROBE_RATE

Controls how frequently the dashboard probes the application for latency measurement. Lower values provide more granular data but can cause probe overlaps during profiling.

- **Default:** 200ms (5 probes/second)
- **Minimum:** 100ms (values below this are clamped)
- **Chart updates:** the latency chart always updates at 100ms using interpolation

```bash
# Set via Azure CLI
az webapp config appsettings set \
    --resource-group rg-perfsimpython \
    --name perfsimpython \
    --settings HEALTH_PROBE_RATE=400
```

### IDLE_TIMEOUT_MINUTES

When the application is idle (no dashboard connections or load test requests), health probes are automatically suspended to reduce unnecessary network traffic to Azure's frontend, AppLens, and Application Insights.

- **Default:** 20 minutes
- **Wake-up:** reload the dashboard or send any API request
- **Activity sources:** dashboard page loads and API calls (load test traffic counts as activity)
- **Disable:** set to `0` to disable idle mode completely

```bash
# Set via Azure CLI
az webapp config appsettings set \
    --resource-group rg-perfsimpython \
    --name perfsimpython \
    --settings IDLE_TIMEOUT_MINUTES=30
```

### PAGE_FOOTER

The `PAGE_FOOTER` environment variable customizes the footer credits displayed on the dashboard. This is useful for attributing tools, teams, or linking to relevant resources.

```bash
# Example value
export PAGE_FOOTER='Created by <a href="https://speckit.org/" target="_blank">SpecKit</a>'

# Set via Azure CLI
az webapp config appsettings set \
    --resource-group rg-perfsimpython \
    --name perfsimpython \
    --settings PAGE_FOOTER='Created by <a href="https://speckit.org/" target="_blank">SpecKit</a>'
```

The footer is retrieved via the `/api/footer` endpoint and rendered in the dashboard's footer section. If `PAGE_FOOTER` is not set, the default credits are shown.

## 🧪 Development

### Running Tests

```bash
# All tests
pytest

# With coverage
pytest --cov=src --cov-report=html --cov-report=term

# Unit tests only
pytest tests/unit/

# Integration tests only
pytest tests/integration/
```

### Code Quality

```bash
# Format code
black src tests

# Lint
ruff check src tests

# Type check
mypy src
```

## ☁️ Azure Deployment

This application is designed for deployment to Azure App Service for educational diagnostics practice.

### GitHub Actions (Recommended)

The repository includes CI/CD workflows for automated deployment using Azure OIDC (no stored credentials required):

1. Create an Azure App Service
2. Set up Workload Identity Federation
3. Add the federated identity details (client, tenant, and subscription IDs) to the repository configuration
4. Push to trigger deployment

See Deployment Guide for detailed instructions.

### Manual Deployment

```bash
# Deploy using Azure CLI
az webapp up \
  --runtime "PYTHON:3.14" \
  --name your-app-name \
  --resource-group your-rg
```

## 📚 Documentation

- Online (in-app) documentation, served by the running application
- Markdown documentation in the `docs/` directory

## 🔧 Diagnostic Practice

### Recommended Workflow

1. **Deploy** to Azure App Service with Application Insights enabled
2. **Establish a baseline** - record normal metrics
3. **Trigger a simulation** - use the dashboard or API
4. **Investigate** in the Azure Portal:
   - App Service Diagnostics
   - Application Insights Performance
   - Azure Monitor Metrics
5. **Correlate** findings with the simulation
6. **Document** the diagnostic path for team training
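
Step 3 can also be driven programmatically. The hypothetical helper below only builds the request with the standard library, since actually sending it requires a running deployment:

```python
import urllib.request

def build_trigger(base_url: str, path: str) -> urllib.request.Request:
    """Build a POST request for one of the simulation endpoints."""
    return urllib.request.Request(base_url.rstrip("/") + path, method="POST")

# Endpoint taken from the simulations table above.
req = build_trigger("http://localhost:8000", "/api/cpu/start")
# urllib.request.urlopen(req)  # send once the app is running
```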

### Azure Tools Covered

- **App Service Diagnostics** - built-in problem detection
- **Application Insights** - APM, distributed tracing, Live Metrics
- **Azure Monitor** - metrics, alerts, Log Analytics
- **Kudu** - SSH access, process explorer, log streaming
- **py-spy** - Python profiler for production use
- **tracemalloc** - memory allocation tracking
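
As a standalone taste of the last tool, this minimal `tracemalloc` example reproduces the pattern the Memory Pressure simulation creates: byte-array allocations dominating the top allocation site:

```python
import tracemalloc

tracemalloc.start()

# Mimic the simulation: hold ~20 MB of bytearrays alive.
leaked = [bytearray(1024 * 1024) for _ in range(20)]

current, peak = tracemalloc.get_traced_memory()
# The top statistic points at the line doing the allocating.
top = tracemalloc.take_snapshot().statistics("lineno")[0]
print(f"current={current / 1e6:.1f} MB, top allocation site: {top}")

tracemalloc.stop()
```

In the real app you would take snapshots before and after triggering `POST /api/memory/allocate` and diff them to find the growing site.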

## 🤝 Contributing

This is an educational tool for Azure support engineers. Contributions that improve diagnostic scenarios or educational content are welcome.

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run tests and linting
5. Submit a pull request

## 📄 License

MIT License - see the `LICENSE` file for details.


> **Note:** This application is part of the Azure Support engineering training materials. For questions or issues, contact the Azure Support Tools team.