SVFA now provides Python alternatives to the bash scripts for better maintainability, cross-platform compatibility, and enhanced features.
- **run_securibench_tests.py**
  - Purpose: Execute Securibench test suites and save results to disk
  - Replaces: `run-securibench-tests.sh`
  - Dependencies: Python 3.6+ (standard library only)
- **compute_securibench_metrics.py**
  - Purpose: Compute accuracy metrics with automatic test execution
  - Replaces: `compute-securibench-metrics.sh`
  - Dependencies: Python 3.6+ (standard library only)
```bash
# Execute tests
./scripts/run_securibench_tests.py inter rta
./scripts/run_securibench_tests.py all cha

# Compute metrics
./scripts/compute_securibench_metrics.py inter rta
./scripts/compute_securibench_metrics.py all

# Verbose output with detailed progress
./scripts/run_securibench_tests.py inter spark --verbose

# CSV-only mode (no console output)
./scripts/compute_securibench_metrics.py all spark --csv-only

# Clean and execute in one command
./scripts/run_securibench_tests.py all --clean --verbose
```

- Structured Code: Clear class hierarchies and function organization
- Type Hints: Optional type annotations for better code quality
- Error Handling: Proper exception handling vs bash error codes
- Testing: Easy to unit test individual functions
- Colored Output: Better visual feedback with ANSI colors
- Progress Tracking: Clear progress indicators and timing
- Better Argument Parsing: Robust argument validation with `argparse`
- JSON Processing: Native JSON handling for test results
- CSV Generation: Built-in CSV creation with proper formatting
- Windows Support: Works identically on Windows, macOS, Linux
- Path Handling: Proper path handling with `pathlib`
- Process Management: Reliable subprocess execution
- IDE Support: Full autocomplete, debugging, refactoring support
- Linting: Can use pylint, flake8, mypy for code quality
- Documentation: Built-in help with rich formatting
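The argument-parsing benefit above can be sketched with the standard library alone. The suite and call-graph names below mirror those used in the usage examples; the exact choices and defaults in the real scripts may differ.

```python
# Hypothetical sketch of argparse-based validation; the suite/call-graph
# lists are assumptions based on the usage examples in this guide.
import argparse

SUITES = ["basic", "inter", "all"]
CALLGRAPHS = ["spark", "cha", "rta", "vta", "spark_library"]

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Run Securibench test suites")
    # choices=... rejects invalid values with a clear error message,
    # replacing hand-rolled bash case statements.
    parser.add_argument("suite", choices=SUITES, help="test suite to run")
    parser.add_argument("callgraph", nargs="?", default="spark",
                        choices=CALLGRAPHS, help="call graph algorithm")
    parser.add_argument("--verbose", action="store_true")
    parser.add_argument("--clean", action="store_true")
    return parser.parse_args(argv)

args = parse_args(["inter", "rta", "--verbose"])
print(args.suite, args.callgraph, args.verbose)
```

An invalid suite name (e.g. `foo`) makes `argparse` exit with a usage message listing the valid choices, with no extra validation code.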
| Feature | Bash Scripts | Python Scripts |
|---|---|---|
| Execution Speed | ⚡⚡⚡ | ⚡⚡ |
| Maintainability | ⭐⭐ | ⭐⭐⭐⭐⭐ |
| Error Handling | ⭐⭐ | ⭐⭐⭐⭐⭐ |
| Cross-Platform | ⭐⭐ | ⭐⭐⭐⭐⭐ |
| Dependencies | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Features | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Testing | ⭐⭐ | ⭐⭐⭐⭐⭐ |
| IDE Support | ⭐⭐ | ⭐⭐⭐⭐⭐ |
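The JSON-processing and CSV-generation points above can be illustrated with a minimal sketch. The result fields shown are illustrative, not the scripts' actual schema.

```python
# Sketch: convert JSON test results to CSV with the standard library only.
# The field names ("test", "expected", "found") are assumptions.
import csv
import io
import json

raw = json.loads('[{"test": "Aliasing1", "expected": 1, "found": 1},'
                 ' {"test": "Basic1", "expected": 2, "found": 1}]')

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["test", "expected", "found"])
writer.writeheader()          # emits the header row
writer.writerows(raw)         # one properly quoted row per result
print(out.getvalue().strip())
```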
```python
# Only standard library imports
import argparse
import subprocess
import json
import csv
from pathlib import Path
from datetime import datetime
from typing import List, Dict, Optional
```

```python
try:
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=3600)
    if result.returncode == 0:
        print_success("Test execution completed")
        return True
    else:
        print_error("Test execution failed")
        return False
except subprocess.TimeoutExpired:
    print_error("Test execution timed out")
    return False
except Exception as e:
    print_error(f"Unexpected error: {e}")
    return False
```

```python
class Colors:
    RESET = '\033[0m'
    BOLD = '\033[1m'
    RED = '\033[91m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'

def print_success(message: str) -> None:
    print(f"{Colors.GREEN}✅ {message}{Colors.RESET}")
```

```python
import unittest
from unittest.mock import patch, MagicMock

class TestSecuribenchRunner(unittest.TestCase):
    def test_suite_validation(self):
        # Test suite name validation
        pass

    def test_callgraph_validation(self):
        # Test call graph algorithm validation
        pass

    @patch('subprocess.run')
    def test_sbt_execution(self, mock_run):
        # Test SBT command execution
        pass
```

```bash
# Test basic functionality
python3 -m pytest scripts/test_python_scripts.py

# Test with different Python versions
python3.6 scripts/run_securibench_tests.py --help
python3.8 scripts/run_securibench_tests.py --help
python3.10 scripts/run_securibench_tests.py --help
```

- ✅ Both bash and Python scripts available
- ✅ Same command-line interface
- ✅ Same output format and file locations
- ✅ Users can choose preferred version
- 🔄 Add progress bars with `tqdm` (optional dependency)
- 🔄 Add configuration file support
- 🔄 Add parallel test execution
- 🔄 Add test result caching and incremental runs
- 🔄 Update documentation to prefer Python scripts
- 🔄 Add deprecation warnings to bash scripts
- 🔄 Eventually remove bash scripts
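The planned parallel test execution could be sketched with the standard library's `concurrent.futures`; `run_suite` below is a hypothetical stand-in, not an existing function in the scripts.

```python
# Sketch of parallel suite execution, assuming independent suites.
# run_suite is a placeholder; real code would shell out to sbt via subprocess.
from concurrent.futures import ThreadPoolExecutor

def run_suite(suite: str) -> str:
    return f"{suite}: ok"

suites = ["basic", "inter", "aliasing"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # pool.map preserves input order, so reports stay deterministic
    results = list(pool.map(run_suite, suites))
for line in results:
    print(line)
```

Threads suffice here because each worker spends its time waiting on an external `sbt` process rather than doing Python-level computation.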
```bash
# Quick development cycle with verbose output
./scripts/run_securibench_tests.py basic rta --verbose
./scripts/compute_securibench_metrics.py basic rta --verbose

# Clean slate for fresh analysis
./scripts/run_securibench_tests.py all --clean
./scripts/compute_securibench_metrics.py all
```

```bash
# Compare different call graph algorithms
for cg in spark cha rta vta spark_library; do
  ./scripts/run_securibench_tests.py inter $cg
  ./scripts/compute_securibench_metrics.py inter $cg --csv-only
done

# Analyze results
ls target/metrics/securibench_metrics_*_*.csv
```

```bash
#!/bin/bash
# CI script using Python versions
set -e

# Run tests with timeout protection
timeout 3600 ./scripts/run_securibench_tests.py all spark || exit 1

# Generate metrics
./scripts/compute_securibench_metrics.py all spark --csv-only || exit 1

# Upload results
aws s3 cp target/metrics/ s3://results-bucket/ --recursive
```

1. Python Version Compatibility

```bash
# Check Python version
python3 --version  # Should be 3.6+

# Use specific Python version
python3.8 scripts/run_securibench_tests.py --help
```

2. Import Errors

```bash
# Ensure scripts are in the correct directory
ls -la scripts/run_securibench_tests.py
ls -la scripts/compute_securibench_metrics.py

# Check file permissions
chmod +x scripts/*.py
```

3. SBT Integration

```bash
# Test SBT availability
sbt --version

# Check Java version
java -version  # Should be Java 8
```

- Progress Bars: Visual progress indication with `tqdm`
- Parallel Execution: Run multiple suites simultaneously
- Configuration Files: YAML/JSON configuration support
- Result Caching: Incremental analysis and smart caching
- Web Dashboard: Optional web interface for results
- Docker Support: Containerized execution environment
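The result-caching idea above could work by hashing a run's inputs and skipping the run when the hash has been seen. This is a minimal in-memory sketch; the cache layout and key contents are assumptions, and a real version would persist to disk and include source checksums.

```python
# Sketch of result caching keyed on (suite, call graph).
import hashlib
import json

cache = {}

def cache_key(suite: str, callgraph: str) -> str:
    # Stable serialization so the same inputs always hash identically
    payload = json.dumps({"suite": suite, "cg": callgraph}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_cached(suite, callgraph, runner):
    key = cache_key(suite, callgraph)
    if key not in cache:
        cache[key] = runner(suite, callgraph)  # only on a cache miss
    return cache[key]

calls = []
def fake_runner(s, c):
    calls.append((s, c))
    return f"{s}/{c}: done"

print(run_cached("inter", "rta", fake_runner))  # runs
print(run_cached("inter", "rta", fake_runner))  # served from cache
```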
```python
# Plugin architecture for custom analyzers
from typing import Any, Dict, List

class CustomAnalyzer:
    def analyze(self, results: List[TestResult]) -> Dict[str, Any]:
        # Custom analysis logic
        pass

# Register custom analyzer
register_analyzer('custom', CustomAnalyzer())
```

- USAGE_SCRIPTS.md - Comprehensive script usage guide
- CALL_GRAPH_ALGORITHMS.md - Call graph algorithm details
- README.md - Main project documentation
For questions or issues with Python scripts, please create an issue with the python-scripts label.