This guide explains how to run performance benchmarks comparing Quantum Geometric Learning (QGL) against TensorFlow.
Prerequisites:

- macOS with an M2 Ultra (or equivalent hardware)
- Python 3.8 or higher
- CMake 3.15 or higher
- OpenMPI (for distributed benchmarks)
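A quick preflight check of these requirements (a sketch; it assumes `python3` is already on your `PATH`, and skips any tool that is not installed):

```bash
# Fail fast if the interpreter is too old; print tool versions otherwise.
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
command -v cmake >/dev/null && cmake --version | head -n1
command -v mpirun >/dev/null && mpirun --version | head -n1 || true
```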
Setup:

- Set up the benchmark environment:

  ```bash
  ./tools/setup_benchmark_env.sh
  source activate_benchmark_env.sh
  ```

- Configure IBM Quantum (optional but recommended):

  ```bash
  ./tools/setup_ibm_quantum.sh
  ```

- Build the library:

  ```bash
  mkdir -p build && cd build
  cmake ..
  make -j$(nproc)   # on macOS, use: make -j$(sysctl -n hw.ncpu)
  cd ..
  ```

The benchmark suite compares QGL against TensorFlow across several dimensions:
- **Quantum State Evolution**
  - System sizes from 1K to 1M qubits
  - Measures execution time and memory usage
  - Tests O(log n) quantum attention scaling

- **Error Correction**
  - Error rates from 0.1% to 10%
  - Measures error reduction and correction time
  - Tests topological protection effectiveness

- **Memory Efficiency**
  - State preparation
  - Evolution
  - Measurement
  - Error correction

- **Distributed Performance**
  - Scaling from 2 to 16 processes
  - Tests MPI communication efficiency
  - Measures speedup and resource utilization

- **GPU Utilization**
  - Quantum attention operations
  - Tensor contractions
  - FFT operations
  - State transformations
To run the complete benchmark suite:

```bash
./benchmarks/run_comparison.sh
```

This will:
- Warm up both frameworks
- Run all benchmark categories
- Generate visualizations
- Produce a detailed report
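To keep a record of a full run, the suite's output can be captured to a timestamped log (the `benchmark_logs` directory name is an assumption, not something the suite creates itself):

```bash
# Capture stdout and stderr of the full suite to a timestamped log file.
mkdir -p benchmark_logs
./benchmarks/run_comparison.sh 2>&1 | tee "benchmark_logs/run_$(date +%Y%m%d_%H%M%S).log"
```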
For quantum state evolution:

```bash
./build/benchmarks/benchmark_quantum_geometric --test=evolution --size=10000
python3 benchmarks/benchmark_tensorflow.py --test=evolution --size=10000
```

For error correction:

```bash
./build/benchmarks/benchmark_quantum_geometric --test=error --rate=0.01
python3 benchmarks/benchmark_tensorflow.py --test=error --rate=0.01
```

For memory usage:

```bash
./build/benchmarks/benchmark_quantum_geometric --test=memory --op="State prep"
python3 benchmarks/benchmark_tensorflow.py --test=memory --op="State prep"
```

For distributed performance:

```bash
mpirun -np 4 ./build/benchmarks/benchmark_quantum_geometric --test=distributed
python3 benchmarks/benchmark_tensorflow.py --test=distributed --procs=4
```

For GPU utilization:

```bash
./build/benchmarks/benchmark_quantum_geometric --test=gpu --op="Attention"
python3 benchmarks/benchmark_tensorflow.py --test=gpu --op="Attention"
```

The benchmark results are presented in several formats:
- **Terminal Output**
  - Real-time benchmark progress
  - Raw performance numbers
  - System utilization metrics

- **Visualizations**
  - `evolution_performance.png`: Execution time and speedup plots
  - `error_correction.png`: Error rates and improvement ratios
  - `memory_usage.png`: Memory consumption comparison
  - `distributed_scaling.png`: Scaling efficiency plots
  - `gpu_utilization.png`: GPU utilization charts
- **PDF Report**
  - Detailed analysis of all benchmarks
  - Statistical significance tests
  - Hardware utilization insights
  - Scaling characteristics
The metrics are interpreted as follows:

- **Execution Time**
  - Reported in milliseconds
  - Lower is better
  - Includes warmup iterations
  - Excludes data transfer time
- **Error Rates**
  - Reported as absolute values
  - Lower is better
  - Measured against the ideal state
  - Includes quantum noise effects
- **Memory Usage**
  - Reported in gigabytes
  - Lower is better
  - Peak memory consumption
  - Includes GPU memory
- **Speedup**
  - Ratio of TensorFlow time to QGL time
  - Higher is better
  - A ratio of 1:1 means the frameworks perform equally
  - Super-linear speedup is possible with quantum attention
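As a worked example of this metric, the ratio can be computed from two timings (the numbers below are illustrative, not measured results):

```bash
# Speedup = TensorFlow time / QGL time; higher is better.
tf_ms=840.0    # illustrative TensorFlow timing in ms
qgl_ms=120.0   # illustrative QGL timing in ms
awk -v tf="$tf_ms" -v qgl="$qgl_ms" 'BEGIN { printf "speedup: %.2fx\n", tf / qgl }'
```

This prints `speedup: 7.00x` for the sample numbers above.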
- **GPU Utilization**
  - Percentage of theoretical peak
  - Higher is better
  - Measured over the operation duration
  - Accounts for memory bandwidth
Common issues and fixes:

- **Out of Memory**
  - Reduce system size
  - Adjust `QGL_GPU_MEMORY_FRACTION`
  - Enable state compression
  - Use distributed mode
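For instance, the memory fraction can be lowered before retrying a run that previously ran out of memory (the value `0.5` and the chosen test size are illustrative):

```bash
# Cap QGL's GPU memory usage at half the device, then retry the large run.
export QGL_GPU_MEMORY_FRACTION=0.5
./build/benchmarks/benchmark_quantum_geometric --test=evolution --size=100000
```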
- **Poor Scaling**
  - Check process pinning
  - Verify MPI configuration
  - Monitor network bandwidth
  - Adjust workload distribution
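Process pinning can be checked directly with Open MPI's standard binding flags (a sketch; `--bind-to` and `--report-bindings` are Open MPI options, and the rank count here simply matches the online core count):

```bash
# Pin one rank per core and have Open MPI print where each rank landed.
np=$(getconf _NPROCESSORS_ONLN)
mpirun -np "$np" --bind-to core --report-bindings \
    ./build/benchmarks/benchmark_quantum_geometric --test=distributed
```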
- **Low GPU Utilization**
  - Check for thermal throttling
  - Monitor power limits
  - Verify backend selection
  - Adjust batch sizes
- **High Error Rates**
  - Verify quantum state fidelity
  - Check error correction settings
  - Monitor syndrome measurements
  - Adjust protection parameters
The benchmark framework can be extended through:
- **Configuration Files**
  - `etc/quantum_geometric/benchmark_config.json`
  - `etc/quantum_geometric/gpu_config.json`
  - `etc/quantum_geometric/mpi_config.json`
- **Environment Variables**
  - `QGL_BENCHMARK_MODE`: Enable detailed profiling
  - `QGL_GPU_MEMORY_FRACTION`: Control memory usage
  - `QGL_ERROR_CORRECTION`: Set correction level
  - `QGL_CIRCUIT_OPTIMIZATION`: Enable optimizations
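These can be set in a wrapper before launching a run; the variable names come from the list above, while the specific values are assumptions:

```bash
# Illustrative settings; only the variable names are defined by the list above.
export QGL_BENCHMARK_MODE=1
export QGL_GPU_MEMORY_FRACTION=0.8
export QGL_ERROR_CORRECTION=1
export QGL_CIRCUIT_OPTIMIZATION=1
./benchmarks/run_comparison.sh
```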
- **Command Line Options**
  - `--size`: System size
  - `--rate`: Error rate
  - `--op`: Operation type
  - `--procs`: Process count
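The options compose, so a simple sweep over system sizes looks like this (the sizes are illustrative):

```bash
# Run both frameworks at several system sizes for a scaling comparison.
for size in 1000 10000 100000; do
    ./build/benchmarks/benchmark_quantum_geometric --test=evolution --size="$size"
    python3 benchmarks/benchmark_tensorflow.py --test=evolution --size="$size"
done
```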
When submitting benchmark results:
- **Hardware Details**
  - CPU model and configuration
  - GPU type and memory
  - Memory size and speed
  - Storage specifications
- **Software Versions**
  - Operating system
  - Compiler toolchain
  - Library dependencies
  - Driver versions
- **Methodology**
  - Number of iterations
  - Warmup procedures
  - System conditions
  - Error margins
- **Raw Data**
  - Timing measurements
  - Memory statistics
  - GPU metrics
  - Log files
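One way to gather these artifacts into a single archive for submission (the directory and file names here are assumptions about where your runs put their output):

```bash
# Collect whatever logs and plots exist into one tarball for submission.
mkdir -p results_bundle
cp benchmark_logs/*.log results_bundle/ 2>/dev/null || true
cp *.png results_bundle/ 2>/dev/null || true
tar czf qgl_benchmark_results.tar.gz results_bundle
```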
Submit results through:
- GitHub Issues
- Pull Requests
- Discussion Forums
- Email to maintainers
For more information, see: