Description
Is your feature request related to a problem? Please describe.
Currently, the MonteCarlo class calculates basic statistics (mean and standard deviation) for simulation results. However, users lack a metric to evaluate the reliability of these statistics.
Without confidence intervals, it is difficult to determine if the sample size (number of simulations) is sufficient. Users need a way to quantify the uncertainty of their results (e.g., "We are 95% confident the true mean apogee is between X and Y").
Describe the solution you'd like
Implement a method within the MonteCarlo class that uses Bootstrapping (resampling with replacement) to estimate confidence intervals for simulation outputs.
Implementation Details
- Target file: `rocketpy/simulation/monte_carlo.py`
- Recommended approach: use `scipy.stats.bootstrap` (preferred for robustness) or `numpy.random.choice` to resample the data stored in `self.results`.
- New method: create a method likely named `calculate_confidence_interval` or `estimate_ci`.
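A minimal sketch of what the new method could look like, written here as a free function so it runs standalone. It assumes the results are kept in a dict of scalar lists keyed by variable name; the function name, signature, and storage layout are assumptions, not the actual `MonteCarlo` internals:

```python
import numpy as np
from scipy import stats


def calculate_confidence_interval(results, variable, confidence_level=0.95, method="BCa"):
    """Hypothetical free-function version of the proposed MonteCarlo method.

    `results` stands in for `self.results`: a mapping from variable name
    (e.g. "apogee") to a list of scalar simulation outputs.
    """
    data = np.asarray(results[variable])
    res = stats.bootstrap(
        (data,),                            # scipy expects a sequence of samples
        np.mean,                            # statistic whose CI is estimated
        confidence_level=confidence_level,
        method=method,                      # "BCa" or "percentile"
    )
    return res.confidence_interval          # namedtuple with .low and .high
```

On the class itself the method would read from `self.results[variable]` and could also expose `res.standard_error` from the same `BootstrapResult`.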
Proposed Usage
```python
# After running simulations
analysis = MonteCarlo(...)
analysis.simulate(number_of_simulations=500)

# Calculate 95% CI for Apogee
ci_result = analysis.calculate_confidence_interval(
    variable="apogee",
    confidence_level=0.95,
    method="BCa",  # or 'percentile'
)
print(f"95% CI for Apogee: {ci_result.low} to {ci_result.high}")
```

Acceptance Criteria
- Implement a method to calculate Confidence Intervals (CI) for any scalar output variable (e.g., apogee, impact_velocity).
- Allow the user to specify the confidence level (default to 0.95).
- Add unit tests to verify the statistical bounds are reasonable.
- Update documentation with a brief explanation of how to interpret the CI.
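To illustrate the unit-test criterion, here is a dependency-free sketch: a stand-in percentile bootstrap (the real test would call the new `MonteCarlo` method instead) plus checks that the bounds behave sensibly. All names are hypothetical:

```python
import random
import statistics


def percentile_bootstrap_ci(data, confidence_level=0.95, n_resamples=1000, rng=None):
    """Stand-in for the proposed method, using the percentile method only."""
    rng = rng or random.Random(0)
    means = sorted(
        statistics.fmean(rng.choices(data, k=len(data)))  # resample with replacement
        for _ in range(n_resamples)
    )
    alpha = 1.0 - confidence_level
    return means[int(alpha / 2 * n_resamples)], means[int((1 - alpha / 2) * n_resamples) - 1]


def test_ci_bounds_are_reasonable():
    rng = random.Random(123)
    sample = [rng.gauss(1000.0, 50.0) for _ in range(500)]  # fake apogee results
    low, high = percentile_bootstrap_ci(sample, confidence_level=0.95)
    assert low < high
    assert low < statistics.fmean(sample) < high   # sample mean inside the CI
    assert high - low < 20.0                       # width ~ 2 * 1.96 * sigma / sqrt(n)
```

A test against the real implementation could additionally check that widening `confidence_level` widens the interval.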
Additional Context