13 changes: 5 additions & 8 deletions src/simulation/investment.rs
@@ -19,7 +19,8 @@ use std::fmt::Display;
pub mod appraisal;
use appraisal::coefficients::calculate_coefficients_for_assets;
use appraisal::{
-    AppraisalOutput, appraise_investment, sort_appraisal_outputs_by_investment_priority,
+    AppraisalOutput, appraise_investment, count_equal_and_best_appraisal_outputs,
+    sort_appraisal_outputs_by_investment_priority,
};

/// A map of demand across time slices for a specific market
@@ -651,7 +652,7 @@ fn get_candidate_assets<'a>(
}

/// Print debug message if there are multiple equally good outputs
Copilot AI Mar 4, 2026

The doc comment still says “Print debug message…”, but this helper is named log_on_equal_appraisal_outputs and uses the logger. Updating the comment wording (and/or the comment at the call site that says “Warn”) would avoid confusion about log level/behavior.

Suggested change:
-/// Print debug message if there are multiple equally good outputs
+/// Log a debug-level message if there are multiple equally good outputs
-fn warn_on_equal_appraisal_outputs(
+fn log_on_equal_appraisal_outputs(
outputs: &[AppraisalOutput],
agent_id: &AgentID,
commodity_id: &CommodityID,
@@ -661,11 +662,7 @@ fn warn_on_equal_appraisal_outputs(
return;
}

-    // Count the number of identical (or nearly identical) appraisal outputs
-    let num_identical = outputs[1..]
-        .iter()
-        .take_while(|output| outputs[0].compare_metric(output).is_eq())
-        .count();
+    let num_identical = count_equal_and_best_appraisal_outputs(outputs);

if num_identical > 0 {
let asset_details = outputs[..=num_identical]
@@ -829,7 +826,7 @@ fn select_best_assets(
}

// Warn if there are multiple equally good assets
-    warn_on_equal_appraisal_outputs(&outputs_for_opts, &agent.id, &commodity.id, region_id);
+    log_on_equal_appraisal_outputs(&outputs_for_opts, &agent.id, &commodity.id, region_id);

let best_output = outputs_for_opts.into_iter().next().unwrap();

148 changes: 148 additions & 0 deletions src/simulation/investment/appraisal.rs
@@ -378,6 +378,21 @@ pub fn sort_appraisal_outputs_by_investment_priority(outputs_for_opts: &mut Vec<
});
}

/// Counts the number of top appraisal outputs in a sorted slice that are indistinguishable
/// by both metric and fallback ordering. Excludes the first element from the count.
pub fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {
Copilot AI Mar 3, 2026

This helper is only used within the crate (by investment.rs) and doesn’t appear to be intended as part of the external API, but it’s declared pub in a public module. Consider reducing visibility to pub(crate) (or pub(super)) to avoid expanding the crate’s public surface area unnecessarily.

Suggested change:
-pub fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {
+pub(crate) fn count_equal_and_best_appraisal_outputs(outputs: &[AppraisalOutput]) -> usize {

Collaborator

I think this is overkill

if outputs.is_empty() {
return 0;
}
outputs[1..]
.iter()
.take_while(|output| {
output.compare_metric(&outputs[0]).is_eq()
&& compare_asset_fallback(&output.asset, &outputs[0].asset).is_eq()
})
Comment on lines +383 to +392
Copilot AI Mar 3, 2026

count_equal_and_best_appraisal_outputs will panic on an empty slice because it indexes outputs[0] / outputs[1..]. Since this is a pub function, it should defensively handle outputs.is_empty() (e.g., return 0 early) to avoid exposing a panic as part of the public API.

Collaborator

I guess in practice, we won't call this function with an empty slice. I'd probably just handle this case in the function anyway, personally, but you could also just mention that it panics in the doc comment instead.
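The doc-comment alternative mentioned here can be sketched as follows. This is illustrative only (a hypothetical `first` helper, not the crate's code): rustdoc's conventional `# Panics` section documents the precondition instead of guarding against it in code.

```rust
// Hypothetical helper, for illustration only: instead of an early return for
// the empty case, the panic is documented with rustdoc's `# Panics` section.

/// Returns the first element of a slice.
///
/// # Panics
///
/// Panics if `items` is empty.
fn first(items: &[i32]) -> i32 {
    items[0]
}

fn main() {
    assert_eq!(first(&[7, 8]), 7);
    println!("ok");
}
```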

.count()
Comment on lines +383 to +393
Copilot AI Mar 3, 2026

There are extensive existing unit tests in this module, but the new tie-detection logic in count_equal_and_best_appraisal_outputs isn’t covered. Adding tests for cases like: empty/singleton slices, equal metric but fallback differentiates, and equal metric + fallback tie would help prevent regressions in the logging behavior.

Collaborator

Some tests would be nice. Copilot would probably do an ok job!
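The tie-counting pattern under discussion can be shown in isolation. This is a simplified sketch on plain `f64` metrics (not the crate's `AppraisalOutput` type, and without the asset-fallback comparison): given a best-first sorted slice, `take_while` counts how many elements after the first tie with it, stopping at the first difference.

```rust
// Simplified sketch of the tie-counting logic: counts elements tied with the
// best (index 0) in an already-sorted slice, excluding the best itself.
fn count_ties_with_best(metrics: &[f64]) -> usize {
    if metrics.is_empty() {
        return 0; // avoid indexing into an empty slice
    }
    metrics[1..]
        .iter()
        // Stop at the first element that differs from the best.
        .take_while(|&&m| m == metrics[0])
        .count()
}

fn main() {
    assert_eq!(count_ties_with_best(&[]), 0);
    assert_eq!(count_ties_with_best(&[5.0]), 0);
    assert_eq!(count_ties_with_best(&[5.0, 5.0, 9.0]), 1);
    // Equality does not resume after a gap: only the leading run counts.
    assert_eq!(count_ties_with_best(&[5.0, 5.0, 9.0, 5.0]), 1);
    assert_eq!(count_ties_with_best(&[5.0, 5.0, 5.0]), 2);
    println!("ok");
}
```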

}

#[cfg(test)]
mod tests {
use super::*;
@@ -943,4 +958,137 @@ mod tests {
// The invalid output should have been filtered out
assert_eq!(outputs.len(), 0);
}

/// Tests for counting the number of equal metrics, using identical assets so that only metric
/// values affect the count.
#[rstest]
#[case(vec![5.0], 0, "single_element")]
#[case(vec![5.0, 5.0, 5.0], 2, "all_equal_returns_len_minus_one")]
#[case(vec![1.0, 2.0, 3.0], 0, "none_equal_to_best")]
#[case(vec![5.0, 5.0, 9.0], 1, "partial_equality_stops_at_first_difference")]
#[case(vec![5.0, 5.0, 9.0, 5.0], 1, "equality_does_not_resume_after_gap")]
fn count_equal_best_lcox_metric(
asset: Asset,
#[case] metric_values: Vec<f64>,
#[case] expected_count: usize,
#[case] description: &str,
) {
let metrics: Vec<Box<dyn MetricTrait>> = metric_values
.into_iter()
.map(|v| Box::new(LCOXMetric::new(MoneyPerActivity(v))) as Box<dyn MetricTrait>)
.collect();

let outputs =
appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);

Comment on lines +981 to +983
Copilot AI Mar 4, 2026

The new tests call count_equal_and_best_appraisal_outputs on outputs that haven’t been sorted with sort_appraisal_outputs_by_investment_priority, but the helper’s docstring explicitly requires a sorted slice. This makes the tests depend on the construction order rather than the intended contract; consider sorting outputs in these tests before asserting (same applies to the other new count_equal_best_* tests below).

Suggested change:
-        let outputs =
-            appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);
+        let mut outputs =
+            appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);
+        sort_appraisal_outputs_by_investment_priority(&mut outputs);
assert_eq!(
count_equal_and_best_appraisal_outputs(&outputs),
expected_count,
"Failed for case: {description}"
);
}

/// Empty slice count should return 0.
#[test]
fn count_equal_best_empty_slice_returns_zero() {
let outputs: Vec<AppraisalOutput> = vec![];
assert_eq!(count_equal_and_best_appraisal_outputs(&outputs), 0);
}

/// Equal metrics but differing asset fallback (commissioned vs. candidate) →
/// outputs are distinguishable, so count should be 0.
#[rstest]
fn count_equal_best_equal_metric_different_fallback_returns_zero(
process: Process,
region_id: RegionID,
agent_id: AgentID,
) {
let process_rc = Rc::new(process);
let capacity = Capacity(10.0);

let commissioned = Asset::new_commissioned(
agent_id.clone(),
process_rc.clone(),
region_id.clone(),
capacity,
2020,
)
.unwrap();
let candidate =
Asset::new_candidate(process_rc.clone(), region_id.clone(), capacity, 2020).unwrap();

let metric_value = MoneyPerActivity(5.0);
let outputs = appraisal_outputs(
vec![commissioned, candidate],
vec![
Box::new(LCOXMetric::new(metric_value)),
Box::new(LCOXMetric::new(metric_value)),
],
);

assert_eq!(count_equal_and_best_appraisal_outputs(&outputs), 0);
}

/// Equal metrics and equal asset fallback (same commissioned status and commission year) →
/// the second element is indistinguishable, so count should be 1.
#[rstest]
fn count_equal_best_equal_metric_and_equal_fallback_returns_one(
process: Process,
region_id: RegionID,
agent_id: AgentID,
) {
let process_rc = Rc::new(process);
let capacity = Capacity(10.0);
let year = 2020;

let asset1 = Asset::new_commissioned(
agent_id.clone(),
process_rc.clone(),
region_id.clone(),
capacity,
year,
)
.unwrap();
let asset2 = Asset::new_commissioned(
agent_id.clone(),
process_rc.clone(),
region_id.clone(),
capacity,
year,
)
.unwrap();

let metric_value = MoneyPerActivity(5.0);
let outputs = appraisal_outputs(
vec![asset1, asset2],
vec![
Box::new(LCOXMetric::new(metric_value)),
Box::new(LCOXMetric::new(metric_value)),
],
);

assert_eq!(count_equal_and_best_appraisal_outputs(&outputs), 1);
}

/// Equal NPV metrics and identical assets → second element should be counted.
#[rstest]
fn count_equal_best_equal_npv_metrics(asset: Asset) {
let make_npv = |surplus: f64, fixed_cost: f64| {
Box::new(NPVMetric::new(ProfitabilityIndex {
total_annualised_surplus: Money(surplus),
annualised_fixed_cost: Money(fixed_cost),
})) as Box<dyn MetricTrait>
};

let metrics = vec![
make_npv(200.0, 100.0),
make_npv(200.0, 100.0), // Equal to best
make_npv(100.0, 100.0), // Worse
];

let outputs =
appraisal_outputs_with_investment_priority_invariant_to_assets(metrics, &asset);

assert_eq!(count_equal_and_best_appraisal_outputs(&outputs), 1);
}
}