Strong branching fixings + dual cutoff #922
Conversation
/ok to test f95031d
📝 Walkthrough
Adds a pruning helper to remove fixed fractional variables and propagates upper bound information into the strong-branching and reduced-cost strengthening flows. Tightening now updates bounds, prunes fractionals, and triggers advanced-basis re-solves or cutoff handling when appropriate.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 2488-2497: The bounds_strengthening call mutates original_lp_
without synchronization; wrap the call to bounds_strengthening and any immediate
subsequent mutations/use of original_lp_ (e.g., the bounds vectors and the
following prune_fixed_fractional_variables call) in the mutex_original_lp_ lock
to prevent races with external set_new_solution() — acquire mutex_original_lp_
before calling bounds_strengthening(...) and release it after the related
updates to original_lp_ complete.
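The locking pattern the comment asks for can be sketched as follows. This is a minimal, self-contained illustration, not the actual cuOpt code: `lp_t`, `mutex_original_lp`, and `tighten_bounds_locked` are hypothetical stand-ins for `original_lp_`, `mutex_original_lp_`, and the `bounds_strengthening` call site.

```cpp
#include <algorithm>
#include <cassert>
#include <mutex>
#include <vector>

// Hypothetical stand-in for original_lp_'s bound vectors.
struct lp_t {
  std::vector<double> lower, upper;
};

std::mutex mutex_original_lp;  // guards against a concurrent set_new_solution()

// Tighten bounds under the lock so an external thread can never observe a
// half-updated LP; the pruning of fixed fractionals would also run here,
// still under the same lock.
void tighten_bounds_locked(lp_t& lp, int col, double new_lower, double new_upper)
{
  std::lock_guard<std::mutex> guard(mutex_original_lp);
  lp.lower[col] = std::max(lp.lower[col], new_lower);
  lp.upper[col] = std::min(lp.upper[col], new_upper);
  // ...prune_fixed_fractional_variables(...) would follow here.
}
```

The key point is that the bound updates and the subsequent pruning form one critical section, matching the review's suggestion to hold `mutex_original_lp_` across both.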
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 85-87: The code currently treats dual::status_t::ITERATION_LIMIT
as a proven bound and calls compute_objective(child_problem, solution.x); remove
ITERATION_LIMIT from that condition so obj is computed only for proven statuses
(dual::status_t::OPTIMAL and dual::status_t::CUTOFF) and ensure any logic that
consumes obj (root fixings / cutoff deductions) is not executed when status ==
dual::status_t::ITERATION_LIMIT; update the condition around compute_objective
in pseudo_costs.cpp (the block handling child_problem, solution.x and status) to
skip objective materialization and subsequent bound/fathom logic for
ITERATION_LIMIT.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/branch_and_bound/pseudo_costs.cpp
- cpp/src/branch_and_bound/pseudo_costs.hpp
- cpp/src/mip_heuristics/solver.cu
```cpp
} else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT ||
           status == dual::status_t::CUTOFF) {
  obj = compute_objective(child_problem, solution.x);
```
Do not use ITERATION_LIMIT strong-branch results as proof bounds.
Line 85 currently materializes obj for ITERATION_LIMIT, and those deltas are later used for root fixings/cutoff deductions. An iteration-limited solve is not a proven LP bound, so this can incorrectly tighten bounds or prematurely fathom.
🔧 Suggested safe handling

```diff
- } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::ITERATION_LIMIT ||
-            status == dual::status_t::CUTOFF) {
+ } else if (status == dual::status_t::OPTIMAL || status == dual::status_t::CUTOFF) {
    obj = compute_objective(child_problem, solution.x);
+ } else if (status == dual::status_t::ITERATION_LIMIT) {
+   // Not a proven bound; keep it out of fixing logic.
+   obj = std::numeric_limits<f_t>::quiet_NaN();
  } else {
```

As per coding guidelines, "Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results."
Dual simplex is dual-safe, the bound is always valid even if the termination is early. It will just be looser than the true bound
@aliceb-nv, thank you for the clarification! You're absolutely right—I missed that crucial property of the dual simplex method. Since dual feasibility is maintained throughout the algorithm, an early termination due to ITERATION_LIMIT still provides a valid (though potentially looser) lower bound on the optimal objective. This makes it safe to use for bound tightening and fixing logic in the strong branching context.
I appreciate you explaining this domain-specific behavior. The code is correct as written.
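The dual-safety property being discussed follows from weak duality, and can be illustrated on a toy LP that has nothing to do with the cuOpt API. Consider min 3x1 + 2x2 subject to x1 + x2 ≥ 4, x ≥ 0, whose dual is max 4y subject to y ≤ 3, y ≤ 2, y ≥ 0: any dual-feasible y gives a valid lower bound 4y on the primal optimum (which is 8), even if the dual solve stops before reaching the dual optimum y = 2. The function names below are illustrative only.

```cpp
#include <cassert>

// Toy LP: min 3*x1 + 2*x2  s.t.  x1 + x2 >= 4, x >= 0.  Primal optimum = 8.
// Dual:   max 4*y          s.t.  y <= 3, y <= 2, y >= 0.

// Dual feasibility region for the single dual variable y.
bool dual_feasible(double y) { return y >= 0.0 && y <= 2.0; }

// The dual objective at y: by weak duality this is a valid lower bound on
// the primal optimum for every dual-feasible y, including an early iterate.
double dual_bound(double y) { return 4.0 * y; }
```

An iteration-limited dual simplex run stops at some dual-feasible iterate, e.g. y = 1.5, whose bound 6 is looser than the optimal bound 8 but still valid, which is exactly why ITERATION_LIMIT results remain safe for fixing and cutoff logic.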
✏️ Learnings added
Learnt from: aliceb-nv
Repo: NVIDIA/cuopt PR: 922
File: cpp/src/branch_and_bound/pseudo_costs.cpp:85-87
Timestamp: 2026-03-03T15:03:14.218Z
Learning: In cuOpt's dual simplex solver (cpp/src/dual_simplex/ and cpp/src/branch_and_bound/ files), the dual simplex method is "dual-safe": it maintains dual feasibility throughout execution. Therefore, when the solver terminates early with status ITERATION_LIMIT, the current objective value is still a valid lower bound (for minimization), just looser than the optimal bound. This makes ITERATION_LIMIT results safe to use in strong branching for bound tightening, fixings, and cutoff logic.
/ok to test c84928b
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 2513-2516: The log prints "from propagation" as num_fixed -
num_tightened which can be negative because num_fixed counts fixed fractionals
and num_tightened counts tighten operations; fix by computing an explicit
non-negative propagated count before logging (e.g., int num_from_propagation =
num_fixed - num_tightened; if (num_from_propagation < 0) num_from_propagation =
0) and use that variable in the settings_.log.printf call (reference
settings_.log.printf, num_fixed, num_tightened) so the log never shows a
negative "from propagation" value.
- Around line 2526-2556: The current handling of dual::status_t lp_status (from
dual_phase2_with_advanced_basis) treats any non-OPTIMAL/TIME_LIMIT result as
NUMERICAL; change this to explicitly handle WORK_LIMIT and ITERATION_LIMIT (and
any other non-fatal statuses) instead of mapping them to
mip_status_t::NUMERICAL. Locate the lp_status checks around the root solve
(variable lp_status, compute_objective, root_relax_soln_) and the later re-solve
site, and: 1) add explicit branches for dual::status_t::WORK_LIMIT and
dual::status_t::ITERATION_LIMIT that propagate a corresponding solver_status_
(or return a distinct mip_status_t like WORK_LIMIT/ITERATION_LIMIT) OR use the
current objective (root_objective_) as a valid bound for branching/cutoff logic
before continuing; 2) only treat truly numerical failures as
mip_status_t::NUMERICAL. Ensure set_final_solution/set_solution_at_root behavior
is updated to match the new status paths.
```cpp
settings_.log.printf(
  "Strong branching bounds tightening: %d variables fixed (%d from propagation)\n",
  num_fixed,
  num_fixed - num_tightened);
```
num_fixed - num_tightened can go negative in the log output.
Line 2516 mixes different counters (num_fixed = fixed fractionals; num_tightened = bound tighten operations), so the “from propagation” number can become negative and misleading.
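The fix the review describes amounts to a small clamped helper. The function name below is hypothetical, not part of the cuOpt codebase:

```cpp
#include <cassert>

// Hypothetical helper for the log fix suggested above: num_fixed counts fixed
// fractionals while num_tightened counts tighten operations, so their
// difference can dip below zero; clamp it before logging.
int propagated_fix_count(int num_fixed, int num_tightened)
{
  int from_propagation = num_fixed - num_tightened;
  return from_propagation < 0 ? 0 : from_propagation;
}
```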
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 2513 - 2516, The
log prints "from propagation" as num_fixed - num_tightened which can be negative
because num_fixed counts fixed fractionals and num_tightened counts tighten
operations; fix by computing an explicit non-negative propagated count before
logging (e.g., int num_from_propagation = num_fixed - num_tightened; if
(num_from_propagation < 0) num_from_propagation = 0) and use that variable in
the settings_.log.printf call (reference settings_.log.printf, num_fixed,
num_tightened) so the log never shows a negative "from propagation" value.
```cpp
dual::status_t lp_status = dual_phase2_with_advanced_basis(2,
                                                           0,
                                                           initialize_basis,
                                                           exploration_stats_.start_time,
                                                           original_lp_,
                                                           lp_settings,
                                                           root_vstatus_,
                                                           basis_update,
                                                           basic_list,
                                                           nonbasic_list,
                                                           root_relax_soln_,
                                                           iter,
                                                           edge_norms_);
exploration_stats_.total_lp_iters += iter;
root_objective_ = compute_objective(original_lp_, root_relax_soln_.x);
if (lp_status == dual::status_t::OPTIMAL) {
  fractional.clear();
  num_fractional =
    fractional_variables(settings_, root_relax_soln_.x, var_types_, fractional);
  if (num_fractional == 0) {
    set_solution_at_root(solution, cut_info);
    return mip_status_t::OPTIMAL;
  }
} else if (lp_status == dual::status_t::TIME_LIMIT) {
  solver_status_ = mip_status_t::TIME_LIMIT;
  set_final_solution(solution, root_objective_);
  return solver_status_;
} else {
  settings_.log.printf("LP re-solve after SB tightening returned status %d\n", lp_status);
  return mip_status_t::NUMERICAL;
}
```
Don’t classify non-numerical LP statuses as NUMERICAL in re-solve paths.
At Line 2554-2556 and Line 2617-2619, any status other than OPTIMAL/TIME_LIMIT is treated as numerical. WORK_LIMIT and ITERATION_LIMIT are valid outcomes and should be propagated explicitly (or handled with a fallback solve), not reported as numerical failure.
Suggested status handling adjustment

```diff
- } else {
-   settings_.log.printf("LP re-solve after SB tightening returned status %d\n", lp_status);
-   return mip_status_t::NUMERICAL;
- }
+ } else if (lp_status == dual::status_t::WORK_LIMIT) {
+   solver_status_ = mip_status_t::WORK_LIMIT;
+   set_final_solution(solution, root_objective_);
+   return solver_status_;
+ } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
+   // Preserve non-numerical termination semantics (or run a fallback full solve).
+   solver_status_ = mip_status_t::WORK_LIMIT;
+   set_final_solution(solution, root_objective_);
+   return solver_status_;
+ } else {
+   settings_.log.printf("LP re-solve after SB tightening returned status %d\n", lp_status);
+   return mip_status_t::NUMERICAL;
+ }
```

```diff
- } else {
-   settings_.log.printf("LP re-solve after RC tightening returned status %d\n", lp_status);
-   return mip_status_t::NUMERICAL;
- }
+ } else if (lp_status == dual::status_t::WORK_LIMIT) {
+   solver_status_ = mip_status_t::WORK_LIMIT;
+   set_final_solution(solution, root_objective_);
+   return solver_status_;
+ } else if (lp_status == dual::status_t::ITERATION_LIMIT) {
+   solver_status_ = mip_status_t::WORK_LIMIT;
+   set_final_solution(solution, root_objective_);
+   return solver_status_;
+ } else {
+   settings_.log.printf("LP re-solve after RC tightening returned status %d\n", lp_status);
+   return mip_status_t::NUMERICAL;
+ }
```

Based on learnings: "In cuOpt, the dual simplex method is dual-feasible throughout execution… when status is ITERATION_LIMIT, the current objective is still a valid lower bound … safe to use in strong branching for bound tightening, fixings, and cutoff logic."
Also applies to: 2589-2619
This PR introduces fixings performed during strong branching at the root when infeasible branches are detected.
The existing code always performs full strong branching on all fractionals at the root. If one branch of a fractional variable is detected to be infeasible, we can safely conclude, for free, that the variable can be fixed to the other branch.
We also incorporate the incumbent value into the strong-branching cutoffs. This may enable tighter fixings as well, if we can prove that certain branches cannot produce incumbents better than the cutoff.
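The deduction described above can be sketched in a few lines. This is an illustrative simplification with hypothetical names (`branch_result`, `can_prune`, `fix_up_lower_bound`), not the PR's actual implementation:

```cpp
#include <cassert>
#include <cmath>

// Outcome of strong-branching one child LP. Since dual simplex is dual-safe,
// dual_bound is a valid lower bound even on early termination.
struct branch_result {
  bool infeasible;
  double dual_bound;
};

// A child can be discarded if it is infeasible, or if its valid lower bound
// already meets the incumbent cutoff (no better incumbent can exist there).
bool can_prune(const branch_result& r, double cutoff)
{
  return r.infeasible || r.dual_bound >= cutoff;
}

// If the down branch (x <= floor(value)) is pruned, the variable's lower
// bound can be fixed to the up branch for free.
double fix_up_lower_bound(double frac_value) { return std::ceil(frac_value); }
```

For example, with an incumbent cutoff of 10, a down child whose dual bound is 12 is pruned, so a variable with LP value 3.4 gets its lower bound raised to 4.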
Reduced cost strengthening is also now turned on by default in the `cuopt_cli` settings.
This change improves the average dual bound by 0.5%, and tips `qap10` to optimality within 10 minutes on my setup.