Refactor OETC as a Solver subclass #683

@FabianHofmann

Description

Follow-up to #682. That PR introduced the stateful Solver interface (Solver.from_name(...).solve() → Result, model.apply_result(...)) but left OETC on the old remote=OetcHandler(...) branch in Model.solve. This issue tracks folding OETC into the new shape.

Motivation

  • Model.solve still has a special-case for isinstance(remote, OetcHandler) (linopy/model.py:1626–1644) that bypasses the new Solver pipeline.
  • OETC users can't say m.solve("oetc", ...); they have to construct an OetcHandler and pass it via remote=.
  • The returned solution is patched in field-by-field rather than going through apply_result / label-indexed Solution.
  • Aligning OETC with the Solver interface sets up the async-job seam coroa flagged in #682 (Gurobi batch, OETC).

Proposed design (TBD)

Add class Oetc(Solver[OetcSettings]) in linopy/solvers.py. Treat netcdf-over-GCP as io_api="direct" — the "native model" is the linopy Model shipped as netcdf; don't introduce a new IO_APIS entry.

Dataclass fields (on top of inherited model/io_api/options):

  • settings: OetcSettings | None = None (resolve via OetcSettings.from_env() if absent)
  • Runtime: _handler: OetcHandler | None, _job_uuid: str | None, _solved_model: Model | None
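
The field layout above can be sketched as a plain dataclass. Everything here is a stand-in: the real Solver base class, Model, and OetcSettings live in linopy, and the env-var name used in from_env is purely illustrative.

```python
from dataclasses import dataclass, field
import os


@dataclass
class OetcSettings:  # stand-in for linopy's OetcSettings
    endpoint: str

    @classmethod
    def from_env(cls) -> "OetcSettings":
        # "OETC_ENDPOINT" is a hypothetical variable name, for illustration only
        return cls(endpoint=os.environ.get("OETC_ENDPOINT", ""))


@dataclass
class Oetc:  # would subclass Solver[OetcSettings] in linopy
    settings: "OetcSettings | None" = None
    # runtime state, not meant to be passed by callers
    _handler: object = field(default=None, repr=False)
    _job_uuid: "str | None" = field(default=None, repr=False)
    _solved_model: object = field(default=None, repr=False)

    def __post_init__(self) -> None:
        # resolve settings from the environment when absent, as proposed
        if self.settings is None:
            self.settings = OetcSettings.from_env()
```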

Lifecycle:

  • _build_direct(...) — instantiate OetcHandler(self.settings), serialize self.model to a temp netcdf, upload to GCP, cache _vlabels/_clabels via _cache_model_labels(self.model). Do not submit the job here.
  • _run_direct(...) — submit, poll, download, read_netcdf into self._solved_model, assemble Solution/Status/SolverReport, return via self._make_result(...).
  • _run_file — not implemented.
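
A minimal skeleton of the build/run split, with the netcdf round-trip and GCP transport stubbed out. The method names follow the proposal; the model is a plain dict standing in for a linopy Model, and the job uuid is a fixed placeholder.

```python
class OetcLifecycle:  # illustrative skeleton, not linopy's actual class
    def __init__(self, model):
        self.model = model
        self._vlabels = None
        self._uploaded = False
        self._job_uuid = None
        self._solved_model = None

    def _build_direct(self):
        # serialize + upload the model and cache labels; no job submission here
        self._vlabels = {name: labels for name, labels in self.model.items()}
        self._uploaded = True

    def _run_direct(self):
        # submit, poll, download, assemble -- all stubbed
        assert self._uploaded, "_build_direct must run first"
        self._job_uuid = "job-0001"            # stand-in for orchestrator submit
        self._solved_model = dict(self.model)  # stand-in for poll + download
        return self._solved_model
```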

Solution assembly. The round-tripped model shares labels with the source. Build dense label-indexed arrays of size _n_vars / _n_cons by iterating the local self.model.variables / constraints, reading the labels per name, and assigning primal[labels.ravel()] = solved.variables[name].solution.values.ravel() (likewise for duals). The objective comes from solved.objective.value. Missing entries stay NaN.
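
The dense label-indexed fill can be sketched with NumPy directly. Plain dicts of arrays stand in for linopy's variable containers, and the label values are made up for the example.

```python
import numpy as np

# Local model: per-variable label arrays, as the source model would carry them.
local_labels = {
    "x": np.array([[0, 1], [2, 3]]),  # labels 0..3
    "y": np.array([4, 5]),            # labels 4..5
}
n_vars = 7  # pretend _n_vars == 7; label 6 has no solution and stays NaN

# Round-tripped model: per-variable solution values with matching shapes.
solved_solution = {
    "x": np.array([[1.0, 2.0], [3.0, 4.0]]),
    "y": np.array([5.0, 6.0]),
}

primal = np.full(n_vars, np.nan)
for name, labels in local_labels.items():
    # flatten both sides so labels index straight into the dense array
    primal[labels.ravel()] = solved_solution[name].ravel()
```

The same loop applied to constraint labels and dual values would build the dual array.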

SolverReport. runtime = job_result.duration_in_seconds, solver_runtime = job_result.solving_duration_in_seconds. The orchestrator doesn't expose mip_gap / dual_bound / iterations — leave None.
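
The field mapping is small enough to spell out. Report and JobResult below are stand-ins with made-up numbers; only the two duration fields mirror the orchestrator attributes named above.

```python
from dataclasses import dataclass


@dataclass
class Report:  # stand-in for linopy's SolverReport
    runtime: "float | None" = None
    solver_runtime: "float | None" = None
    mip_gap: "float | None" = None
    dual_bound: "float | None" = None
    iterations: "int | None" = None


class JobResult:  # stand-in for the orchestrator's job result object
    duration_in_seconds = 12.5
    solving_duration_in_seconds = 9.8


job_result = JobResult()
report = Report(
    runtime=job_result.duration_in_seconds,
    solver_runtime=job_result.solving_duration_in_seconds,
    # mip_gap / dual_bound / iterations: not exposed, left as None
)
```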

Registration. Add SolverName.OETC = "oetc"; append "oetc" to available_solvers when _oetc_deps_available.
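
The registration step could look like this; the enum is trimmed to two existing members for illustration, and _oetc_deps_available is hard-coded where linopy would probe its optional dependencies.

```python
from enum import Enum


class SolverName(Enum):  # trimmed stand-in for linopy's SolverName enum
    GUROBI = "gurobi"
    HIGHS = "highs"
    OETC = "oetc"  # the proposed new entry


available_solvers = ["gurobi", "highs"]
_oetc_deps_available = True  # would be a real import probe in linopy

if _oetc_deps_available:
    available_solvers.append("oetc")
```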

Public API:

m.solve("oetc", settings=oetc_settings, options={"solver": "gurobi", "Method": 2})

# or explicit:
solver = Oetc.from_model(m, settings=oetc_settings, options={...})
result = solver.solve()
m.apply_result(result)

Deprecation. In Model.solve, when remote is an OetcHandler, emit a DeprecationWarning and route through Solver.from_name("oetc", self, settings=remote.settings, options=solver_options). OetcHandler stays for one release.
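
A sketch of the deprecation branch, with the handler and the re-routing stubbed. The real code would live in Model.solve and call Solver.from_name("oetc", ...); here a free function returning a tuple stands in so the warning path is visible.

```python
import warnings


class OetcHandler:  # stand-in for linopy's OetcHandler
    def __init__(self, settings):
        self.settings = settings


def solve(remote=None, **solver_options):
    if isinstance(remote, OetcHandler):
        warnings.warn(
            "remote=OetcHandler(...) is deprecated; use m.solve('oetc', ...)",
            DeprecationWarning,
            stacklevel=2,
        )
        # would route through Solver.from_name("oetc", self,
        # settings=remote.settings, options=solver_options)
        return ("oetc", remote.settings, solver_options)
    return ("local", None, solver_options)
```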

Open design question

Async-job seam. coroa noted in #682 that the interface should be extensible to async solving (Gurobi batch, OETC) — return early with a job handle, retrieve later. Cleanest hook: split _run_direct into _submit() (sets self._job_uuid, returns) and _collect() (polls, downloads, builds Result). _run_direct calls both serially today; a future solve(blocking=False) returns after _submit(), and a later solver.solve() / solver.collect() finishes. Should this PR do the split, or stay synchronous and let the async PR carve it out? Recommendation: do the split now (two extra methods, locks in a usable seam).
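
The _submit/_collect split described above can be sketched as follows; the orchestrator calls are stubbed with a fixed job uuid, and blocking= is a hypothetical parameter shape, not a settled API.

```python
class OetcRunner:  # illustrative split of _run_direct into submit/collect
    def __init__(self):
        self._job_uuid = None

    def _submit(self):
        # submit the uploaded model; record the job handle and return
        self._job_uuid = "job-0001"  # stand-in for the orchestrator response
        return self._job_uuid

    def _collect(self):
        # poll, download, and build the Result (all stubbed here)
        assert self._job_uuid is not None, "nothing submitted yet"
        return {"job": self._job_uuid, "status": "ok"}

    def _run_direct(self, blocking=True):
        self._submit()
        if not blocking:
            return self._job_uuid  # early return with the job handle
        return self._collect()     # today's synchronous path
```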
