PEPit.jl

Hello world!

PEPit.jl is a native Julia implementation of the Performance Estimation Programming (PEP) methodology [1,2,3] and a Julia counterpart of the Python package PEPit [4] for worst-case analysis of first-order optimization algorithms. The core idea of PEP is to model the analysis and design of first-order optimization algorithms themselves as higher-level optimization problems, called performance estimation problems (PEPs), which can be cast as semidefinite programs (SDPs). Solving these SDPs numerically yields tight worst-case bounds for known algorithms and, under suitable conditions, allows discovering new ones.
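Schematically, for n fixed steps of gradient descent with step size gamma on a function class F (the notation here is illustrative, not PEPit.jl syntax), a PEP reads

\[
\begin{aligned}
\max_{f \in \mathcal{F},\; x_0} \quad & f(x_n) - f(x_\star) \\
\text{subject to} \quad & \|x_0 - x_\star\|^2 \le 1, \\
& x_{k+1} = x_k - \gamma \nabla f(x_k), \qquad k = 0, \dots, n - 1,
\end{aligned}
\]

where x_star is a minimizer of f. Interpolation conditions for the class F [2] turn this infinite-dimensional maximization over f into a finite-dimensional SDP in the Gram matrix of the iterates and gradients, which is the problem the package builds and solves.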

The intent of this Julia package is to be functionally equivalent to existing packages such as PESTO [5] and PEPit, while providing a clean, Julia-native API and broader support for commercial and open-source solvers through the JuMP ecosystem [6].

Install and test

You can install the package by typing the following in the Julia REPL:

] add PEPit

To install directly from the GitHub repository, type the following in the Julia REPL:

] add https://github.com/PerformanceEstimation/PEPit.jl

Then, in Julia, run the package tests to check that everything works as intended:

] test PEPit

Minimal example

Below is a condensed example in the style of the scripts in the examples/ folder, where you can find many more. It computes a worst-case bound for the accelerated gradient method on a smooth strongly convex minimization problem (see also the Step-by-Step tutorial for more details):

using PEPit, OrderedCollections

function wc_accelerated_gradient_strongly_convex(mu, L, n; verbose=false)
    # Instantiate the PEP and declare an L-smooth, mu-strongly convex function
    problem = PEP()
    param = OrderedDict("mu" => mu, "L" => L)
    func = declare_function!(problem, SmoothStronglyConvexFunction, param; reuse_gradient=true)

    # Optimal point, optimal value, and the algorithm's starting point
    xs = stationary_point!(func)
    fs = value!(func, xs)
    x0 = set_initial_point!(problem)

    # Normalize the initial potential f(x_0) - f_* + mu/2 * ||x_0 - x_*||^2
    set_initial_condition!(problem, value!(func, x0) - fs + mu / 2 * (x0 - xs)^2 <= 1)

    # Run n iterations of the accelerated gradient method
    kappa = mu / L
    x_new, y = x0, x0
    for i in 1:n
        x_old = x_new
        x_new = y - 1 / L * gradient!(func, y)                               # gradient step
        y = x_new + (1 - sqrt(kappa)) / (1 + sqrt(kappa)) * (x_new - x_old)  # momentum step
    end

    # Worst-case performance metric: the final objective gap f(x_n) - f_*
    set_performance_metric!(problem, value!(func, x_new) - fs)

    # Solve the underlying SDP and compare with the known theoretical rate
    PEPit_tau = solve!(problem, verbose=verbose)
    theoretical_tau = (1 - sqrt(kappa))^n
    mu == 0 && @warn "Momentum is tuned for strongly convex functions!"

    if verbose
        @info "🐱 Example file: worst-case performance of the accelerated gradient method" 
        @info "💻  PEPit guarantee: f(x_n)-f_*  <= $(round(PEPit_tau, digits=6)) (f(x_0) - f(x_*) + mu/2*||x_0 - x_*||^2)"
        @info "📝 Theoretical guarantee: f(x_n)-f_*  <= $(round(theoretical_tau, digits=6)) (f(x_0) - f(x_*) + mu/2*||x_0 - x_*||^2)"
    end

    return PEPit_tau, theoretical_tau
end

PEPit_val, theoretical_val = wc_accelerated_gradient_strongly_convex(0.1, 1.0, 2, verbose=true)
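With these parameters (mu = 0.1, L = 1.0, n = 2), the theoretical rate evaluates to (1 - sqrt(0.1))^2 ≈ 0.4675; the value returned by solve! is the exact worst case, so it should be no larger than this bound.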

Key steps in the PEP workflow (see the compact sketch after this list):

  1. Create a PEP() instance.
  2. Declare function/operator classes with declare_function!.
  3. Create points and initial conditions.
  4. Run algorithmic steps to define iterates.
  5. Define a performance metric.
  6. Call solve! to get the worst-case bound.
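As a compact illustration of these six steps, here is a hedged skeleton for plain gradient descent on an L-smooth convex function. It reuses the API calls from the example above, but the SmoothConvexFunction class name and the exact declare_function! parameters are assumptions modeled on the corresponding Python PEPit classes:

using PEPit, OrderedCollections

function wc_gradient_descent(L, gamma, n)
    problem = PEP()                                                  # 1. PEP instance
    func = declare_function!(problem, SmoothConvexFunction,          # 2. function class
                             OrderedDict("L" => L))                  #    (class name assumed)
    xs = stationary_point!(func)                                     # 3. points and
    x0 = set_initial_point!(problem)                                 #    initial condition
    set_initial_condition!(problem, (x0 - xs)^2 <= 1)
    x = x0
    for _ in 1:n                                                     # 4. algorithmic steps
        x = x - gamma * gradient!(func, x)
    end
    set_performance_metric!(problem, value!(func, x) - value!(func, xs))  # 5. metric
    return solve!(problem)                                           # 6. worst-case bound
end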

Step-by-Step tutorial

The step-by-step tutorial below walks through the minimal example above, the accelerated gradient method for smooth strongly convex minimization, in more detail:

Tutorial 1: Accelerated gradient method for smooth strongly convex minimization

You can find other tutorials in the tutorials folder of the package.

Examples

We also have many examples in the examples folder of the package; please take a look.

Solvers

PEPit.jl builds the SDP internally using JuMP. Currently, we support Clarabel and Mosek, and we plan to add more solvers in the future. Note that Mosek requires a valid license (free for academic use), whereas Clarabel is open-source.
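For intuition, here is a hedged, self-contained JuMP/Clarabel toy model of the kind of SDP that PEPit.jl assembles internally (the actual PEP constraints are more involved; this 2x2 model is purely illustrative):

using JuMP, Clarabel, LinearAlgebra

model = Model(Clarabel.Optimizer)
@variable(model, X[1:2, 1:2], PSD)   # a PSD Gram-like matrix, as in a PEP
@constraint(model, tr(X) == 1)       # toy normalization constraint
@objective(model, Max, X[1, 2])      # maximize an off-diagonal entry
optimize!(model)
value.(X)                            # optimal X attains X[1, 2] = 1/2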

Repository layout

  • src/: core implementation
    • core/: points, expressions, constraints, PSD matrices, and PEP assembly
    • functions/: function class definitions (smooth, convex, etc.)
    • operators/: operator class definitions (monotone, nonexpansive, etc.)
    • primitive_steps/: algorithmic building blocks (gradient, proximal, line search)
  • examples/: runnable scripts for specific algorithms and settings
  • tutorials/: literate tutorials (Markdown, Jupyter, LaTeX/PDF)
  • test/: unit tests for core machinery

Running examples

Examples are standard Julia scripts. For instance, from the repository root you can run:

include("examples/unconstrained_convex_optimization/gradient_exact_line_search.jl")

Notes and scope

This is a research-oriented tool aimed at building and solving PEPs. The API is still evolving, but the structure is already suitable for reproducing standard worst-case analyses of first-order methods and monotone inclusion algorithms.

Contact

Please report any issues via the GitHub issue tracker. All types of issues are welcome, including bug reports, feature requests, implementation requests for specific research problems, and so on. Also, feel free to send an email 📧 to sd158@rice.edu or adrien.taylor@inria.fr if you want to say hi 🚀!

References

[1] Y. Drori, M. Teboulle (2014). Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming 145(1–2), 451–482.

[2] A. Taylor, J. Hendrickx, F. Glineur (2017). Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming 161(1–2), 307–345.

[3] A. Taylor, J. Hendrickx, F. Glineur (2017). Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization 27(3), 1283–1313.

[4] B. Goujaud, C. Moucer, F. Glineur, J.M. Hendrickx, A.B. Taylor, A. Dieuleveut (2024). PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python. Mathematical Programming Computation 16(3), 337–367.

[5] A. Taylor, J. Hendrickx, F. Glineur (2017). Performance Estimation Toolbox (PESTO): automated worst-case analysis of first-order optimization methods. In 56th IEEE Conference on Decision and Control (CDC).

[6] M. Lubin, O. Dowson, J.D. Garcia, J. Huchette, B. Legat, J.P. Vielma (2023). JuMP 1.0: Recent improvements to a modeling language for mathematical optimization. Mathematical Programming Computation 15(3), 581–589.
