For ridge regularization, you can simply use `SemRidge` as an additional loss function
(for example, a model with the loss functions `SemML` and `SemRidge` corresponds to ridge-regularized maximum likelihood estimation).

For lasso, elastic net and (far) beyond, you can use the [`ProximalOperators.jl`](https://github.com/JuliaFirstOrder/ProximalOperators.jl) package
and optimize the model with [`ProximalAlgorithms.jl`](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl),
which provides so-called *proximal optimization* algorithms.
These can handle, amongst other things, various forms of regularization.

```@setup reg
using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
```

```julia
Pkg.add("ProximalOperators")
using StructuralEquationModels, ProximalAlgorithms, ProximalOperators
```
## Proximal optimization

With the *ProximalAlgorithms* package loaded, it is possible to use the `:Proximal` optimization engine
in `SemOptimizer` to estimate regularized models.

```julia
SemOptimizer(;
    engine = :Proximal,
    algorithm = ProximalAlgorithms.PANOC(),
    operator_g,
    operator_h = nothing
)
```

The proximal operator (aka the regularization function) can be passed as `operator_g`; the available options are listed [here](https://juliafirstorder.github.io/ProximalOperators.jl/stable/functions/).
The available algorithms are listed [here](https://juliafirstorder.github.io/ProximalAlgorithms.jl/stable/guide/implemented_algorithms/).
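To build intuition for what these algorithms do, here is a minimal, self-contained Python sketch (an illustration of the underlying math, not the Julia API): proximal gradient descent alternates a gradient step on the smooth loss with the proximal operator of the penalty, which for the l1 norm is elementwise soft-thresholding. The toy problem below is made up for illustration.

```python
def soft_threshold(v, t):
    # prox of t*|.|: shrink v toward zero by t, clipping to zero inside [-t, t]
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prox_grad_step(x, grad, gamma, lam):
    # one step: x <- prox_{gamma*g}(x - gamma*grad_f(x)),
    # where lam holds per-parameter l1 weights (like the λ vector used below)
    return [soft_threshold(xi - gamma * gi, gamma * li)
            for xi, gi, li in zip(x, grad, lam)]

# toy problem: minimize 0.5*(x - 3)^2 + 1.0*|x|
x = [0.0]
for _ in range(100):
    grad = [x[0] - 3.0]              # gradient of the smooth part
    x = prox_grad_step(x, grad, 0.5, [1.0])
print(x)  # the l1 penalty shrinks the unpenalized minimum 3.0 down to 2.0
```

Algorithms like PANOC refine this basic iteration with line searches and quasi-Newton acceleration, but the prox step is the same building block.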
## First example - lasso
```@example reg
λ = zeros(31); λ[ind] .= 0.02

optimizer_lasso = SemOptimizer(
    engine = :Proximal,
    operator_g = NormL1(λ)
)
```
Let's fit the regularized model
```@example reg
fit_lasso = fit(optimizer_lasso, model)
```

and compare the solution to unregularized estimates:

## Second example - mixed l1 and l0 regularization
You can choose to penalize different parameters with different types of regularization functions.
Let's use the lasso again on the covariances, but additionally penalize the error variances of the observed items via l0 regularization.
The l0 penalty is defined as
```math
\lambda \mathrm{nnz}(\theta)
```
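The proximal operator of this penalty is hard thresholding: a parameter is kept only when keeping it is cheaper than paying the penalty for one more nonzero entry. This is a standard result, not something specific to this package; the Python sketch below uses made-up values for illustration.

```python
def hard_threshold(v, gamma, lam):
    # prox of the l0 penalty lam*nnz at a scalar v with step gamma:
    # keep v if v**2 / 2 > gamma * lam, otherwise set it to zero
    return v if v * v > 2.0 * gamma * lam else 0.0

def nnz(theta):
    # number of nonzero entries, as in the penalty above
    return sum(1 for t in theta if t != 0.0)

theta = [0.9, -0.05, 1.4, 0.1]
shrunk = [hard_threshold(t, gamma=1.0, lam=0.02) for t in theta]
print(shrunk, nnz(shrunk))  # small entries are zeroed, large ones kept exactly
```

Note that, unlike the lasso, surviving parameters are not shrunk at all, which is why l0 regularization is attractive for variance parameters.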
To define a sum of separable proximal operators (i.e. no parameter is penalized twice),
we can use [`SlicedSeparableSum`](https://juliafirstorder.github.io/ProximalOperators.jl/stable/calculus/#ProximalOperators.SlicedSeparableSum) from the `ProximalOperators` package:
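The idea behind such a sliced separable sum can be sketched in Python (a conceptual illustration, not the `SlicedSeparableSum` API; the thresholds and values are made up): different proximal operators act on disjoint index slices of the parameter vector, so each parameter is touched by exactly one penalty.

```python
def soft(v, gamma, lam=0.02):
    # l1 prox (soft thresholding)
    t = gamma * lam
    return v - t if v > t else (v + t if v < -t else 0.0)

def hard(v, gamma, lam=0.02):
    # l0 prox (hard thresholding)
    return v if v * v > 2.0 * gamma * lam else 0.0

def sliced_separable_prox(x, groups, gamma):
    # groups: (indices, prox) pairs over disjoint slices of x,
    # so no parameter is penalized twice
    y = list(x)
    for indices, prox in groups:
        for i in indices:
            y[i] = prox(y[i], gamma)
    return y

x = [0.5, -0.01, 0.3, 0.1]
# lasso on the first two parameters, l0 on the last two
y = sliced_separable_prox(x, [((0, 1), soft), ((2, 3), hard)], gamma=1.0)
print(y)
```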