Commit 0757815

JuMP Interface documentation update (#62)

* JuMP interface documentation
* MadNLPGPU added
* MadNLP version bump

Co-authored-by: Sungho Shin <[email protected]>

1 parent a17ddb6 commit 0757815

File tree

5 files changed: +23 −9 lines changed

Project.toml

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ oneAPI = "1"
 MathOptInterface = "1.19"
 Ipopt = "1.6"
 NLPModelsIpopt = "0.10"
-MadNLP = "0.7"
+MadNLP = "0.8"

 [extras]
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"

README.md

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-![Logo](full-logo.svg)
+![Logo](full-logo.svg)

 *An [algebraic modeling](https://en.wikipedia.org/wiki/Algebraic_modeling_language) and [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) tool in [Julia Language](https://julialang.org/), specialized for [SIMD](https://en.wikipedia.org/wiki/Single_instruction,_multiple_data) abstraction of [nonlinear programs](https://en.wikipedia.org/wiki/Nonlinear_programming).*

docs/Project.toml

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
 KernelAbstractions = "63c18a36-062a-441e-b654-da1e3ab1ce7c"
 Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
 MadNLP = "2621e9c9-9eb4-46b1-8089-e8c72242dfb6"
+MadNLPGPU = "d72a61cc-809d-412f-99be-fd81f4b8a598"
 MathOptInterface = "b8f27783-ece8-5eb3-8dc8-9495eed66fee"
 NLPModels = "a4795742-8479-5a88-8948-cc11e1c8c1a6"
 NLPModelsIpopt = "f4238b75-b362-5c4c-b852-0801c9a21d71"
docs/make.jl

Lines changed: 1 addition & 1 deletion

@@ -10,14 +10,14 @@ if !(@isdefined _PAGES)
     "Mathematical Abstraction" => "simd.md",
     "Tutorial" => [
         "guide.md",
-        "jump.md",
         "performance.md",
         "gpu.md",
         "develop.md",
         "quad.md",
         "distillation.md",
         "opf.md",
     ],
+    "JuMP Interface (experimental)" => "jump.md",
     "API Manual" => "core.md",
     "References" => "ref.md",
 ]

docs/src/jump.jl

Lines changed: 19 additions & 6 deletions

@@ -1,8 +1,9 @@
-# # JuMP Interface
+# # JuMP Interface (Experimental)

+# ## JuMP to an ExaModel
 # We have an experimental interface to JuMP models. A JuMP model can be directly converted to an `ExaModel`. It is as simple as this:

-using ExaModels, JuMP
+using ExaModels, JuMP, CUDA

 N = 10
 jm = Model()
@@ -16,11 +17,23 @@ jm = Model()
 )
 @objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))

-em = ExaModel(jm)
+em = ExaModel(jm; backend = CUDABackend())

 # Note that only scalar objectives/constraints created via the `@constraint` and `@objective` API are supported. The older `@NLconstraint` and `@NLobjective` syntax is not supported.
-# We can solve the model using any of the solvers supported by ExaModels. For example, we can use Ipopt:
+# We can solve the model using any of the solvers supported by ExaModels. For example, we can use MadNLP:

-using NLPModelsIpopt
+using MadNLP, MadNLPGPU

-result = ipopt(em)
+result = madnlp(em)
+
+# ## JuMP Optimizer
+# Alternatively, one can use the `Optimizer` interface provided by `ExaModels`. This feature can be used as follows.
+
+using ExaModels, JuMP, CUDA
+using MadNLP, MadNLPGPU
+
+set_optimizer(jm, () -> ExaModels.MadNLPOptimizer(CUDABackend()))
+optimize!(jm)
+
+# Again, only scalar objectives/constraints created via the `@constraint` and `@objective` API are supported. The older `@NLconstraint` and `@NLobjective` syntax is not supported.
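For reference, the post-change usage demonstrated in docs/src/jump.jl can be assembled into a single runnable sketch. The `@variable`/`@constraint` definitions fall outside the hunks shown above, so the model setup below is an assumption (a standard Rosenbrock-style formulation consistent with the objective shown); running it also assumes a CUDA-capable GPU with the MadNLPGPU extension installed.

    # Sketch assembled from the diff above; not the verbatim file contents.
    using ExaModels, JuMP, CUDA
    using MadNLP, MadNLPGPU

    N = 10
    jm = Model()

    # Assumed variable setup: the actual definition is outside the hunks
    # shown in this commit.
    @variable(jm, x[i = 1:N], start = mod(i, 2) == 1 ? -1.2 : 1.0)
    @objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))

    # Path 1: convert the JuMP model directly to an ExaModel on the GPU
    # backend and solve with MadNLP.
    em = ExaModel(jm; backend = CUDABackend())
    result = madnlp(em)

    # Path 2: use the Optimizer interface and solve through JuMP itself.
    set_optimizer(jm, () -> ExaModels.MadNLPOptimizer(CUDABackend()))
    optimize!(jm)

Both paths are subject to the restriction noted in the diff: only scalar objectives/constraints built with `@constraint`/`@objective` are supported, not the legacy `@NLconstraint`/`@NLobjective` macros.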

0 commit comments