SNOPT
SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It uses a sequential quadratic programming (SQP) algorithm with a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using the BFGS quasi-Newton update.
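For reference, the merit function used in SNOPT's line search has roughly the following form (a sketch following the SNOPT papers; elastic-mode details are omitted, so consult the SNOPT documentation for the exact variant):

```latex
\mathcal{M}_{\rho}(x, s, \pi) \;=\; f(x) \;-\; \pi^{T}\bigl(c(x) - s\bigr)
  \;+\; \tfrac{1}{2}\bigl(c(x) - s\bigr)^{T} D_{\rho} \bigl(c(x) - s\bigr)
```

where f is the objective, c(x) the nonlinear constraints, s the slack variables, π the Lagrange multiplier estimates, and D_ρ a diagonal matrix of penalty parameters.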
Installation

SNOPT is available for purchase here. Upon purchase, you should receive a zip file containing a folder called `src`. To use SNOPT with pyOptSparse, copy all files from `src` except `snopth.f` into `pyoptsparse/pySNOPT/source`.
From v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with previous versions of SNOPT, please check out release v1.2. The current version of SNOPT being tested is v7.7.5, although v7.7.1 is also expected to work.
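The copy step above can be scripted. The snippet below is a sketch that assumes the SNOPT zip was extracted to `./snopt7` and pyOptSparse was cloned to `./pyoptsparse` (both paths are hypothetical; adjust them to your layout):

```shell
SNOPT_SRC=snopt7/src                     # hypothetical: where the zip was extracted
DEST=pyoptsparse/pySNOPT/source          # destination inside the pyOptSparse tree
mkdir -p "$DEST"
if [ -d "$SNOPT_SRC" ]; then
  # copy every source file except snopth.f
  find "$SNOPT_SRC" -maxdepth 1 -type f ! -name snopth.f -exec cp {} "$DEST" \;
fi
```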
Options

Please refer to the SNOPT user manual for a complete listing of options and their default values. The following options are set in Python by the wrapper.
The SNOPT option `Proximal iterations limit` has its default value changed to 10000, in order to fully solve the proximal point problem to optimality.

The option `Save major iteration variables` is unique to the Python wrapper, and takes a list of values which can be saved at each iteration to the History file. Possible values are:

- `step`
- `merit`
- `feasibility`
- `optimality`
- `penalty`
- `Hessian`
- `slack`
- `lambda`
- `condZHZ`
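For illustration, wrapper options are collected in a plain dictionary before being handed to the optimizer. The sketch below only builds the dictionary; passing it would look like `SNOPT(options=options)`, which requires a compiled SNOPT and is therefore not shown executing here:

```python
# Sketch: an options dict as it would be passed to the SNOPT wrapper.
options = {
    "Proximal iterations limit": 10000,  # the wrapper default, shown explicitly
    "Save major iteration variables": ["step", "merit", "feasibility", "penalty"],
}
```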
The option `Start` corresponds to the value directly passed to the SNOPT kernel, and will be overwritten if another option, e.g. `Cold start`, is supplied.

The default value for `Total character workspace` is set internally to the minimum work array length of 500. The default values for `Total integer workspace` and `Total real workspace` depend on the number of design variables and constraints. They are computed based on recommendations in the SNOPT manual. If SNOPT determines that the default values for `Total character workspace`, `Total integer workspace`, or `Total real workspace` are too small, the Python wrapper will overwrite the defaults with estimates for the required workspace lengths from SNOPT and initialize the optimizer for a second time. SNOPT might still exit with 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are not overwritten.
| Option name | Type | Default value |
|---|---|---|
| `iPrint` | int | 18 |
| `iSumm` | int | 19 |
| `Print file` | str | SNOPT_print.out |
| `Summary file` | str | SNOPT_summary.out |
| `Problem Type` | str | ['Minimize', 'Maximize', 'Feasible point'] |
| `Start` | str | ['Cold', 'Warm'] |
| `Derivative level` | int | 3 |
| `Proximal iterations limit` | int | 10000 |
| `Total character workspace` | int | None |
| `Total integer workspace` | int | None |
| `Total real workspace` | int | None |
| `Save major iteration variables` | list | ['step', 'merit', 'feasibility', 'optimality', 'penalty'] |
Informs

| Code | Description |
|---|---|
| 0 | finished successfully |
| 1 | optimality conditions satisfied |
| 2 | feasible point found |
| 3 | requested accuracy could not be achieved |
| 4 | weak QP minimizer |
| 10 | the problem appears to be infeasible |
| 11 | infeasible linear constraints |
| 12 | infeasible linear equalities |
| 13 | nonlinear infeasibilities minimized |
| 14 | infeasibilities minimized |
| 15 | infeasible linear constraints in QP subproblem |
| 20 | the problem appears to be unbounded |
| 21 | unbounded objective |
| 22 | constraint violation limit reached |
| 30 | resource limit error |
| 31 | iteration limit reached |
| 32 | major iteration limit reached |
| 33 | the superbasics limit is too small |
| 40 | terminated after numerical difficulties |
| 41 | current point cannot be improved |
| 42 | singular basis |
| 43 | cannot satisfy the general constraints |
| 44 | ill-conditioned null-space basis |
| 50 | error in the user-supplied functions |
| 51 | incorrect objective derivatives |
| 52 | incorrect constraint derivatives |
| 53 | the QP Hessian is indefinite |
| 54 | incorrect second derivatives |
| 55 | incorrect derivatives |
| 56 | irregular or badly scaled problem functions |
| 60 | undefined user-supplied functions |
| 61 | undefined function at the first feasible point |
| 62 | undefined function at the initial point |
| 63 | unable to proceed into undefined region |
| 70 | user requested termination |
| 71 | terminated during function evaluation |
| 72 | terminated during constraint evaluation |
| 73 | terminated during objective evaluation |
| 74 | terminated from monitor routine |
| 80 | insufficient storage allocated |
| 81 | work arrays must have at least 500 elements |
| 82 | not enough character storage |
| 83 | not enough integer storage |
| 84 | not enough real storage |
| 90 | input arguments out of range |
| 91 | invalid input argument |
| 92 | basis file dimensions do not match this problem |
| 93 | the QP Hessian is indefinite |
| 100 | finished successfully |
| 101 | SPECS file read |
| 102 | Jacobian structure estimated |
| 103 | MPS file read |
| 104 | memory requirements estimated |
| 105 | user-supplied derivatives appear to be correct |
| 106 | no derivatives were checked |
| 107 | some SPECS keywords were not recognized |
| 108 | errors while processing MPS data |
| 109 | no MPS file specified |
| 110 | problem-size estimates too small |
| 111 | fatal error in the MPS file |
| 112 | errors while estimating Jacobian structure |
| 113 | cannot find Jacobian structure at given point |
| 114 | fatal errors while reading the SP |
| 115 | no SPECS file (iSpecs le 0 or iSpecs gt 99) |
| 116 | End-of-file while looking for a BEGIN |
| 117 | End-of-file while reading SPECS file |
| 118 | ENDRUN found before any valid SPECS |
| 140 | system error |
| 141 | wrong no of basic variables |
| 142 | error in basis package |
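After a run, pyOptSparse exposes the SNOPT exit code on the returned Solution object as `sol.optInform`, a dictionary with `value` and `text` keys. A minimal check might look like the following, using a hypothetical stand-in dictionary rather than a real solve:

```python
# Stand-in for sol.optInform from a real run (hypothetical values).
optInform = {"value": 1, "text": "optimality conditions satisfied"}

# Informs 1 and 2 indicate success; anything else warrants inspection.
if optInform["value"] not in (1, 2):
    raise RuntimeError(f"SNOPT did not converge: {optInform['text']}")
```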
API

class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args: Any, **kwargs: Any)

SNOPT Optimizer Class, inherited from the Optimizer abstract class.

__call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None)

This is the main routine used to solve the optimization problem.
Parameters

- optProb : Optimization or Solution class instance
  This is the complete description of the optimization problem to be solved by the optimizer.
- sens : str or Python function
  Specify the method to compute sensitivities. The default is None, which will use SNOPT's own finite differences, which are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute the derivatives with finite differences, use 'FD'. 'sens' may also be 'CS', which will cause pyOptSparse to compute the derivatives using the complex step method. Finally, 'sens' may be a Python function handle which is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables, this is the preferred method.
- sensStep : float
  Set the step size to use for design variables. Defaults to 1e-6 when sens is 'FD' and 1e-40j when sens is 'CS'.
- sensMode : str
  Use 'pgc' for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.
- storeHistory : str
  File name of the history file into which the history of this optimization will be stored.
- hotStart : str
  File name of the history file to "replay" for the optimization. The optimization problem used to generate the history file specified in 'hotStart' must be IDENTICAL to the currently supplied 'optProb'. By identical we mean EVERY SINGLE PARAMETER MUST BE IDENTICAL. As soon as the requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert back to normal evaluations.
- storeSens : bool
  Flag specifying if sensitivities are to be stored in hist. This is necessary for hot-starting only.
- timeLimit : float
  Specify the maximum amount of time for the optimizer to run, in seconds. This can be useful on queue systems when you want an optimization to finish cleanly before the job runs out of time.
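As an illustration, the callback that `__call__` evaluates returns a dictionary of function values plus a fail flag. A minimal, hypothetical problem might look like this; the actual solve is shown only in comments, since it requires a compiled SNOPT:

```python
def objfunc(xdict):
    """Objective/constraint callback: returns (funcs, fail)."""
    x = xdict["x"]
    funcs = {
        "obj": (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,  # simple quadratic bowl
        "con": [x[0] + x[1]],                           # one linear constraint value
    }
    return funcs, False

# With a compiled SNOPT, the solve would then be (not executed here):
#   from pyoptsparse import Optimization, SNOPT
#   optProb = Optimization("toy problem", objfunc)
#   optProb.addVarGroup("x", 2, lower=-10.0, upper=10.0, value=0.0)
#   optProb.addConGroup("con", 1, upper=1.0)
#   optProb.addObj("obj")
#   sol = SNOPT()(optProb, sens="FD", storeHistory="snopt_hist.hst", timeLimit=3600)
```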