SNOPT
SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It uses a sequential quadratic programming (SQP) algorithm with a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using the BFGS quasi-Newton update.
Installation
Building from source
SNOPT is available for purchase here. Upon purchase, you should receive a zip file containing a folder called `src`. To use SNOPT with pyoptsparse, copy all files from `src` except `snopth.f` into `pyoptsparse/pySNOPT/source`.
From v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with previous versions of SNOPT, please check out release v1.2. We currently test against v7.7.7 and v7.7.1.
Installation by conda
When installing via conda, all pyoptsparse binaries are pre-compiled and installed as part of the package. However, the `snopt` binding module cannot be included as part of the package due to license restrictions.
If you are installing via conda and would like to use SNOPT, you will need to build the `snopt` binding module on your own and inform pyoptsparse that it should use that library.
Suppose you have built the binding file, producing `snopt.cpython-310.so`, living in the folder `~/snopt-bind`.

To use this module, set the environment variable `PYOPTSPARSE_IMPORT_SNOPT_FROM`, e.g.:

```shell
PYOPTSPARSE_IMPORT_SNOPT_FROM=~/snopt-bind/
```

This will attempt to load the `snopt` binding module from `~/snopt-bind`. If the module cannot be loaded from this path, a warning is raised at import time, and an error is raised if you then attempt to run the SNOPT optimizer.
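Alternatively, the variable can be set from within Python, as long as this happens before pyoptsparse is first imported. A minimal sketch (the path below is illustrative, not a real install location):

```python
import os

# Must be set before pyoptsparse is first imported; the path is illustrative.
os.environ["PYOPTSPARSE_IMPORT_SNOPT_FROM"] = os.path.expanduser("~/snopt-bind")
```

This is convenient in scripts or job submissions where modifying the shell environment is awkward.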
Options
Please refer to the SNOPT user manual for a complete listing of options and their default values. The table below lists:

- options whose values are changed from the defaults within SNOPT
- options unique to pyOptSparse, implemented in the Python wrapper and not found in SNOPT
Name | Type | Default value | Description
---|---|---|---
`iPrint` | int | 18 | Print File Output Unit (override internally in snopt?)
`iSumm` | int | 19 | Summary File Output Unit (override internally in snopt?)
`Print file` | str | SNOPT_print.out | Print file name
`Summary file` | str | SNOPT_summary.out | Summary file name
`Minor print level` | int | 0 | Minor iterations print level
`Problem Type` | str | Minimize | This specifies the problem type for SNOPT.
`Start` | str | Cold | This value is passed directly to the SNOPT kernel, and is overwritten if a conflicting option (e.g. a hot start) is supplied.
`Derivative level` | int | 3 | The SNOPT derivative level. Only "3" is tested, where all derivatives are provided to SNOPT.
`Iterations limit` | int | 10000000 | The limit on the total number of minor iterations, summed over all major iterations. This option is set to a very large number to prevent premature termination of SNOPT.
`Minor iterations limit` | int | 10000 | The limit on the number of minor iterations for each major iteration. This option is set to a very large number to prevent premature termination of SNOPT.
`Proximal iterations limit` | int | 10000 | The iterations limit for solving the proximal point problem. We set this by default to a very large value in order to fully solve the proximal point problem to optimality.
`Total character workspace` | int | None | The total character workspace length. If None, a suitable default is determined automatically.
`Total integer workspace` | int | None | The total integer workspace length. If None, a suitable default is determined automatically.
`Total real workspace` | int | None | The total real workspace length. If None, a suitable default is determined automatically.
`Save major iteration variables` | list | [] | This option is unique to the Python wrapper, and takes a list of values to be saved to the History file at each major iteration. In addition, a set of default parameters is always saved to the history file and cannot be changed.
`Return work arrays` | bool | False | This option is unique to the Python wrapper. If True, internal SNOPT work arrays are also returned at the end of the optimization. These arrays can be used to hot start a subsequent optimization. The SNOPT option 'Sticky parameters' will also be automatically set to 'Yes' to facilitate the hot start.
`Work arrays save file` | NoneType or str | None | This option is unique to the Python wrapper. The SNOPT work arrays will be pickled and saved to this file after each major iteration. This file is useful if you want to restart an optimization that did not exit cleanly. If None, the work arrays are not saved.
`snSTOP function handle` | NoneType or function | None | This option is unique to the Python wrapper. A function handle can be supplied which is called at the end of each major iteration.
`snSTOP arguments` | list | [] | This option is unique to the Python wrapper. It specifies a list of arguments that will be passed to the snSTOP function handle.
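As a sketch of an `snSTOP function handle` callback, the function below pickles the restart dictionary after every major iteration. The two-argument signature and the file name are assumptions for illustration; passing the restart dictionary to the callback requires listing it in `snSTOP arguments`:

```python
import pickle

def snstop(iterDict, restartDict):
    # Called by the wrapper at the end of each major iteration; iterDict
    # holds iteration data and restartDict holds SNOPT's work arrays
    # (signature assumed for illustration).
    with open("snopt_restart.pickle", "wb") as f:
        pickle.dump(restartDict, f)
    return 0  # assumption: a nonzero return would request termination
```

Saving after every iteration means the most recent successful major iteration is always available for a hot start, even if the run is killed mid-iteration.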
Informs
Code | Description
---|---
0 | finished successfully
1 | optimality conditions satisfied
2 | feasible point found
3 | requested accuracy could not be achieved
5 | elastic objective minimized
6 | elastic infeasibilities minimized
10 | the problem appears to be infeasible
11 | infeasible linear constraints
12 | infeasible linear equalities
13 | nonlinear infeasibilities minimized
14 | infeasibilities minimized
15 | infeasible linear constraints in QP subproblem
16 | infeasible nonelastic constraints
20 | the problem appears to be unbounded
21 | unbounded objective
22 | constraint violation limit reached
30 | resource limit error
31 | iteration limit reached
32 | major iteration limit reached
33 | the superbasics limit is too small
34 | time limit reached
40 | terminated after numerical difficulties
41 | current point cannot be improved
42 | singular basis
43 | cannot satisfy the general constraints
44 | ill-conditioned null-space basis
45 | unable to compute acceptable LU factors
50 | error in the user-supplied functions
51 | incorrect objective derivatives
52 | incorrect constraint derivatives
56 | irregular or badly scaled problem functions
60 | undefined user-supplied functions
61 | undefined function at the first feasible point
62 | undefined function at the initial point
63 | unable to proceed into undefined region
70 | user requested termination
71 | terminated during function evaluation
74 | terminated from monitor routine
80 | insufficient storage allocated
81 | work arrays must have at least 500 elements
82 | not enough character storage
83 | not enough integer storage
84 | not enough real storage
90 | input arguments out of range
91 | invalid input argument
92 | basis file dimensions do not match this problem
140 | system error
141 | wrong no of basic variables
142 | error in basis package
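Assuming pyOptSparse's convention that the returned solution carries the inform as a dict with "value" and "text" entries, a post-solve check might look like the sketch below. The accepted codes, 1 ("optimality conditions satisfied") and 2 ("feasible point found"), are SNOPT's successful exits; the helper name is hypothetical:

```python
def check_snopt_inform(inform):
    # inform is expected to look like {"value": 1, "text": "..."}.
    # Codes 1 and 2 are treated as success; anything else raises.
    if inform["value"] not in (1, 2):
        raise RuntimeError(f"SNOPT exit {inform['value']}: {inform['text']}")

check_snopt_inform({"value": 1, "text": "optimality conditions satisfied"})
```

Failing fast on a bad inform code is useful in scripted studies, where a silently non-converged solution could otherwise propagate downstream.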
API
- class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args, **kwargs)
SNOPT Optimizer Class
This is the base optimizer class that all optimizers inherit from. We define common methods here to avoid code duplication.
- Parameters:
  - name : str
    Optimizer name
  - category : str
    Typically local or global
  - defaultOptions : dictionary
    A dictionary containing the default options
  - informs : dict
    Dictionary of the inform codes
- __call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None, restartDict=None)
This is the main routine used to solve the optimization problem.
- Parameters:
  - optProb : Optimization or Solution class instance
    This is the complete description of the optimization problem to be solved by the optimizer
  - sens : str or python Function
    Specify the method to compute sensitivities. The default is None, which uses SNOPT's own finite differences, which are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute the derivatives with finite differences, use FD. sens may also be CS, which causes pyOptSparse to compute the derivatives using the complex step method. Finally, sens may be a python function handle which is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables, this is the preferred method.
  - sensStep : float
    Set the step size to use for design variables. Defaults to 1e-6 when sens is FD and 1e-40j when sens is CS.
  - sensMode : str
    Use pgc for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.
  - storeHistory : str
    File name of the history file into which the history of this optimization will be stored
  - hotStart : str
    File name of the history file to "replay" for the optimization. The optimization problem used to generate the history file specified in hotStart must be IDENTICAL to the currently supplied optProb. By identical we mean EVERY SINGLE PARAMETER MUST BE IDENTICAL. As soon as the requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert back to normal evaluations.
  - storeSens : bool
    Flag specifying if sensitivities are to be stored in hist. This is necessary for hot-starting only.
  - timeLimit : float
    Specify the maximum amount of time for the optimizer to run, in seconds. This can be useful on queue systems when you want an optimization to finish cleanly before the job runs out of time. From SNOPT 7.7 onwards, use the "Time limit" option instead.
  - restartDict : dict
    A dictionary containing the necessary information for hot-starting SNOPT. This is typically the same dictionary returned by this function on a previous invocation.
- Returns:
  - sol : Solution object
    The optimization solution
  - restartDict : dict
    If `Return work arrays` is True, a dictionary of work arrays is also returned
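A minimal sketch of the calling sequence described above, on a toy quadratic problem. The problem setup follows pyOptSparse's documented API; the attribute `fStar` on the returned solution and the guarded import are assumptions for illustration, and a licensed SNOPT build is required for the solve to actually run:

```python
def objfunc(xdict):
    # Quadratic test objective with its minimum at x = (2, -1).
    x = xdict["x"]
    funcs = {"obj": (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2}
    fail = False
    return funcs, fail

try:
    from pyoptsparse import Optimization, OPT
except ImportError:
    print("pyoptsparse (and SNOPT) not available; skipping the solve")
else:
    optProb = Optimization("toy problem", objfunc)
    optProb.addVarGroup("x", 2, lower=-10.0, upper=10.0, value=0.0)
    optProb.addObj("obj")
    opt = OPT("SNOPT")
    sol = opt(optProb, sens="FD", storeHistory="toy.hst")
    print(sol.fStar)  # objective value at the optimum
```

Here `sens="FD"` requests finite-difference gradients from the pyOptSparse wrapper; omitting `sens` would instead use SNOPT's internal finite differences, as noted in the `sens` parameter description.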