SNOPT
SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It implements a sequential quadratic programming (SQP) algorithm that uses a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility in both the original problem and the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using a BFGS quasi-Newton update.
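To make the quasi-Newton step above concrete, here is a minimal plain-Python sketch of the standard BFGS update, B+ = B - (B s sᵀ B)/(sᵀ B s) + (y yᵀ)/(yᵀ s), applied to a small dense matrix. This is only an illustration of the formula; SNOPT's actual implementation adds safeguards (e.g. modification of the update when yᵀs is too small) and works with its own data structures.

```python
def bfgs_update(B, s, y):
    """One BFGS update of a symmetric matrix B (given as nested lists).

    s is the step in the variables, y the change in the gradient of the
    Lagrangian. The updated matrix satisfies the secant condition B+ s = y.
    """
    n = len(s)
    # Bs = B @ s, and the two scalars in the update formula.
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    ys = sum(y[i] * s[i] for i in range(n))
    return [
        [B[i][j] - Bs[i] * Bs[j] / sBs + y[i] * y[j] / ys for j in range(n)]
        for i in range(n)
    ]
```

Starting from the identity, one update already reproduces the curvature information in (s, y) along the step direction.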
Installation

SNOPT is available for purchase here. Upon purchase, you should receive a zip file containing a folder called src. To use SNOPT with pyOptSparse, copy all files from src, except snopth.f, into pyoptsparse/pySNOPT/source.

From v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with earlier versions of SNOPT, please check out release v1.2. We currently test v7.7.7 and v7.7.1.
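The copy step above can be scripted with the standard library. The helper below is only a convenience sketch; the source and destination paths in the commented call are placeholders for wherever you unzipped the SNOPT distribution and checked out pyoptsparse.

```python
import shutil
from pathlib import Path


def copy_snopt_sources(src_dir, dest_dir):
    """Copy every regular file from SNOPT's src folder except snopth.f."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(src_dir).iterdir()):
        if f.is_file() and f.name != "snopth.f":
            shutil.copy2(f, dest / f.name)


# Placeholder paths -- adjust to your own layout:
# copy_snopt_sources("snopt7/src", "pyoptsparse/pyoptsparse/pySNOPT/source")
```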
Options

Please refer to the SNOPT user manual for a complete listing of options and their default values. The table below lists:

- options whose values are changed from the defaults within SNOPT
- options unique to pyOptSparse, implemented in the Python wrapper and not found in SNOPT itself
| Name | Type | Default value | Description |
| --- | --- | --- | --- |
| iPrint | int | 18 | Print file output unit (override internally in SNOPT?) |
| iSumm | int | 19 | Summary file output unit (override internally in SNOPT?) |
| Print file | str | | Print file name |
| Summary file | str | | Summary file name |
| Problem Type | str | | This specifies the problem type for SNOPT. |
| Start | str | | This value is directly passed to the SNOPT kernel, and will be overwritten if another option (e.g. a hot start) requires it. |
| Derivative level | int | 3 | The SNOPT derivative level. Only "3" is tested, where all derivatives are provided to SNOPT. |
| Proximal iterations limit | int | 10000 | The iteration limit for solving the proximal point problem. This is set to a very large value by default in order to fully solve the proximal point problem to optimality. |
| Total character workspace | int | None | The total character workspace length. If None, a suitable length is estimated automatically. |
| Total integer workspace | int | None | The total integer workspace length. If None, a suitable length is estimated automatically. |
| Total real workspace | int | None | The total real workspace length. If None, a suitable length is estimated automatically. |
| Save major iteration variables | list | ['step', 'merit', 'feasibility', 'optimality', 'penalty'] | This option is unique to the Python wrapper. It specifies the list of major iteration variables to be saved to the History file at each iteration. |
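As an example of how these options are supplied, the wrapper accepts a plain dictionary keyed by the option names in the table above. The values below are placeholders chosen for illustration; the dictionary would be passed when constructing the optimizer, which itself requires a working SNOPT installation.

```python
# Hypothetical values; keys follow the option names in the table above.
snopt_options = {
    "Print file": "SNOPT_print.out",      # placeholder print-file name
    "Summary file": "SNOPT_summary.out",  # placeholder summary-file name
    "Derivative level": 3,                # the only tested value
    "Save major iteration variables": [
        "step", "merit", "feasibility", "optimality", "penalty",
    ],
}

# Passed at construction (requires pyoptsparse built with SNOPT):
# opt = pyoptsparse.SNOPT(options=snopt_options)
```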
Informs

| Code | Description |
| --- | --- |
| 0 | finished successfully |
| 1 | optimality conditions satisfied |
| 2 | feasible point found |
| 3 | requested accuracy could not be achieved |
| 4 | weak QP minimizer |
| 10 | the problem appears to be infeasible |
| 11 | infeasible linear constraints |
| 12 | infeasible linear equalities |
| 13 | nonlinear infeasibilities minimized |
| 14 | infeasibilities minimized |
| 15 | infeasible linear constraints in QP subproblem |
| 20 | the problem appears to be unbounded |
| 21 | unbounded objective |
| 22 | constraint violation limit reached |
| 30 | resource limit error |
| 31 | iteration limit reached |
| 32 | major iteration limit reached |
| 33 | the superbasics limit is too small |
| 40 | terminated after numerical difficulties |
| 41 | current point cannot be improved |
| 42 | singular basis |
| 43 | cannot satisfy the general constraints |
| 44 | ill-conditioned null-space basis |
| 50 | error in the user-supplied functions |
| 51 | incorrect objective derivatives |
| 52 | incorrect constraint derivatives |
| 53 | the QP Hessian is indefinite |
| 54 | incorrect second derivatives |
| 55 | incorrect derivatives |
| 56 | irregular or badly scaled problem functions |
| 60 | undefined user-supplied functions |
| 61 | undefined function at the first feasible point |
| 62 | undefined function at the initial point |
| 63 | unable to proceed into undefined region |
| 70 | user requested termination |
| 71 | terminated during function evaluation |
| 72 | terminated during constraint evaluation |
| 73 | terminated during objective evaluation |
| 74 | terminated from monitor routine |
| 80 | insufficient storage allocated |
| 81 | work arrays must have at least 500 elements |
| 82 | not enough character storage |
| 83 | not enough integer storage |
| 84 | not enough real storage |
| 90 | input arguments out of range |
| 91 | invalid input argument |
| 92 | basis file dimensions do not match this problem |
| 93 | the QP Hessian is indefinite |
| 100 | finished successfully |
| 101 | SPECS file read |
| 102 | Jacobian structure estimated |
| 103 | MPS file read |
| 104 | memory requirements estimated |
| 105 | user-supplied derivatives appear to be correct |
| 106 | no derivatives were checked |
| 107 | some SPECS keywords were not recognized |
| 110 | errors while processing MPS data |
| 111 | no MPS file specified |
| 112 | problem-size estimates too small |
| 113 | fatal error in the MPS file |
| 120 | errors while estimating Jacobian structure |
| 121 | cannot find Jacobian structure at given point |
| 130 | fatal errors while reading the SP |
| 131 | no SPECS file (iSpecs le 0 or iSpecs gt 99) |
| 132 | End-of-file while looking for a BEGIN |
| 133 | End-of-file while reading SPECS file |
| 134 | ENDRUN found before any valid SPECS |
| 140 | system error |
| 141 | wrong no of basic variables |
| 142 | error in basis package |
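The inform codes group by tens into SNOPT's coarse EXIT categories (0 = finished successfully, 10 = infeasible, 20 = unbounded, and so on). The helper below is illustrative only, not part of pyOptSparse; after a run, the inform value is typically read from the returned Solution object (e.g. sol.optInform).

```python
def snopt_exit_category(inform):
    """Map a SNOPT inform code to its coarse EXIT category (see table above)."""
    categories = {
        0: "finished successfully",
        10: "the problem appears to be infeasible",
        20: "the problem appears to be unbounded",
        30: "resource limit error",
        40: "terminated after numerical difficulties",
        50: "error in the user-supplied functions",
        60: "undefined user-supplied functions",
        70: "user requested termination",
        80: "insufficient storage allocated",
        90: "input arguments out of range",
        100: "finished successfully",
        110: "errors while processing MPS data",
        120: "errors while estimating Jacobian structure",
        130: "fatal errors while reading the SP",
        140: "system error",
    }
    # Each category spans one decade of codes, e.g. 41-44 -> 40.
    return categories.get((inform // 10) * 10, "unknown")
```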
API

class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args, **kwargs)[source]

SNOPT Optimizer Class, inherited from the Optimizer abstract class.

SNOPT Optimizer Class Initialization
__call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None)[source]

This is the main routine used to solve the optimization problem.

Parameters

optProb : Optimization or Solution class instance
    This is the complete description of the optimization problem to be solved by the optimizer.
sens : str or python Function
    Specify the method to compute sensitivities. The default is None, which uses SNOPT's own finite differences; these are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute derivatives with finite differences, use 'FD'. 'sens' may also be 'CS', which causes pyOptSparse to compute the derivatives using the complex-step method. Finally, 'sens' may be a python function handle which is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables, this is the preferred method.
sensStep : float
    Set the step size to use for design variables. Defaults to 1e-6 when sens is 'FD' and 1e-40j when sens is 'CS'.
sensMode : str
    Use 'pgc' for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.
storeHistory : str
    File name of the history file into which the history of this optimization will be stored.
hotStart : str
    File name of the history file to "replay" for the optimization. The optimization problem used to generate the history file specified in 'hotStart' must be IDENTICAL to the currently supplied 'optProb'. By identical we mean EVERY SINGLE PARAMETER MUST BE IDENTICAL. As soon as the requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert back to normal evaluations.
storeSens : bool
    Flag specifying whether sensitivities are to be stored in the history file. This is necessary for hot starting only.
timeLimit : float
    Specify the maximum amount of time, in seconds, for the optimizer to run. This can be useful on queue systems when you want an optimization to finish cleanly before the job runs out of time.
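As a sketch of the callback signatures that __call__ expects, the functions below follow the pyOptSparse convention of returning a (funcs, fail) pair, with sensitivities keyed by function name and then by design-variable group. The names (objfunc, sens, the "x" group, "obj"/"con" keys) are illustrative; the actual optimizer call is shown commented out because it requires a fully built Optimization problem and a SNOPT installation.

```python
def objfunc(xdict):
    """Objective/constraint callback: returns (funcs, fail)."""
    x = xdict["x"]
    funcs = {
        "obj": x[0] ** 2 + x[1] ** 2,  # example objective
        "con": [x[0] + x[1]],          # example linear constraint value
    }
    fail = False
    return funcs, fail


def sens(xdict, funcs):
    """Sensitivity callback: returns (funcsSens, fail)."""
    x = xdict["x"]
    funcsSens = {
        "obj": {"x": [2 * x[0], 2 * x[1]]},  # d(obj)/dx
        "con": {"x": [[1.0, 1.0]]},          # d(con)/dx, one row per constraint
    }
    return funcsSens, False


# With a built Optimization problem `optProb` (not shown), one would call:
# opt = SNOPT()
# sol = opt(optProb, sens=sens, storeHistory="snopt_hist.hst", timeLimit=3600)
```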