SNOPT

SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It implements a sequential quadratic programming (SQP) algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using a BFGS quasi-Newton update.

Installation

SNOPT is available for purchase from its developers. Upon purchase, you should receive a zip file containing a folder called src. To use SNOPT with pyOptSparse, copy all files from src except snopth.f into pyoptsparse/pySNOPT/source.

From pyOptSparse v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with earlier versions of SNOPT, please check out release v1.2. The version of SNOPT currently being tested is v7.7.5, although v7.7.1 is also expected to work.
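
As a rough illustration of the copy step, the sketch below assumes the SNOPT zip has been extracted to a local snopt7 directory and that the destination path is the one given above; both paths are assumptions and should be adjusted to your own layout. pyOptSparse must be rebuilt afterwards so the wrapper is compiled against the copied sources.

    # Hedged sketch of the installation copy step; paths are assumptions.
    import shutil
    from pathlib import Path

    snopt_src = Path("snopt7/src")              # extracted SNOPT 'src' folder (assumed location)
    dest = Path("pyoptsparse/pySNOPT/source")   # wrapper source directory, as given above

    for f in snopt_src.iterdir():
        # Copy every file except snopth.f, as described above.
        if f.is_file() and f.name != "snopth.f":
            shutil.copy(f, dest / f.name)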

Options

Please refer to the SNOPT user manual for a complete listing of options and their default values. The options below are the ones set by the Python wrapper.

  • The SNOPT option Proximal iterations limit has its default value changed to 10000 so that the proximal point problem is solved fully to optimality.

  • The option Save major iteration variables is unique to the Python wrapper and takes a list of values to be saved to the History file at each major iteration (an example of setting this option is given after the table below). Possible values are

    • step

    • merit

    • feasibility

    • optimality

    • penalty

    • Hessian

    • slack

    • lambda

    • condZHZ

  • The option Start corresponds to the value passed directly to the SNOPT kernel and will be overwritten if another option, e.g. Cold start, is supplied.

  • The default value for Total character workspace is set internally to the minimum work array length of 500. The default values for Total integer workspace and Total real workspace depend on the number of design variables and constraints. They are computed based on recommendations in the SNOPT manual.

  • If SNOPT determines that the default values for Total character workspace, Total integer workspace, or Total real workspace are too small, the Python wrapper will overwrite the defaults with SNOPT's estimates of the required workspace lengths and initialize the optimizer a second time. SNOPT may still exit with inform 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are never overwritten.

Option name                     Type  Default value
------------------------------  ----  ----------------------------------------------------------
iPrint                          int   18
iSumm                           int   19
Print file                      str   SNOPT_print.out
Summary file                    str   SNOPT_summary.out
Problem Type                    str   ['Minimize', 'Maximize', 'Feasible point']
Start                           str   ['Cold', 'Warm']
Derivative level                int   3
Proximal iterations limit       int   10000
Total character workspace       int   None
Total integer workspace         int   None
Total real workspace            int   None
Save major iteration variables  list  ['step', 'merit', 'feasibility', 'optimality', 'penalty']
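
As an illustration of how these options are passed to the wrapper, the sketch below sets a few of them when constructing the optimizer. The option names come from the table above; the specific values, and the Major optimality tolerance option (which comes from the SNOPT manual rather than this table), are purely illustrative.

    # Sketch of passing SNOPT options through pyOptSparse; values are illustrative.
    from pyoptsparse import SNOPT

    opt_options = {
        "Print file": "SNOPT_print.out",
        "Summary file": "SNOPT_summary.out",
        "Major optimality tolerance": 1e-6,   # standard SNOPT option, see the SNOPT manual
        "Total integer workspace": 2000000,   # only needed if the automatic estimate proves too small
        "Save major iteration variables": ["step", "merit", "feasibility", "optimality", "penalty"],
    }
    opt = SNOPT(options=opt_options)

The values listed under Save major iteration variables are then written to the History file at every major iteration.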

Informs

Code  Description
----  -----------------------------------------------
0     finished successfully
1     optimality conditions satisfied
2     feasible point found
3     requested accuracy could not be achieved
4     weak QP minimizer
10    the problem appears to be infeasible
11    infeasible linear constraints
12    infeasible linear equalities
13    nonlinear infeasibilities minimized
14    infeasibilities minimized
15    infeasible linear constraints in QP subproblem
20    the problem appears to be unbounded
21    unbounded objective
22    constraint violation limit reached
30    resource limit error
31    iteration limit reached
32    major iteration limit reached
33    the superbasics limit is too small
40    terminated after numerical difficulties
41    current point cannot be improved
42    singular basis
43    cannot satisfy the general constraints
44    ill-conditioned null-space basis
50    error in the user-supplied functions
51    incorrect objective derivatives
52    incorrect constraint derivatives
53    the QP Hessian is indefinite
54    incorrect second derivatives
55    incorrect derivatives
56    irregular or badly scaled problem functions
60    undefined user-supplied functions
61    undefined function at the first feasible point
62    undefined function at the initial point
63    unable to proceed into undefined region
70    user requested termination
71    terminated during function evaluation
72    terminated during constraint evaluation
73    terminated during objective evaluation
74    terminated from monitor routine
80    insufficient storage allocated
81    work arrays must have at least 500 elements
82    not enough character storage
83    not enough integer storage
84    not enough real storage
90    input arguments out of range
91    invalid input argument
92    basis file dimensions do not match this problem
93    the QP Hessian is indefinite
100   finished successfully
101   SPECS file read
102   Jacobian structure estimated
103   MPS file read
104   memory requirements estimated
105   user-supplied derivatives appear to be correct
106   no derivatives were checked
107   some SPECS keywords were not recognized
110   errors while processing MPS data
111   no MPS file specified
112   problem-size estimates too small
113   fatal error in the MPS file
120   errors while estimating Jacobian structure
121   cannot find Jacobian structure at given point
130   fatal errors while reading the SPECS file
131   no SPECS file (iSpecs le 0 or iSpecs gt 99)
132   End-of-file while looking for a BEGIN
133   End-of-file while reading SPECS file
134   ENDRUN found before any valid SPECS
140   system error
141   wrong number of basic variables
142   error in basis package

API

class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args: Any, **kwargs: Any)

SNOPT Optimizer Class - Inherited from Optimizer Abstract Class

SNOPT Optimizer Class Initialization

__call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None)

This is the main routine used to solve the optimization problem.

Parameters

optProb : Optimization or Solution class instance
    This is the complete description of the optimization problem to be solved by the optimizer.

sens : str or python Function
    Specify the method for computing sensitivities. The default is None, which uses SNOPT's own finite differences, which are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute the derivatives with finite differences, use 'FD'. 'sens' may also be 'CS', which causes pyOptSparse to compute the derivatives using the complex-step method. Finally, 'sens' may be a python function handle that is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables, this is the preferred method.

sensStep : float
    Set the step size to use for design variables. Defaults to 1e-6 when sens is 'FD' and 1e-40j when sens is 'CS'.

sensMode : str
    Use 'pgc' for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.

storeHistory : str
    File name of the history file into which the history of this optimization will be stored.

hotStart : str
    File name of the history file to "replay" for the optimization. The optimization problem used to generate the history file specified in 'hotStart' must be IDENTICAL to the currently supplied 'optProb'; by identical we mean EVERY SINGLE PARAMETER must be identical. As soon as a requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert to normal evaluations.

storeSens : bool
    Flag specifying whether sensitivities are to be stored in the history file. This is necessary for hot-starting only.

timeLimit : float
    Specify the maximum amount of time, in seconds, for the optimizer to run. This can be useful on queue systems when you want an optimization to finish cleanly before the job runs out of time.
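
The following is a minimal usage sketch of __call__ on a toy two-variable problem; the problem definition, variable and constraint names, option values, and file names are illustrative assumptions, not part of the wrapper.

    # Minimal sketch: minimize (x - 2)^2 + (y + 1)^2 subject to x + y >= 1,
    # solved with SNOPT through pyOptSparse.  Names and values are illustrative.
    from pyoptsparse import Optimization, SNOPT

    def objfunc(xdict):
        x, y = xdict["xvars"]
        funcs = {
            "obj": (x - 2.0) ** 2 + (y + 1.0) ** 2,
            "con": [x + y],
        }
        fail = False
        return funcs, fail

    optProb = Optimization("toy problem", objfunc)
    optProb.addVarGroup("xvars", 2, lower=-10.0, upper=10.0, value=0.0)
    optProb.addConGroup("con", 1, lower=1.0)
    optProb.addObj("obj")

    opt = SNOPT(options={"Major optimality tolerance": 1e-6})
    # sens="FD" uses pyOptSparse finite differences; leaving sens=None lets SNOPT
    # compute its own finite-difference gradients, as described above.
    sol = opt(optProb, sens="FD", storeHistory="snopt_hist.hst")
    print(sol)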