SNOPT

SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It implements a sequential quadratic programming (SQP) method that uses a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility both in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using a BFGS quasi-Newton update.
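As a sketch of the merit function mentioned above (notation assumed from the SNOPT literature, not defined on this page: slack variables $s$, multiplier estimates $\pi$, and penalty parameters $\rho_i$), the smooth augmented Lagrangian takes the form

```latex
\mathcal{M}_{\rho}(x, s, \pi) \;=\; f(x) \;-\; \pi^{T}\bigl(c(x) - s\bigr)
  \;+\; \tfrac{1}{2} \sum_{i} \rho_i \bigl(c_i(x) - s_i\bigr)^{2}
```

where $f$ is the objective and $c$ the vector of constraint functions; the penalty parameters are increased as needed so that each SQP step is a descent direction for the merit function.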

Installation

SNOPT must be purchased separately. Upon purchase, you should receive a zip file containing a folder called src. To use SNOPT with pyOptSparse, copy all files from src except snopth.f into pyoptsparse/pySNOPT/source.

From v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with earlier versions of SNOPT, please check out release v1.2. We currently test v7.7.7 and v7.7.1.

Options

Please refer to the SNOPT user manual for a complete listing of options and their default values. The following is a list of:

  • options which have values changed from the defaults within SNOPT

  • options unique to pyOptSparse, implemented in the Python wrapper and not found in SNOPT
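Options are passed to the wrapper as a plain dictionary keyed by option name. A minimal sketch of such a dictionary (the dictionary alone; constructing the optimizer and problem is omitted here, and the chosen values are illustrative):

```python
# Illustrative SNOPT options dictionary; keys and defaults follow the
# option listing in this section. Any entry set here overrides the
# corresponding default.
snopt_options = {
    "Print file": "SNOPT_print.out",
    "Summary file": "SNOPT_summary.out",
    "Problem Type": "Minimize",
    "Derivative level": 3,
    "Save major iteration variables": ["step", "merit", "feasibility",
                                       "optimality", "penalty"],
}
```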

SNOPT Default Options

  • iPrint (int, default: 18): Print file output unit (override internally in snopt?).

  • iSumm (int, default: 19): Summary file output unit (override internally in snopt?).

  • Print file (str, default: SNOPT_print.out): Print file name.

  • Summary file (str, default: SNOPT_summary.out): Summary file name.

  • Problem Type (str, default: Minimize): Specifies the problem type for SNOPT. One of Minimize (minimization problem), Maximize (maximization problem), or Feasible point (compute a feasible point only).

  • Start (str, default: Cold): This value is passed directly to the SNOPT kernel, and will be overwritten if another option (e.g. a cold-start request) is supplied, in accordance with SNOPT option precedence. One of Cold (cold start) or Warm (warm start).

  • Derivative level (int, default: 3): The SNOPT derivative level. Only 3 is tested, where all derivatives are provided to SNOPT.

  • Proximal iterations limit (int, default: 10000): The iteration limit for solving the proximal point problem. The default is deliberately very large so that the proximal point problem is fully solved to optimality.

  • Total character workspace (int, default: None): The total character workspace length. If None, a default value of 500 is used, as recommended by SNOPT.

  • Total integer workspace (int, default: None): The total integer workspace length. If None, a default value of 500 + 100 * (ncon + nvar) is used, as recommended by SNOPT.

  • Total real workspace (int, default: None): The total real workspace length. If None, a default value of 500 + 200 * (ncon + nvar) is used, as recommended by SNOPT.

    For all three workspace options: if SNOPT determines that the default value is too small, the Python wrapper overwrites the defaults with SNOPT's estimates of the required workspace lengths and initializes the optimizer a second time. SNOPT might still exit with 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are never overwritten.

  • Save major iteration variables (list, default: ['step', 'merit', 'feasibility', 'optimality', 'penalty']): This option is unique to the Python wrapper, and specifies the major iteration variables to be stored in the history file at each iteration. 'Hessian', 'slack', 'lambda', and 'condZHZ' are also supported.
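The three workspace defaults above can be summarized in a small helper (a sketch of the formulas only; the function name is illustrative and not part of the wrapper API, though lencw/leniw/lenrw follow SNOPT's naming):

```python
def default_workspace_lengths(ncon, nvar):
    """Default workspace lengths used when the corresponding
    options are None, per the values recommended by SNOPT."""
    lencw = 500                        # total character workspace
    leniw = 500 + 100 * (ncon + nvar)  # total integer workspace
    lenrw = 500 + 200 * (ncon + nvar)  # total real workspace
    return lencw, leniw, lenrw
```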

Informs

SNOPT Informs

    0   finished successfully
    1   optimality conditions satisfied
    2   feasible point found
    3   requested accuracy could not be achieved
    4   weak QP minimizer
   10   the problem appears to be infeasible
   11   infeasible linear constraints
   12   infeasible linear equalities
   13   nonlinear infeasibilities minimized
   14   infeasibilities minimized
   15   infeasible linear constraints in QP subproblem
   20   the problem appears to be unbounded
   21   unbounded objective
   22   constraint violation limit reached
   30   resource limit error
   31   iteration limit reached
   32   major iteration limit reached
   33   the superbasics limit is too small
   40   terminated after numerical difficulties
   41   current point cannot be improved
   42   singular basis
   43   cannot satisfy the general constraints
   44   ill-conditioned null-space basis
   50   error in the user-supplied functions
   51   incorrect objective derivatives
   52   incorrect constraint derivatives
   53   the QP Hessian is indefinite
   54   incorrect second derivatives
   55   incorrect derivatives
   56   irregular or badly scaled problem functions
   60   undefined user-supplied functions
   61   undefined function at the first feasible point
   62   undefined function at the initial point
   63   unable to proceed into undefined region
   70   user requested termination
   71   terminated during function evaluation
   72   terminated during constraint evaluation
   73   terminated during objective evaluation
   74   terminated from monitor routine
   80   insufficient storage allocated
   81   work arrays must have at least 500 elements
   82   not enough character storage
   83   not enough integer storage
   84   not enough real storage
   90   input arguments out of range
   91   invalid input argument
   92   basis file dimensions do not match this problem
   93   the QP Hessian is indefinite
  100   finished successfully
  101   SPECS file read
  102   Jacobian structure estimated
  103   MPS file read
  104   memory requirements estimated
  105   user-supplied derivatives appear to be correct
  106   no derivatives were checked
  107   some SPECS keywords were not recognized
  110   errors while processing MPS data
  111   no MPS file specified
  112   problem-size estimates too small
  113   fatal error in the MPS file
  120   errors while estimating Jacobian structure
  121   cannot find Jacobian structure at given point
  130   fatal errors while reading the SPECS file
  131   no SPECS file (iSpecs <= 0 or iSpecs > 99)
  132   end-of-file while looking for a BEGIN
  133   end-of-file while reading the SPECS file
  134   ENDRUN found before any valid SPECS
  140   system error
  141   wrong number of basic variables
  142   error in basis package
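The two-digit informs above are grouped by their tens digit, with the x0 code naming the group. That structure can be sketched as a small lookup (helper and dictionary names are illustrative, not part of the wrapper API):

```python
# Group headings for the two-digit SNOPT exit codes listed above;
# the tens digit of an inform selects its group.
INFORM_GROUPS = {
    0: "finished successfully",
    1: "the problem appears to be infeasible",
    2: "the problem appears to be unbounded",
    3: "resource limit error",
    4: "terminated after numerical difficulties",
    5: "error in the user-supplied functions",
    6: "undefined user-supplied functions",
    7: "user requested termination",
    8: "insufficient storage allocated",
    9: "input arguments out of range",
}

def inform_group(code):
    # Valid for the two-digit informs 0-93; the 1xx informs follow
    # their own numbering and return None here.
    return INFORM_GROUPS.get(code // 10)
```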

API

class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args, **kwargs)[source]

SNOPT Optimizer Class - Inherited from Optimizer Abstract Class

SNOPT Optimizer Class Initialization

__call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None)[source]

This is the main routine used to solve the optimization problem.

Parameters

optProb : Optimization or Solution class instance
    This is the complete description of the optimization problem to be solved by the optimizer.

sens : str or Python function
    Specify the method for computing sensitivities. The default is None, which uses SNOPT's own finite differences, which are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute derivatives with finite differences, use 'FD'. 'sens' may also be 'CS', which causes pyOptSparse to compute the derivatives using the complex-step method. Finally, 'sens' may be a Python function handle that is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables, this is the preferred method.

sensStep : float
    Set the step size to use for design variables. Defaults to 1e-6 when sens is 'FD' and 1e-40j when sens is 'CS'.

sensMode : str
    Use 'pgc' for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.

storeHistory : str
    File name of the history file into which the history of this optimization will be stored.

hotStart : str
    File name of the history file to "replay" for the optimization. The optimization problem used to generate the history file specified in hotStart must be IDENTICAL to the currently supplied optProb: every single parameter must be identical. As soon as the requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert to normal evaluations.

storeSens : bool
    Flag specifying whether sensitivities are to be stored in the history file. This is necessary for hot-starting only.

timeLimit : float
    Specify the maximum amount of time, in seconds, for the optimizer to run. This can be useful on queue systems when you want an optimization to cleanly finish before the job runs out of time.
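As an illustration of the callback signatures involved when 'sens' is a function handle (the problem itself is made up, and the function names and the "xvars" group key are placeholders), an objective callback and a matching sens function look like:

```python
def objfunc(xdict):
    # xdict holds the design variables grouped by name; return the
    # function values and a fail flag.
    x = xdict["xvars"]
    funcs = {"obj": (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2}
    fail = False
    return funcs, fail

def sens(xdict, funcs):
    # Analytic derivatives, keyed as [output][input]; returned with a
    # fail flag, mirroring objfunc.
    x = xdict["xvars"]
    funcsSens = {"obj": {"xvars": [2.0 * (x[0] - 3.0),
                                   2.0 * (x[1] + 1.0)]}}
    return funcsSens, False
```

Passing sens=sens to __call__ would use these analytic derivatives; omitting the argument falls back to SNOPT's internal finite differences as described above.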