SNOPT

SNOPT is a sparse nonlinear optimizer that is particularly useful for solving large-scale constrained problems with smooth objective functions and constraints. It implements a sequential quadratic programming (SQP) algorithm that uses a smooth augmented Lagrangian merit function, while making explicit provision for infeasibility in the original problem and in the quadratic programming subproblems. The Hessian of the Lagrangian is approximated using a BFGS quasi-Newton update.

Installation

Building from source

SNOPT is available for purchase from its developers. Upon purchase, you should receive a zip file containing a folder called src. To use SNOPT with pyoptsparse, copy all files from src except snopth.f into pyoptsparse/pySNOPT/source.

From v2.0 onwards, only SNOPT v7.7.x is officially supported. To use pyOptSparse with older versions of SNOPT, please check out release v1.2. We currently test against v7.7.7 and v7.7.1.

Installation by conda

When installing via conda, all pyoptsparse binaries are pre-compiled and installed as part of the package. However, the snopt binding module cannot be included as part of the package due to license restrictions.

If you are installing via conda and would like to use SNOPT, you will need to build the snopt binding module on your own, and inform pyoptsparse that it should use that library.

Suppose you have built the binding module, producing the file snopt.cpython-310.so in the folder ~/snopt-bind.

To use this module, set the environment variable PYOPTSPARSE_IMPORT_SNOPT_FROM, e.g.:

    export PYOPTSPARSE_IMPORT_SNOPT_FROM=~/snopt-bind/

This will attempt to load the snopt binding module from ~/snopt-bind. If the module cannot be loaded from this path, a warning is raised at import time, and an error is raised when attempting to run the SNOPT optimizer.
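The variable can also be set from within Python, as long as this happens before pyoptsparse is imported. A minimal sketch using only the standard library (the path is illustrative):

```python
import os

# Point pyoptsparse at the directory containing the SNOPT binding module.
# The path below is illustrative; use wherever your snopt.cpython-*.so lives.
os.environ["PYOPTSPARSE_IMPORT_SNOPT_FROM"] = os.path.expanduser("~/snopt-bind")

# pyoptsparse reads this variable at import time, so set it first:
# import pyoptsparse
```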

Options

Please refer to the SNOPT user manual for a complete listing of options and their default values. The following is a list of:

  • options whose default values are changed by pyOptSparse from the defaults within SNOPT

  • options unique to pyOptSparse, implemented in the Python wrapper and not found in SNOPT
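Options are supplied to pyOptSparse as a plain dictionary keyed by the option names listed below. A small sketch (the values shown are illustrative, not recommendations):

```python
# Illustrative SNOPT option dictionary; the keys are option names from the
# table below, and the values here are examples only.
snopt_options = {
    "Print file": "SNOPT_print.out",
    "Minor print level": 0,
    "Minor iterations limit": 10000,
    "Save major iteration variables": ["Hessian", "slack"],
}

# This dict would typically be passed when constructing the optimizer, e.g.
# opt = pyoptsparse.SNOPT(options=snopt_options)  # requires a SNOPT build
```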

SNOPT Default Options

  • iPrint (int, default: 18)
    Print File Output Unit (override internally in snopt?)

  • iSumm (int, default: 19)
    Summary File Output Unit (override internally in snopt?)

  • Print file (str, default: SNOPT_print.out)
    Print file name

  • Summary file (str, default: SNOPT_summary.out)
    Summary file name

  • Minor print level (int, default: 0)
    Minor iterations print level

  • Problem Type (str, default: Minimize)
    This specifies the problem type for SNOPT:
      - Minimize: minimization problem
      - Maximize: maximization problem
      - Feasible point: compute a feasible point only

  • Start (str, default: Cold)
    This value is directly passed to the SNOPT kernel, and will be overwritten if another option (e.g. Cold start) is supplied, in accordance with SNOPT options precedence:
      - Cold: cold start
      - Hot: hot start

  • Derivative level (int, default: 3)
    The SNOPT derivative level. Only “3” is tested, where all derivatives are provided to SNOPT.

  • Iterations limit (int, default: 10000000)
    The limit on the total number of minor iterations, summed over all major iterations. This option is set to a very large number to prevent premature termination of SNOPT.

  • Minor iterations limit (int, default: 10000)
    The limit on the number of minor iterations for each major iteration. This option is set to a very large number to prevent premature termination of SNOPT.

  • Proximal iterations limit (int, default: 10000)
    The iterations limit for solving the proximal point problem. We set this by default to a very large value in order to fully solve the proximal point problem to optimality.

  • Total character workspace (int, default: None)
    The total character workspace length. If None, a default value of 500 is used, as recommended by SNOPT. If SNOPT determines that the default value is too small, the Python wrapper will overwrite the defaults with estimates for the required workspace lengths from SNOPT and initialize the optimizer for a second time. SNOPT might still exit with 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are not overwritten.

  • Total integer workspace (int, default: None)
    The total integer workspace length. If None, a default value of 500 + 100 * (ncon + nvar) is used, as recommended by SNOPT. If SNOPT determines that the default value is too small, the Python wrapper will overwrite the defaults with estimates for the required workspace lengths from SNOPT and initialize the optimizer for a second time. SNOPT might still exit with 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are not overwritten.

  • Total real workspace (int, default: None)
    The total real workspace length. If None, a default value of 500 + 200 * (ncon + nvar) is used, as recommended by SNOPT. If SNOPT determines that the default value is too small, the Python wrapper will overwrite the defaults with estimates for the required workspace lengths from SNOPT and initialize the optimizer for a second time. SNOPT might still exit with 82, 83, or 84, but this should automate the storage allocation for most cases. User-specified values are not overwritten.
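The default workspace lengths above follow simple formulas in the problem size. A sketch of the computation, where ncon and nvar are the numbers of constraints and design variables:

```python
# Default workspace lengths used when the workspace options are None
# (formulas from the option descriptions above; problem size is illustrative)
ncon, nvar = 20, 100

total_character_workspace = 500
total_integer_workspace = 500 + 100 * (ncon + nvar)
total_real_workspace = 500 + 200 * (ncon + nvar)
```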

  • Save major iteration variables (list, default: [])
    This option is unique to the Python wrapper, and takes a list of values which can be saved at each major iteration to the History file. The possible values are:
      - Hessian
      - slack
      - lambda
      - nS
      - BSwap
      - maxVi
      - penalty_vector
    In addition, a set of default parameters are saved to the history file and cannot be changed. These are:
      - nMajor
      - nMinor
      - step
      - feasibility
      - optimality
      - merit
      - condZHZ
      - penalty

  • Return work arrays (bool, default: False)
    This option is unique to the Python wrapper. If True, internal SNOPT work arrays are also returned at the end of the optimization. These arrays can be used to hot start a subsequent optimization. The SNOPT option ‘Sticky parameters’ will also be automatically set to ‘Yes’ to facilitate the hot start.

  • Work arrays save file (NoneType or str, default: None)
    This option is unique to the Python wrapper. The SNOPT work arrays will be pickled and saved to this file after each major iteration. This file is useful if you want to restart an optimization that did not exit cleanly. If None, the work arrays are not saved.

  • snSTOP function handle (NoneType or function, default: None)
    This option is unique to the Python wrapper. A function handle can be supplied which is called at the end of each major iteration. The following is an example of a callback function that saves the restart dictionary to a different file after each major iteration.

# writePickle is a helper provided by pyoptsparse (pyoptsparse.pyOpt_utils)
from pyoptsparse.pyOpt_utils import writePickle


def snstopCallback(iterDict, restartDict):
    # Get the major iteration number
    nMajor = iterDict["nMajor"]

    # Save the restart dictionary under an iteration-specific file name
    writePickle(f"restart_{nMajor}.pickle", restartDict)

    # Returning 0 lets the optimization continue
    return 0

  • snSTOP arguments (list, default: [])
    This option is unique to the Python wrapper. It specifies a list of arguments that will be passed to the snSTOP function handle. iterDict is always passed as an argument. Additional arguments are passed in the same order as this list. The possible values are:
      - restartDict

Informs

SNOPT Informs

  Code  Description
     0  finished successfully
     1  optimality conditions satisfied
     2  feasible point found
     3  requested accuracy could not be achieved
     5  elastic objective minimized
     6  elastic infeasibilities minimized
    10  the problem appears to be infeasible
    11  infeasible linear constraints
    12  infeasible linear equalities
    13  nonlinear infeasibilities minimized
    14  infeasibilities minimized
    15  infeasible linear constraints in QP subproblem
    16  infeasible nonelastic constraints
    20  the problem appears to be unbounded
    21  unbounded objective
    22  constraint violation limit reached
    30  resource limit error
    31  iteration limit reached
    32  major iteration limit reached
    33  the superbasics limit is too small
    34  time limit reached
    40  terminated after numerical difficulties
    41  current point cannot be improved
    42  singular basis
    43  cannot satisfy the general constraints
    44  ill-conditioned null-space basis
    45  unable to compute acceptable LU factors
    50  error in the user-supplied functions
    51  incorrect objective derivatives
    52  incorrect constraint derivatives
    56  irregular or badly scaled problem functions
    60  undefined user-supplied functions
    61  undefined function at the first feasible point
    62  undefined function at the initial point
    63  unable to proceed into undefined region
    70  user requested termination
    71  terminated during function evaluation
    74  terminated from monitor routine
    80  insufficient storage allocated
    81  work arrays must have at least 500 elements
    82  not enough character storage
    83  not enough integer storage
    84  not enough real storage
    90  input arguments out of range
    91  invalid input argument
    92  basis file dimensions do not match this problem
   140  system error
   141  wrong number of basic variables
   142  error in basis package
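After a run, the inform returned by SNOPT is available on the solution object (typically as a value/text pair under sol.optInform). A sketch of looking up a few of the codes above; the dict here is a hand-copied excerpt, not the full table:

```python
# Hand-copied excerpt of the SNOPT inform table above (illustrative subset)
SNOPT_INFORMS = {
    1: "optimality conditions satisfied",
    32: "major iteration limit reached",
    41: "current point cannot be improved",
}

def describe_inform(code):
    """Return the descriptive text for a SNOPT inform code, if known."""
    return SNOPT_INFORMS.get(code, "unknown inform code")
```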

API

class pyoptsparse.pySNOPT.pySNOPT.SNOPT(*args, **kwargs)[source]

SNOPT Optimizer Class

This is the base optimizer class that all optimizers inherit from. We define common methods here to avoid code duplication.

Parameters:

  name : str
      Optimizer name

  category : str
      Typically local or global

  defaultOptions : dictionary
      A dictionary containing the default options

  informs : dict
      Dictionary of the inform codes

__call__(optProb, sens=None, sensStep=None, sensMode=None, storeHistory=None, hotStart=None, storeSens=True, timeLimit=None, restartDict=None)[source]

This is the main routine used to solve the optimization problem.

Parameters:

  optProb : Optimization or Solution class instance
      This is the complete description of the optimization problem to be solved by the optimizer

  sens : str or python Function
      Specify the method to compute sensitivities. The default is None, which will use SNOPT’s own finite differences, which are vastly superior to the pyOptSparse implementation. To explicitly use the pyOptSparse gradient class to compute the derivatives with finite differences, use FD. sens may also be CS, which will cause pyOptSparse to compute the derivatives using the complex step method. Finally, sens may be a python function handle which is expected to compute the sensitivities directly. For expensive function evaluations and/or problems with large numbers of design variables this is the preferred method.

  sensStep : float
      Set the step size to use for design variables. Defaults to 1e-6 when sens is FD and 1e-40j when sens is CS.

  sensMode : str
      Use pgc for parallel gradient computations. Only available with mpi4py; each objective evaluation is otherwise serial.

  storeHistory : str
      File name of the history file into which the history of this optimization will be stored

  hotStart : str
      File name of the history file to “replay” for the optimization. The optimization problem used to generate the history file specified in hotStart must be IDENTICAL to the currently supplied optProb. By identical we mean EVERY SINGLE PARAMETER MUST BE IDENTICAL. As soon as the requested evaluation point from SNOPT does not match the history, function and gradient evaluations revert back to normal evaluations.

  storeSens : bool
      Flag specifying if sensitivities are to be stored in hist. This is necessary for hot-starting only.

  timeLimit : float
      Specify the maximum amount of time, in seconds, for the optimizer to run. This can be useful on queue systems when you want an optimization to cleanly finish before the job runs out of time. From SNOPT 7.7 onwards, use the “Time limit” option instead.

  restartDict : dict
      A dictionary containing the necessary information for hot-starting SNOPT. This is typically the same dictionary returned by this function on a previous invocation.

Returns:

  sol : Solution object
      The optimization solution

  restartDict : dict
      If Return work arrays is True, a dictionary of arrays is also returned
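When sens is a Python function handle, pyOptSparse calls it with the design-variable dictionary and the current function values. A minimal sketch of the expected call signatures; the variable group "xvars" and objective key "obj" are illustrative names, not fixed API:

```python
def objfunc(xdict):
    """Objective callback: returns (funcs, fail)."""
    x = xdict["xvars"]  # "xvars" is an illustrative design-variable group name
    funcs = {"obj": x[0] ** 2 + x[1] ** 2}
    fail = False
    return funcs, fail

def sens(xdict, funcs):
    """Sensitivity callback: returns (funcsSens, fail).

    The gradient dict is keyed by function name, then by design-variable group.
    """
    x = xdict["xvars"]
    funcsSens = {"obj": {"xvars": [2 * x[0], 2 * x[1]]}}
    fail = False
    return funcsSens, fail
```

The sens function would be passed as the sens argument of __call__, e.g. opt(optProb, sens=sens).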