Optimization

class pyoptsparse.pyOpt_optimization.Optimization(name, objFun, comm=None, sens=None)[source]

The main purpose of this class is to describe the structure and, potentially, the sparsity pattern of an optimization problem.

Parameters:
name : str

Name given to optimization problem.

objFun : Python function handle

Function handle used to evaluate the objective function.

comm : MPI intra-communicator

The communicator this problem will be solved on. This is required both for analyses where the objective is computed in parallel and for the internal parallel gradient computations. Defaults to MPI.COMM_WORLD if not given.

sens : str or Python function

Specify method to compute sensitivities.
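A minimal construction sketch (the problem name, the objective key 'obj', and the variable group 'x' are illustrative, not part of the API):

import numpy as np
from pyoptsparse import Optimization

def objFun(xdict):
    funcs = {'obj': np.sum(xdict['x'] ** 2)}  # objectives keyed by name
    fail = False                              # set True to flag a failed evaluation
    return funcs, fail

optProb = Optimization('quadratic example', objFun)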

addCon(name, *args, **kwargs)[source]

Convenience function. See addConGroup() for more information.

addConGroup(name, nCon, lower=None, upper=None, scale=1.0, linear=False, wrt=None, jac=None)[source]

Add a group of constraints to the optimization problem. This is the main function used for adding constraints to pyOptSparse.

Parameters:
name : str

Constraint name. All names given to constraints must be unique

nCon : int

The number of constraints in this group

lower : scalar or array

The lower bound(s) for the constraint. If it is a scalar, it is applied to all nCon constraints. If it is an array, it must have length nCon.

upper : scalar or array

The upper bound(s) for the constraint. If it is a scalar, it is applied to all nCon constraints. If it is an array, it must have length nCon.

scale : scalar or array

A scaling factor for the constraint. It is generally advisable to keep most optimization constraints around the same order of magnitude.

linear : bool

Flag to specify if this constraint is linear. If the constraint is linear, both the wrt and jac keyword arguments must be given to specify the constant portion of the constraint Jacobian.

The intercept term of linear constraints must be supplied as part of the bound information. The linear constraint \(g_L \leq Ax + b \leq g_U\) is equivalent to \(g_L - b \leq Ax \leq g_U - b\), and pyOptSparse requires the latter form. In this case, the arguments should be:

jac = {"dvName" : A, ...}, lower = gL - b, upper = gU - b
wrt : iterable (list, set, OrderedDict, array, etc.)

‘wrt’ stands for ‘With Respect To’. This specifies which design variable groups have non-zero Jacobian entries for this set of constraints. The order is not important.

jac : dict

For linear and sparse non-linear constraints, the constraint Jacobian must be passed in. The structure of the jac dictionary is as follows:

{'dvName1': matrix1, 'dvName2': matrix2, ...}

The keys of the Jacobian must correspond to the dvGroups given in the wrt keyword argument. The dimensions of each “chunk” of the constraint Jacobian must be consistent. For example, matrix1 must have a shape of (nCon, nDVs), where nDVs is the total number of design variables in dvName1. matrix1 may be a dense numpy array or a scipy sparse matrix. However, it is HIGHLY recommended that sparse constraints are supplied to pyOptSparse using pyOptSparse’s simplified sparse matrix format. The reason is that it is impossible to force scipy sparse matrices to keep a fixed sparsity pattern; if the sparsity pattern changes during an optimization, IT WILL FAIL.

The three simplified pyOptSparse sparse matrix formats are summarized below:

mat = {'coo':[row, col, data], 'shape':[nrow, ncols]}     # A COO matrix
mat = {'csr':[rowp, colind, data], 'shape':[nrow, ncols]} # A CSR matrix
mat = {'csc':[colp, rowind, data], 'shape':[nrow, ncols]} # A CSC matrix

Note that for nonlinear constraints (linear=False), the values themselves in the matrices in jac do not matter, but the sparsity structure does. It is imperative that every entry that will at any point during the optimization be non-zero is non-zero in the jac argument. That is, the sparsity structure of the Jacobian is not allowed to change throughout the optimization. This stipulation is checked automatically internally.
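To tie these pieces together, a sketch of a linear constraint group supplied in the coo format (the group names, bounds, and matrix values are illustrative; per the note on linear constraints above, lower and upper are the offset bounds gL - b and gU - b):

import numpy as np

# 3 linear constraints over a 2-variable group 'x'; dense A would be 3x2
A = {'coo': [np.array([0, 1, 2]),          # row indices
             np.array([0, 1, 0]),          # column indices
             np.array([1.0, -1.0, 2.0])],  # entries of A
     'shape': [3, 2]}
optProb.addConGroup('lincon', 3, lower=-1.0, upper=1.0,
                    linear=True, wrt=['x'], jac={'x': A})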

addObj(name, *args, **kwargs)[source]

Add an objective to the optimization problem.
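For example, assuming the objective function returns its value under the key 'obj':

optProb.addObj('obj')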

addVar(name, *args, **kwargs)[source]

This is a convenience function. It simply calls addVarGroup() with nVars=1. Variables added with addVar() are returned as scalars.

addVarGroup(name, nVars, varType='c', value=0.0, lower=None, upper=None, scale=1.0, offset=0.0, choices=[], **kwargs)[source]

Add a group of design variables to the optimization problem. This is the main function used for adding variables to pyOptSparse.

Parameters:
name : str

Name of variable group. This name should be unique across all the design variable groups

nVars : int

Number of design variables in this group.

varType : str

String representing the type of variable. Suitable values are ‘c’ for continuous variables, ‘i’ for integer variables, and ‘d’ for discrete selection.

value : scalar or array

Starting value for design variables. If it is a scalar, the same value is applied to all ‘nVars’ variables. Otherwise, it must be an iterable object with length equal to ‘nVars’.

lower : scalar or array

Lower bound of variables. Scalar/array usage is the same as the value keyword.

upper : scalar or array

Upper bound of variables. Scalar/array usage is the same as the value keyword.

scale : scalar or array

A user-supplied scaling factor for the design variable group. This is often necessary when design variables of widely varying magnitudes are used within the same optimization. Scalar/array usage is the same as the value keyword.

offset : scalar or array

A user-supplied offset for the design variable group. This is often necessary when a design variable has a large magnitude but only changes slightly about this value.

choices : list

Specify a list of choices for discrete design variables

Notes

Calling addVar() and addVarGroup(…, nVars=1, …) are NOT equivalent! The variable added with addVar() will be returned as a scalar, while the variable returned from addVarGroup() will be an array of length 1.

It is recommended that addVar() and addVarGroup() calls follow the examples below by including all of the keyword arguments. This makes the intent of the script’s author very clear. The type, value, lower, upper, and scale should be given for all variables even if the default value is used.

Examples

>>> # Add a single design variable 'alpha'
>>> optProb.addVar('alpha', varType='c', value=2.0, lower=0.0, upper=10.0, scale=0.1)
>>> # Add 10 unscaled variables named 'y', starting at 0.5, between 0 and 1
>>> optProb.addVarGroup('y', 10, varType='c', value=0.5, lower=0.0, upper=1.0, scale=1.0)
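A further sketch contrasting the two calls (variable names are illustrative):

>>> optProb.addVar('a', varType='c', value=1.0, lower=0.0, upper=2.0, scale=1.0)
>>> optProb.addVarGroup('b', 1, varType='c', value=1.0, lower=0.0, upper=2.0, scale=1.0)
>>> # In the objective function, xdict['a'] is a scalar,
>>> # while xdict['b'] is an array of length 1.
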
checkConName(conName)[source]

Check if the desired constraint name has already been added. If it has, return a mangled name (a number appended) that is valid. This is intended to be used by classes that automatically add constraints to pyOptSparse.

Parameters:
conNamestr

Constraint name to check validity on

Returns:
validNamestr

A valid constraint name. May be the same as conName if that was, in fact, a valid name.
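A usage sketch (the mangled form shown in the comment is illustrative):

optProb.addConGroup('con', 3, upper=0.0)
name = optProb.checkConName('con')  # 'con' is taken, so a mangled name such as 'con_1' is returned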

checkVarName(varName)[source]

Check if the desired variable name varName has already been added. If it has, return a mangled name (a number appended) that is valid. This is intended to be used by classes that automatically add variables to pyOptSparse.

Parameters:
varNamestr

Variable name to check validity on

Returns:
validNamestr

A valid variable name. May be the same as varName if that was, in fact, a valid name.

delVar(name)[source]

Delete a variable or variable group

Parameters:
namestr

Name of variable or variable group to remove

evaluateLinearConstraints(x, fcon)[source]

This function is required for optimizers that do not explicitly treat the linear constraints. For those optimizers, we will evaluate the linear constraints here. We place the values of the linear constraints in the fcon dictionary such that it appears as if the user evaluated these constraints.

Parameters:
x : array

This must be the processed x-vector from the optimizer

fcon : dict

Dictionary of the constraints. The linear constraints are to be added to this dictionary.

finalize()[source]

This is a helper function that finalizes the optProb only if it is not already finalized.

getDVConIndex(startIndex=1, printIndex=True)[source]

Return the index of a scalar DV/constraint, or the beginning and end index (inclusive) of a DV/constraint array. This is useful for looking at SNOPT gradient check output, and the default startIndex=1 is for that purpose

getDVs()[source]

Return a dictionary of the design variables. In most common usage, this function is not required.

Returns:
outDVs : dict

The dictionary of variables. This is the same as ‘x’ that would be used to call the user objective function.
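A usage sketch, reusing the 'y' variable group from the addVarGroup example:

dvs = optProb.getDVs()
print(dvs['y'])  # current values of the 'y' design variable group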

getOrdering(conOrder, oneSided, noEquality=False)[source]

Internal function used to produce an index list that reorders the constraints the way a particular optimizer needs.

Parameters:
conOrder : list

This must contain the following four strings: ‘ni’, ‘li’, ‘ne’, ‘le’, which stand for nonlinear inequality, linear inequality, nonlinear equality, and linear equality. This defines the order in which the optimizer wants the constraints.

oneSided : bool

Flag to treat all constraints as one-sided instead of two-sided. Most optimizers need this, but some (SNOPT and IPOPT, for example) can deal with two-sided constraints properly.

noEquality : bool

Flag to split equality constraints into two inequality constraints. Some optimizers (CONMIN, for example) can’t do equality constraints explicitly.

printSparsity(verticalPrint=False)[source]

This function prints an (ASCII) visualization of the Jacobian sparsity structure. This helps the user visualize what pyOptSparse has been given and helps ensure it is what the user expected. It is highly recommended this function be called before the start of every optimization to verify the optimization problem setup.

Parameters:
verticalPrint : bool

True if the design variable names in the header should be printed vertically instead of horizontally. If True, this will make the constraint Jacobian printout narrower and taller.

Warning

This function is collective on the optProb comm. It is therefore necessary to call this function on all processors of the optProb comm.
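Typical usage, once all variables and constraints have been added (remember this call is collective):

optProb.printSparsity()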

processConstraintJacobian(gcon)[source]

This generic function is used to assemble the entire constraint Jacobian. The constraint Jacobian is in ‘natural’ ordering, that is, the order in which the constraints were added (mostly; it can differ when constraints are added on different processors).

The input is gcon, which is a dict or an array. The array format should only be used when the pyOpt_gradient class is used, since that results in a dense (and correctly oriented) Jacobian. The user should NEVER return a dense Jacobian, since this is extremely fickle and easy to break. The dict gcon must contain only the nonlinear constraint Jacobians; the linear ones will be added automatically.

Parameters:
gcon : array or dict

Constraint gradients. Either a complete 2D array or a nested dictionary of gradients given with respect to the variables.

Returns:
gcon : dict with CSR data

The Jacobian in sparse CSR format, which can easily be converted to CSC, COO, or dense format as required by individual optimizers.

Warning

This function should not need to be called by the user

processContoDict(fcon_in, scaled=True, dtype='d', natural=False, multipliers=False)[source]

A function that converts an array of constraints into a dictionary

Parameters:
fcon_in : array

Array of constraint values to be converted into a dictionary

scaled : bool

Flag specifying if the returned array should be scaled by the pyOpt scaling. The only time this is not true is when the automatic derivatives are used

dtype : str

String specifying the data type to return. Normally this is ‘d’ for a float. The complex-step derivative computations will call this function with ‘D’ to ensure that the complex perturbations pass through correctly.

natural : bool

Flag to specify if the input data is in the natural ordering. This is only used when computing gradients automatically with FD/CS.

multipliers : bool

Flag that indicates whether this deprocessing is for the multipliers or the constraint values. In the case of multipliers, no constraint offset should be applied.

Warning

This function should not need to be called by the user

processContoVec(fcon_in, scaled=True, dtype='d', natural=False)[source]

A function that converts a dictionary of constraints into a vector

Parameters:
fcon_in : dict

Dictionary of constraint values

scaled : bool

Flag specifying if the returned array should be scaled by the pyOpt scaling. The only time this is not true is when the automatic derivatives are used.

dtype : str

String specifying the data type to return. Normally this is ‘d’ for a float. The complex-step derivative computations will call this function with ‘D’ to ensure that the complex perturbations pass through correctly.

natural : bool

Flag to specify if the data should be returned in the natural ordering. This is only used when computing gradients automatically with FD/CS.

Warning

This function should not need to be called by the user

processObjectiveGradient(funcsSens)[source]

This generic function is used to assemble the objective gradient(s)

Parameters:
funcsSens : dict

Dictionary of all function gradients. Just extract the objective(s) we need here.

Warning

This function should not need to be called by the user

processObjtoDict(fobj_in, scaled=True)[source]

This function converts the objective in array form to the corresponding dictionary form.

Parameters:
fobj_in : float or ndarray

The objective in array format. In the case of a single objective, a float can also be accepted.

scaled : bool

Flag specifying if the returned dictionary should be scaled by the pyOpt scaling.

Returns:
fobj : dict

The dictionary form of fobj_in, which is just a key:value pair for each objective.

processObjtoVec(funcs, scaled=True)[source]

This is currently just a stub function. It is here because in the future we may have to deal with multiple objectives, and this function will handle that.

Parameters:
funcs : dict of function values
Returns:
obj : float or array

Processed objective(s).

Warning

This function should not need to be called by the user

processXtoDict(x)[source]

Take the flattened array of variables in ‘x’ and return a dictionary of variables keyed on the name of each variable.

Parameters:
x : array

Flattened array from optimizer

Warning

This function should not need to be called by the user

processXtoVec(x)[source]

Take the dictionary form of x and convert back to flattened array.

Parameters:
x : dict

Dictionary form of variables

Returns:
x_array : array

Flattened array of variables

Warning

This function should not need to be called by the user

setDVs(inDVs)[source]

Set one or more groups of design variables from a dictionary. In most common usage, this function is not required.

Parameters:
inDVs : dict

The dictionary of variables. The keys are the names of the variable groups, and the values are the desired design variable values for each variable group.
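A usage sketch, reusing the 'y' variable group from the addVarGroup example:

import numpy as np

optProb.setDVs({'y': np.full(10, 0.25)})  # set all 10 'y' variables to 0.25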

setDVsFromHistory(histFile, key=None)[source]

Set optimization variables from a previous optimization. This is like a cold start, but some variables may have been added or removed from the previous optimization. This will try to set all variables it can.

Parameters:
histFile : str

Filename of the history file to read

key : str

Key of the history file to use for the x values. The default is None, which will use the last x-value stored in the file.
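A usage sketch (the filename and key are illustrative; .hst is the conventional pyOptSparse history extension):

optProb.setDVsFromHistory('opt_hist.hst')            # use the last stored x-value
optProb.setDVsFromHistory('opt_hist.hst', key='10')  # use a specific stored iteration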

summary_str(minimal_print=False)[source]

Generate a summary string of the structured optimization problem.

Parameters:
minimal_print : bool

Flag to specify if the printed results should only include variables and constraints with a non-empty status (for example a violated bound). This defaults to False, which will print all results.
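A usage sketch, assuming the method returns the summary as a string (as its name suggests):

print(optProb.summary_str())                    # print everything
print(optProb.summary_str(minimal_print=True))  # only entries with a non-empty status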