Implementing Algorithms



Viewing and Modifying the Matrix

Whilst the simple procedures laid out in the previous chapters will be sufficient for many users of the Xpress‑Optimizer, it is sometimes necessary after a model has been loaded to view and change certain properties of a problem's matrix, prior to re-optimizing the altered problem. This may be particularly important if the original problem was found to be infeasible and constraints (matrix rows) needed to be removed to make the feasible region nontrivial. For library users, Xpress‑MP provides several functions specifically dedicated to this purpose, a few of which we mention below.

Viewing the Matrix

The Optimizer supports functions which provide access to the objective function, constraint right hand sides, bounds and the matrix elements both prior to and following optimization. In the former case, all information about the problem is available, although clearly the solution will not be. In the latter case, the structures available depend on whether or not the matrix is in a presolved state. This is described fully in the following section, so we do not elaborate on it here, other than to remark that full matrix information is only available if the matrix is not presolved; if it is still presolved, only partial information will be available. If you are unsure of the matrix status, consult the problem attribute PRESOLVESTATE.
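As a minimal sketch (assuming xprs.h has been included and a problem has already been loaded into prob), the presolve status can be queried with the generic integer attribute routine:

int presolvestate;
XPRSgetintattrib(prob, XPRS_PRESOLVESTATE, &presolvestate);
/* PRESOLVESTATE is a bitmap recording, among other things, whether a
   problem has been loaded and whether it is currently presolved; see
   the reference section for the meaning of the individual bits. */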

For this section we will be concerned only with matrices which are not in a presolved state. For such matrices, the rows represent the constraints of the problem and may be obtained using XPRSgetrows. Their type and range are accessed via the functions XPRSgetrowtype and XPRSgetrowrange, whilst the names for each constraint may be returned by the XPRSgetnames command. The right hand side values and their ranges are available by means of the XPRSgetrhs and XPRSgetrhsrange routines. Related to the matrix rows, the coefficients of the objective function are similarly obtainable with use of the XPRSgetobj routine, whilst the related XPRSgetqobj function returns coefficients for quadratic objective functions.
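For instance, a common pattern is to query the ROWS and COLS attributes for the problem dimensions and then retrieve the right hand sides and objective coefficients over the full index range. The following sketch assumes a problem already loaded into prob, assumes the usual headers (xprs.h, stdlib.h) have been included, and omits error checking:

int nrow, ncol;
XPRSgetintattrib(prob, XPRS_ROWS, &nrow);
XPRSgetintattrib(prob, XPRS_COLS, &ncol);
double *rhs = malloc(nrow * sizeof(double));
double *obj = malloc(ncol * sizeof(double));
XPRSgetrhs(prob, rhs, 0, nrow - 1);   /* right hand side of every row */
XPRSgetobj(prob, obj, 0, ncol - 1);   /* objective coefficient of every column */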

The matrix columns represent the decision variables for the problem and as such the user may also be interested in information about these. The columns will typically have names, which the user may want to access, again possible using the XPRSgetnames function. Upper and lower bounds for the columns may be accessed with the commands XPRSgetub and XPRSgetlb, whilst their type and range are obtainable by means of the functions XPRSgetcoltype and XPRSgetcolrange, much as for the matrix rows. The XPRSgetcols function obtains the matrix columns themselves.
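As an illustration (the column index 5 is arbitrary), the bounds and type of a single column might be retrieved as follows:

char   coltype[1];
double lb[1], ub[1];
XPRSgetlb(prob, lb, 5, 5);            /* lower bound of column 5 */
XPRSgetub(prob, ub, 5, 5);            /* upper bound of column 5 */
XPRSgetcoltype(prob, coltype, 5, 5);  /* 'C', 'I', 'B', etc. */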

The reference section of this manual gives the precise form and usage for each of these functions in Console and Library Functions and the user is advised to consult those pages for details and examples before employing them in their own programs.

Modifying the Matrix

In some instances, a model which has previously been solved must be changed before the modified matrix is re-presented for optimization, and a set of routines is provided for this task. Rows and columns can be added (using XPRSaddrows, XPRSaddcols) or deleted from the model (using XPRSdelrows, XPRSdelcols). If rows or columns are to be added, for maximum efficiency, space should be reserved for them before the matrix is read by setting the EXTRAROWS, EXTRACOLS, EXTRAELEMS and EXTRAMIPENTS controls. If this is not done, resizing will be carried out automatically, but more space may be allocated than the user actually requires, potentially resulting in slower solution times.
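A sketch of this, with illustrative sizes and an illustrative matrix name, might reserve space before the matrix is read and later add a single constraint 2x0 + 3x4 ≤ 10:

XPRSsetintcontrol(prob, XPRS_EXTRAROWS, 100);     /* room for 100 extra rows */
XPRSsetintcontrol(prob, XPRS_EXTRAELEMS, 1000);   /* room for 1000 extra elements */
XPRSreadprob(prob, "mymatrix", "");               /* "mymatrix" is illustrative */

char   qrtype[]  = { 'L' };          /* 'L' denotes a less-than-or-equal row */
double rhs[]     = { 10.0 };
int    mstart[]  = { 0 };            /* start of the new row's coefficients */
int    mclind[]  = { 0, 4 };         /* column indices */
double dmatval[] = { 2.0, 3.0 };     /* coefficient values */
XPRSaddrows(prob, 1, 2, qrtype, rhs, NULL, mstart, mclind, dmatval);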

In the same way, existing row and column types may also be altered (using XPRSchgrowtype, XPRSchgcoltype), as may the matrix coefficients (using XPRSchgcoef, or XPRSchgmcoef if several are to be changed). Right hand sides and their ranges may be changed with XPRSchgrhs and XPRSchgrhsrange, whilst the coefficients of the objective function may be changed with XPRSchgobj. Those for quadratic objective functions may be similarly altered using XPRSchgqobj or XPRSchgmqobj if several such are to be changed.
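For example (the indices and values below are purely illustrative), a single objective coefficient and a single right hand side might be changed as follows:

int    objcol[] = { 3 };
double objval[] = { 2.5 };
XPRSchgobj(prob, 1, objcol, objval);   /* objective coefficient of column 3 becomes 2.5 */

int    rhsrow[] = { 7 };
double rhsval[] = { 50.0 };
XPRSchgrhs(prob, 1, rhsrow, rhsval);   /* right hand side of row 7 becomes 50 */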

As mentioned above, a matrix may not be modified if it has been presolved and has not been postsolved, except that the variable bounds may be altered (using XPRSchgbounds). In the following section the Presolve facility will be discussed, along with some suggestions for getting around the difficulties of working with a presolved matrix. Examples of all the above functions and their precise syntax may be found within the reference pages of this manual in Console and Library Functions, to which the user is referred for details of how they might be employed in an Optimizer library program.
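Since XPRSchgbounds is the one modification routine that remains available while the matrix is presolved, a small sketch of its use (with an arbitrary column index) may be helpful:

int    mindex[] = { 2, 2 };
char   qbtype[] = { 'L', 'U' };       /* 'L' = lower bound, 'U' = upper bound */
double bnd[]    = { 0.0, 4.0 };
XPRSchgbounds(prob, 2, mindex, qbtype, bnd);   /* impose bounds of 0 and 4 on column 2 */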

Working with Presolve

The Optimizer provides a number of algorithms for simplifying a problem prior to the optimization process. This elaborate collection of procedures, known as presolve, can often greatly improve the Optimizer's performance by modifying the problem matrix, making it easier to solve. The presolve algorithms identify and remove redundant rows and columns, reducing the size of the matrix, for which reason most users will find it a helpful tool in reducing solution times. However, presolve is included as an option and can be disabled if not required by setting the PRESOLVE control to 0. Usually this is set to 1 and presolve is called by default.
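Disabling presolve for a run is therefore a one-line control change made before the optimization call, for example:

XPRSsetintcontrol(prob, XPRS_PRESOLVE, 0);   /* 0 disables presolve; the default is 1 */
XPRSminim(prob, "");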

For some users the presolve routines can result in confusion, since a problem viewed in its presolved form will look very different to the original model. Under standard use of the Optimizer this may cause no difficulty. On a few occasions, however, if errors occur or if a user tries to access additional properties of the matrix for certain types of problem, the presolved values may be returned instead. In this section we provide a few notes on how such confusion may best be avoided. If you are unsure whether or not the matrix is in a presolved state, check the PRESOLVESTATE attribute.

Linear Programming Problems

For a linear problem, presolve is called by default by the XPRSmaxim (MAXIM) and XPRSminim (MINIM) routines, tidying the matrix before the main optimization algorithm is invoked. Following optimization, the whole matrix is automatically postsolved to recover a solution to the original problem and restore the original matrix. Consequently, either before optimization or immediately following solution, the matrix is in its original form and may be viewed and altered as described above.

If for some reason the optimization is interrupted before it has completed, either using the CTRL-C key combination, or due to insufficient LPITERLIMIT or MAXTIME settings, then the problem will remain in its presolved form. The problem may be returned to its original state by calling XPRSpostsolve (POSTSOLVE).
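A small sketch of this situation (with a deliberately small, illustrative iteration limit) is shown below; if the simplex algorithm stops at the limit, the matrix is returned to its original state before being inspected or modified:

XPRSsetintcontrol(prob, XPRS_LPITERLIMIT, 50);   /* illustrative value only */
XPRSminim(prob, "");
XPRSpostsolve(prob);    /* undo presolve if the optimization stopped early */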

(Mixed) Integer Programming Problems

If a model contains global entities, integer presolve methods such as bound tightening and coefficient tightening are also applied to tighten the LP relaxation. A simple example of this might be if the matrix has a binary variable x and one of the constraints of the matrix is x ≤ 0.2. It follows that x can be fixed at zero since it can never take the value 1. If presolve uses the global entities to alter the matrix in this way, then the LP relaxation is said to have been tightened. For Console users, notice of this is sent to the screen; for library users it may be sent to a callback function, or printed to the log file if one has been set up. In such circumstances, the optimal objective function value of the LP relaxation for a presolved matrix may be different from that for the unpresolved matrix.

The strict LP solution to a model with global entities can be obtained by specifying the l flag with the XPRSmaxim (MAXIM) or XPRSminim (MINIM) command. This removes the global constraints from the variables, preventing the LP relaxation from being tightened, and solves the resulting matrix. In the example above, x would not be fixed at 0, but allowed to range between 0 and 0.2. If you are not interested in the strict LP solution, then it is slightly more efficient to solve the LP relaxation and carry out the global search in one go, which can be done by specifying the g flag for the XPRSmaxim (MAXIM) or XPRSminim (MINIM) command.
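In library terms the two alternatives are simply different flag strings passed to the same routine, for example:

XPRSminim(prob, "l");   /* strict LP relaxation, with no tightening from global entities */
XPRSminim(prob, "g");   /* solve the LP relaxation and run the global search in one call */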

When XPRSglobal (GLOBAL) finds an integer solution, it is postsolved and saved in memory. The solution can be read with the XPRSgetmipsol function. A permanent copy can be saved to a solution file by calling XPRSwritebinsol (WRITEBINSOL). This can be retrieved later by calling XPRSreadbinsol (READBINSOL).
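As a short sketch (assuming ncol holds the number of columns, and with an illustrative file name), the best solution found so far might be retrieved and written out as follows:

double *x = malloc(ncol * sizeof(double));
XPRSgetmipsol(prob, x, NULL);            /* NULL: slack values are not wanted */
XPRSwritebinsol(prob, "mysol", "");      /* save a permanent copy to file */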

After calling XPRSglobal (GLOBAL), the matrix will be postsolved automatically once the MIP search has completed. If the MIP search has not completed, the matrix can be postsolved by calling the XPRSpostsolve (POSTSOLVE) function.

Common Causes of Confusion

It should be noted that most of the library routines described above and in Console and Library Functions which modify the matrix will not work on a presolved matrix. The only exceptions are that cuts may be added using the cut pool manager and that the variable bounds may be changed (using XPRSchgbounds). Both of these expect references to the presolved problem. If one tries to retrieve rows, columns, bounds or the number of these, such information will come from the presolved matrix and not the original. A few functions exist which are specifically designed to work with presolved and scaled matrices, although care should be exercised in using them. Examples of these include the commands XPRSgetpresolvebasis, XPRSgetscaledinfeas, XPRSloadpresolvebasis and XPRSloadpresolvedirs.

Using the Callbacks

Optimizer Output

Console users are constantly provided with information on the standard output device by the Optimizer as it searches for a solution to the current problem. The same output is also available to library users if a log file has been set up using XPRSsetlogfile. However, whilst Console users can respond to this information as it is produced and allow it to influence their session, the same is not immediately true for library users, since their program must be written and compiled before the session is initiated. For such users, a more interactive alternative to the above forms of output is provided by the use of callback functions.

The library callbacks are a collection of functions which allow user-defined routines to be specified to the Optimizer. In this way, users may define their own routines which should be called at various stages during the optimization process, prompting the Optimizer to return to the user's program before continuing with the solution algorithm. Perhaps the three most general of the callback functions are those associated with the search for an LP solution. However, the vast majority of situations in which such routines might be called are associated with the global search, and these will be addressed below.

LP Search Callbacks

In place of catching the standard output from the Optimizer and saving it to a log file, the callback XPRSsetcbmessage allows the user to define a routine which should be called every time a text line is output by the Optimizer. Since this returns the status of each message output, the user's routine could test for error or warning messages and take appropriate action accordingly.
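A minimal sketch of such a routine is shown below. The message type codes used in the test are an assumption and should be checked against the reference section; the printf reporting is illustrative only:

#include <stdio.h>
#include "xprs.h"

void XPRS_CC message_cb(XPRSprob cbprob, void *data, const char *msg,
                        int len, int msgtype)
{
    /* msgtype: assumed here to be 1 = information, 3 = warning, 4 = error */
    if (msgtype == 4)
        printf("Optimizer error: %s\n", msg);
}

/* ... later, before optimizing ... */
XPRSsetcbmessage(prob, message_cb, NULL);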

Alternatively, the pair of functions XPRSsetcblplog and XPRSsetcbbarlog allow the user to respond after each iteration of either the simplex or barrier algorithms respectively. The controls LPLOG and BAROUTPUT may additionally be set to reduce the frequency at which this routine should be called.
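For example, a simple simplex log routine might report the current objective value each time it is called; the interpretation of the return value as an interruption flag is an assumption worth confirming in the reference pages:

int XPRS_CC lplog_cb(XPRSprob cbprob, void *data)
{
    double objval;
    XPRSgetdblattrib(cbprob, XPRS_LPOBJVAL, &objval);
    printf("Current LP objective: %g\n", objval);
    return 0;               /* assumed: a nonzero value would interrupt the optimization */
}

/* call the routine only every 10 lines of the simplex log */
XPRSsetintcontrol(prob, XPRS_LPLOG, 10);
XPRSsetcblplog(prob, lplog_cb, NULL);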

Global Search Callbacks

When a problem with global entities is to be optimized, a large number of LP problems, called nodes, must typically be solved as part of the global tree search. At various points in this process user-defined routines can be called, depending on the callback that is used to specify the routine to the Optimizer. Such a routine may be called whenever a new node is selected using XPRSsetcbprenode, and could be used to change the choice of node. Routines could be specified using either of the XPRSsetcboptnode or XPRSsetcbintsol callbacks, to be invoked whenever an optimal solution or an integer solution is found at a particular node, or using XPRSsetcbinfnode for when a node is found to be infeasible. Whenever a node is cut off as a result of an improved integer solution being found, a routine may be called if specified using XPRSsetcbnodecutoff. Using XPRSsetcbchgbranch or XPRSsetcbchgnode, routines can be specified to be invoked whenever a new branching variable is set or whenever the code backtracks to select a new node. Perhaps more technically, XPRSsetcbsepnode and XPRSsetcbestimate may specify routines determining how to separate on a node or obtaining the estimated degradation at each node from branching on the user's global entities.
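As one example among these, a routine registered with XPRSsetcbintsol might simply record the objective value of each new integer solution as the tree search proceeds (the printf reporting is illustrative):

void XPRS_CC intsol_cb(XPRSprob cbprob, void *data)
{
    double mipobj;
    XPRSgetdblattrib(cbprob, XPRS_MIPOBJVAL, &mipobj);
    printf("New integer solution found, objective %g\n", mipobj);
}

/* register the routine, then solve the LP relaxation and search in one call */
XPRSsetcbintsol(prob, intsol_cb, NULL);
XPRSminim(prob, "g");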

The final global callback, XPRSsetcbgloballog, is more similar to the LP search callbacks, allowing a user's routine to be called whenever a line of the global log is printed. The frequency with which this occurs is set by the control MIPLOG.

Working with the Cut Manager

Cuts and the Cut Pool

The global search for a solution of a (mixed) integer problem involves optimization of a large number of LP problems, known as nodes. This process is often made more efficient by supplying additional rows (constraints) to the matrix which reduce the size of the feasible region, whilst ensuring that it still contains any optimal integer solution. Such additional rows are called cutting planes, or cuts.

By default, cuts are automatically added to the matrix by the Optimizer during a global search to speed up the solution process. However, for advanced users, the Optimizer library provides greater freedom, allowing the possibility of choosing which cuts are to be added at particular nodes, or removing cuts entirely. The cutting planes themselves are held in a cut pool, which may be manipulated using library functions.

Cuts may be added directly to the matrix at a particular node, or may be stored in the cut pool first before subsequently being loaded into the matrix. It often makes little difference which of these two approaches is adopted, although as a general rule, if cuts are cheap to generate, it may be preferable to add the cuts directly to the matrix and delete any redundant cuts after each sub-problem (node) has been optimized. Any cuts added to the matrix at a node and not deleted at that node will automatically be added to the cut pool. If you wish to save all the cuts that are generated, it is better to add the cuts to the cut pool first. Cuts can then be loaded into the matrix from the cut pool. This approach has the advantage that the cut pool routines can be used to identify duplicate cuts and save only the stronger cuts.

To help you keep track of the cuts that have been added to the matrix at different nodes, the cuts can be classified according to a user-defined cut type. The cut type can either be a number such as the node number or it can be a bit map. In the latter case each bit of the cut type may be used to indicate a property of the cut. For example cuts could be classified as local cuts applicable at the current node and its descendants, or as global cuts applicable at all nodes. If the first bit of the cut type is set this could indicate a local cut and if the second bit is set this could indicate a global cut. Other bits of the cut type could then be used to signify other properties of the cuts. The advantage of using bit maps is that all cuts with a particular property can easily be selected, for example all local cuts.

Cut Management Routines

Cuts may be added directly into the matrix at the current node using XPRSaddcuts. Any cuts added to the matrix at a node will be automatically added to the cut pool and hence restored at descendant nodes unless specifically deleted at that node, using XPRSdelcuts. Cuts which have been automatically restored from a parent node may be deleted in this way, as may those added to the current node using XPRSaddcuts or loaded from the cut pool using XPRSloadcuts.
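A sketch of adding a single cut x1 + 2x3 ≤ 4 at the current node is shown below; the argument order shown is an assumption and should be checked against the XPRSaddcuts entry in the reference section before this fragment is relied upon:

int    mtype[]   = { 1 };            /* user-defined cut type */
char   qrtype[]  = { 'L' };          /* 'L' denotes a less-than-or-equal cut */
double drhs[]    = { 4.0 };
int    mstart[]  = { 0, 2 };         /* start (and end) of the cut's coefficients */
int    mcols[]   = { 1, 3 };         /* column indices */
double dmatval[] = { 1.0, 2.0 };     /* coefficient values */
XPRSaddcuts(prob, 1, mtype, qrtype, drhs, mstart, mcols, dmatval);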

It is usually best to delete only those cuts with basic slacks, or else the basis will no longer be valid and it may take many iterations to recover an optimal basis. If the second argument to XPRSdelcuts is set to 1, this will ensure that cuts with non-basic slacks will not be deleted, even if the other controls specify that they should be. It is highly recommended that this is always set to 1.

Cuts may be saved directly to the cut pool using the function XPRSstorecuts. Since cuts added to the cut pool are not automatically added to the matrix at the current node, any such cut must be explicitly loaded into the matrix using XPRSloadcuts before it can become active. If the third argument of XPRSstorecuts is set to 1, the cut pool will be checked for duplicate cuts with a cut type identical to the cuts being added. If a duplicate cut is found, the new cut will only be added if its right hand side value makes the cut stronger. If the cut in the cut pool is weaker than the added cut, it will be removed unless it has already been applied to active nodes of the tree. If, instead, this argument is set to 2, the same test is carried out on all cuts, ignoring the cut type. The routine XPRSdelcpcuts allows the user to remove cuts from the cut pool, unless they have already been applied to active nodes in the Branch and Bound tree.

A list of cuts in the cut pool may be obtained using the command XPRSgetcpcuts, whilst XPRSgetcpcutlist returns a list of their indices. A list of those cuts which are active at the current node may be returned using XPRSgetcutlist.

User Cut Manager Routines

Users may also write their own cut manager routines to be called at various points during the Branch and Bound search. Such routines must be defined in advance using library function calls, similar to callbacks, and are specified according to the point in the search at which they should be called. At the beginning of the process a cut manager initialization routine may be called and should be specified using XPRSsetcbinitcutmgr. In the same way, at the end of the process, XPRSsetcbfreecutmgr allows the specification of a termination routine. The command XPRSsetcbcutmgr allows the definition of a routine which may be called at each node in the tree.

Further details of these functions may be found in Console and Library Functions within the functional reference which follows.

Goal Programming

Overview

Goal programming is an extension of linear programming in which targets are specified for a set of constraints. In goal programming there are two basic models: the pre-emptive (lexicographic) model and the Archimedian model. In the pre-emptive model, goals are ordered according to priorities. The goals at a certain priority level are considered to be infinitely more important than the goals at the next level. With the Archimedian model, weights or penalties for not achieving targets must be specified and one attempts to minimize the weighted sum of goal under-achievement.

In the Optimizer, goals can be constructed either from constraints or from objective functions (N rows). If constraints are used to construct the goals, then the goals are to minimize the violation of the constraints. The goals are met when the constraints are satisfied. In the pre-emptive case we try to meet as many goals as possible, taking them in priority order. In the Archimedian case, we minimize a weighted sum of penalties for not meeting each of the goals. If the goals are constructed from N rows, then, in the pre-emptive case, a target for each N row is calculated from the optimal value for that N row. This may be done by specifying either a percentage or an absolute deviation that is allowed from the optimal value. In the Archimedian case, the problem becomes a multi-objective linear programming problem in which a weighted sum of the objective functions is to be minimized.

In this section an example is provided of each of the four types of goal programming available. Goal programming itself is performed using the XPRSgoal (GOAL) command, whose syntax is described in full in the reference section of this manual.

Pre-emptive Goal Programming Using Constraints

For this case, goals are ranked from most important to least important. Initially we try to satisfy the most important goal. Then amongst all the solutions that satisfy the first goal, we try to come as close as possible to satisfying the second goal. We continue in this fashion until the only way we can come closer to satisfying a goal is to increase the deviation from a higher priority goal.

An example of this is as follows:

goal 1 (G1): 7x + 3y ≥ 40
goal 2 (G2): 10x + 5y = 60
goal 3 (G3): 5x + 4y ≥ 35
LIMIT: 100x + 60y ≤ 600

Initially we try to meet the first goal (G1), which can be done with x=5.0 and y=1.667, but this solution does not satisfy goal 2 (G2) or goal 3 (G3). If we try to meet goal 2 while still meeting goal 1, the solution x=6.0 and y=0.0 will satisfy both. However, this does not satisfy goal 3, so we repeat the process. On this occasion no solution exists which satisfies all three goals.

Archimedian Goal Programming Using Constraints

We must now minimize a weighted sum of violations of the constraints. Suppose that we have the following problem, this time with penalties attached:

                                 Penalties
goal 1 (G1): 7x + 3y ≥ 40            8
goal 2 (G2): 10x + 5y = 60           3
goal 3 (G3): 5x + 4y ≥ 35            1
LIMIT:       100x + 60y ≤ 600

Then the solution will be the solution of the following problem:

minimize:    8d1 + 3d2 + 3d3 + 1d4
subject to:  7x + 3y + d1 ≥ 40
             10x + 5y + d2 - d3 = 60
             5x + 4y + d4 ≥ 35
             100x + 60y ≤ 600
             d1 ≥ 0, d2 ≥ 0, d3 ≥ 0, d4 ≥ 0

In this case a penalty of 8 units is incurred for each unit that 7x + 3y is less than 40, and so on. The final solution will minimize the weighted sum of the penalties. Penalties are also referred to as weights. This solution will be x=6, y=0, d1=d2=d3=0 and d4=5, which means that the first and second most important constraints can be met, while the right hand side of the third constraint must be reduced by 5 units in order for it to be met.

Note that if the problem is infeasible after all the goal constraints have been relaxed, then no solution will be found.

Pre-emptive Goal Programming Using Objective Functions

Suppose that we now have a set of objective functions of which we know which are the most important. As in the pre-emptive case with constraints, goals are ranked from most to least important. Initially we find the optimal value of the first goal. Once we have found this value we turn this objective function into a constraint such that its value does not differ from its optimal value by more than a certain amount. This can be a fixed amount (or absolute deviation) or a percentage of (or relative deviation from) the optimal value found before. Now we optimize the next goal (the second most important objective function) and so on.

For example, suppose we have the following problem:

                                   Sense   D/P   Deviation
goal 1 (OBJ1): 5x + 2y - 20         max     P       10
goal 2 (OBJ2): -3x + 15y - 48       min     D        4
goal 3 (OBJ3): 1.5x + 21y - 3.8     max     P       20
LIMIT:         42x + 13y ≤ 100

For each N row the sense of the optimization (max or min) and the percentage (P) or absolute (D) deviation must be specified. For OBJ1 and OBJ3 percentage deviations of 10% and 20% respectively have been specified, whilst for OBJ2 an absolute deviation of 4 units has been specified.

We start by maximizing the first objective function, finding that the optimal value is -4.615385. As a 10% deviation has been specified, we change this objective function into the following constraint:

5x + 2y - 20 ≥ -4.615385 - 0.1 × 4.615385
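(that is, 5x + 2y - 20 ≥ -5.076923)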

Now that we know that for any solution the value for the former objective function must be within 10% of the best possible value, we minimize the next most important objective function (OBJ2) and find the optimal value to be 51.133603. Goal 2 (OBJ2) may then be changed into a constraint such that:

-3x + 15y - 48 ≤ 51.133603 + 4

and in this way we ensure that for any solution, the value of this objective function will not be greater than the best possible minimum value plus 4 units.

Finally we have to maximize OBJ3. An optimal value of 141.943995 will be obtained. Since a 20% allowable deviation has been specified, this objective function may be changed into the following constraint:

1.5x + 21y - 3.8 ≥ 141.943995 - 0.2 × 141.943995
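(that is, 1.5x + 21y - 3.8 ≥ 113.555196)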

The solution of this problem is x=0.238062 and y=6.923186.

Archimedian Goal Programming Using Objective Functions

In this, the final case, we optimize a weighted sum of objective functions. In other words we solve a multi-objective problem. Consider the following:

                                   Weights   Sense
goal 1 (OBJ1): 5x + 2y - 20           100     max
goal 2 (OBJ2): -3x + 15y - 48           1     min
goal 3 (OBJ3): 1.5x + 21y - 3.8      0.01     max
LIMIT:         42x + 13y ≤ 100

In this case we have three different objective functions that will be combined into a single objective function by weighting them by the values given in the weights column. The solution of this model is one that minimizes:

1(-3x + 15y - 48) - 100(5x + 2y - 20) - 0.01(1.5x + 21y - 3.8)

The resulting values that each of the objective functions will have are as follows:

OBJ1: 5x + 2y - 20 = -4.615389
OBJ2: -3x + 15y - 48 = 67.384613
OBJ3: 1.5x + 21y - 3.8 = 157.738464

The solution is x=0.0 and y=7.692308.


