Optimality, Infeasibility and Unboundedness



The Solution Process

Once a problem matrix has been constructed within the Optimizer, typically a user will call one of the optimization routines to attempt to solve it and, having done so, would be presented with an optimal solution which can then be used. Optimization is not always this simple, however, and many things can and do go wrong during the solution process. Very often in real situations problems are large and will take some time to solve. This is particularly true with complicated MIP problems. For problems that seem to be taking an unusually long time to solve, the optimization time can often be improved with some thought and careful setting of the control parameters. This is the subject of the following chapter.

Console users can safely terminate the solution process at any point using the CTRL-C keystroke. This allows the current point in the process to be saved before the Optimizer terminates. The process can subsequently be continued if desired by reissuing the command. Whilst library users do not have this facility, limits can be placed on the amount of time taken for an Optimizer run either by setting the control LPITERLIMIT, which provides an upper bound on the number of simplex iterations carried out, or BARITERLIMIT, which performs the same role for the Newton barrier method. Alternatively the control MAXTIME can be set to impose a "real time" limit on the process. If the solution process ever ends prematurely, it is worth checking that these three controls have been set to reasonable values and that it is not one of these that is interrupting the optimization.

Assuming none of these problems arise and the optimization run completes successfully, there are a number of possible outcomes for the resulting solution. Ideally, an optimal solution will have been found, confirming that the problem was well-posed and providing an answer that is usable. However, this is not the only outcome from the process. It is equally possible that no solution exists, in which case the problem is said to be infeasible. Occasionally, certain models admit solutions whose objective value can be improved without limit, in which case the problem is said to be unbounded. In this chapter we briefly discuss some of these possibilities.

Infeasibility

A problem is said to be infeasible if no solution exists which satisfies all the constraints. The Xpress‑Optimizer provides a number of means for diagnosing infeasibilities in models, depending on the point in the solution process at which they are detected.

Diagnosing Infeasibility During Presolve

The presolve facility, if used (see Working with Presolve), makes a number of checks for infeasibility. If an infeasibility is detected, it is possible to trace this back and uncover the cause of it. This diagnosis is carried out whenever the control parameter TRACE is set to 1 before the optimization routine XPRSmaxim (MAXIM) or XPRSminim (MINIM) is called. In such a situation, the cause of the infeasibility is then reported as part of the output from the optimization routine and the problem may be amended if necessary.

Irreducible Infeasible Sets

Another general technique to analyze infeasibility is to find a small portion of the matrix that is itself infeasible. The Optimizer does this by finding irreducible infeasible sets (IISs). An IIS is a minimal set of constraints and variable bounds which is infeasible, but becomes feasible if any constraint or bound in it is removed.

A model may have several infeasibilities. Repairing a single IIS may not make the model feasible, for which reason the Optimizer can find an IIS for each of the infeasibilities in a model with the use of XPRSiis (NUMIIS). The search is limited by the parameter MAXIIS, which sets the maximum number of IISs to be found. In some cases an infeasibility can be represented by several different IISs and Xpress‑MP will attempt to find the IIS with the smallest number of constraints in order to make the infeasibility easier to diagnose. The IISs may be retrieved by library users with XPRSgetiis.

Once an IIS has been found it is useful to know if dropping a single constraint or bound will completely remove the infeasibility represented by the IIS. An attempt is made to identify a subset of the IIS called a sub-IIS isolation. Removing any member of the sub-IIS isolation will remove all infeasibilities in the IIS without increasing the infeasibilities of other IISs. The sub-IIS isolations thus indicate the likely cause of each independent infeasibility and give an indication of which constraint or bound to drop. It is not always possible to find sub-IIS isolations, but if these subsets exist the members will be marked with a star. The sub-IIS isolations are searched for only if all independent IISs have been found.

Integer Infeasibility

In certain situations a MIP problem may turn out to be infeasible, while the equivalent LP problem (the LP relaxation) yields an optimal solution. In such circumstances the feasible region for the LP relaxation, while nontrivial, contains no solutions which satisfy the various integrality constraints, and a solution may only be recovered by dropping either one of these or one of the other problem constraints. These are perhaps the worst kind of infeasibility, as the cause is much harder to determine.

Unboundedness

A problem is said to be unbounded if the objective function may be improved indefinitely whilst still satisfying the constraints of the model. When this occurs it usually signifies a problem in the formulation of the model being solved rather than a likely source of infinite profit. If the Optimizer detects unboundedness, the user should look again at the model being presented for missing, insufficient or inaccurate constraints, or data.

Scaling

When they are first formulated, problems sometimes contain numerical values which vary over many orders of magnitude. For example:

maximize:    10^6x + 7y = z
subject to:  10^6x + 0.1y ≤ 100
             10^7x + 8y ≤ 500
             10^12x + 10^6y ≤ 50·10^6

Here the objective coefficients, constraint coefficients, and RHS values range between 0.1 and 10^12. We say that the model is badly scaled.

During the optimization process, the Optimizer must perform many calculations involving subtraction and division of quantities derived from the constraints and objective function. When these calculations are carried out with values differing greatly in magnitude, the finite precision of computer arithmetic and the fixed tolerances employed by Xpress‑MP result in a build up of rounding errors to a point where the Optimizer can no longer reliably find the optimal solution.
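The loss of precision can be seen directly in double-precision arithmetic. The snippet below is pure Python and purely illustrative (it is not Optimizer code); it adds and then subtracts a coefficient of the magnitude seen in the example above:

```python
# Illustrative only: the effect of mixing magnitudes in double-precision
# arithmetic, the same finite-precision arithmetic used by any solver.
big = 1.0e12   # a large coefficient, as in the badly scaled example
small = 0.1    # a coefficient twelve orders of magnitude smaller

# Adding and then subtracting `big` should recover `small` exactly, but
# the intermediate sum cannot represent the low-order digits of `small`.
recovered = (big + small) - big

print(recovered)               # close to, but not exactly, 0.1
print(abs(recovered - small))  # a nonzero rounding error
```

With many thousands of such operations per simplex iteration, errors of this kind accumulate, which is why badly scaled models can defeat the fixed tolerances mentioned above.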

To minimize undesirable effects, when formulating your problem try to choose units (or equivalently scale your problem) so that objective coefficients and matrix elements do not range by more than 10^6, and RHS and non-infinite bound values do not exceed 10^8. One common problem is the use of large finite bound values to represent infinite bounds (i.e., no bounds) — if you have to enter explicit infinite bounds, make sure you use values greater than 10^20 which will be interpreted as infinity by the Optimizer. Avoid having large objective values that have a small relative difference — this makes it hard for the dual simplex algorithm to solve the problem. Similarly, avoid having large RHS/bound values that are close together.

In the above example, both the x-coefficient and the last constraint might be better scaled. Issues arising from the first may be overcome by column scaling, effectively a change of coordinates, replacing 10^6x by some new variable. Those from the second may be overcome by row scaling.
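The idea can be sketched in a few lines. The code below applies geometric-mean row and then column scaling to the example matrix; it is an illustration of the principle only, not the Optimizer's implementation (the Optimizer's own schemes are selected via the SCALING control):

```python
import math

# The badly scaled example: constraint matrix and RHS.
A = [[1e6, 0.1],
     [1e7, 8.0],
     [1e12, 1e6]]
rhs = [100.0, 500.0, 50e6]

def coeff_range(M):
    """Ratio of the largest to the smallest nonzero magnitude."""
    vals = [abs(v) for row in M for v in row if v != 0.0]
    return max(vals) / min(vals)

# Row scaling: divide each row (and its RHS entry) by the geometric
# mean of the row's smallest and largest magnitudes.
for i, row in enumerate(A):
    g = math.sqrt(min(abs(v) for v in row) * max(abs(v) for v in row))
    A[i] = [v / g for v in row]
    rhs[i] /= g

# Column scaling: the same for each column, which amounts to a change
# of units for the corresponding variable.
for j in range(len(A[0])):
    col = [abs(A[i][j]) for i in range(len(A))]
    g = math.sqrt(min(col) * max(col))
    for i in range(len(A)):
        A[i][j] /= g

print(coeff_range(A))  # far smaller than the original range of 1e13
```

After these two passes every coefficient lies within a narrow band around 1, so subtractions and divisions during pivoting no longer mix wildly different magnitudes.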

Xpress‑MP also incorporates a number of automatic scaling options to improve the scaling of the matrix. However, the general techniques described below cannot replace attention to the choice of units specific to your problem. The best option is to scale your problem following the advice above, and use the automatic scaling provided by the Optimizer.

The form of scaling employed by the Optimizer depends on the bits of the control parameter SCALING which are set. To get a particular form of scaling, set SCALING to the sum of the values corresponding to the scaling required. For instance, to get row scaling, column scaling and then row scaling again, set SCALING to 1+2+4=7. The problem is scaled during XPRSreadprob (READPROB), and may be rescaled later using XPRSscale (SCALE).

Bit   Value   Type of Scaling
0     1       Row scaling.
1     2       Column scaling.
2     4       Row scaling again.
3     8       Maximin.
4     16      Curtis-Reid.
5     32      0 – scale by geometric mean;
              1 – scale by maximum element
              (not applicable if maximin or Curtis-Reid is specified).
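The bit arithmetic can be illustrated with a small helper. This is hypothetical Python, not part of the Xpress API; the dictionary simply restates the table above:

```python
# Hypothetical helper: decode a SCALING value into the options it enables.
SCALING_BITS = {
    1: "row scaling",
    2: "column scaling",
    4: "row scaling again",
    8: "maximin",
    16: "Curtis-Reid",
    32: "scale by maximum element (geometric mean if unset)",
}

def decode_scaling(value):
    """Return the scaling options whose bits are set in `value`."""
    return [name for bit, name in SCALING_BITS.items() if value & bit]

print(decode_scaling(7))   # row, column, then row scaling again
print(decode_scaling(35))  # the default: 1 + 2 + 32
```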

Typically one may want to rescale the matrix following an alteration (using XPRSalter (ALTER)). By setting SCALING before calling XPRSscale (SCALE), a different scaling strategy can be employed from that used originally.

The default value of SCALING is 35, so row and column scaling are done by the maximum element method. If scaling is not required, SCALING must be set to 0 before any call to XPRSreadprob (READPROB).

The scaling is determined by the matrix elements only. The objective coefficients, right hand side values and bound values do not influence the scaling. Scaling of integer entities is not supported in XPRSglobal (GLOBAL), although a nonzero SCALING will scale the LP variables. This means that the scaling of integer entities should be considered carefully when formulating MIP problems.

Accuracy

The accuracy of the computed variable values and objective function value is affected in general by the various tolerances used in the Optimizer. Of particular relevance to MIP problems are the accuracy and cut off controls. The MIPRELCUTOFF control has a non-zero default value, which will prevent solutions being found that are very close to, although better than, a known solution. This control can of course be set to zero if required.

However, for all problems it is probably ambitious to expect a level of accuracy in the objective of more than 1 in 1,000,000. Bear in mind that the default feasibility and optimality tolerances are 10^-6. And you are lucky if you can compute the solution values and reduced costs to an accuracy better than 10^-8 anyway (particularly for large models). It depends on the condition number of the basis matrix and the size of the RHS and cost coefficients. Under reasonable assumptions, an upper bound for the computed variable value accuracy is 4·K·‖RHS‖/10^16, where ‖RHS‖ denotes the L-infinity norm of the RHS and K is the basis condition number. The basis condition number can be found using the XPRSbasiscondition (BASISCONDITION) function.
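The bound can be evaluated directly once the basis condition number is known. The sketch below uses illustrative values; in practice K would come from XPRSbasiscondition:

```python
# Sketch of the accuracy bound quoted above:
# worst-case error in computed variable values ~ 4 * K * ||RHS||_inf / 1e16.
def variable_accuracy_bound(condition_number, rhs):
    """Estimate the worst-case absolute error in computed variable values."""
    rhs_inf_norm = max(abs(v) for v in rhs)
    return 4.0 * condition_number * rhs_inf_norm / 1e16

# Illustrative values, not from a real model:
K = 1e8                      # a moderately ill-conditioned basis
rhs = [100.0, 500.0, 50e6]   # RHS values, largest magnitude 5e7
print(variable_accuracy_bound(K, rhs))  # 4 * 1e8 * 5e7 / 1e16 = 2.0
```

Note how quickly the bound deteriorates: with an ill-conditioned basis and large RHS values, even the leading digits of a variable value may be in doubt, which reinforces the scaling advice given earlier.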

You should also bear in mind that the matrix is scaled, which would normally have the effect of increasing the apparent feasibility tolerance.
