Using Feasible Direction to Find All Alternative Extreme Optimal Points for Linear Programming Problem

The problem of linear programming (LP) is one of the earliest formulated problems in mathematical programming: a linear function has to be maximized (or minimized) over a convex constraint polyhedron X. The simplex algorithm was proposed early on for solving this problem by moving toward a solution along the exterior of the constraint polyhedron X. In 1984 the area of linear programming underwent a considerable change of orientation when Karmarkar [1984] introduced an algorithm for solving (LP) problems which moves through the interior of the polyhedron. Karmarkar's algorithm and its subsequent variants established a new class of algorithms for solving linear programming problems known as the interior point methods. The solution of a linear program is sometimes not unique, and decisions may be taken based on these alternatives. In this study we present a feasible direction method for finding all alternative optimal extreme points of the linear programming problem. The method is based on the conjugate gradient projection method for solving non-linear programming problems with linear constraints.


INTRODUCTION
The problem of linear programming (LP) is one of the earliest formulated problems in mathematical programming: a linear function has to be maximized (or minimized) over a convex constraint polyhedron X. The simplex algorithm was proposed early on for solving this problem by moving toward a solution along the exterior of the constraint polyhedron X. In 1984 the area of linear programming underwent a considerable change of orientation when Karmarkar [1984] introduced an algorithm for solving (LP) problems which moves through the interior of the polyhedron. Karmarkar's algorithm and its subsequent variants [1,2] established a new class of algorithms for solving linear programming problems known as the interior point methods.
The solution of a linear program is sometimes not unique, and decisions may be taken based on these alternatives. In this study we present a feasible direction method for finding all alternative optimal extreme points of the linear programming problem. The method is based on the conjugate gradient projection method for solving non-linear programming problems with linear constraints [3,4].

DEFINITIONS AND THEORY
The linear programming problem (LP) arises when a linear function is to be maximized over a convex constraint polyhedron X. This problem can be formulated as follows:

    maximize c^T x subject to A x ≤ b,   (1)

where c, x ∈ R^n, A is an (m+n) × n matrix, and b ∈ R^(m+n); we point out that the nonnegativity conditions are included in the set of constraints. This problem can also be written in the form:

    maximize c^T x subject to a_i^T x ≤ b_i, i = 1, 2, ..., m+n,   (2)

where a_i^T represents the i-th row of the given matrix A. In the non-degenerate case, an extreme point (vertex) of X lies on some n linearly independent constraints of (2).

We shall give an iterative method for solving this problem; our task is to find all alternative optimal extreme points of this program. The method starts with an initial feasible point, and a sequence of feasible directions toward optimality is then generated to find all optimal extremes. In general, if x_{k-1} is the feasible point obtained at iteration k-1 (k = 1, 2, ...), then at iteration k our procedure finds a new feasible point x_k given by

    x_k = x_{k-1} + α_{k-1} d_{k-1},   (3)

where d_{k-1} is the direction vector along which we move, given by

    d_{k-1} = H_{k-1} c.   (4)

Here H_{k-1} is an n × n symmetric matrix given by

    H_{k-1} = I − A_r^T (A_r A_r^T)^(-1) A_r,   (5)

where I is the n × n identity matrix and A_r is the q × n submatrix of A whose rows are the q constraints active at the current point (when q = 0 we take H_{k-1} = I). The step length is the largest step that preserves feasibility:

    α_{k-1} = min { (b_i − a_i^T x_{k-1}) / (a_i^T d_{k-1}) : a_i^T d_{k-1} > 0 }.   (7)

This relation states that α_{k-1} is always positive; Proposition 2-2 below shows that such a positive value must exist if a feasible point exists. By the well-known Kuhn–Tucker conditions [5,6], for the point x_k to be an optimal solution of the linear program (1) there must exist u ≥ 0 such that A_r^T u = c, or simply

    u = (A_r A_r^T)^(-1) A_r c,   (8)

where A_r is the submatrix of the given matrix A containing only the coefficients of the set of constraints active at the current point x_k. This fact will act as the stopping rule of our proposed algorithm. We also point out that H_{k-1} = H_{k-1}^2, so we have the following proposition.
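As a concrete illustration of relations (3)–(7), the following sketch (Python/NumPy; the helper names `projection_matrix` and `max_step` are ours, not the paper's) computes the projection matrix H, the direction d = Hc, and the step length α on a small box-constrained example:

```python
import numpy as np

def projection_matrix(A_r):
    """Relation (5): H = I - A_r^T (A_r A_r^T)^{-1} A_r projects any
    vector onto the null space of the active-constraint rows A_r."""
    n = A_r.shape[1]
    return np.eye(n) - A_r.T @ np.linalg.inv(A_r @ A_r.T) @ A_r

def max_step(A, b, x, d, tol=1e-12):
    """Relation (7): the largest alpha keeping x + alpha*d feasible for
    A x <= b, i.e. the minimum slack ratio over rows with a_i^T d > 0."""
    ratios = [s / ad for s, ad in zip(b - A @ x, A @ d) if ad > tol]
    return min(ratios) if ratios else np.inf

# Tiny example: maximize x1 + x2 over the unit box 0 <= x <= 1, written
# as A x <= b with the nonnegativity conditions folded into A as in (1).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])

x0 = np.array([0.2, 0.2])        # interior feasible starting point
d0 = c                           # no active constraints: H_0 = I, d_0 = c
alpha0 = max_step(A, b, x0, d0)  # step that first reaches the boundary
x1 = x0 + alpha0 * d0            # relation (3): the next feasible point
```

Here the step α_0 = 0.8 carries x_0 to the vertex (1, 1), where both constraints x_1 ≤ 1 and x_2 ≤ 1 become active simultaneously.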

Proposition 2-1:
Any solution x k given by equation (3) is feasible and increases the objective function value.
Proof: Since H_{k-1} is symmetric and idempotent, we have c^T d_{k-1} = c^T H_{k-1} c = (H_{k-1} c)^T (H_{k-1} c) = ||H_{k-1} c||^2 ≥ 0, and hence c^T x_k = c^T x_{k-1} + α_{k-1} c^T d_{k-1} ≥ c^T x_{k-1}. This proves that x_k increases the objective function. Next, we shall prove that x_k is a feasible point. For x_k to be feasible it must satisfy all constraints of problem (2); that is, a_i^T x_k = a_i^T x_{k-1} + α_{k-1} a_i^T d_{k-1} ≤ b_i must hold for all i ∈ {1, 2, ..., m+n}. If this failed for some i, then a_i^T d_{k-1} > 0 and α_{k-1} > (b_i − a_i^T x_{k-1}) / (a_i^T d_{k-1}), which would contradict our definition of α_{k-1} in relation (7). Next, we shall give a result that guarantees the existence of α_{k-1} defined by relation (7) above.
Proposition 2-2:
If a feasible point x_k exists, then α_{k-1} as defined by relation (7) exists. Indeed, assuming that a_i^T d_{k-1} ≤ 0 for all i ∈ {1, 2, ..., m+n} contradicts the fact that the norm ||H_{k-1} c|| must be positive, which implies that a_i^T d_{k-1} ≤ 0 cannot hold for every i. Thus, if a feasible point x_k exists, then α_{k-1} as defined by relation (7) must exist.
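The two facts used in the propositions above, the idempotence H_{k-1} = H_{k-1}^2 and the ascent property c^T d_{k-1} = ||H_{k-1} c||^2 ≥ 0, can be checked numerically. A minimal sketch with one assumed active constraint a^T = (1, 1):

```python
import numpy as np

# One assumed active constraint with row a^T = (1, 1): A_r is 1 x 2.
A_r = np.array([[1.0, 1.0]])
H = np.eye(2) - A_r.T @ np.linalg.inv(A_r @ A_r.T) @ A_r   # relation (5)

# Idempotence H = H^2, used in the proof of Proposition 2-1.
assert np.allclose(H, H @ H)

# Ascent property: c^T d = c^T H c = ||H c||^2 >= 0, so the objective
# never decreases along d = H c; also A_r d = 0, so the direction keeps
# the active constraint satisfied with equality.
c = np.array([3.0, -1.0])
d = H @ c
assert np.isclose(c @ d, np.linalg.norm(d) ** 2)
assert c @ d >= 0
assert np.allclose(A_r @ d, 0)
```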
Based on the above results, we now give a full description of our algorithm for solving (LP) problems and finding all alternative optimal extreme points in two phases.

New algorithm for solving (LP) problems:
Our algorithm for finding all alternative optimal extreme points of an (LP) problem consists of the following two phases.

Phase I
Step 0: Set k = 1, H_0 = I, d_0 = c, let x_0 be an initial feasible point, and apply relation (7) to compute α_0.
Step 1: Apply relation (3) to find a new solution x k .
Step 2: Apply relation (8) to compute u. If u ≥ 0, stop: the current solution x_k is optimal; otherwise go to Step 3.
Step 3: Recompute H_k by relation (5) from the constraints active at x_k, set d_k = H_k c, apply relation (7) to compute α_k, set k = k + 1, and return to Step 1.
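Phase I can be sketched as a Rosen-style gradient projection loop. This is our own assembly of Steps 0–3 under stated assumptions: the function name and active-set bookkeeping are ours, and a constraint with a negative multiplier is dropped before re-projecting, since the paper specifies only the relations used at each step.

```python
import numpy as np

def phase_one(A, b, c, x, tol=1e-9, max_iter=200):
    """Iterate x_k = x_{k-1} + alpha d_{k-1} with d = H c (relations
    (3)-(5)), taking the maximal feasible step (7), until H c = 0 and the
    multipliers u of relation (8) are all nonnegative (Step 2)."""
    n = len(c)
    for _ in range(max_iter):
        work = list(np.flatnonzero(np.abs(A @ x - b) <= tol))  # active set
        while True:
            if work:
                A_r = A[work]
                H = np.eye(n) - A_r.T @ np.linalg.pinv(A_r @ A_r.T) @ A_r
            else:
                H = np.eye(n)
            d = H @ c                                   # relation (4)
            if np.linalg.norm(d) > tol:
                break                    # feasible ascent direction found
            if not work:
                return x                 # c = 0: every feasible point optimal
            u = np.linalg.pinv(A_r @ A_r.T) @ A_r @ c   # relation (8)
            if u.min() >= -tol:
                return x                 # Step 2: u >= 0, x_k is optimal
            work.pop(int(np.argmin(u)))  # drop a constraint, re-project
        ratios = [s / ad for s, ad in zip(b - A @ x, A @ d) if ad > tol]
        if not ratios:
            raise ValueError("objective unbounded along d")
        x = x + min(ratios) * d                         # relation (3)
    raise RuntimeError("iteration limit reached")

# Example: maximize 2 x1 + x2 subject to x1 + x2 <= 1, x >= 0.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
c = np.array([2.0, 1.0])
x_star = phase_one(A, b, c, np.array([0.1, 0.1]))
```

On this example the iterates move from the interior point (0.1, 0.1) to the face x_1 + x_2 = 1 and then along it to the optimal vertex (1, 0), where both multipliers in (8) are nonnegative.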

Phase II
Assume that q is the number of constraints active at the point x_k. If q < n and relation (7) is satisfied, this indicates that x_k is an optimal non-extreme point: the objective function cannot be improved along any feasible direction, and we have H_k c = 0 at x_k. The columns of H_k then represent the allowed directions of movement d_k from x_k toward the optimal extremes x*, which have the form x* = x_k + α* d_k. The scalar α* is free and is obtained by solving the system of linear inequalities α A d_k ≤ b − A x_k; the boundary points of the resulting interval of α define α*.
Remark 3-1: In the case when q = n and relation (7) is satisfied, x_k is an optimal extreme point; the columns of H_k then have to be computed via a subset of the constraints active at x_k such that relation (8) is satisfied.
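The Phase II computation can be sketched as follows: at an optimal non-extreme point with H_k c = 0, each column of H_k is a direction of movement, and the endpoints of the α-interval defined by α A d_k ≤ b − A x_k give the optimal extremes. The example below (the helper name `step_interval` is ours) uses an objective parallel to a face of the polyhedron, so the optimal set is a whole edge:

```python
import numpy as np

def step_interval(A, b, x, d, tol=1e-12):
    """Interval of alpha with A(x + alpha d) <= b, i.e. the solution set
    of alpha * (A d) <= b - A x; its finite endpoints are the alpha*."""
    lo, hi = -np.inf, np.inf
    for ad, s in zip(A @ d, b - A @ x):
        if ad > tol:
            hi = min(hi, s / ad)
        elif ad < -tol:
            lo = max(lo, s / ad)
    return lo, hi

# Example with alternative optima: maximize x1 + x2 with x1 + x2 <= 1,
# x >= 0; every point of the edge between (1, 0) and (0, 1) is optimal.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])

x_opt = np.array([0.5, 0.5])   # optimal but not a vertex (q = 1 < n = 2)
A_r = A[[0]]                   # the single active row: x1 + x2 = 1
H = np.eye(2) - A_r.T @ np.linalg.inv(A_r @ A_r.T) @ A_r
assert np.allclose(H @ c, 0)   # H_k c = 0: no further ascent possible

d = H[:, 0]                    # a column of H_k: direction (0.5, -0.5)
lo, hi = step_interval(A, b, x_opt, d)
vertex_a = x_opt + hi * d      # one optimal extreme point
vertex_b = x_opt + lo * d      # the other optimal extreme point
```

The two endpoints α* = ±1 recover exactly the alternative optimal extremes (1, 0) and (0, 1) of this program.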

CONCLUSION
In this study we gave a feasible direction method for finding all alternative optimal extreme points of a linear programming problem, since decisions may be taken depending on these alternatives. Our method is based on the conjugate gradient projection method and does not depend on the simplex version of the linear program.