A New Three-Term Preconditioned Gradient Memory Algorithm for Nonlinear Optimization Problems

Abstract: In the present study, we propose a three-term preconditioned gradient memory algorithm for solving nonlinear optimization problems. The new algorithm subsumes several other families of nonlinear preconditioned gradient memory algorithms as subfamilies, and it employs Powell's restart criterion together with an inexact Armijo line search. Numerical experiments on twenty-one well-known test functions of various dimensions show that the new algorithm is more stable and efficient than the standard three-term CG algorithm.


INTRODUCTION
We consider the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function and $\mathbb{R}^n$ is the $n$-dimensional Euclidean space. Conjugate gradient methods are very useful for solving (1). They are of the form

$$x_{k+1} = x_k + \lambda_k d_k, \qquad (2)$$
$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad (3)$$

where $g_k$ denotes the gradient $\nabla f(x_k)$, $\lambda_k$ is a step-length obtained by a line search, and $\beta_k$ is a scalar. The memory gradient algorithm for problem (1) was first presented by Cragg and Levy [4] as an extension of the ordinary gradient method. This method has the advantage of fast convergence, since it produces a quadratically convergent sequence of points.
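For concreteness, one iteration of the recurrence (2)-(3) can be sketched as follows. This is a minimal sketch: the Fletcher-Reeves choice of $\beta_k$ is one classical option, and the helper names `f_grad` and `line_search` are illustrative assumptions, not part of the original paper.

```python
import numpy as np

def cg_step(x, d, g, f_grad, line_search):
    """One conjugate gradient iteration:
        x_{k+1} = x_k + lambda_k d_k,
        d_{k+1} = -g_{k+1} + beta_k d_k,
    with the classical Fletcher-Reeves choice
        beta_k = ||g_{k+1}||^2 / ||g_k||^2.
    """
    lam = line_search(x, d)           # step-length lambda_k from a line search
    x_new = x + lam * d
    g_new = f_grad(x_new)
    beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves scalar beta_k
    d_new = -g_new + beta * d
    return x_new, d_new, g_new
```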
A new three-term memory gradient method for problem (1) has search directions defined by

$$x_{k+1} = x_k + \lambda_k d_k, \qquad (4)$$
$$d_{k+1} = -g_{k+1} + \beta_k d_k + \alpha_k d_{k-1}, \qquad d_1 = -g_1. \qquad (5)$$

For a given $H_1$, the matrix $H_k$ is updated to $H_{k+1}$ by a formula from the class of self-scaling updates satisfying the QN-like condition $H_{k+1} y_k = s_k$, where $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, given in Cohen [3]. There are infinitely many possible updates which satisfy this QN-condition; the class of these updates can be written in a parameterized form (see Gill and Murray [5]). The choice of the scalar parameter $\sigma_k$ is made primarily because in this case $\sigma_k$ is the quotient of two quantities which are already computed in the updating formula. For more details, see Yabe and Takano [9].
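As an illustration of one member of this update class, here is a sketch of a self-scaling BFGS update of the inverse-Hessian approximation $H_k$, using the Oren-Luenberger scaling. This particular update is an assumption chosen for illustration, not necessarily the one used in [3] or [5]; it does satisfy the QN condition $H_{k+1} y_k = s_k$, and its scaling $\sigma_k$ is a quotient of two quantities the update already computes, which is the property noted above.

```python
import numpy as np

def self_scaling_bfgs(H, s, y):
    """Self-scaling BFGS update of the inverse-Hessian approximation
    (illustrative member of the self-scaling class).

    Returns H_{k+1} satisfying the QN condition H_{k+1} y_k = s_k, with the
    Oren-Luenberger scaling sigma_k = (s_k^T y_k) / (y_k^T H_k y_k).
    """
    sy = s @ y                        # s_k^T y_k
    Hy = H @ y                        # H_k y_k
    sigma = sy / (y @ Hy)             # self-scaling parameter sigma_k
    rho = 1.0 / sy
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return sigma * (V @ H @ V.T) + rho * np.outer(s, s)
```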
In this study, we consider a new three-term preconditioned conjugate gradient (PCG) algorithm for problem (1) whose search directions are defined by

$$d_{k+1} = -H_{k+1} g_{k+1} + \beta_k d_k + \alpha_k d_{k-1}, \qquad d_1 = -H_1 g_1, \qquad (11)$$

using a new line-search parameter and a positive definite matrix $H_k$.

The Three-Term Memory Gradient Algorithm (TMG):
Consider the three-term memory gradient method (4) and (5). Conditions are imposed on $\beta_k$ and $\alpha_k$ to ensure that $d_k$ is a sufficient descent direction at the point $x_k$:

$$|\beta_k| \le \Delta_1 \frac{\|g_k\|}{\|d_{k-1}\|}, \qquad |\alpha_k| \le \Delta_2 \frac{\|g_k\|}{\|d_{k-2}\|}, \qquad (12)$$

where $\Delta_1 > 0$ and $\Delta_2 > 0$ are constants with $\Delta_1 + \Delta_2 < 1$. It follows from (12) that

$$g_k^T d_k \le -(1 - \Delta_1 - \Delta_2)\|g_k\|^2 < 0.$$
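The descent bound follows from the Cauchy-Schwarz inequality applied to the two memory terms. A minimal sketch of a direction respecting (12) is given below; choosing $\beta_k$ and $\alpha_k$ exactly at their bounds, and the values of $\Delta_1, \Delta_2$, are illustrative assumptions.

```python
import numpy as np

def three_term_direction(g, d1, d2, delta1=0.4, delta2=0.4):
    """Three-term memory gradient direction (5) with beta_k and alpha_k
    chosen at the bounds (12); by Cauchy-Schwarz this guarantees
        g_k^T d_k <= -(1 - delta1 - delta2) * ||g_k||^2,
    a sufficient descent direction whenever delta1 + delta2 < 1.
    """
    gnorm = np.linalg.norm(g)
    beta = delta1 * gnorm / np.linalg.norm(d1)   # |beta_k| at its bound in (12)
    alpha = delta2 * gnorm / np.linalg.norm(d2)  # |alpha_k| at its bound in (12)
    return -g + beta * d1 + alpha * d2
```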

Theorem:
If $x_k$ is not a stationary point for problem (1), then $g_k^T d_k < 0$; that is, $d_k$ is a descent direction.
Moreover, if $d_k$ is descent, then the line search along $d_k$ yields $f(x_{k+1}) < f(x_k)$. For the proof of this theorem, see [6].

Outline of the Three-Term Memory Gradient Algorithm (TMG):
Step 1: Let $x_1$ be a given initial point; set $d_1 = -g_1$ and $k = 1$.
Step 2: If $\|g_k\| \le \epsilon$, stop; otherwise continue.
Step 3: Compute the step-length $\lambda_k$ by a line search and set $x_{k+1} = x_k + \lambda_k d_k$ as in (4).
Step 4: Set $d_{k+1} = -g_{k+1} + \beta_k d_k + \alpha_k d_{k-1}$ as in (5), set $k = k + 1$, and go to Step 2.
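Putting the outline together, the following is a compact sketch of the TMG loop. The helper `three_term_direction` is the illustrative routine given earlier, `line_search` stands for any step-length rule, and initializing both stored directions to $-g_1$ is a simplifying assumption for the first iterations.

```python
import numpy as np

def tmg(f_grad, x, line_search, eps=1e-6, max_iter=1000):
    """Outline of the TMG loop (Steps 1-4).

    The two most recent directions d_{k-1} and d_{k-2} are kept in memory.
    """
    g = f_grad(x)
    d = d_prev = -g                        # Step 1: d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:       # Step 2: convergence test
            break
        lam = line_search(x, d)            # Step 3: step-length lambda_k
        x = x + lam * d
        g = f_grad(x)
        d, d_prev = three_term_direction(g, d, d_prev), d  # Step 4: new d
    return x
```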

A NEW THREE-TERM PRECONDITIONED GRADIENT MEMORY ALGORITHM
In this section we introduce a line search rule to find the best step-size parameter along the search direction at each iteration. We study the convergence analysis of the modified Armijo step-size rule, fully described in Armijo [2], which appears in Step 3 of the following new algorithm.
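For concreteness, here is a standard backtracking Armijo rule of the kind described in Armijo [2]. The parameter values (`lam0`, `c`, `tau`) are conventional defaults, not the paper's; this is a sketch of the classical rule rather than the paper's modified variant.

```python
def armijo(f, x, d, g, lam0=1.0, c=1e-4, tau=0.5, max_backtracks=50):
    """Backtracking Armijo line search.

    Returns the first lam in {lam0, tau*lam0, tau^2*lam0, ...} satisfying
        f(x + lam*d) <= f(x) + c * lam * g^T d,
    assuming d is a descent direction (g^T d < 0).
    """
    fx, slope = f(x), g @ d
    lam = lam0
    for _ in range(max_backtracks):
        if f(x + lam * d) <= fx + c * lam * slope:
            break
        lam *= tau
    return lam
```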

Outline of the New Three-Term Preconditioned Gradient Memory Algorithm (NEW):
Step 1: Let $x_1$ be a given initial point; set $H_1 = I$, $d_1 = -H_1 g_1$, and $k = 1$.
Step 2: If $\|g_k\| \le \epsilon$, stop; otherwise continue.
Step 3: Compute the step-length $\lambda_k$ by the modified Armijo rule and set $x_{k+1} = x_k + \lambda_k d_k$.
Step 4: Compute $g_{k+1}$ and the new search direction $d_{k+1}$ from (11).
Step 5: Update $H_k$ to $H_{k+1}$ by the self-scaling formula.
Step 6: Apply Powell's restart criterion; if it fires, restart with $d_{k+1} = -H_{k+1} g_{k+1}$.
Step 7: If the available storage is exceeded, then employ a restart option, either with the steepest-descent direction or with $d_{k+1} = -H_{k+1} g_{k+1}$.
Step 8: Set $k = k + 1$ and go to Step 2.

Convergence analysis of the new proposed algorithm: Consider the new three-term preconditioned gradient memory direction defined in (11). From (22) we obtain a sufficient descent property of the search directions, and from (23) we obtain their boundedness; together these properties yield the global convergence of the new algorithm.
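To make Steps 4 and 6 concrete, the following sketch combines the preconditioned direction (11) with Powell's restart test. The specific forms of $\beta_k$ and $\alpha_k$ used in the paper are not reproduced here, so the bounded choices from (12) are reused as placeholders; the threshold `nu = 0.2` is the conventional value for Powell's criterion.

```python
import numpy as np

def powell_restart(g_new, g_old, nu=0.2):
    """Powell's restart criterion: restart when successive gradients are far
    from orthogonal, i.e. |g_{k+1}^T g_k| >= nu * ||g_{k+1}||^2."""
    return abs(g_new @ g_old) >= nu * (g_new @ g_new)

def new_direction(H, g, g_old, d1, d2, delta1=0.4, delta2=0.4):
    """Preconditioned three-term direction of the form (11):
        d_{k+1} = -H_{k+1} g_{k+1} + beta_k d_k + alpha_k d_{k-1},
    restarted with the preconditioned steepest-descent direction -H g
    when Powell's test fires.
    """
    if powell_restart(g, g_old):
        return -H @ g
    gnorm = np.linalg.norm(g)
    beta = delta1 * gnorm / np.linalg.norm(d1)   # placeholder within (12)
    alpha = delta2 * gnorm / np.linalg.norm(d2)  # placeholder within (12)
    return -H @ g + beta * d1 + alpha * d2
```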

RESULTS AND DISCUSSION
In this section we report some numerical results obtained by a newly-written Fortran procedure with double precision. In comparing algorithms, the number of function evaluations is normally assumed to be the most costly factor in each iteration, together with the number of iterations. The actual convergence criterion employed was $\|g_k\| < 1 \times 10^{-6}$ for both algorithms. Twenty-one well-known test functions [9] (Appendices 1 and 2), with dimensionality ranging from 5 to 1000, are employed in the comparison. We solve each of these test functions by:
• the Three-Term Memory Gradient (TMG) algorithm;
• the new proposed (NEW) algorithm.
All the numerical results are summarized in Tables 1, 2 and 3. Tables 1 and 2 present the number of iterations (NOI) and the number of function evaluations (NOF), while Table 3 gives the percentage performance of the new algorithm, based on both NOI and NOF, against the original TMG algorithm.
The important point is that the new algorithm solves every test problem, as measured by NOI and NOF, while the other algorithm may fail in some cases. Moreover, the new proposed algorithm consistently performs more stably and efficiently: there are about 7-16% improvements in NOI across all dimensions, and 5-21% improvements in NOF across all test functions.

CONCLUSIONS
In this study, we have presented a three-term family of preconditioned gradient memory algorithms suitable for solving nonlinear unconstrained optimization problems. The directions $d_k$ generated by the algorithm satisfy both the sufficient descent condition and the line search condition, with an inexact line search under the standard Wolfe conditions. We have proved the global convergence of the new algorithm and examined [7] its computational performance.