On the Successive Overrelaxation Method

Problem statement: A new variant of the Successive Overrelaxation (SOR) method for solving linear algebraic systems, the KSOR method, is introduced. The treatment depends on the assumption that the current component can be used simultaneously in the evaluation, in addition to the use of the most recently calculated components as in the SOR method. Approach: Using the hidden explicit character of linear functions, a new version of the SOR, the KSOR method, is introduced. The convergence and consistency analysis of the proposed method are proved, and the method is tested through application to well-known examples. Results: The proposed method has the advantage of updating the first component in the first equation from the first step, which affects all subsequent calculations. It is proved that the KSOR can converge for all possible values of the relaxation parameter ω*∈R-[-2, 0], not only for ω∈(0, 2) as in the SOR method. A new eigenvalue functional relation, similar to that of the SOR method, between the eigenvalues of the iteration matrices of the Jacobi and KSOR methods is proved. Numerical examples illustrating this treatment, with comparison against the SOR with optimal values of the relaxation parameter, are considered. Conclusion: The relaxation parameter ω* in the proposed method can take values ω*∈R-[-2, 0], not only ω∈(0, 2) as in the SOR. The enlargement of the domain has the effect of relaxing the sensitivity near the optimum value of the relaxation parameter. Moreover, all the advantages of the SOR method are preserved and the proposed method can be applied to any system. This approach is promising and will help in the numerical treatment of boundary value problems. Other extensions and applications for further work are mentioned.


INTRODUCTION
The problem of solving linear systems of algebraic equations appears as a final stage in solving many problems in different areas of science and engineering; it is the result of the discretization techniques applied to the mathematical models representing realistic problems, see (Saad and Vorst, 2000) and the references cited therein. We consider linear systems of the form Eq. 1:

a_i1 x_1 + a_i2 x_2 + … + a_in x_n = b_i, i = 1, 2, …, n (1)

We assume that the system has a unique solution and that the equations are ordered so that a_ii ≠ 0 (Darvishi and Hessari, 2011; Papadomanolaki et al., 2010; Wang, 2010; Louka et al., 2009; Salkuyeh and Toutounian, 2006). The Jacobi method is the simplest known iterative method; it is a direct application of the fixed point theorem. The point Jacobi method for system (1) is Eq. 2:

x_i^(k+1) = (1/a_ii)[b_i − Σ_{j≠i} a_ij x_j^(k)], i = 1, 2, …, n (2)

From the computational point of view, the Gauss-Seidel method is a natural extension of the Jacobi method. Historically, Gauss introduced his method while working on a least squares problem in 1823, while Jacobi's work appeared in 1853 (Saad and Vorst, 2000; Hackbusch, 1994). The Gauss-Seidel idea depends on the use of the most recently calculated values. The point Gauss-Seidel method for system (1) is Eq. 3:

x_i^(k+1) = (1/a_ii)[b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)], i = 1, 2, …, n (3)

Moreover, the successive over relaxation approach, the SOR method, generalizes the Gauss-Seidel method. The point SOR method for system (1) is Eq. 4 and 5:

x_i^(k+1) = (1 − ω)x_i^(k) + (ω/a_ii)[b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)], i = 1, 2, …, n (4)

x_i^(k+1) = (1 − ω)x_i^(k) + ω x̄_i^(k+1) (5)

where x̄_i^(k+1) is the solution obtained by the Gauss-Seidel method (Hackbusch, 1994; Burden and Faires, 2005; Varga, 1965).
Using matrix notation, the system of Eq. 1 can be written as Eq. 6:

Ax = b, A = D − L − U (6)

where D is a diagonal matrix with the same diagonal elements as A, and −L, −U are the strictly lower and strictly upper triangular parts of A, respectively (Hackbusch, 1994; Burden and Faires, 2005; Varga, 1965; Young, 1971). Accordingly, we have:

Jacobi method:

x^(k+1) = T_J x^(k) + D^(−1) b, T_J = D^(−1)(L + U) (7)

where T_J is the Jacobi iteration matrix, Eq. 7.

Gauss-Seidel method:

x^(k+1) = T_G x^(k) + (D − L)^(−1) b, T_G = (D − L)^(−1) U (8)

where T_G is the Gauss-Seidel iteration matrix, Eq. 8.

SOR method:

x^(k+1) = T_SOR x^(k) + ω(D − ωL)^(−1) b, T_SOR = (D − ωL)^(−1)[(1 − ω)D + ωU] (9)

where T_SOR is the SOR iteration matrix, Eq. 9.

Definition:
The spectral radius of a matrix H, denoted ρ(H), is given by:

ρ(H) = max_i |λ_i|, where λ_i are the eigenvalues of H

It is well known that a necessary and sufficient condition for the convergence of a given iterative method is that the spectral radius of the corresponding iteration matrix is less than one. The smaller the spectral radius of the iteration matrix, the faster the rate of convergence of the corresponding iterative method (Saad and Vorst, 2000; Hackbusch, 1994; Burden and Faires, 2005; Varga, 1965; Young, 1971; 1954).
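As a small illustration of this definition and the convergence criterion, the following sketch computes ρ(T_J) for a diagonally dominant test matrix; the matrix is an illustrative choice, not one of the paper's examples.

```python
import numpy as np

def jacobi_iteration_matrix(A):
    """T_J = I - D^{-1} A, where D is the diagonal part of A."""
    D = np.diag(np.diag(A))
    return np.eye(A.shape[0]) - np.linalg.solve(D, A)

def spectral_radius(H):
    """rho(H) = max |lambda_i| over the eigenvalues lambda_i of H."""
    return max(abs(np.linalg.eigvals(H)))

# Illustrative diagonally dominant matrix (not one of the paper's examples)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
rho = spectral_radius(jacobi_iteration_matrix(A))
print(rho < 1)  # True: the Jacobi iteration converges for this A
```

Since ρ(T_J) ≈ 0.354 < 1 here, the Jacobi iteration converges for this matrix.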
The Gauss-Seidel and Successive Over-Relaxation (SOR) methods are important solvers for a class of large scale sparse linear systems due to their efficiency and simplicity of implementation. Many other methods that appeared in the last few decades used the same philosophy to introduce formulas that contain more parameters and include the other methods as special cases for some values of the parameters. The Accelerated Over Relaxation (AOR) method is a two parameter generalization of the above mentioned methods (Hadjidimos, 1978; Avdelas and Hadjidimos, 1981). Albrecht and Klein (1984) considered extrapolated iterative methods; they illustrated that the classical iterative methods can be interpreted as integration methods for certain systems of linear differential equations.
The basic idea of the KSOR method depends on the process of updating the residual on the right hand side of the SOR method (4). It is assumed that the current value can be used in addition to the most recently calculated ones (i.e., the residual is updated simultaneously with the current new component). Apparently, this process leads to an implicit formula, but it is actually explicit due to the linearity of the equations. Accordingly, the first component is updated in the first step, which affects all subsequent steps. Unlike Gauss-Seidel (SOR), AOR and the extrapolated versions of iterative methods, in which the solution is updated after the determination of the new component, in the KSOR it is assumed that the update process takes place simultaneously with the evaluation of the new components. The iteration matrix of the proposed method is obtained and theoretical considerations are discussed. It is proved that the method is completely consistent and can converge for values of the relaxation parameter ω*∈R-[-2, 0], not only for ω∈(0, 2) as in the SOR. Moreover, the proposed method is convergent whenever the classical SOR (ω∈(0, 2)) is convergent. Comparison of the results of the proposed method with other well-known iterative methods, especially with the SOR with optimal values of the relaxation parameter ω, demonstrates the efficiency and reliability of the method. Numerical examples with the graphical behavior of the spectral radius of the corresponding iteration matrices as functions of ω* are discussed. Moreover, the proposed KSOR method has the same simple explicit appearance as the SOR method.

MATERIALS AND METHODS
Assume that we can use the current component simultaneously in the evaluation of the residual appearing in the SOR method, in addition to the use of the most recently calculated components. It appears that the method will be implicit; however, after rearrangement of the terms, we get an explicit formula. Accordingly, the KSOR method can be written in the form Eq. 10-12:

x_i^(k+1) = x_i^(k) + (ω*/a_ii)[b_i − Σ_{j≤i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)], i = 1, 2, …, n (10)

Collecting the terms in x_i^(k+1) gives:

(1 + ω*) x_i^(k+1) = x_i^(k) + (ω*/a_ii)[b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)] (11)

and hence, since 1 + ω* ≠ 0, the explicit form:

x_i^(k+1) = x_i^(k)/(1+ω*) + (ω*/((1+ω*)a_ii))[b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)] (12)

The relaxation parameter ω*∈R-[-2, 0] plays the same role as ω in the SOR method, but with an extended domain. It is used to control the spectral radius of the iteration matrix and, accordingly, the rate of convergence.
The matrix formulation of the KSOR method is Eq. 13 and 14:

x^(k+1) = T_KSOR x^(k) + ω*[(1+ω*)D − ω*L]^(−1) b (13)

T_KSOR = [(1+ω*)D − ω*L]^(−1)[D + ω*U] (14)

where T_KSOR is the iteration matrix of the KSOR method. We first prove a basic result which gives the maximum range of values of ω* for which the KSOR iteration can converge.

Theorem 1: For any ω* ≠ −1, det(T_KSOR) = 1/(1+ω*)^n, and hence ρ(T_KSOR) ≥ 1/|1+ω*|. Consequently, a necessary condition for convergence, ρ(T_KSOR) < 1, is |1+ω*| > 1, i.e., ω*∈R-[-2, 0].

Proof: Since D + ω*U is upper triangular and (1+ω*)D − ω*L is lower triangular, det(T_KSOR) = det(D)/((1+ω*)^n det(D)) = 1/(1+ω*)^n. The modulus of the determinant is the product of the moduli of the eigenvalues, so ρ(T_KSOR)^n ≥ |1+ω*|^(−n), which gives the bound.

Theorem 2: The KSOR method is completely consistent with system (1).
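The determinant identity det(T_KSOR) = 1/(1+ω*)^n, which forces ρ(T_KSOR) ≥ 1/|1+ω*| and hence restricts convergence to ω*∈R-[-2, 0], can be checked numerically. The sketch below assembles T_KSOR = [(1+ω*)D − ω*L]^(−1)(D + ω*U) for an arbitrary test matrix (an assumption, not one of the paper's examples).

```python
import numpy as np

def ksor_iteration_matrix(A, ws):
    """T_KSOR = [(1+w*)D - w*L]^{-1} (D + w*U), with A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)  # strictly lower triangular part of A is -L
    U = -np.triu(A, 1)   # strictly upper triangular part of A is -U
    return np.linalg.solve((1 + ws) * D - ws * L, D + ws * U)

# Arbitrary test matrix with nonzero diagonal (an illustrative choice)
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  6.0,  1.0],
              [-1.0,  1.0,  7.0]])
ws = 1.5
T = ksor_iteration_matrix(A, ws)
n = A.shape[0]
# det(T_KSOR) = 1/(1+w*)^n, so rho(T_KSOR) >= 1/|1+w*| and
# convergence requires |1+w*| > 1, i.e., w* in R - [-2, 0]:
print(abs(np.linalg.det(T) - (1 + ws) ** (-n)) < 1e-12)  # True
```

The identity holds for any matrix with nonzero diagonal, since D + ω*U and (1+ω*)D − ω*L are triangular.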

Proof:
The proof is a straightforward application of the definition of consistency (Young, 1971).

Theorem 3:
The characteristic equation of the KSOR iteration matrix can be written in the form Eq. 15:

det[(β(1+ω*) − 1)D − βω*L − ω*U] = 0 (15)

Proof: The characteristic equation is det(βI − T_KSOR) = 0. Multiplying by det[(1+ω*)D − ω*L] ≠ 0, we must have:

det[β((1+ω*)D − ω*L) − (D + ω*U)] = 0

which is Eq. 15. This result holds for any system of the form (1) that has a unique solution with a_ii ≠ 0. Moreover, for any ω*∈R-[-2, 0], β ≠ 0, because a_ii ≠ 0.
Theorem 4: Let A be a matrix satisfying Eq. 16:

the eigenvalues of γD^(−1)L + γ^(−1)D^(−1)U are independent of γ, for all γ ≠ 0 (16)

i.e., in general, a two-cyclic consistently ordered matrix in the sense of Young (1971, Theorem 3.3, page 147) and Varga (1965). Then the eigenvalues β of the KSOR point iteration matrix are related to the eigenvalues µ of the Jacobi point iteration matrix by the relation Eq. 17:

β(1+ω*) − 1 = ω* µ β^(1/2) (17)

which proves that µ = (β + βω* − 1)/(ω* β^(1/2)) is an eigenvalue of the Jacobi iteration matrix. This result gives a direct correspondence between the eigenvalues β of the KSOR iteration matrix T_KSOR and those of the Jacobi iteration matrix T_J. In particular, if T_J has a p-fold zero eigenvalue, then T_KSOR has p corresponding eigenvalues equal to 1/(1+ω*). Moreover, associated with the 2q nonzero eigenvalues ±µ_i of T_J there are 2q eigenvalues of T_KSOR which satisfy:

β(1+ω*) − 1 = ±ω* µ_i β^(1/2)

The correspondence between the eigenvalues β of the KSOR iteration matrix T_KSOR and the eigenvalues λ of the SOR iteration matrix T_SOR will be considered in future work. From the point of view of integration methods for certain systems of linear differential equations (Albrecht and Klein, 1984, and the references therein), the KSOR method can be considered as a method which uses the prediction-correction philosophy in one step. From the point of view of extrapolated methods, the KSOR method, like the SOR method, can be considered as an extrapolated Gauss-Seidel method. The KSOR method and other iterative methods can be combined from the point of view of prediction-correction techniques; this will be our consideration in a subsequent work.
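As a numerical sanity check of this eigenvalue correspondence, one can compare the spectra of T_J and T_KSOR for a small consistently ordered (tridiagonal) matrix. The matrix and ω* below are illustrative choices; the relation is checked in its squared form, (β(1+ω*) − 1)² = ω*²µ²β, to avoid branch issues with β^(1/2).

```python
import numpy as np

def iteration_matrices(A, ws):
    """Jacobi and KSOR iteration matrices for A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    T_J = np.linalg.solve(D, L + U)
    T_K = np.linalg.solve((1 + ws) * D - ws * L, D + ws * U)
    return T_J, T_K

# Tridiagonal matrices are consistently ordered; this 2x2 example is illustrative.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
ws = 0.8
T_J, T_K = iteration_matrices(A, ws)
mus = np.linalg.eigvals(T_J)    # +-1/2 for this A
betas = np.linalg.eigvals(T_K)
# Squared form of the relation beta(1+w*) - 1 = w* mu beta^{1/2},
# matched against some Jacobi eigenvalue mu for each beta:
ok = all(min(abs((b * (1 + ws) - 1) ** 2 - (ws * m) ** 2 * b) for m in mus) < 1e-8
         for b in betas)
print(ok)  # True
```

Each eigenvalue β of T_KSOR pairs with a Jacobi eigenvalue ±µ so that the residual of the relation is at round-off level.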
The KSOR algorithm: We introduce the algorithmic formulation of the KSOR method. This algorithm is similar, except for some constant multipliers, to the already well-established SOR algorithm (Burden and Faires, 2005).

Algorithm (KSOR):
Input: The number of equations m:
• The entries a_ij, 1 ≤ i, j ≤ m, of the matrix A
• The entries b_i, 1 ≤ i ≤ m, of b
• The entries XO_i, 1 ≤ i ≤ m, of XO = X^(0); the parameter ω*
• Tolerance TOL; maximum number of iterations N
Output: The approximate solution x_1, …, x_m or a message that the number of iterations was exceeded:
Step 1 : Set k = 1
Step 2 : While (k ≤ N) do Steps 3-6
Step 3 : For i = 1, …, m set x_i = XO_i/(1+ω*) + (ω*/((1+ω*)a_ii))[b_i − Σ_{j=1}^{i−1} a_ij x_j − Σ_{j=i+1}^{m} a_ij XO_j]
Step 4 : If ||X − XO|| < TOL then OUTPUT (x_1, …, x_m); (Procedure completed successfully.) STOP
Step 5 : Set k = k + 1
Step 6 : For i = 1, …, m set XO_i = x_i
Step 7 : OUTPUT ('Maximum number of iterations exceeded'); STOP

In the first example we present the solution values and the graphical representation of the absolute values of the eigenvalues of the SOR and KSOR iteration matrices (Varga, 1965; Young, 1971). In the second example we present the eigenvalues µ_i, i = 1, 2, 3, 4, of the Jacobi iteration matrix and ν_i, i = 1, 2, 3, 4, of the Gauss-Seidel iteration matrix, and obtain the eigenvalues λ_i, i = 1, 2, 3, 4, of the SOR iteration matrix as functions of ω and the eigenvalues β_i, i = 1, 2, 3, 4, of the KSOR iteration matrix as functions of ω*.
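The algorithm above can be sketched in Python as follows. This is a hedged sketch: the update formula is the explicit KSOR form, and the 2×2 test system (with exact solution (1, 1)) and the value of ω* are illustrative assumptions, not necessarily the paper's Eq. 18 data.

```python
import numpy as np

def ksor(A, b, x0, ws, tol=1e-10, max_iter=500):
    """KSOR sweep (explicit form):
    x_i = XO_i/(1+w*) + (w*/((1+w*)a_ii)) *
          (b_i - sum_{j<i} a_ij x_j - sum_{j>i} a_ij XO_j)."""
    m = len(b)
    XO = np.array(x0, dtype=float)
    for k in range(max_iter):
        x = XO.copy()
        for i in range(m):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ XO[i + 1:]
            x[i] = XO[i] / (1 + ws) + ws * s / ((1 + ws) * A[i, i])
        if np.max(np.abs(x - XO)) < tol:
            return x, k + 1
        XO = x
    raise RuntimeError("maximum number of iterations exceeded")

# Illustrative 2x2 system with exact solution (1, 1); ws chosen arbitrarily.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
b = np.array([1.0, 1.0])
x, iters = ksor(A, b, [0.0, 0.0], ws=1.0)
print(np.allclose(x, [1.0, 1.0]))  # True
```

As in the algorithm, the sweep uses the already updated components x_j for j < i and the previous iterate XO_j for j > i, with the extra 1/(1+ω*) multipliers that distinguish KSOR from SOR.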
The main difficulty in the efficient use of iterative methods which depend on parameters, like the SOR and AOR methods, lies in making a good estimate of the optimum relaxation parameter which maximizes the rate of convergence of the method. In the following we consider two well-known examples with known optimum relaxation parameter ω_opt. Determining the optimum value of the relaxation parameter is a very important task and will be considered in a separate work.
Example 1: Consider a system with data Eq. 18, whose exact solution is x_1 = 1, x_2 = 1 (Young, page 96) and (Varga, 1965). The eigenvalues of the Jacobi iteration matrix T_J and of the Gauss-Seidel iteration matrix T_G are known explicitly for this system. Figure 2 illustrates the behavior of the absolute values of the eigenvalues of the KSOR iteration matrix T_KSOR against the relaxation parameter.
It is well known that, for this system, the eigenvalues of the Jacobi iteration matrix T_J are the roots of its characteristic equation. It is clear from Table 6 that, corresponding to µ_i = 0, i = 1, 2, we have β_i = 1/(1+ω*), i = 1, 2, and that the relation between the eigenvalues µ_i, β_i and the relaxation parameter ω* (Theorem 4), µ_i ω* β_i^(1/2) = β_i + β_i ω* − 1, is completely satisfied. All calculations and graphs were performed with the help of the computer algebra system Mathematica 7.0.

RESULTS
• The KSOR updates the residual simultaneously with the solution, in addition to using the most recently calculated solution, which is reflected in the rapid convergence at the beginning observed in the numerical examples
• The domain of the relaxation parameter in the KSOR is ω*∈R-[-2, 0] instead of ω∈(0, 2) in the SOR
• The iteration matrix of the proposed method and the consistency and convergence analysis of the method are well established
• A functional eigenvalue relation between the eigenvalues of the iteration matrices and the relaxation parameters (Theorem 4) is well established
• Numerical examples illustrating and confirming the theoretical eigenvalue functional relation are considered
• From Table 5-7, we see that the spectral radius ρ(T_SOR) changes from 0.072-0.092 while ρ(T_KSOR) changes only from 0.072-0.073 over an interval of length 0.005 around the minimum value; i.e., the change in ρ(T_SOR) is 20 times the change in ρ(T_KSOR) along an interval of the same length, which illustrates the relaxation of the sensitivity around the minimum value
• Further extensions are mentioned

DISCUSSION
Although the problem of solving large sparse linear systems of algebraic equations is one of the old problems (Saad and Vorst, 2000; Hackbusch, 1994), it still has an important role in many modern areas of science. The SOR is one of the most used iterative methods, especially when a good estimate of the optimum value of the relaxation parameter ω_opt is available. Even when ω_opt is available, the method is sensitive to perturbations around it, as illustrated in the results of the numerical examples.
In comparison with the SOR method with known optimal value of the relaxation parameter, the KSOR method has the same advantages as the SOR. Even from the point of view of the splitting of the coefficient matrix, one can see that the SOR uses the splitting A = M_SOR − N_SOR with:

M_SOR = (1/ω)D − L, N_SOR = ((1−ω)/ω)D + U

while the KSOR uses the splitting A = M_KSOR − N_KSOR with:

M_KSOR = ((1+ω*)/ω*)D − L, N_KSOR = (1/ω*)D + U

in addition to the possibility of using the philosophy of prediction-correction techniques, which we will consider in a subsequent work. It remains to introduce an effective procedure for the estimation of the optimum value of the relaxation parameter ω*_opt which maximizes the rate of convergence of the proposed KSOR method; this will be the objective of a subsequent work. Also, the KSOR can be used with more relaxation parameters, and combinations of the SOR and the KSOR can be considered.
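The splitting viewpoint can be checked numerically. The sketch below treats the SOR splitting M_SOR = (1/ω)D − L, N_SOR = ((1−ω)/ω)D + U and the KSOR splitting M_KSOR = ((1+ω*)/ω*)D − L, N_KSOR = (1/ω*)D + U as our reconstruction (stated as an assumption) and verifies that both recover A = M − N for an illustrative matrix.

```python
import numpy as np

# Illustrative matrix; D, L, U follow the convention A = D - L - U.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

w, ws = 1.2, 1.5  # SOR and KSOR relaxation parameters (arbitrary admissible values)
M_sor,  N_sor  = D / w - L, (1 - w) / w * D + U
M_ksor, N_ksor = (1 + ws) / ws * D - L, D / ws + U

# Both splittings recover A = M - N:
print(np.allclose(M_sor - N_sor, A), np.allclose(M_ksor - N_ksor, A))  # True True
```

In each case the iteration M x^(k+1) = N x^(k) + b reproduces the corresponding point method, since M is triangular and cheaply invertible.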

CONCLUSION
The KSOR has the same simple structure as the SOR method, so its implementation is an easy task. The theoretical properties, the convergence as well as the consistency of the proposed method, are proved. Comparison with other iterative methods, especially with the SOR with known optimal value of the relaxation parameter, is discussed.
From the computational point of view, the method has the advantage of updating the first component from the first step, unlike the other iterative methods, which is reflected in the rapid convergence at the beginning.
The study of the spectral radius of the iteration matrices, which is a measure of the convergence rate of linear iterative methods, has shown that there is a value of the relaxation parameter ω* for which ρ(T_KSOR) is comparable with that of the SOR corresponding to ω_opt. The numerical examples have confirmed the theoretical eigenvalue functional relation (Theorem 4) and illustrated that the extension of the domain of the relaxation parameter has the effect of relaxing the sensitivity of ρ(T_KSOR) around its minimum value.