Minimization of l2-Norm of the KSOR Operator

We consider the problem of minimizing the l2-norm of the KSOR operator when solving linear systems of the form AX = b, where A = I + B (T_J = -B is the Jacobi iteration matrix) and B is a skew-symmetric matrix. Based on the eigenvalue functional relations given for the KSOR method, we find optimal values of the relaxation parameter which minimize the l2-norm of the KSOR operator. We use Singular Value Decomposition (SVD) techniques to find an easily computable matrix unitarily equivalent to the iteration matrix T_KSOR. The optimum value of the relaxation parameter in the KSOR method is accurately approximated through the minimization of the l2-norm of an associated matrix ∆(ω*) which has the same spectrum as the iteration matrix. A numerical example illustrating and confirming the theoretical relations is considered. Using the SVD is an easy and effective approach both in proving the eigenvalue functional relations and in determining the appropriate value of the relaxation parameter. All calculations are performed with the help of the computer algebra system “Mathematica 8.0”.


INTRODUCTION
We consider linear systems of the form Equation 1: with a_ij = -a_ji for i ≠ j, where the system admits a unique solution. This system of equations can be written as Equation 2: Such linear systems arise in many different applications, for example in the finite difference treatment of the Korteweg-de Vries equation, Buckley (1977). Similar linear systems also appear in the treatment of coupled harmonic equations, Ehrlich (1972).
In the iterative treatment of linear systems, we use the splitting A = D - L - U, where D = d·I_m is the diagonal part of A for some non-zero constant d, -L is the strictly lower-triangular part of A and -U is the strictly upper-triangular part of A, Woznicki (2001).

Jacobi Method Equation 3:
T_J is the Jacobi iteration matrix; it is clear that T_J in this case is a skew-symmetric matrix.
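The Jacobi sweep above can be sketched numerically; a minimal illustration, assuming D = I and a small hypothetical 3×3 system (not the paper's example):

```python
import numpy as np

# A minimal sketch of the Jacobi iteration for A x = b with A = I + B and
# B skew-symmetric, so D = I and T_J = -B.  The corner block F below is a
# hypothetical choice (not the paper's example) with rho(T_J) = 0.5 < 1.
F = np.array([[0.4], [0.3]])
B = np.block([[np.zeros((2, 2)), F],
              [-F.T,             np.zeros((1, 1))]])
A = np.eye(3) + B
b = A @ np.ones(3)                 # right-hand side with exact solution (1, 1, 1)

T_J = -B                           # Jacobi iteration matrix (D = I)
x = np.zeros(3)
for _ in range(200):
    x = T_J @ x + b                # Jacobi sweep: x_{n+1} = T_J x_n + D^{-1} b
```

Since the spectral radius of T_J is 0.5, the iterates converge geometrically to the exact solution.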

The SOR Method is Equation 5:
where, T_SOR is the SOR iteration matrix. The choice of the relaxation parameter ω is very important for the convergence rate of the SOR method. For certain classes of matrices (2-cyclic consistently ordered, with property A in the sense of Young (2003)), there is a functional eigenvalue relation of the form Equation 7: where, λ is an eigenvalue of T_SOR and µ is a corresponding eigenvalue of T_J. Most work on the choice of ω aims to minimize ρ(T_SOR), which is only an asymptotic criterion of the convergence rate of a linear stationary iterative method, Hadjidimos and Neumann (1998). In real computations, we have to consider the average convergence rate, Milleo et al. (2006). The optimal value ω_opt of the relaxation parameter can be obtained with the help of the eigenvalue functional relation (7). Young (2003) determined ω_opt when T_J has only real eigenvalues and ρ(T_J) < 1; in this case we have:

ω_opt = 2 / (1 + √(1 - ρ²(T_J))),

where the optimality is understood in the sense of the minimization of ρ(T_SOR). Maleev (2006) determined ω_opt when T_J has only pure imaginary eigenvalues and ρ(T_J) < 1; in this case we have:

ω_opt = 2 / (1 + √(1 + ρ²(T_J))).

Golub and Pillis (1990) introduced a simple proof of the eigenvalue functional relation (7) by the use of the Singular Value Decomposition (SVD) approach for real symmetric matrices. Yin and Yuan (2002) considered the skew-symmetric case as well as the symmetric case. Milleo et al. (2006) considered the minimization of the l2-norms of the SOR and MSOR operators for the skew-symmetric case.
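The formula ω_opt = 2/(1 + √(1 + ρ²(T_J))) for pure imaginary Jacobi eigenvalues can be checked numerically; a sketch on a hypothetical skew-symmetric system with ρ(T_J) = 0.5 (the system is an illustrative assumption, not the paper's example):

```python
import numpy as np

# Sketch: for skew-symmetric T_J (pure imaginary Jacobi eigenvalues),
# the optimal SOR parameter is w_opt = 2 / (1 + sqrt(1 + rho(T_J)^2)).
# We compare the formula against a brute-force grid search over (0, 2).
F = np.array([[0.4], [0.3]])                       # hypothetical corner block
A = np.block([[np.eye(2), F], [-F.T, np.eye(1)]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

def rho(M):
    return max(abs(np.linalg.eigvals(M)))

def T_sor(w):
    # SOR iteration matrix (D - wL)^{-1} ((1 - w)D + wU)
    return np.linalg.solve(D - w * L, (1 - w) * D + w * U)

rho_J = rho(np.linalg.solve(D, L + U))             # here rho_J = 0.5
w_opt = 2.0 / (1.0 + np.sqrt(1.0 + rho_J**2))

ws = np.linspace(0.05, 1.95, 381)
w_grid = ws[int(np.argmin([rho(T_sor(w)) for w in ws]))]
```

The grid minimizer of ρ(T_SOR) agrees with the closed-form ω_opt up to the grid resolution.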

The KSOR Method is
In a recent work, Youssef (2012) introduced the KSOR method. In matrix notation it is Equation 10: where, T_KSOR is the KSOR iteration matrix (operator).
As with the SOR method, the rate of convergence of the KSOR method depends on the choice of the relaxation parameter ω*. For certain classes of matrices (2-cyclic consistently ordered, with property A), Youssef (2012) established the eigenvalue functional relation Equation 11: where, β_i's are the eigenvalues of T_KSOR and µ_i's are the eigenvalues of the Jacobi iteration matrix T_J. The eigenvalue functional relation (11) can be proved by the use of the SVD technique, where s_1 ≥ s_2 ≥ … ≥ s_q ≥ 0 are the singular values and U and V are orthogonal matrices such that:
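Relation (11) is not displayed above; in our reading of Youssef (2012) it takes the form (β(1+ω*) − 1)² = β(ω*)²µ². A minimal numerical check, assuming T_KSOR = ((1+ω*)D − ω*L)⁻¹(D + ω*U) and a hypothetical 3×3 system:

```python
import numpy as np

# Numerical check of the KSOR functional relation, assuming
# T_KSOR = ((1+w)D - wL)^{-1} (D + wU) and the relation
# (beta(1+w) - 1)^2 = beta w^2 mu^2.  Illustrative 3x3 system.
F = np.array([[0.4], [0.3]])
A = np.block([[np.eye(2), F], [-F.T, np.eye(1)]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
w = 0.8

T_K = np.linalg.solve((1 + w) * D - w * L, D + w * U)
mu2 = np.linalg.eigvals(np.linalg.solve(D, L + U))**2   # squared Jacobi eigenvalues

# every eigenvalue beta of T_KSOR must satisfy the relation for some mu^2
ok = all(
    any(abs((b * (1 + w) - 1)**2 - b * w**2 * m) < 1e-10 for m in mu2)
    for b in np.linalg.eigvals(T_K)
)
```

Note that for µ = 0 the relation forces β = 1/(1 + ω*), which indeed appears in the spectrum.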

Singular Value Decomposition
We consider in this study the case studied by Yin and Yuan (2002) and also by Milleo et al. (2006), in which the coefficient matrix takes the form Equation 12: where, F ∈ R^(p×q) with p + q = m and p ≥ q. In this case the Jacobi iteration matrix becomes Equation 13: It is clear that T_J is skew-symmetric and accordingly admits pure imaginary eigenvalues, and the KSOR iteration matrix T_KSOR becomes Equation 14: Usually, researchers work on obtaining the optimum value of the relaxation parameter in the sense of minimizing the spectral radius of the iteration matrix or an equivalent quantity. We use the SVD approach in proving the eigenvalue functional relation for the KSOR method. Also, we use the same argument of Golub and Pillis (1990) to define a matrix ∆(ω*) which has the same spectrum as the iteration matrix T_KSOR.
Our objective is to find the optimal value of the relaxation parameter ω* which minimizes the l2-norm of the KSOR operator, and to illustrate the theoretical results through application to a numerical example.

MATERIALS AND METHODS
We use the SVD in proving the relation between the eigenvalues of the skew-symmetric Jacobi iteration matrix T_J and the singular values of the block sub-matrix F (Theorem 1). We prove the eigenvalue functional relation between the eigenvalues of T_J and T_KSOR by using the SVD (Theorem 2). We find the spectral radius of (T_KSOR)^T T_KSOR (Theorem 3). Finally, we find the optimal value of the relaxation parameter ω* which minimizes the l2-norm of T_KSOR (Theorem 4).

Theorem 1
Let A be the matrix given by (12); then Equation 15: where, µ_i² are the eigenvalues of T_J², s_i² are the squares of the singular values of F and (σ(T_J))² is the set of squares of the eigenvalues of T_J.

Proof
Using the SVD to decompose the corner block matrix F, we obtain Equation 16: where, the p×p matrix U and the q×q matrix V are orthogonal, i.e.: and ∑ is the p×q diagonal matrix (of singular values) defined in (17). The eigenvalues of the matrix FF^T = U∑∑^T U^T are the squares of the singular values of F. Accordingly, the eigenvectors of the matrix FF^T are the columns of the orthogonal matrix U. Similarly, F^T F = V∑^T∑V^T has its eigenvectors equal to the columns of the orthogonal matrix V. The number of non-zero singular values s_i of F is equal to the rank of F.
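These SVD facts can be verified numerically; a quick sketch with an arbitrary test block F (not the paper's matrix):

```python
import numpy as np

# SVD facts used in the proof: for F = U S V^T, the eigenvalues of F F^T
# are the squared singular values s_i^2 padded with p - q zeros, and its
# eigenvectors are the columns of U.  F is an arbitrary test block.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 2))                    # p = 4, q = 2
U, s, Vt = np.linalg.svd(F)

eigs = np.sort(np.linalg.eigvalsh(F @ F.T))[::-1]  # descending order
# eigs[:2] should equal s**2 and the trailing p - q entries should vanish
```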
Substituting the singular value decomposition (16) into the corner elements F, F^T of (13), we obtain Equation 17 and 18:

Science Publications

JMSS
Now, we find a relation between the singular values s_i (the diagonal of ∑) and the eigenvalues µ_i of T_J, where i = 1, 2, …, q. For µ_i ≠ 0 an eigenvalue of T_J, we have Equation 19: So the number of non-zero eigenvalues of T_J equals 2t, and they come in pairs ±µ_i. To account for zero eigenvalues, we write Equation 20: We construct the n×n non-singular matrix W whose columns are the orthogonal eigenvectors of (19) and (20): Note that the t columns of the p×t matrix X and the t columns of the q×t matrix Y are the respective eigenvectors of (19), while the r columns of the p×r matrix Z and of the q×r matrix Z̃ come from the r null vectors of (20).
Ordinarily, we would scale the columns of W to produce an orthogonal matrix as a technical convenience; here we assume that the columns of W are scaled so that Equation 21: Let I denote the t×t diagonal matrix whose diagonal elements are the t positive eigenvalues µ_i of (19). Then (19) and (20) can be combined to produce the single matrix equation: which, when multiplied through on the right by W^H, gives Equation 22: Comparing the block entries of T_J in (18) and (22), we obtain the equalities: and we see Equation 23: Accordingly:
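Theorem 1 can be checked numerically; a sketch, using an arbitrary block F (not the paper's matrix) and the T_J of (13):

```python
import numpy as np

# Numerical check of Theorem 1 (relation (15)): for the skew-symmetric
# T_J of (13), the eigenvalues are +/- i s_i, so their squares are the
# negated squared singular values of F, padded with p - q zeros.
rng = np.random.default_rng(1)
F = rng.standard_normal((3, 2))                     # p = 3, q = 2
T_J = np.block([[np.zeros((3, 3)), -F],
                [F.T,              np.zeros((2, 2))]])

s = np.linalg.svd(F, compute_uv=False)
mu2 = np.sort(np.real(np.linalg.eigvals(T_J)**2))
# each nonzero singular value contributes the pair +/- i s_i, hence -s_i^2 twice
expected = np.sort(np.concatenate([-s**2, -s**2, [0.0]]))
```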

Theorem 2
Let T_KSOR and T_J be given, respectively, by (14) and (13), where:

where, s_i are the singular values of F.

Proof
By using the singular value decomposition of the matrix F, we have F = U∑V^T, where U and V are orthogonal matrices; then the matrix T_KSOR has the form Equation 30: Let the orthogonal matrices U and V be factored out; then T_KSOR has the form Equation 31: Note that (31) means that there is a permutation matrix P which "pulls" the two corner diagonal matrices to the main diagonal, i.e., P^T Γ_ω* P has only 2×2 or 1×1 matrices along its main diagonal. When Γ_ω* of (32) is permuted into the block diagonal form, we obtain Equation 33: where each 2×2 matrix ∆_i(ω*) is given by Equation 34: where, s_i are the singular values of F. We have seen that each member of the ω*-family of KSOR iteration matrices T_KSOR is unitarily equivalent to a matrix ∆(ω*) having only 2×2 or 1×1 matrices on the main diagonal. That is, from (31) and (33), Equation 35: The unitary equivalence (35) implies that both the eigenvalues and the 2-norms agree for both (ω*-families of) matrices T_KSOR and ∆(ω*); then we have (25), (26) and (27). From (25), β = 1/(1 + ω*) is a special case of the right-hand side of (38), namely when µ_i is set to zero. Therefore, (38) is described by the single relation (24).
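The unitary equivalence can be checked numerically. The 2×2 block used below is an assumption consistent with (34), obtained by specializing T_KSOR = ((1+ω*)D − ω*L)⁻¹(D + ω*U) to a single singular value s with D = I; it is not a quotation of the paper's equation. The system is a hypothetical 3×3 example:

```python
import numpy as np

# Check of the unitary equivalence (35): the spectrum of T_KSOR equals
# the union of the spectra of the blocks Delta_i(w).  The 2x2 block is
# derived from the KSOR matrix restricted to one singular value s (D = I).
def delta(w, s):
    return np.array([[1 + w,  -w * s * (1 + w)],
                     [w * s,  1 + w - w**2 * s**2]]) / (1 + w)**2

F = np.array([[0.4], [0.3]])                    # single singular value s = 0.5
A = np.block([[np.eye(2), F], [-F.T, np.eye(1)]])
L = -np.tril(A, -1)
U = -np.triu(A, 1)
w = 0.8

T_K = np.linalg.solve((1 + w) * np.eye(3) - w * L, np.eye(3) + w * U)
spec_T = np.sort_complex(np.linalg.eigvals(T_K))
spec_D = np.sort_complex(np.append(np.linalg.eigvals(delta(w, 0.5)),
                                   1.0 / (1 + w)))   # 1x1 block for s = 0
```

The sorted spectra coincide, so eigenvalues (and 2-norms) can be read off block by block.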
From the previous theorem, the l2-norm of the KSOR iteration matrix equals the l2-norm of ∆(ω*), which in turn equals the square root of the spectral radius of (∆(ω*))^T ∆(ω*). Therefore, the problem of minimizing the l2-norm of the KSOR iteration matrix is equivalent to the problem of minimizing the square root of the spectral radius of (∆(ω*))^T ∆(ω*).

Proof
From Theorem 2 we have Equation 42: We now calculate Equation 43 and 44: It is easy to see Equation 45: Solving this quadratic equation, we find that Equation 49: The larger of the two roots of (51) is given by: The maximum value of each L_i is obtained for the maximum value of the corresponding T(ω*, t_i).
Note that: Then: with t = ρ²(T_J). The spectral radius of the matrix ∆_i^T(ω*) ∆_i(ω*), for any given i, is the quantity L_i; then, from (53), the spectral radius of the matrix ∆^T(ω*) ∆(ω*) is L.

Theorem 4
The value of ω* which attains the minimum in (39) is the unique real positive root in (0, ∞) of Equation 55:
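Since Equation (55) is not reproduced here, the minimizer can at least be located numerically; a sketch that grid-searches the l2-norm of T_KSOR over ω* on a hypothetical system (assuming T_KSOR = ((1+ω*)D − ω*L)⁻¹(D + ω*U) with D = I):

```python
import numpy as np

# Numerical sketch of Theorem 4: minimize ||T_KSOR||_2 over w > 0 by a
# grid search.  The system is illustrative, not the paper's example.
F = np.array([[0.4], [0.3]])
A = np.block([[np.eye(2), F], [-F.T, np.eye(1)]])
L = -np.tril(A, -1)
U = -np.triu(A, 1)

def norm2_T(w):
    # spectral (l2) norm of the KSOR iteration matrix for parameter w
    T = np.linalg.solve((1 + w) * np.eye(3) - w * L, np.eye(3) + w * U)
    return np.linalg.norm(T, 2)

ws = np.linspace(0.01, 5.0, 2000)
norms = [norm2_T(w) for w in ws]
w_star = ws[int(np.argmin(norms))]
```

In practice one would refine the grid (or apply a scalar root-finder to equation (55)) around w_star; the grid already confirms that the minimizing norm is below 1, i.e., the iteration is convergent at the optimum.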

Example
Consider a system with: For simplicity, we adapt the right-hand side b as in Young (2003) and Youssef (2012), so that the exact solution is x_1 = 1, x_2 = 1, x_3 = 1, x_4 = 1.
It is well known that, for this system, the eigenvalues of the Jacobi iteration matrix T_J are the roots of the equation:

RESULTS
• We used the SVD in proving the eigenvalue functional relation for the KSOR operator
• The minimization of the l2-norm is used as a good estimate for determining the optimum value of the relaxation parameter in the KSOR method as well as in the SOR method
From Fig. 1 and 2 we see that the calculated results agree with the theoretical results of Milleo et al. (2006)