Error Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification

Problem statement: This study introduced a variable step-size Least Mean-Square (LMS) algorithm in which the step-size depends on the Euclidean norm of the system output error vector. The error vector includes the last L values of the error, where L is a parameter to be chosen properly, together with the other parameters of the proposed algorithm, to achieve a trade-off between speed of convergence and misadjustment. Approach: The performance of the algorithm was analyzed, simulated and compared to the Normalized LMS (NLMS) algorithm in several input environments. Results: Computer simulation results demonstrated substantial improvements in the speed of convergence of the proposed algorithm over other algorithms in stationary environments for the same small level of misadjustment. In addition, the proposed algorithm shows superior tracking capability when the system is subjected to an abrupt disturbance. Conclusion: For nonstationary environments, the algorithm performs as well as NLMS and other variable step-size algorithms.


INTRODUCTION
The Least Mean-Square (LMS) algorithm is a stochastic gradient algorithm in that it iterates each tap weight of the transversal filter in the direction of the negative instantaneous gradient of the squared error signal with respect to the tap weight in question. The simplicity of the LMS algorithm, coupled with its desirable properties, has made it and its variants an important part of adaptive techniques. The LMS recursion can be written as:

w(n+1) = w(n) + µ e(n) x(n)

Where:
n = The iteration number
w = The vector of adaptive filter weights
x = The adaptive filter input vector
e(n) = The system output error
µ = A positive scalar called the step-size

Because of the successful use of the LMS algorithm in modeling unknown systems (Bershad et al., 2008; Papoulis and Stathaki, 2004), in CDMA systems (Seo et al., 2010; Aparin et al., 2006), in image processing (Oktem et al., 2001; Costa and Bermudez, 2007), in adaptive noise canceling (Gorriz et al., 2009; Greenberg, 1998; Ramadan, 2008), in channel equalization (Martinez-Ramon et al., 2004; Ikuma et al., 2008) and in many other areas (Yu and Ko, 2003; Haibin et al., 2008), improvements of the algorithm are constantly being sought.
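The recursion above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the paper; the function name and the per-sample calling convention are assumptions:

```python
import numpy as np

def lms_update(w, x, d, mu):
    """One LMS iteration: move each weight along the negative
    instantaneous gradient of the squared error."""
    e = d - np.dot(w, x)   # output error e(n) = d(n) - y(n)
    w = w + mu * e * x     # w(n+1) = w(n) + mu * e(n) * x(n)
    return w, e
```

Calling this once per sample, with x holding the N most recent input values, implements the full adaptation loop.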
Many variable step-size LMS-based algorithms have been proposed in the literature (Sayed, 2003; Douglas and Meng, 1994; Harrison et al., 1986; Aboulnasr and Mayyas, 1997; Kwong and Johnston, 1992; Kim et al., 1995) with the aim of altering the step-size of the update equation to improve the fundamental trade-off between speed of convergence and minimum Mean-Square Error (MSE). Of particular importance are those algorithms in which the step-size varies based on data or error normalization. The Normalized Least Mean-Square (NLMS) algorithm uses a step-size normalized with respect to the filter input signal and provides a faster rate of convergence than the standard LMS (Haykin, 2001). A modified version of the NLMS (MNLMS) algorithm is proposed in (Douglas and Meng, 1994). In that algorithm, performance improvements over the LMS and NLMS algorithms were shown for a small number of filter taps, N, with comparable performance for large N. Other algorithms use different criteria to adjust the step-size for improving system performance. In Harrison et al. (1986), the step-size of the proposed variable step-size LMS (VSLMS) algorithm is adjusted based on the sign of e(n)x(n-i) in the coefficient update term of the LMS algorithm. If the sign of e(n)x(n-i) is changing frequently, then the step-size of that algorithm is decreased by some constant to achieve a small MSE, since the filter coefficients in this case are close to their optimum values. On the other hand, if the sign is not changing rapidly, then the step-size is increased by some constant to achieve a higher rate of convergence, since the solution in this case is not close to its optimum value.
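For reference, the NLMS update used as the main comparison algorithm throughout this study can be sketched as follows (a minimal illustration; the small regularizer eps is a common implementation detail, not a parameter from the paper):

```python
import numpy as np

def nlms_update(w, x, d, mu, eps=1e-8):
    """One NLMS iteration: the LMS step is normalized by the energy
    of the current input vector (eps guards against division by zero)."""
    e = d - np.dot(w, x)
    w = w + (mu / (eps + np.dot(x, x))) * e * x
    return w, e
```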
A new time-varying step-size was suggested in (Aboulnasr and Mayyas, 1997) based on an estimate of the square of a time-averaged autocorrelation function between e(n) and e(n-1). In Kwong and Johnston (1992), the step-size is adjusted based on the energy of the instantaneous error. The performance of that algorithm degrades in the presence of measurement noise in a system modeling application (Aboulnasr and Mayyas, 1997). The step-size in (Kim et al., 1995) varies according to the estimated value of the normalized absolute error, where the normalization is made with respect to the desired signal. Most of these algorithms do not perform very well if an abrupt change occurs in the system impulse response.
This study introduces an LMS-type algorithm where the step-size varies according to error vector normalization. The proposed algorithm is analyzed and simulated in both system identification and adaptive noise cancellation for stationary and nonstationary environments. The system identification block-diagram setup is shown in Fig. 1. The aim is to estimate the impulse response, h, of the unknown system. The adaptive filter adjusts the weights w, using one of the LMS-like algorithms, to produce an output, y(n), that is as close as possible to the output of the unknown system, d(n). When the MSE is minimized, the adaptive filter (w) represents the best model of the unknown system. Both the unknown system and the adaptive filter are driven by the same input x(n). The internal plant noise is represented as an additive noise v(n).
A typical Adaptive Noise Canceller (ANC), shown in Fig. 2, is composed of two inputs: a primary input and a reference input. The primary input d(n) consists of the original speech signal, S(n), corrupted by an additive noise v1(n). The input to the adaptive filter is the reference noise signal, v2(n), which is correlated with v1(n) but uncorrelated with S(n).

Fig. 1: System identification
The noise source is represented by g(n) and the transmission paths from the noise source to the primary and reference inputs are represented by filters with impulse responses h1 and h2, respectively. The filter weights w are adapted by means of an LMS-based algorithm to minimize the power in e(n). This minimization is achieved by processing v2(n) through the adaptive filter to provide an estimate of v1(n) and then subtracting this estimate from d(n) to obtain e(n). Thus:

e(n) = d(n) − y(n) = S(n) + v1(n) − y(n)

Since:
v2(n) = Uncorrelated with S(n)
y(n) = An estimate of v1(n)

Therefore, the error e(n) is an estimate of the original signal S(n).
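The cancellation loop just described can be sketched as below. This is a hypothetical illustration, assuming an NLMS-adapted filter and synthetic signals; the path h1, the filter length and all signal choices are mine, not the paper's:

```python
import numpy as np

def anc(s, v2, h1, mu, N):
    """Adaptive noise canceller sketch: the primary input is
    d(n) = s(n) + v1(n), where v1 is v2 filtered through path h1.
    An NLMS filter estimates v1 from v2, so e(n) approximates s(n)."""
    v1 = np.convolve(v2, h1)[: len(s)]   # correlated noise at the primary input
    d = s + v1
    w = np.zeros(N)
    xbuf = np.zeros(N)
    e = np.zeros(len(s))
    for n in range(len(s)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = v2[n]
        y = np.dot(w, xbuf)              # estimate of v1(n)
        e[n] = d[n] - y                  # estimate of s(n)
        w += (mu / (1e-8 + np.dot(xbuf, xbuf))) * e[n] * xbuf
    return e
```

After convergence the residual e(n) tracks s(n), with the correlated noise component largely removed.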
The performance of an adaptive algorithm can be measured in terms of a dimensionless quantity called the misadjustment M, a normalized mean-square error defined as the ratio of the steady-state Excess Mean-Square Error (EMSE) to the minimum mean-square error:

M = EMSE / MSE_min

The EMSE at the nth iteration is given by:

EMSE(n) = MSE(n) − MSE_min (4)

The MSE in (4) is estimated by averaging |e(n)|² over I independent trials of the experiment. Thus, (4) can be estimated as:

EMSE(n) ≈ (1/I) Σ_{i=1}^{I} |e_i(n)|² − MSE_min (7)

where e_i(n) is the error at iteration n of the ith trial. When the coefficients of the unknown system and the filter match, MSE_min is equal to the noise variance σ_v², for a zero-mean noise v(n).
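The ensemble estimate in (7) and the misadjustment ratio can be computed directly from a matrix of error trajectories. The helper below is a sketch; the function and argument names are mine:

```python
import numpy as np

def misadjustment(err_trials, mse_min, n_ss):
    """Estimate M = EMSE_ss / MSE_min from I independent error
    trajectories (one per row of err_trials); the steady state is
    taken from sample n_ss onward."""
    mse = np.mean(np.abs(err_trials) ** 2, axis=0)  # MSE(n) ~ (1/I) sum |e_i(n)|^2
    emse_ss = np.mean(mse[n_ss:]) - mse_min         # steady-state EMSE
    return emse_ss / mse_min
```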
Proposed algorithm: Based on the regularized Newton's recursion (Sayed, 2003), we can write:

w(n+1) = w(n) + µ(n) [ε(n)I + x(n)xᵀ(n)]⁻¹ x(n) e(n) (9)

Where:
n = The iteration number
w = An N×1 vector of adaptive filter weights
ε(n) = An iteration-dependent regularization parameter
µ(n) = An iteration-dependent step-size

The two iteration-dependent parameters are chosen as:

µ(n) = µ ||e_L(n)||² and ε(n) = (α/γ) ||e(n)||²

Where:
µ = A positive constant step-size
α and γ = Positive constants
e(n) = The system output error

The squared error norms are defined by:

||e(n)||² = Σ_{i=1}^{n} e²(i) (11)

||e_L(n)||² = Σ_{i=n−L+1}^{n} e²(i) (12)

Equation 11 is the squared norm of the error vector, e(n), estimated over its entire updated length n and (12) is the squared norm of the error vector estimated over its last L values.
Expanding (9) and applying the matrix inversion formula:

[ε(n)I + x(n)xᵀ(n)]⁻¹ = (1/ε(n)) [ I − x(n)xᵀ(n) / (ε(n) + xᵀ(n)x(n)) ] (14)

Multiplying both sides of (14) by x(n) from the right and rearranging the equation, we have:

[ε(n)I + x(n)xᵀ(n)]⁻¹ x(n) = x(n) / (ε(n) + ||x(n)||²) (15)

Substituting (15) in (9), we obtain the new proposed Robust Variable Step-Size (RVSS) algorithm:

w(n+1) = w(n) + [ µ ||e_L(n)||² / (α ||e(n)||² + (1−α) ||x(n)||²) ] x(n) e(n) (16)

where γ is replaced by (1−α) ≥ 0 in (16) without loss of generality. It should be noted that α and µ in this equation are different from those in the preceding equations. However, since these are all constants, α and µ are reused in (16). The fractional quantity in (16) may be viewed as the time-varying step-size, µ(n), of the proposed RVSS algorithm. Clearly, µ(n) is controlled by normalization of both the error and input data vectors.
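The recursion (16) can be sketched as follows. This is a minimal illustration, not the authors' code: the hard limits on µ(n) mentioned in the text are included as mu_min and mu_max, whose default values are my assumptions, and the small 1e-12 regularizer guards the first iterations:

```python
import numpy as np

def rvss(x, d, N, mu, alpha, L, mu_min=1e-4, mu_max=0.1):
    """Sketch of the RVSS recursion (16): the time-varying step-size
    mu*||e_L(n)||^2 / (alpha*||e(n)||^2 + (1-alpha)*||x(n)||^2)
    is clamped between two hard limits."""
    w = np.zeros(N)
    xbuf = np.zeros(N)
    e_sq = []        # squared errors; sums of this list give (11) and (12)
    e_total = 0.0    # ||e(n)||^2 accumulated over the whole history
    err = np.zeros(len(x))
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        e = d[n] - np.dot(w, xbuf)
        e_total += e * e
        e_sq.append(e * e)
        eL_sq = sum(e_sq[-L:])  # ||e_L(n)||^2 over the last L values
        denom = alpha * e_total + (1 - alpha) * np.dot(xbuf, xbuf)
        mu_n = np.clip(mu * eL_sq / (denom + 1e-12), mu_min, mu_max)
        w = w + mu_n * e * xbuf
        err[n] = e
    return w, err
```

Because ||e_L(n)||² is large during the initial transient and shrinks afterward, µ(n) starts near its upper limit and decays as the filter converges.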
The parameters α, L and µ are appropriately chosen to achieve the best trade-off between rate of convergence and low final mean-square error. The quantity ||e_L(n)||² is large at the beginning of adaptation and decreases as n increases, while ||x(n)||² fluctuates depending on the N most recent values of the input signal. On the other hand, ||e(n)||² is an increasing function of n, since e(n) is a vector of increasing length. To compute (11) and (12) with minimum computational complexity, the error value produced in the first iteration is squared and stored. The error value in the second iteration is squared and added to the previously stored value. The result is then stored for use in the next iteration, and so on.
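The running computation described above (square each new error, accumulate, and keep a sliding window of the last L squared values) takes O(1) work per sample. A sketch, with class and method names of my choosing:

```python
from collections import deque

class ErrorNorms:
    """Maintain ||e(n)||^2 over the full history and ||e_L(n)||^2
    over the last L values with O(1) work per iteration."""
    def __init__(self, L):
        self.L = L
        self.total = 0.0       # ||e(n)||^2
        self.window = deque()  # last L squared errors
        self.win_sum = 0.0     # ||e_L(n)||^2
    def update(self, e):
        sq = e * e
        self.total += sq
        self.window.append(sq)
        self.win_sum += sq
        if len(self.window) > self.L:
            self.win_sum -= self.window.popleft()  # drop value leaving the window
        return self.total, self.win_sum
```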
A sudden change in the system response will slightly increase the denominator of µ(n), but will result in a significantly larger numerator. Thus, the step-size will increase before attempting to converge again. The step-size µ(n) should vary between two predetermined hard limits. The lower value guarantees the capability of the algorithm to respond to an abrupt change that could happen at a very large value of iteration number n, while the maximum value maintains stability of the algorithm.

RESULTS AND DISCUSSION
A comparison of the RVSS with the MNLMS (Douglas and Meng, 1994) and NLMS algorithms is demonstrated here for several cases using system identification and adaptive noise cancellation, as shown in Fig. 1 and 2 respectively. Several cases of uncorrelated and correlated, stationary and nonstationary input data environments are illustrated. In the system identification simulations it is assumed that the internal unknown system noise v(n) is white Gaussian with zero mean and variance σ_v² = 0.01. The impulse response of the unknown system is assumed to be h = [0.1 0.2 0.3 0.4 0.5 0.4 0.3 0.2 0.1], α = 0.5 and L = 10N in the RVSS algorithm, and the length of the adaptive filter is N = 10 (Walach and Widrow, 1984). The optimum values of µ in the three algorithms are chosen to obtain the same value of misadjustment M. Simulation plots are obtained by ensemble averaging of 200 independent simulation runs.
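This setup can be reproduced along the following lines. The driver below is a sketch of the ensemble-averaging procedure only: an NLMS update is plugged in as the example algorithm, and the default run count can be reduced for quick experiments:

```python
import numpy as np

def learning_curve(update, runs=200, iters=1500, sigma_v=0.1, seed=1):
    """Ensemble-averaged squared-error curve for the system
    identification setup: 9-tap plant h, filter length N = 10,
    white Gaussian input, plant noise variance sigma_v**2."""
    h = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0.1])
    N = 10
    rng = np.random.default_rng(seed)
    mse = np.zeros(iters)
    for _ in range(runs):
        w = np.zeros(N)
        xbuf = np.zeros(N)
        for n in range(iters):
            xbuf = np.roll(xbuf, 1)
            xbuf[0] = rng.standard_normal()
            d = np.dot(h, xbuf[:9]) + sigma_v * rng.standard_normal()
            e = d - np.dot(w, xbuf)
            w = update(w, xbuf, e)   # one step of the algorithm under test
            mse[n] += e * e
    return mse / runs
```

For example, `learning_curve(lambda w, x, e: w + (0.5 / (1e-8 + x @ x)) * e * x)` produces the NLMS learning curve, which settles near MSE_min = 0.01 plus the excess error.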
In adaptive noise cancellation, the simulations are carried out using speech from a native male speaker saying "sound editing just gets easier and easier", sampled at a frequency of 11.025 kHz. The number of bits per sample is 8 and the total number of samples is 33,000, or 3 sec of real time. The simulation results are presented for noise environments in which the noise g(n) was assumed to be zero-mean white Gaussian with three different variances.
Case 1: White Gaussian input: In this case, the adaptive filter and the unknown system are both excited by a zero-mean white Gaussian signal of unit variance.
Case 2: Abrupt change in the plant parameters: This is the same as the previous case, but with an abrupt change in the impulse response of the plant, h. In particular, it is assumed that all the elements of h are multiplied by (−1) at iteration number 1500. Figure 4 shows the superior performance of the proposed algorithm in providing the fastest speed when tracking the statistical changes in the system. Figure 5 shows how the step-size of the RVSS algorithm immediately increases to a large value as a response to the abrupt change of the system parameters, providing the fastest speed of convergence to track changes in the system.
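The disturbance of this case can be emulated with a small helper in a simulation loop (a hypothetical illustration; the function name and signature are mine): the plant response is negated from iteration 1500 onward.

```python
import numpy as np

def plant_output(h, xbuf, n, change_at=1500):
    """Plant response with the Case 2 disturbance: every element of h
    is multiplied by -1 starting at iteration `change_at`."""
    h_eff = -h if n >= change_at else h
    return float(np.dot(h_eff, xbuf[: len(h)]))
```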
Performance improvement of the RVSS algorithm over other algorithms is confirmed in Fig. 6, which shows the plot of the ensemble average trajectory of one of the adaptive filter coefficients for each algorithm. The actual value of the corresponding unknown system coefficient to be identified is 0.5.
Case 3: Correlated Gaussian input: In this case, the input signal is generated by the first-order autoregressive model:

x(n) = 0.9 x(n−1) + g(n)

where g(n) is a zero-mean, unit-variance white Gaussian noise process that is independent of the plant internal noise. In physical terms, the input signal x(n) may be viewed as originating from the noise signal g(n), passing through a one-pole low-pass filter with transfer function 1/(1−0.9z⁻¹), where z⁻¹ is the unit-delay operator. This choice of filter coefficients results in a highly colored input signal with large eigenvalue spread (around 135.8), which makes convergence more difficult. The two noise signals, g(n) and v(n), are assumed to be independent of each other. A misadjustment M = 5% is achieved with µ_RVSS = 1.76×10⁻², µ_NLMS = 2.8×10⁻² and µ_MNLMS = 1.6×10⁻³. The fastest rate of convergence is attained by the RVSS algorithm, as shown in Fig. 7. To produce the same misadjustment value obtained in the uncorrelated input case (i.e., M = 1.5%), a larger number of iterations is required in all algorithms to reach that level of steady state.
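The colored input and its eigenvalue spread can be checked numerically. The sketch below generates the AR(1) input and builds the theoretical N×N autocorrelation matrix of that process (the function names are mine):

```python
import numpy as np

def colored_input(n, rho=0.9, seed=0):
    """x(n) = rho*x(n-1) + g(n): white Gaussian g(n) passed through
    the one-pole filter 1/(1 - rho*z^-1)."""
    g = np.random.default_rng(seed).standard_normal(n)
    x = np.zeros(n)
    x[0] = g[0]
    for i in range(1, n):
        x[i] = rho * x[i - 1] + g[i]
    return x

def eigenvalue_spread(rho=0.9, N=10):
    """Ratio lambda_max/lambda_min of the N x N autocorrelation matrix
    R[i, j] = rho**|i-j| / (1 - rho**2) of the AR(1) input."""
    idx = np.arange(N)
    R = rho ** np.abs(idx[:, None] - idx[None, :]) / (1 - rho ** 2)
    lam = np.linalg.eigvalsh(R)  # eigenvalues in ascending order
    return lam.max() / lam.min()
```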

Case 4: Nonstationary environment:
The performance of the RVSS algorithm is compared with the NLMS algorithm in an adaptive noise canceller. The values of step-size were chosen in both algorithms to achieve a compromise between small Excess Mean-Square Error (EMSE) and high initial rate of convergence for a wide range of noise variances.
In the RVSS algorithm, we used α = 0.7 and L = 20N, and the order of the adaptive filter was assumed to be N = 10. The noise g(n) was assumed to be zero-mean white Gaussian. Figure 9 shows plots of the EMSE in dB of the two algorithms for one of the noise levels considered. While both algorithms have almost the same initial rate of convergence, the average excess mean-square error in RVSS is less than that of the NLMS by 10.6 dB. The values of EMSE were measured in both algorithms over all samples starting from sample number 2000, where the transient response has approximately ended. Figure 10 demonstrates the superiority of the proposed algorithm by plotting the excess (residual) error, e(n)−S(n), of the two algorithms. This excess error is a measure of how close the noise v1(n) is to its estimate y(n). As Fig. 10 shows, the excess error in RVSS is much less than that in the NLMS. The superiority of the RVSS algorithm was also confirmed by listening tests, which produced a higher quality of recovered speech with less signal distortion and reverberation than when the NLMS algorithm was used.

CONCLUSION
In this study, a variable step-size LMS-based algorithm is provided to enhance the trade-off between misadjustment and speed of convergence. The step-size of the proposed algorithm depends on mixed normalization of both the data and error vectors. Simulation results using a system identification setup demonstrated significant improvements of the proposed algorithm in providing fast convergence with a very small value of misadjustment in stationary environments for both white and colored Gaussian inputs. In addition, the algorithm shows superior performance in responding to an abrupt change in the unknown system parameters. For nonstationary environments, the proposed algorithm, simulated using adaptive noise cancellation, provides less signal distortion and a lower value of misadjustment compared to other algorithms with different time-varying step-sizes.