On Bayes Estimation of the First Order Moving Average Model

In this work, Bayes estimation of the first order moving average model (MA(1)) is studied. Theoretical justification of the Bayes estimates based on the estimated innovations is given. The convergence of the Bayes and maximum likelihood estimates is examined via simulation using different parameter values. Bayes estimates are also determined when the model is invertible, using the estimated innovations. For long series lengths, it is noted that the Bayes estimate of θ for the invertible MA(1) model, assuming a uniform prior on θ and an inverted gamma prior on σ², equals the Bayes estimate of θ for the noninvertible MA(1) model. Generally, the simulation results show that the performance of the Bayes estimates using estimated innovations depends on the value of θ within the invertibility region. As expected, the maximum likelihood and Bayes estimates perform comparably for long series lengths.


INTRODUCTION
Bayesian inference on autoregressive moving average (ARMA) models is limited by the complicated form of the likelihood function, which makes analytically tractable results difficult to obtain. In addition, Bayesian estimation of ARMA models is based on the causality and invertibility conditions on the model coefficients. Bayesian inference on time series goes back to Zellner [14], who derived the posterior and predictive distributions for the first and second order AR models using vague priors and analyzed regression models with autocorrelated errors. Box and Jenkins [3] introduced Bayesian analysis for ARMA models without any restrictions from the causality and invertibility conditions. Monahan [12] used numerical integration techniques to implement Bayesian time series analysis. Broemeling and Shaarawy [6] proposed approximate Bayes estimates based on the estimated innovations for the analysis of MA and ARMA models without any restrictions from the causality and invertibility conditions. The estimated innovations were later used by Chen [9] in Bayesian inference for bilinear models. Marriott et al. [11] used Markov chain Monte Carlo (MCMC) methods to implement Bayesian inference on ARMA models. Chen et al. [10] used MCMC methods to explore the joint posterior distribution of threshold autoregressive (TAR) models. Smadi [13] used MCMC methods for Bayesian inference on threshold autoregressive moving average (TARMA) models using the estimated innovations.
In this work, Bayes estimation of the first order moving average model (MA(1)) is investigated using the estimated innovations. Asymptotic justification of the Bayes estimation of the MA(1) model is provided. The convergence of the Bayes and maximum likelihood estimates is examined via simulation using different parameter values. Moreover, Bayes estimates of the first order invertible moving average model are considered. The Bayes theorem of estimation is described below [7].

Theorem (1). Let π(θ) be a prior distribution of θ. A sample X_1, X_2, …, X_n is then drawn from a population indexed by θ. Let f(x|θ) be the sampling distribution of X_1, X_2, …, X_n.
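As a small numerical illustration of Theorem (1) (a toy setup of our own, not from the paper): for normal data with known unit variance and a flat prior on the mean θ, the posterior can be evaluated on a grid, and its mean serves as the Bayes estimate under squared error loss.

```python
import numpy as np

# Toy illustration of Theorem (1): posterior proportional to prior times
# likelihood, and the posterior mean is the Bayes estimate under squared
# error loss.  The model (N(theta, 1) data, flat prior on [-5, 5]) and all
# numbers here are our own choices.
rng = np.random.default_rng(0)
theta_true = 1.5
x = rng.normal(theta_true, 1.0, size=50)          # sample X_1, ..., X_n

grid = np.linspace(-5.0, 5.0, 2001)               # support of the flat prior
log_lik = -0.5 * ((x[None, :] - grid[:, None]) ** 2).sum(axis=1)
post = np.exp(log_lik - log_lik.max())            # flat prior: posterior ~ likelihood
post /= post.sum()                                # normalise over the grid

bayes_estimate = float(np.sum(grid * post))       # posterior mean
print(bayes_estimate)
```

Under a flat prior the posterior mean here coincides (up to grid error) with the sample mean, which makes the grid approximation easy to sanity-check.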
The prior distribution is updated by the sample information. The updated prior, called the posterior distribution, is the conditional distribution of θ given the sample:

π(θ|x) = f(x|θ)π(θ) / ∫ f(x|θ)π(θ) dθ.  (2)

The mean of the posterior distribution can be used as a point estimate of θ under the squared error loss function.

The first order moving average model (MA(1)) is given by:

X_t = Z_t + θZ_{t-1},  Z_t ~ N(0, σ²).  (3)

The first order moving average model given in (3) is always stationary. According to Brockwell and Davis [4], the model is invertible if |θ| < 1. For the MA(1) model given in (3), the joint probability density function of X_1, X_2, …, X_n can be written as [3]:

f(x_1, …, x_n | θ, σ², z_0) = (2πσ²)^(-n/2) exp(-Σ_{t=1}^{n} z_t² / (2σ²)),

where Z_0 is a non-observable random variable. Assuming Z_0 equals its unconditional expectation zero, the innovations Z_t (t = 1, 2, …, n) can be calculated recursively as Z_t = X_t − θZ_{t-1}, and the conditional likelihood function is

L(θ, σ² | x) ∝ σ^(-n) exp(-Σ_{t=1}^{n} Z_t² / (2σ²)).

Broemeling [5] derived approximate marginal posterior distributions and posterior means for the parameters of the MA(q) model. Accordingly, the true innovations Z_t are replaced by the estimated innovations Ẑ_t = X_t − θ̂Ẑ_{t-1}, where θ̂ is the least squares estimate of θ. Thus, the approximate conditional likelihood function is given by:

L(θ, σ² | x) ∝ σ^(-n) exp(-Σ_{t=1}^{n} (X_t − θẐ_{t-1})² / (2σ²)),

which has the form of a normal linear regression likelihood with X_t regressed on Ẑ_{t-1}. Broemeling [5] assumed a multi-normal-gamma prior density on the model parameters of the MA(q) model; for the MA(1) model, the normal-gamma prior density on θ and σ² is given in (8). Based on the above, Broemeling [5] concluded the following results: the marginal posterior density of θ has a t-distribution with n + 2α − 1 degrees of freedom, with mean and precision given in (9) and (11), respectively. When the hyperparameters are chosen to be vague, the normal-gamma prior density given in (8) reduces to Jeffreys' prior,

p(θ, σ²) ∝ 1/σ²,  (13)

and the Bayes estimate of the precision parameter is then given in (14). Asymptotic justification of the use of the estimated innovations in Bayes estimation of the MA(1) model is described below, using basic convergence results of probability theory. The following theorem will be used [2].
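The recursion for the innovations and the resulting conditional likelihood can be sketched as follows (a minimal sketch assuming the X_t = Z_t + θZ_{t-1} parameterisation; the function names are our own):

```python
import numpy as np

# Minimal sketch of the recursion behind the conditional likelihood: for
# X_t = Z_t + theta * Z_{t-1} with Z_0 fixed at its unconditional mean 0,
# the innovations satisfy Z_t = X_t - theta * Z_{t-1}.
def innovations(x, theta):
    """Recover Z_1, ..., Z_n recursively from the observed series."""
    z = np.empty(len(x))
    prev = 0.0                               # Z_0 = 0
    for t, xt in enumerate(x):
        z[t] = xt - theta * prev
        prev = z[t]
    return z

def cond_log_likelihood(x, theta, sigma2):
    """Gaussian conditional log-likelihood of an MA(1) series given Z_0 = 0."""
    z = innovations(x, theta)
    n = len(x)
    return float(-0.5 * n * np.log(2.0 * np.pi * sigma2)
                 - np.sum(z ** 2) / (2.0 * sigma2))
```

If the series is generated with Z_0 = 0, the recursion recovers the generating innovations exactly; otherwise the effect of the unknown Z_0 dies out geometrically whenever |θ| < 1.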
Let θ̂ be the least squares estimate of θ. It has been shown [3] that θ̂ converges in probability to θ. Then, according to Theorem (2), part (v), the estimated innovations Ẑ_t converge in probability to the true innovations Z_t. Also, according to Theorem (2), part (v), the approximate conditional likelihood converges in probability to the conditional likelihood, so the approximate posterior distribution and the resulting Bayes estimates share the same limiting behavior. The convergence of the approximate Bayes estimates using the estimated innovations and of the maximum likelihood estimates will be examined via simulation using different parameter values and series lengths.
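A rough version of this convergence check can be sketched as follows (our own setup: conditional least squares by grid search stands in for the exact maximum likelihood estimate):

```python
import numpy as np

# Our own convergence check: simulate MA(1) series of increasing length,
# estimate theta by conditional least squares (grid search over the
# invertibility region), and compare estimated with true innovations.
rng = np.random.default_rng(2)
theta_true = 0.6
grid = np.linspace(-0.99, 0.99, 397)

def simulate(n):
    z = rng.normal(size=n + 1)
    return z[1:] + theta_true * z[:-1], z[1:]     # observed series, true innovations

def innovations(x, theta):
    # Z_t = X_t - theta * Z_{t-1}, starting from Z_0 = 0.
    z, prev = np.empty(len(x)), 0.0
    for t, xt in enumerate(x):
        z[t] = xt - theta * prev
        prev = z[t]
    return z

for n in (50, 500, 5000):
    x, z_true = simulate(n)
    sse = [float(np.sum(innovations(x, th) ** 2)) for th in grid]
    theta_hat = float(grid[int(np.argmin(sse))])  # conditional LS estimate
    gap = float(np.mean((innovations(x, theta_hat) - z_true) ** 2))
    print(n, round(theta_hat, 3), round(gap, 4))
```

As n grows, θ̂ settles near the true θ and the mean squared gap between estimated and true innovations shrinks, in line with the convergence argument above.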

Bayes Estimation of the Invertible MA(1) Model
The first order invertible moving average model, with −1 < θ < 1, will be considered. Abu-Salih and Abd-alla [1] obtained closed-form Bayes estimates of the parameters of the stationary AR(1) model using different informative and noninformative priors. The AR(1) model is given by:

X_t = φX_{t-1} + Z_t,  Z_t ~ N(0, σ²),  −1 < φ < 1.
The following joint prior on θ and σ² is used in the Bayesian estimation:

p(θ, σ²) = p(θ)p(σ²),  (21)

where θ and σ² are independent, with an inverted gamma prior on σ² with hyperparameter d, and a uniform prior on θ.
Assuming a squared error loss function, Abu-Salih and Abd-alla [1] derived in closed form the marginal posteriors of θ and σ² for the stationary AR(1) model. The derivation for the invertible MA(1) model above is similar to that for the stationary AR(1) model, with X_t regressed on Ẑ_{t-1} rather than X_{t-1}. Only the final expressions of the marginal posterior means are given here; details of the derivations and integration results can be found in Abu-Salih and Abd-alla [1]. The Bayes estimate of θ is given in (23). It is noted that for long series lengths, the approximate Bayes estimate (Eq. 23) of θ in the invertible MA(1) model, assuming a uniform prior on θ and an inverted gamma prior on σ², equals the approximate Bayes estimate (Eq. 9) of θ in the noninvertible MA(1) model, assuming Jeffreys' prior. Thus, it can be concluded that the estimates of θ for the invertible and noninvertible models coincide. In addition, the inverted gamma prior on σ² has no effect on the Bayes estimate of θ for long record lengths. On the other hand, as one expects, the inverted gamma prior does affect the Bayes estimate of σ², as seen by comparing the mean of the marginal posterior density of σ² derived by Broemeling (Eq. 14) with the asymptotic Bayes estimate for the invertible model (Eq. 25).
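The Bayes estimate of θ for the invertible model can be approximated numerically along these lines (a sketch under our own simplifications: σ² is integrated out under a vague 1/σ² prior, which, as noted above, matches the inverted gamma case for long series; the exact closed-form expressions are in Abu-Salih and Abd-alla [1]):

```python
import numpy as np

def innovations(x, theta):
    # Recursive innovations for X_t = Z_t + theta * Z_{t-1} with Z_0 = 0.
    z, prev = np.empty(len(x)), 0.0
    for t, xt in enumerate(x):
        z[t] = xt - theta * prev
        prev = z[t]
    return z

def bayes_theta(x, m=399):
    """Posterior mean of theta over the invertibility region (-1, 1).

    Uniform prior on theta; sigma^2 integrated out analytically under a
    vague p(sigma^2) ~ 1/sigma^2 prior, which leaves a marginal posterior
    for theta proportional to S(theta)^(-n/2), where S(theta) is the
    innovation sum of squares.
    """
    n = len(x)
    grid = np.linspace(-0.995, 0.995, m)
    s = np.array([np.sum(innovations(x, th) ** 2) for th in grid])
    log_post = -0.5 * n * np.log(s)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return float(np.sum(grid * w))
```

For a long simulated series the posterior mean should land close to the true θ, illustrating the large-sample agreement between the invertible and noninvertible estimates discussed above.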

RESULTS AND DISCUSSION
A simulation was carried out to examine the convergence of the Bayes estimates based on the estimated innovations, and the results were compared with the maximum likelihood estimates. The means and root mean square errors (RMSE) of the maximum likelihood estimates θ̂ and σ̂² and of the Bayes estimates θ̂_B and σ̂²_B were computed for different parameter values and series lengths. Generally, the simulation results showed that the performance of the Bayes estimates using the estimated innovations depends on the value of θ within the invertibility region. As expected, the maximum likelihood and Bayes estimates perform comparably for long series lengths.
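A simulation of this kind can be sketched as follows (replication counts, series length, and the θ value are our own choices; conditional least squares by grid search stands in for the MLE):

```python
import numpy as np

# Our own small Monte Carlo: RMSE of a conditional-LS estimate of theta
# (a stand-in for the MLE) versus the grid posterior mean under a flat
# prior on theta, across independent replicates.
rng = np.random.default_rng(3)
theta_true, n, reps = 0.5, 150, 60
grid = np.linspace(-0.99, 0.99, 199)

def sse_curve(x):
    # Innovation sum of squares S(theta) over the grid, with Z_0 = 0.
    out = np.empty(len(grid))
    for i, th in enumerate(grid):
        prev, s = 0.0, 0.0
        for xt in x:
            prev = xt - th * prev
            s += prev * prev
        out[i] = s
    return out

mle_err, bayes_err = [], []
for _ in range(reps):
    z = rng.normal(size=n + 1)
    x = z[1:] + theta_true * z[:-1]
    s = sse_curve(x)
    mle_err.append(float(grid[int(np.argmin(s))]) - theta_true)
    log_post = -0.5 * n * np.log(s)               # sigma^2 integrated out
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    bayes_err.append(float(np.sum(grid * w)) - theta_true)

rmse_mle = float(np.sqrt(np.mean(np.square(mle_err))))
rmse_bayes = float(np.sqrt(np.mean(np.square(bayes_err))))
print(rmse_mle, rmse_bayes)
```

At this series length the two RMSE values come out close, consistent with the comparable large-sample performance reported above.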

CONCLUSION
It can be concluded that for long series lengths, the approximate Bayes estimate of θ for the invertible MA(1) model, assuming a uniform prior on θ and an inverted gamma prior on σ², equals the Bayes estimate of θ for the noninvertible MA(1) model assuming Jeffreys' prior. The simulation results showed that the performance of the Bayes estimates using the estimated innovations depends on the value of θ within the invertibility region. For long series lengths, the maximum likelihood and Bayes estimates perform comparably.