Estimation of the Mean of Truncated Exponential Distribution

Problem statement: In this study, the researcher considers the problem of estimating the mean of the truncated exponential distribution. Approach: This study dealt with the maximum likelihood and unique minimum variance unbiased estimators, gave a modification of the maximum likelihood estimator and derived asymptotic variances and asymptotic confidence intervals for the estimators. The properties of these estimators in small, moderate and large samples were investigated via asymptotic theory and computer simulation. Results: It turned out that the modified maximum likelihood estimator was more efficient than the others and exists with probability 1. Conclusion: The modified maximum likelihood estimator always exists, is fast and straightforward to compute and is more likely to yield feasible values than the unique minimum variance unbiased estimator. Its variance was well approximated by the large sample variance of the other estimators.


INTRODUCTION
Suppose that X is a random variable with exponential Probability Density Function (PDF) with mean 1/θ. Then the PDF of the random variable Y, the truncated version of X truncated on the right at b, is given by:

f(y; θ) = θe^(−θy)/(1 − e^(−θb)),  0 < y < b, θ > 0  (1)

where b is a known constant.
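As a quick numerical sanity check, the density in (1) can be coded directly. The sketch below (Python; the function name is ours, not from the paper) verifies that the density integrates to 1 over (0, b) for an arbitrary choice of θ and b.

```python
import math

def trunc_exp_pdf(y, theta, b):
    """PDF of the exponential distribution truncated on the right at b, Eq. (1)."""
    if not 0 < y < b:
        return 0.0
    return theta * math.exp(-theta * y) / (1.0 - math.exp(-theta * b))

# Midpoint-rule Riemann sum of the density over (0, b); should be close to 1.
theta, b = 1.5, 2.0
n = 100000
total = sum(trunc_exp_pdf((i + 0.5) * b / n, theta, b) * b / n for i in range(n))
print(round(total, 4))
```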
In practice, the exponential distribution has been widely used as a model in areas ranging from studies on the lifetimes of manufactured items [1,2] to research involving survival or remission times in chronic diseases [3]. But in some situations, an estimate is desired of the mean among the elements of the population belonging to a certain group. For example, in life testing problems from an exponential distribution, a separate estimate of the lifetime mean might be required for bulbs whose survival times are limited to be less than a constant b. In this case these survival times might follow a truncated exponential distribution. The families of truncated distributions provide densities that are useful in modeling such populations [4-8].
The truncated exponential distribution can arise in a variety of ways. It may directly seem to be a good fit as a distribution for a given available data set, or it may result from the type of sampling used when the underlying distribution is assumed to follow the exponential distribution [6,9]. There are different approaches to sample selection from a subset of a larger population [10,11].
This study deals with the Maximum Likelihood (ML) estimator and the Unique Minimum variance unbiased (UM) estimator of the mean of the truncated exponential distribution and shows that the maximum likelihood estimator does not always exist; its existence depends upon the value of the mean of the random sample, and it exists with probability approaching 1 as n → ∞. A Modified Maximum Likelihood (MML) estimator is considered and compared with the other estimators. The results of large scale simulations indicate that the modified maximum likelihood estimator is more efficient and more likely to satisfy the feasibility condition, namely 0 < μ̂ < b/2.

Before proceeding with the estimation problem, it can be shown that the mean, say μ(θ), of the truncated exponential distribution given in (1) is:

μ(θ) = E(Y) = 1/θ − b/(e^(θb) − 1)  (2)

This function is monotonic decreasing and continuous on (0, ∞) with possible range (0, b/2).
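The monotonicity claim is easy to check numerically. The short sketch below (Python; the helper name mu is ours) evaluates μ(θ) = 1/θ − b/(e^(θb) − 1) on a grid of θ values and confirms that it decreases and stays within (0, b/2).

```python
import math

def mu(theta, b):
    """Mean of the truncated exponential in (1): 1/theta - b/(exp(theta*b) - 1)."""
    return 1.0 / theta - b / math.expm1(theta * b)

b = 2.0
thetas = [0.01, 0.1, 1.0, 10.0, 100.0]
values = [mu(t, b) for t in thetas]

# mu is strictly decreasing in theta ...
assert all(v1 > v2 for v1, v2 in zip(values, values[1:]))
# ... and confined to the interval (0, b/2).
assert all(0.0 < v < b / 2 for v in values)
print([round(v, 4) for v in values])
```

As θ → 0 the values approach b/2 (the uniform-like limit) and as θ → ∞ they approach 0, matching the stated range.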

MATERIALS AND METHODS
Maximum and modified maximum likelihood estimators: Assume that Y1, Y2, …, Yn is a random sample of size n taken from the truncated exponential distribution given in (1). The likelihood function, say L(θ), is:

L(θ) = θ^n e^(−nθȳ)/(1 − e^(−θb))^n  (3)

where ȳ is the sample mean. Maximizing this likelihood, the maximum likelihood estimator θ* of θ satisfies the likelihood equation:

1/θ − b/(e^(θb) − 1) − ȳ = 0  (4)

It can be shown that the left-hand side of Eq. 4 is monotonic decreasing in θ; as θ tends to 0 it tends to b/2 − ȳ and as θ tends to infinity it tends to −ȳ. Hence Eq. 4 has a unique solution θ* if and only if 0 < ȳ < b/2. As n → ∞, Ȳ converges in probability to the mean μ(θ) of the truncated exponential p.d.f. given in (1). Because the density in (1) is monotone decreasing, a simple geometrical argument shows that the mean μ(θ) must lie in the left half of the interval (0, b) and hence μ(θ) < b/2. Then P(Ȳ < b/2) → 1 as n → ∞, so that the MLE θ* exists with probability approaching 1 as n → ∞. Therefore, using the invariance property of the maximum likelihood method [13,14], the maximum likelihood estimator, μ̂1, of μ(θ) is given by:

μ̂1 = μ(θ*) = Ȳ  (5)

The same argument as before shows that the MLE μ̂1 exists with probability approaching 1 as n → ∞. Under the regularity conditions [15-17], this estimator possesses the major properties of the maximum likelihood estimator; that is, μ̂1 is a consistent, asymptotically efficient and best asymptotically normal estimator with mean μ(θ) and asymptotic variance equal to the Cramer-Rao lower bound. The modification to the MLE μ̂1 given in (5) is based on finding an estimator which is as close as possible to the MLE μ̂1 and is more likely to satisfy the feasibility condition 0 < μ(θ) < b/2 than the unique minimum variance unbiased estimator. This suggested estimator, say μ̂2, agrees with Ȳ whenever the feasibility condition Ȳ < b/2 holds and otherwise takes a feasible value just below b/2; it corresponds to the modified maximum likelihood estimator, say θ̃, of θ, defined as the solution of μ(θ̃) = μ̂2. The same argument as before shows that μ̂1 = μ̂2 and θ* = θ̃ with probability approaching 1 as n → ∞.
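Since Eq. 4 has no closed-form solution in θ, the root must be found numerically. The sketch below (Python; the function names are ours) solves μ(θ) = ȳ by bisection, exploiting the monotonicity of μ, and returns None when the feasibility condition ȳ < b/2 fails, mirroring the non-existence of the MLE.

```python
import math

def mu(theta, b):
    """Mean of the truncated exponential: 1/theta - b/(exp(theta*b) - 1)."""
    return 1.0 / theta - b / math.expm1(theta * b)

def mle_theta(ybar, b, tol=1e-10):
    """Solve the likelihood equation mu(theta) = ybar by bisection.
    A root exists only under the feasibility condition 0 < ybar < b/2."""
    if not 0 < ybar < b / 2:
        return None  # MLE does not exist for this sample
    lo, hi = 1e-9, 1.0
    while mu(hi, b) > ybar:    # expand the bracket until mu(hi) < ybar
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mu(mid, b) > ybar:  # mu is decreasing, so move the left end up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check: recover theta from its own mean.
theta_true, b = 1.5, 2.0
theta_hat = mle_theta(mu(theta_true, b), b)
print(round(theta_hat, 6))
```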
Unique minimum variance unbiased estimator: It is obvious that the distribution in (1) represents a regular case of the exponential class of probability density functions of the continuous type and hence Σ(i=1 to n) Yi is a complete sufficient statistic for θ. Since E(Ȳ) = μ(θ), the theorem of Lehmann and Scheffe [13] gives the unique minimum variance unbiased estimator, say μ̂3, of μ(θ):

μ̂3 = Ȳ  (8)

Since μ̂3 is unbiased, its variance, say var(μ̂3), is given by:

var(μ̂3) = MSE(μ̂3) = (1/n)[1/θ² − b²e^(θb)/(e^(θb) − 1)²]  (9)

where MSE is the mean-squared error.
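Equation 9 rests on the variance of the truncated exponential distribution itself. The sketch below (Python; the names var_y and mu are ours) cross-checks the closed form 1/θ² − b²e^(θb)/(e^(θb) − 1)² against a direct numerical evaluation of E(Y²) − μ(θ)².

```python
import math

def var_y(theta, b):
    """Closed-form variance of the truncated exponential in (1)."""
    e = math.exp(theta * b)
    return 1.0 / theta**2 - b**2 * e / (e - 1.0)**2

def mu(theta, b):
    return 1.0 / theta - b / math.expm1(theta * b)

# Midpoint-rule evaluation of E(Y^2) over (0, b).
theta, b = 1.5, 2.0
c = theta / (1.0 - math.exp(-theta * b))   # normalising constant of the density (1)
n = 200000
h = b / n
ey2 = 0.0
for i in range(n):
    y = (i + 0.5) * h
    ey2 += y * y * c * math.exp(-theta * y) * h

numeric = ey2 - mu(theta, b) ** 2
print(round(numeric, 6), round(var_y(theta, b), 6))
```

Dividing var_y by n then gives the sampling variance of Ȳ in (9).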
Interval estimation of μ(θ): An approximate 100(1−α) percent confidence interval for μ(θ) in (5-7) can be constructed from the standard normal limiting distribution and the modification of Slutsky's theorem given by [13]:

μ̂ ± z(α/2) √(var̂(μ̂))

where z(α/2) is the upper α/2 quantile of the standard normal distribution and var̂(μ̂) is the estimated variance of the estimator.

Simulation: To obtain the biases, variances and mean-squared errors of the estimators numerically, a large scale simulation investigation was made for the exponential p.d.f. truncated on the right, using MATLAB, the language of technical computing, version 6.5 [18]. These quantities were computed for 50,000 samples of sizes n = 20, 30, 50, 100, 200 generated from the truncated exponential distribution. Pseudo-random uniform numbers were obtained from the MATLAB function RAND. The transformation to the truncated exponential distributed variable is given by:

yi = F⁻¹(ui) = −(1/θ) ln[1 − ui(1 − e^(−θb))]

Where:
F(.) = The distribution function of the truncated exponential random variable
ui = Uniformly distributed random variable on (0,1) [10]

For each combination of (n, ζ), 50,000 trials have been carried out.
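The inverse-transform step above translates directly into code. The sketch below (Python rather than the paper's MATLAB, with a much smaller trial count than 50,000 to keep it quick; the function names are ours) draws truncated exponential samples and checks that the average of Ȳ over many trials is close to μ(θ).

```python
import math
import random

def sample_trunc_exp(theta, b, rng):
    """Inverse transform: y = -(1/theta) * ln(1 - u*(1 - e^(-theta*b))), u ~ U(0,1)."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-theta * b))) / theta

def mu(theta, b):
    return 1.0 / theta - b / math.expm1(theta * b)

rng = random.Random(0)                      # fixed seed so the run is reproducible
theta, b, n, trials = 1.5, 2.0, 50, 2000    # trial count reduced from the paper's 50,000
ybars = []
for _ in range(trials):
    ys = [sample_trunc_exp(theta, b, rng) for _ in range(n)]
    ybars.append(sum(ys) / n)

grand_mean = sum(ybars) / trials
print(round(grand_mean, 3), round(mu(theta, b), 3))  # the two should be close
```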

RESULTS
The simulation results for estimation of the mean of the truncated exponential distribution are shown in Tables 1-3. Table 1 shows that the MML estimator has, consistently, the lowest absolute bias of the two biased estimators of μ(θ), its advantage being particularly marked in small samples (n = 20, 30) and in moderate samples (n = 50). Table 2 shows that the MML estimator has a slightly larger variance than the ML estimator when ζ < 2.5, but its variance is small and in most cases relatively insignificant compared to the bias in its contribution to the mean-squared error. The UM estimator has the largest variance of the three estimators of μ(θ) when ζ < 2.5. The variance of the MML estimator is well approximated by the asymptotic variance of the ML and UM estimators given by (9) and the last line of Table 2. Table 3 gives the percentage values of the relative efficiencies of the ML and UM estimators, defined as the ratio of the mean-squared errors relative to the MML estimator. It is obvious from this table that, in general, the ML and UM estimators are less efficient than the MML estimator, especially when ζ < 2.5, and their relative efficiencies increase with ζ.

CONCLUSION
Estimators of the mean of the truncated exponential distribution have been suggested and their properties studied. It turns out that the modified maximum likelihood estimator has several advantages over the other estimators. It always exists, is fast and straightforward to compute and is more likely to yield feasible values for the estimated mean than the unique minimum variance unbiased estimator. The bias of the estimator is small and decreases rapidly as the sample size increases. The variance of the MML estimator is comparable with those of the ML and UM estimators. The variance of the MML estimator is well approximated by the large sample variance of the ML and UM estimators.