Still Image Compression Using Texture and Non-Texture Prediction Model

Problem statement: Existing lossless image compression schemes perform prediction on image data using Local Binary Patterns (LBP) and spatial-neighborhood-based techniques. Earlier techniques such as Vector Quantization (VQ) and Gradient Adjusted Prediction (GAP) do not classify texture and non-texture regions separately, yet the prediction of texture and non-texture content is a key factor in efficient lossless image compression. Hence, a more efficient image prediction scheme is needed to exploit these texture components. Approach: In this research, an efficient visual-quality technique for image compression is proposed. The image is classified into texture and non-texture regions using an Artificial Neural Network (ANN) classifier. The texture regions are encoded with the Similar Block Matching (SBM) encoder and the non-texture regions with SPIHT encoding. Results: The proposed texture-prediction-based compression is compared with the existing compression techniques H.264 and JPEG. The results reveal that the Peak Signal to Noise Ratio (PSNR) values of all test images are higher in the proposed technique than in JPEG. Similarly, PSNR values are lower in H.264 for all images except the Boat image when compared to the proposed technique. The increase in PSNR indicates that the output image has less noise than with the existing techniques. Conclusion: The compression of the proposed algorithm is superior to JPEG and H.264, improving both compression ratio and Peak Signal to Noise Ratio (PSNR). In future, this study can be extended to real-time applications such as video compression for medical images.


INTRODUCTION
In lossless image compression, the image is represented compactly such that it can be reconstructed without any error. This is important in applications where, for legal or medical reasons, images must be stored or transported so that they can later be recovered without any change. Lossless image compression schemes either treat the image as a 1-D text sequence or make use of 2-D contexts to improve coding performance. The former approach involves an initial 2-D to 1-D conversion followed by a text compression algorithm applied to the 1-D sequence (Omer and Khatatneh, 2010). Some popular lossless image formats (such as GIF, TIFF and PNG) are based on the Ziv-Lempel coding algorithm applied after a raster scan of the image. The most successful lossless image compression algorithms are, however, context-based and exploit the 2-D spatial redundancy in natural images (Zhang and Adjeroh, 2008). Examples include LJPG (lossless JPEG), Context-based Adaptive Lossless Image Coding (CALIC) (Paul et al., 2011) and SPIHT (Chopra and Pal, 2011). These methods usually involve four basic components: an initial prediction scheme to remove the spatial redundancy between neighboring pixels; a context selection strategy for a given position in the image; a modeling method for estimating the conditional probability distribution of the prediction error given the context in which it occurs; and an entropy coding method based on the estimated conditional probabilities. Different lossless image compression schemes vary in the details of one or more of these basic components. Existing work used a prediction method called super-spatial structure prediction (Zhao and He, 2010). Motivated by motion detection in video coding, it attempts to find the optimal prediction of structure components within previously encoded image regions.
Compared to Vector Quantization (VQ) based image encoders, it has the flexibility to incorporate multiple H.264-style prediction modes (Chiang et al., 2011). Unlike neighborhood-based methods, such as Gradient Adjusted Prediction (GAP) (Chiang et al., 2011) and H.264 intra prediction, it allows a block to find the best possible match from the whole image. In previous techniques such as VQ and GAP, the texture and non-texture regions are not classified separately.
Achieving better image compression and encoding efficiency by relaxing the neighborhood constraint can be traced back to sequential data compression (Chandra et al., 2009) and vector quantization for image compression (Santipach and Mamat, 2011). In sequential data compression, a substring of text is represented by a displacement/length reference to a substring previously seen in the text. A lossless dynamic dictionary compression technique is not competitive with the state-of-the-art, such as CALIC (Paul et al., 2011), in terms of encoding efficiency. Image compression techniques such as SPIHT (Set Partitioning in Hierarchical Trees) (Santhi and Banu, 2011) and the embedded zerotree wavelet (Dehkordi et al., 2011) offer significantly improved quality over VQ.
Existing lossless image compression schemes attempt prediction on image data using Local Binary Patterns (LBP) (Song et al., 2010) and the spatial neighborhood. LBP considers only the uniform patterns in an image, so it discards important pattern information for images whose dominant patterns (i.e., the specific patterns with the largest proportion among all patterns) are not uniform. In addition, the features of conventional LBP are the histograms of the uniform patterns in a texture image.

MATERIALS AND METHODS
Texture and non-texture classification: To classify texture-based regions, the classifier used is either feature-specific or domain-specific. A feature-specific texture classifier assigns a class label to each pixel based on the features corresponding to that pixel, independent of spatial-domain or transform-domain knowledge.
A domain-specific classifier uses domain knowledge in the form of additional constraints to achieve classification. The performance of any domain-specific approach is expected to be superior to the feature-specific approaches. Domain-specific knowledge can be modeled as a set of constraints on the image features and on the possible labels for each pixel. A suitable constraint satisfaction model may be used to attain a state in which the constraints are satisfied to the maximum extent. An example of a constraint satisfaction scheme is a rule-based expert system, where the constraints are described as rules in its knowledge base.
The back propagation algorithm is used in a layered feed-forward ANN in which the artificial neurons are arranged in layers (Omaima, 2010). Neurons send their signals "forward" and the errors are propagated backwards. The network receives inputs through neurons in the input layer and the output of the network is given by the neurons in the output layer. There may be one or more intermediate hidden layers. Back propagation is a supervised learning method for training a neural network in which the initial system output is compared with the desired output and the network computes the error (the difference between actual and expected results). The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, which are adjusted until the error is minimized. In this study, an efficient visual-quality technique for image compression is proposed. The image is classified into texture and non-texture regions using the ANN classifier. The texture regions are encoded with the Similar Block Matching (SBM) encoder and the non-texture regions with SPIHT encoding (Santhi and Banu, 2011).
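The training loop described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual network: the layer sizes, learning rate, sigmoid activation and single training sample are all assumptions chosen for brevity.

```python
# Minimal sketch of a single-hidden-layer feed-forward network trained with
# back propagation. All sizes, the learning rate and the sample values are
# illustrative, not taken from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, w1, w2, lr=0.5):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(x @ w1)
    y = sigmoid(h @ w2)
    # Error between actual and expected output.
    err = y - target
    # Backward pass: propagate the error and adjust the weights.
    grad_y = err * y * (1 - y)
    grad_h = (grad_y @ w2.T) * h * (1 - h)
    w2 -= lr * np.outer(h, grad_y)
    w1 -= lr * np.outer(x, grad_h)
    return w1, w2, float((err ** 2).sum())

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))   # 4 texture features -> 8 hidden units
w2 = rng.normal(size=(8, 1))   # 8 hidden units -> 1 output (texture or not)
x = np.array([0.6, 0.2, 0.9, 0.4])   # hypothetical feature vector
t = np.array([1.0])                  # label: texture block
errors = []
for _ in range(200):
    w1, w2, e = train_step(x, t, w1, w2)
    errors.append(e)
```

The squared error shrinks over the iterations, which is the "reduce this error until the ANN learns the training data" behavior described above.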
Methods for texture classifier: An image usually comprises texture and non-texture patterns. According to the Human Visual System (HVS), even minor changes in the non-texture regions are noticeable, while changes in the texture regions are not easily identifiable. The SBM method is therefore suitable for texture-region-based compression to obtain better visual quality.
In the proposed approach, texture and non-texture regions are classified based on texture features such as contrast, correlation, energy and homogeneity.
An analysis has been carried out of the texture features in both non-texture and texture images. The texture images are taken from the Brodatz database. These images are taken individually and divided into 4×4 blocks. The texture features are computed for each 4×4 block, yielding a 4×1 feature vector per block.
Consider an image of 256×256 from which the 4×4 blocks are separated, adding up to 4096 blocks. Thus, the computed values form a 4×4096 feature matrix for a single image. In total, 66 images are taken from the database, finally yielding 4×4096×66 feature matrices. The non-texture images are collected from the Corel database. The above-mentioned process is carried out for the non-texture images and their corresponding feature matrices are found. The experimental results are given in Table 1 and 2.

A Gray Level Co-occurrence Matrix (GLCM) is created by counting pairs of pixels at a given distance in four directions, and the texture features are extracted from the statistics of this matrix. The prominent features of the texture pattern in an image are contrast, correlation, energy and homogeneity. The classification accuracy increases as these features are trained over more images with a traditional classifier. Here R = Σ_i Σ_j P(i, j) is the normalizing constant of the co-occurrence count matrix P, so that the normalized co-occurrence matrix is p(i, j) = P(i, j)/R.

Fig. 1: The proposed algorithm model

Figure 1 represents the proposed approach, in which a 4×4 block is taken from the input image and classified using the ANN. If a texture region is found, the block-matching algorithm is carried out along with H.264 encoding (Chiang et al., 2011). If a non-texture region is classified, the SPIHT encoding process is carried out (Santhi and Banu, 2011). Consider the Barbara image as the input image shown in Fig. 2a. The corresponding texture classification using the ANN classification technique is shown in Fig. 2b. The classified texture and non-texture images are shown in Fig. 2c and 2d.
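The GLCM feature extraction described above can be sketched as follows. This is a hedged sketch under simplifying assumptions: only one of the four directions (one pixel to the right) is shown, and the gray levels are assumed to be pre-quantized to a small range; the function name and the `levels` parameter are illustrative, not from the paper.

```python
# Sketch of GLCM-based texture features (contrast, correlation, energy,
# homogeneity) for a small block. Only the horizontal offset is shown;
# the paper uses four directions.
import numpy as np

def glcm_features(block, levels=8):
    # Build co-occurrence counts P(i, j) for pixel pairs one step apart.
    P = np.zeros((levels, levels))
    for r in range(block.shape[0]):
        for c in range(block.shape[1] - 1):
            P[block[r, c], block[r, c + 1]] += 1
    R = P / P.sum()                     # normalize by R = sum_i sum_j P(i, j)
    i, j = np.indices(R.shape)
    mu_i, mu_j = (i * R).sum(), (j * R).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * R).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * R).sum())
    contrast = ((i - j) ** 2 * R).sum()
    energy = (R ** 2).sum()
    homogeneity = (R / (1 + np.abs(i - j))).sum()
    correlation = (((i - mu_i) * (j - mu_j) * R).sum() / (sd_i * sd_j)
                   if sd_i > 0 and sd_j > 0 else 1.0)
    return np.array([contrast, correlation, energy, homogeneity])

block = np.array([[0, 7, 0, 7],
                  [7, 0, 7, 0],
                  [0, 7, 0, 7],
                  [7, 0, 7, 0]])        # high-contrast, texture-like block
flat = np.zeros((4, 4), dtype=int)      # uniform, non-texture block
f_tex, f_flat = glcm_features(block), glcm_features(flat)
```

Consistent with the observation in the Results section, the contrast of the textured block comes out much higher than that of the uniform block, while the uniform block has maximal energy and homogeneity.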

Similar block matching algorithm:
The similar block matching algorithm matches pre-existing templates with the blocks in the image. The pre-existing blocks are shown in Fig. 3a-f. The sample texture model patterns obtained from the Brodatz database images are shown in Fig. 4. In the proposed SBM encoding technique, the ANN is used to classify texture and non-texture blocks. If a featured block is a texture block, it is taken as a reference block. Around the reference block, a search area of a 16×16 bounding block is formed. The search over neighborhood patterns is performed by taking a neighboring 4×4 block of the reference block and computing the bi-orthogonal wavelet transform of the two blocks. The Mean Absolute Difference (MAD) between the resulting transformed matrices is used to find similar blocks among the reference and neighboring blocks. The block with the minimal distance is assumed to be similar to the reference block (R).
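The MAD comparison in the transform domain can be sketched as follows. As a simplifying assumption, a one-level Haar transform stands in for the bi-orthogonal wavelet transform named above, and the sample blocks are made up for illustration.

```python
# Sketch of the transform-domain Mean Absolute Difference (MAD) test between
# a 4x4 reference block and candidate neighbors. A Haar transform is used as
# a stand-in for the bi-orthogonal wavelet transform in the text.
import numpy as np

def haar2d(b):
    # One level of a 2-D Haar transform: averages/differences over rows,
    # then over columns (illustrative substitute for the bi-orthogonal DWT).
    b = b.astype(float)
    b = np.hstack([(b[:, ::2] + b[:, 1::2]) / 2, (b[:, ::2] - b[:, 1::2]) / 2])
    b = np.vstack([(b[::2, :] + b[1::2, :]) / 2, (b[::2, :] - b[1::2, :]) / 2])
    return b

def mad(ref, cand):
    # Mean Absolute Difference between the two transformed blocks.
    return np.abs(haar2d(ref) - haar2d(cand)).mean()

ref = np.array([[10, 12, 10, 12]] * 4)       # reference block R
similar = ref + 1                            # near-identical candidate
different = np.full((4, 4), 200)             # clearly dissimilar candidate
best = min([similar, different], key=lambda c: mad(ref, c))
```

The candidate with the smallest MAD is declared similar to the reference block, exactly the minimal-distance rule stated above.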

Similar Block Matching Encoder (SBM) Algorithm:
Require: Image (512×512). The Similar Block Matching (SBM) process is shown in Fig. 5. Initially, a 16×16 search block is defined around the 4×4 reference block, and the neighboring 4×4 blocks that meet the search criteria are examined. If a similar block is found per the MAD criterion, the corresponding matrix value and the location index are stored separately. After completing the search block, the algorithm automatically finds the next reference block and defines a new search block. Within this new search block, if some blocks are similar to the reference block, the values are stored as mentioned above. The SBM algorithm stores only the location index of similar blocks and discards the corresponding matrix values. Finally, all the matrix values and the location indices of the similar blocks are encoded using the H.264 encoder (Chiang et al., 2011).
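The search step of the encoder can be sketched as follows. This is an assumption-laden sketch: a plain pixel-domain MAD replaces the transform-domain comparison, and the `thresh` parameter and the names `indices`/`literals` are hypothetical, introduced only to show how similar blocks keep just a location index while other blocks keep their full values.

```python
# Illustrative sketch of the SBM search step over one 16x16 search window:
# 4x4 blocks similar to the reference keep only their location index; the
# remaining blocks keep their full matrix values for explicit coding.
import numpy as np

def sbm_search(window, ref, thresh=2.0):
    indices, literals = [], []
    for r in range(0, 16, 4):
        for c in range(0, 16, 4):
            cand = window[r:r + 4, c:c + 4]
            # Simplified pixel-domain MAD similarity test (assumed threshold).
            if np.abs(cand.astype(float) - ref).mean() < thresh:
                indices.append((r, c))   # similar: store only the location
            else:
                literals.append(cand)    # dissimilar: keep the matrix values
    return indices, literals

rng = np.random.default_rng(1)
ref = np.full((4, 4), 100)                      # reference texture block
window = rng.integers(0, 256, size=(16, 16))    # 16x16 search area
window[0:4, 0:4] = ref                          # plant one exact match
idx, lits = sbm_search(window, ref)
```

In a full encoder, `idx` and `lits` would then be passed to the H.264 entropy coder; storing an index instead of 16 pixel values is where the compression gain of SBM comes from.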

RESULTS AND DISCUSSION
The proposed texture prediction based compression is compared with the existing compression techniques H.264 and JPEG. Table 3 reveals that the PSNR values of all the test images are higher in the proposed technique than in JPEG, as shown in Fig. 6. Similarly, PSNR values are lower in H.264 for all the images except the Boat image when compared to the proposed technique. The increase in PSNR indicates that the output image has less noise than with the existing techniques. Table 3 and 4 show the comparison of PSNR and compression ratio with the existing techniques. The texture features extracted from non-texture blocks are shown in Table 5, which shows that the feature values are minimal for the images taken. Table 6 shows the texture features extracted from texture images. Texture feature values for texture blocks are significantly (p<0.05) higher than those of non-texture blocks. From this experiment, it is understood that the feature values of the texture regions are higher than those of the non-texture regions. Figure 7a and b show the comparison of the compression ratio of various methods for the Baboon, Barbara, Gold hill, Lena, Peppers, Boat, Couple and Man images. The statistics obtained with the proposed approach confirm that it achieves a good PSNR and compression ratio, providing satisfactorily high-quality reconstruction.
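For reference, the PSNR figure used in these comparisons follows the standard definition for 8-bit images; the tiny example images below are illustrative, not the paper's test set.

```python
# Standard PSNR for 8-bit images: higher PSNR means less reconstruction noise.
import numpy as np

def psnr(original, reconstructed):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')              # identical images: no noise at all
    return 10 * np.log10(255.0 ** 2 / mse)

orig = np.arange(16, dtype=np.uint8).reshape(4, 4)
noisy = orig.copy()
noisy[0, 0] += 4                         # one pixel off by 4 -> MSE = 1.0
```

With an MSE of 1.0 the PSNR is 20·log10(255) ≈ 48.13 dB, which is why a higher PSNR in Table 3 corresponds to a reconstruction closer to the original.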

CONCLUSION
The proposed algorithm, based on texture and non-texture classification, is simple and computationally inexpensive. Its compression is superior to JPEG and H.264. The proposed compression algorithm improves both the compression ratio and the Peak Signal to Noise Ratio (PSNR). In future, this study can be extended to real-time applications such as video compression for medical images.