Journal of Computer Science

Thai Speech Coding Based On Conjugate-Structure Algebraic Code Excited Linear Prediction Algorithm

Suphattharachai Chomphan

DOI : 10.3844/jcssp.2011.75.79

Volume 7, Issue 1

Pages 75-79

Abstract

Problem statement: In mobile communication, speech coding aims to compress speech at the lowest bitrate with the highest quality for standard languages such as English, German and French. For other languages with different uttering styles, the encoded speech quality is not guaranteed at the same bitrate, so an appropriate evaluation should be performed and suitable techniques applied to improve the speech quality. Approach: This study presents a comparison of the quality of speech encoded and decoded by the CS-ACELP coder of the ITU-T G.729 standard. The purpose is to compare the performance of the CS-ACELP coder on Thai speech and English speech. Results: The study used two coding configurations: (1) the CS-ACELP coder without Voice Activity Detection (VAD) and (2) the CS-ACELP coder with VAD. An objective test was used to measure the speech quality in each case. The results show that both configurations yield Thai speech quality mostly below that of English speech; comparing the two, for both Thai and English, configuration (2) gives better speech quality than configuration (1). Finally, we modified the coder by increasing the order of the LP analysis to improve the Thai speech quality. Conclusion: The findings show that, without modification, the quality of Thai speech coding is not equivalent to that of English. After modifying the LP analysis by increasing the LP order from 10 to 12 or 14, the quality of Thai speech coding is clearly improved, although the bitrate also increases to carry the higher-order information.
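The LP-order effect noted in the conclusion can be sketched with the standard Levinson-Durbin recursion used in LP analysis. This is a minimal pure-Python illustration, not the G.729 implementation; the synthetic test frame and function names are assumptions for demonstration only:

```python
import math
import random

def autocorrelation(x, max_lag):
    """Biased autocorrelation estimates r[0..max_lag]."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(max_lag + 1)]

def levinson_durbin(r, order):
    """Solve the LP normal equations for the given order.

    Returns (a, err) where a[0] == 1.0 and err is the residual
    prediction-error power; a lower err means a better spectral fit.
    """
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(1, i))
        k = -(r[i] + acc) / err          # reflection coefficient
        a_new = a[:]
        for j in range(1, i):
            a_new[j] = a[j] + k * a[i - j]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)             # error never grows with order
    return a, err

# Synthetic "speech-like" frame: two sinusoids plus noise (an assumption,
# not real Thai or English speech data).
rng = random.Random(0)
frame = [math.sin(0.1 * t) + 0.5 * math.sin(0.3 * t) + 0.1 * rng.gauss(0, 1)
         for t in range(400)]

r = autocorrelation(frame, 14)
_, err10 = levinson_durbin(r, 10)   # G.729 uses LP order 10
_, err12 = levinson_durbin(r, 12)   # the modification raises the order
# err12 <= err10: a higher LP order fits the spectral envelope at least
# as well, at the cost of transmitting more coefficient information.
```

The trade-off mirrors the paper's conclusion: each extra coefficient can only reduce the residual error, but it must also be quantized and transmitted, raising the bitrate.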

Copyright

© 2011 Suphattharachai Chomphan. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.