Multimodal Integration (Image and Text) Using Ontology Alignment
Ahmad Adel Abu Shareha, Mandava Rajeswari and Dhanesh Ramachandram
DOI : 10.3844/ajassp.2009.1217.1224
American Journal of Applied Sciences
Volume 6, Issue 6
Problem statement: This study proposed a multimodal integration method at the concept level to investigate information from multiple modalities. The multimodal data was represented as two separate lists of concepts extracted from images and their related text. The concepts extracted from image analysis were often ambiguous, while the concepts extracted from text processing could be sense-ambiguous. The major problems facing the integration of the underlying modalities (image and text) were the difference in coverage and the difference in granularity level. Approach: This study proposed a novel application of ontology alignment to unify the underlying ontologies. The two lists of concepts were represented in a structured form within their corresponding ontologies; the two structured lists were then enriched and matched based on the alignment, and this matching represented the final knowledge. Results: The difference in coverage was solved using the alignment process, and the difference in granularity level was solved using the enrichment process. Thus, the proposed integration produced accurate integrated results. Conclusion: Integrating these concepts allows the totality of the knowledge to be expressed more precisely.
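The enrich-then-match step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual method: the toy "is-a" hierarchies, concept names, and the assumption that the alignment has already unified concept labels across the two ontologies are all hypothetical simplifications. Enrichment expands each concept with its ancestors (addressing the granularity difference), and matching intersects the enriched sets (addressing the coverage difference).

```python
# Hypothetical toy "is-a" hierarchies standing in for the image ontology
# and the text ontology; each maps a concept to its parent concept.
IMAGE_ONTOLOGY = {
    "tiger": "feline",
    "feline": "animal",
    "grass": "vegetation",
    "vegetation": "scenery",
}

TEXT_ONTOLOGY = {
    "bengal tiger": "tiger",
    "tiger": "animal",
    "savanna": "scenery",
}

def enrich(concepts, ontology):
    """Expand each concept with all of its ancestors in the ontology,
    bringing both modalities to a comparable granularity level."""
    enriched = set()
    for concept in concepts:
        while concept is not None:
            enriched.add(concept)
            concept = ontology.get(concept)  # walk up the is-a chain
    return enriched

def integrate(image_concepts, text_concepts):
    """Match the two enriched concept sets; the intersection is the
    concept-level knowledge supported by both modalities."""
    image_enriched = enrich(image_concepts, IMAGE_ONTOLOGY)
    text_enriched = enrich(text_concepts, TEXT_ONTOLOGY)
    return image_enriched & text_enriched

matched = integrate({"tiger", "grass"}, {"bengal tiger", "savanna"})
print(sorted(matched))  # ['animal', 'scenery', 'tiger']
```

In this toy run, "bengal tiger" from the text side and "tiger" from the image side meet at the shared concepts "tiger" and "animal" only after enrichment, which is the granularity problem the paper's enrichment process addresses; a real implementation would first align the two ontologies rather than rely on identical concept labels.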
© 2009 Ahmad Adel Abu Shareha, Mandava Rajeswari and Dhanesh Ramachandram. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.