TY - JOUR
AU - Mohamed Toriah, Shaimaa Toriah
AU - Ghalwash, Atef Zaki
AU - Youssif, Aliaa A.A.
PY - 2018
TI - Shots Temporal Prediction Rules for High-Dimensional Data of Semantic Video Retrieval
JF - American Journal of Applied Sciences
VL - 15
IS - 1
DO - 10.3844/ajassp.2018.60.69
UR - https://thescipub.com/abstract/ajassp.2018.60.69
AB - Temporal consistency is a vital property in semantic video retrieval, yet few studies exploit it. Most methods in those studies depend on expert-defined rules and on ground-truth annotation. Ground-truth annotation is time-consuming, labor-intensive, and domain-specific, and it covers only a limited number of annotated concepts and shots. Because video concepts are interrelated, the temporal rules extracted from ground-truth annotation are often inaccurate and incomplete. Concept detection scores, by contrast, form a huge, high-dimensional, continuous-valued dataset that is generated automatically. Temporal association rule algorithms are efficient at revealing temporal relations, but they have limitations when applied to high-dimensional, continuous-valued data; these constraints have led to a lack of research using temporal association rules. We therefore propose a novel framework that encodes the high-dimensional, continuous-valued concept detection scores into a single stream of numbers without loss of important information and predicts the behavior of neighboring shots by generating temporal association rules. Experiments on the TRECVID 2010 dataset show that the proposed framework is both efficient and effective: it reduces the dimensionality of the dataset matrix from 130×150000 to 130×1 without loss of important information, and it predicts the behavior of neighboring shots (10 or more) using the extracted temporal rules.
ER -