Research Article Open Access

Shots Temporal Prediction Rules for High-Dimensional Data of Semantic Video Retrieval

Shaimaa Toriah Mohamed Toriah1, Atef Zaki Ghalwash2 and Aliaa A.A. Youssif2
  • 1 Benha University, Egypt
  • 2 Helwan University, Egypt
American Journal of Applied Sciences
Volume 15 No. 1, 2018, 60-69


Submitted On: 12 September 2017 Published On: 30 January 2018

How to Cite: Mohamed Toriah, S. T., Ghalwash, A. Z. & Youssif, A. A. (2018). Shots Temporal Prediction Rules for High-Dimensional Data of Semantic Video Retrieval. American Journal of Applied Sciences, 15(1), 60-69.


Temporal consistency is a vital property in semantic video retrieval, yet few studies exploit it. Most methods in those studies depend on expert-defined rules and on ground-truth annotation. Ground-truth annotation is time-consuming, labor-intensive, and domain-specific; it also covers only a limited number of annotated concepts and shots. Because video concepts are interrelated, the temporal rules extracted from ground-truth annotation are often inaccurate and incomplete. Concept detection scores, by contrast, form a huge, high-dimensional, continuous-valued dataset that is generated automatically. Temporal association rule algorithms are efficient at revealing temporal relations, but they have limitations when applied to high-dimensional, continuous-valued data, and these constraints have led to a lack of research using temporal association rules. We therefore propose a novel framework that encodes the high-dimensional, continuous-valued concept detection scores into a single stream of numbers without loss of important information, and that predicts the behavior of neighbouring shots by generating temporal association rules. Experiments on the TRECVID 2010 dataset show that the proposed framework is both efficient and effective: it reduces the dimensionality of the dataset matrix from 130×150000 to 130×1 without loss of important information, and it predicts the behavior of neighbouring shots, the number of which can be 10 or more, using the extracted temporal rules.
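The keywords below name Principal Component Analysis and Gaussian Mixture Model clustering fitted with Expectation Maximization. The following sketch shows one plausible way such an encoding step could work: project each shot's concept-score vector onto the first principal component, then cluster the 1-D projections with a small EM-fitted Gaussian mixture, so every shot receives one discrete symbol and the video becomes a single stream of numbers suitable for temporal rule mining. All names, the toy data, and the choice of 4 mixture components are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def pca_1d(X):
    """Project each row (one shot's concept scores) onto the first principal component."""
    Xc = X - X.mean(axis=0)
    # First right singular vector of the centered matrix = first principal axis
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

def gmm_1d_em(x, k=4, iters=50, seed=0):
    """Fit a 1-D Gaussian mixture with EM and return hard cluster labels."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)        # initial means
    var = np.full(k, x.var() + 1e-6)                 # initial variances
    pi = np.full(k, 1.0 / k)                         # initial mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point (log-domain)
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0) + 1e-12
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / n + 1e-6
    return r.argmax(axis=1)

# Toy stand-in dataset: 300 shots x 130 concept-detection scores in [0, 1]
rng = np.random.default_rng(1)
scores = rng.random((300, 130))
stream = gmm_1d_em(pca_1d(scores), k=4)
print(stream.shape)  # one discrete symbol per shot
```

The resulting label stream is what a sequential pattern discovery algorithm would then mine for temporal association rules over neighbouring shots.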




  • Semantic Video Retrieval
  • Temporal Association Rules
  • Principal Component Analysis
  • Gaussian Mixture Model Clustering
  • Expectation Maximization Algorithm
  • Sequential Pattern Discovery Algorithm