American Journal of Applied Sciences

Visual Tracking using Invariant Feature Descriptor

Lee-Yeng Ong, Siong-Hoe Lau and Voon-Chet Koo

DOI : 10.3844/ajassp.2017.886.898

Volume 14, Issue 9

Pages 886-898

Abstract

The process of identifying the state of an object in a video sequence is referred to as visual tracking. It is mainly achieved by using appearance information from a reference image to recognize similar characteristics in the other images. Since a digital image is built up from rows and columns of pixels represented by a finite set of digital values, the appearance information is measured with a mathematical formulation known as image intensity. Distinguishing the intensity of the object of interest from other objects and the surrounding background remains the main challenge in visual tracking. In this study, a novel invariant feature descriptor model is introduced to address this problem. The proposed framework is inspired by the theoretical model of local features that has been widely used for image recognition. Drawing on the large number of diversified scenarios in surveillance applications, the performance of the proposed model is demonstrated on a benchmark dataset for single-target tracking. The experimental results show the advantage of the proposed model for tracking non-rigid objects against changing backgrounds, compared with other state-of-the-art visual trackers. In addition, the important aspects of the proposed model are analyzed and highlighted in the experimental discussions.
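The notion of image intensity described above can be illustrated with a minimal sketch. The paper's exact intensity formulation is not given here, so the example below assumes the common ITU-R BT.601 luminance weights for converting RGB pixel values to a single intensity value; the function name `intensity` and the toy image are illustrative only.

```python
def intensity(r, g, b):
    """Map an RGB pixel to a single intensity value in [0, 255].

    Weights follow the ITU-R BT.601 luminance convention; this is an
    assumption for illustration, not the formulation used in the paper.
    """
    return 0.299 * r + 0.587 * g + 0.114 * b


# A toy 2x2 image: rows and columns of pixels, each holding a finite
# set of digital values (here, 8-bit RGB triples).
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# The intensity map assigns one value per pixel; appearance matching in
# visual tracking compares such values between a reference image and
# subsequent frames.
intensity_map = [[intensity(*px) for px in row] for row in image]
print(intensity_map)
```

Distinguishing the target from its background is hard precisely because different objects can produce very similar intensity values under this kind of mapping.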

Copyright

© 2017 Lee-Yeng Ong, Siong-Hoe Lau and Voon-Chet Koo. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.