A Novel Approach to Head Positioning Using a Fixed Center Interpolation Net with a Weight Elimination Algorithm



INTRODUCTION
Driver distraction is a prominent cause of automotive collisions. To enable driver-assistance systems to address this problem, new algorithms are needed to infer a driver's focus of attention.
Vehicle and driver safety relies on the driver remaining focused on driving rather than being distracted by various in-vehicle gadgets. As new vehicles and obstacles move into the vicinity of the car, the driver must be aware of the change and be ready to respond as necessary. When a driver fails to initiate an action, the potential for a life-threatening collision increases (Doshi et al., 2009; Junker et al., 2008; Murphy-Chutorian and Trivedi, 2009; 2010).
Gesture recognition is a complex task that involves many aspects, such as motion modelling, motion analysis, pattern recognition, machine learning and neuro-fuzzy systems. Gestures are expressive and meaningful body motions used in daily life as a means of communication, and a computer-based automatic recognition system is necessary for their interpretation and for signal control in an interactive and dynamic environment (Roomi et al., 2010; Srinivasa and Grossberg, 2008; Suk et al., 2010; Iskandarani, 2010; Wu and Trivedi, 2008). In such an environment, body motion can be defined as a sequence of states in a configurable space, which can be modelled based on the following principles:
• Static start and end positions
• Smooth transitions forward and backward

There is a need for on-board vehicle systems capable of operating in harsh environments. Such systems are designed to gather and process information in order to carry out the actions necessary to achieve their designated functions, and they operate using reaction models: the brain of the system reacts to impulses picked up by the sensors and transmits information and orders to the system's actuators. All of the components, materials and software for these systems must satisfy requirements concerning size, sturdiness, energy consumption and immunity to external disturbances; the key words are security, reliability, quality, safety, real-time operation, autonomy and servo assistance.
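The two gesture-modelling principles above (a static start and end position, and smooth transitions between states) can be illustrated with a minimal sketch; the function name, threshold and hold length below are illustrative assumptions, not taken from the paper:

```python
def is_valid_gesture(states, max_step=1.0, hold=2):
    """Check a 1-D state sequence against the two modelling principles:
    a static start and end (the first and last `hold` states repeat)
    and smooth transitions (no jump larger than `max_step`)."""
    if len(states) < 2 * hold:
        return False
    static_start = len(set(states[:hold])) == 1
    static_end = len(set(states[-hold:])) == 1
    smooth = all(abs(b - a) <= max_step
                 for a, b in zip(states, states[1:]))
    return static_start and static_end and smooth

ok = is_valid_gesture([0.0, 0.0, 0.5, 1.0, 1.5, 1.5])   # valid sequence
bad = is_valid_gesture([0.0, 0.0, 5.0, 1.5, 1.5])       # jump of 5 is not smooth
```

Sequences violating either principle would be rejected before any pose estimation is attempted.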
In this study, a novel procedure for static head-pose estimation and a new algorithm for head-gesture positioning are presented. Visual tracking is integrated into the novel FCIN system for measuring the position and orientation of the head (Asamwar et al., 2010; Toure and Beiji, 2010; Bohme et al., 2008; Chan et al., 2010; Attarzadeh and Ow, 2010). The system consists of interconnected modules that detect head position, provide initial estimates of the head's pose and can be implemented to continuously track head position and orientation. Figure 1 shows the grid used to map head movements. The grid is scanned in one dimension multiple times, starting from a certain location and then returning to the location below the starting point (y+Δy). The number of nodes used will obviously affect the accuracy and convergence speed of the developed algorithm. Figure 2 shows a photo used to test the proposed system. For a unidirectional, two-dimensional (two-layer) Gaussian interpolation network with the ability to predict an output y for a sample of inputs (x_1, …, x_n), the output is related to the inputs via a weighted shaping function given by Eq. 1.

MATERIALS AND METHODS
The function in Eq. 1 is evaluated for inputs (x_1, …, x_n) fulfilling the following conditions:
• They are considered the coordinates of the input vector x
• They are associated with the centre vector R

Then each term handles the influence of both the reference vector and the input vector (Eq. 2).
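The equations referenced in this section as (1)-(4) are not reproduced in the text. As a hedged reconstruction, the standard form of a fixed-centre Gaussian interpolation (RBF-type) network consistent with the surrounding description is:

```latex
% Hedged reconstruction (standard interpolation-net form), not the
% paper's original typography: the output is a weighted sum of basis
% functions, each measuring the distance between the input vector x
% and a fixed centre c_i.
g(\mathbf{x}) = \sum_{i=1}^{N} w_i \, \phi\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right),
\qquad
\phi(r) = \exp\!\left(-\frac{r^2}{\sigma^2}\right)
```

Here each basis function handles the joint influence of a reference (centre) vector and the input vector, and σ sets the spread of each Gaussian.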
The Gaussian representation of Eq. 2 is given in Eq. 3; substituting it into Eq. 1 yields Eq. 4, where σ controls generalization and function spread, with a transition from local behaviour (low σ values) to global behaviour (high σ values), and w_i denotes the associated set of weights.
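A minimal sketch of this prediction step, assuming the standard Gaussian basis form implied by the text; the centre locations, weights and σ values below are illustrative, not the paper's trained values:

```python
import math

def fcin_predict(x, centres, weights, sigma):
    """Gaussian interpolation-net output: a weighted sum of Gaussian
    basis functions centred on the fixed reference vectors."""
    out = 0.0
    for c, w in zip(centres, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-dist2 / sigma ** 2)
    return out

centres = [(0.0, 0.0), (1.0, 1.0)]
weights = [1.0, -1.0]

# Low sigma -> local response: far from every centre the output decays to ~0.
local = fcin_predict((5.0, 5.0), centres, weights, sigma=0.5)
# High sigma -> global response: distant inputs still contribute.
glob = fcin_predict((5.0, 5.0), centres, weights, sigma=10.0)
```

This makes the role of σ concrete: the same input far from all centres yields a near-zero output when σ is small but a non-negligible one when σ is large.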
For head positions using the MGIN algorithm, the weight matrix is a function of position and orientation and is given by Eq. 5. The images of Iskandarani (2010) were used to test the mathematical model and associated algorithms. Figure 9 shows the WEA-based neural network structure used to predict angles from the driver's head movements, while Table 1 holds the FCIN figures used to train the network and Table 2 shows the predicted FCIN figures of g(x). The effect of σ on the accuracy and generalization of g(x) is illustrated in Fig. 10-12, with the angle-function mapping of image sampling shown in Fig. 13.
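The weight-elimination idea used by the WEA can be sketched as a complexity penalty added to the training cost; this is the standard weight-elimination term (a Weigend-style penalty) rather than the paper's exact formulation, and the scale parameter w0 is an illustrative assumption:

```python
def weight_elimination_penalty(weights, w0=1.0):
    """Weight-elimination complexity term: small weights are pushed
    toward zero (their cost grows quadratically), while large weights
    cost roughly a constant, so redundant connections are pruned
    during training without forcing important weights to shrink."""
    return sum((w / w0) ** 2 / (1.0 + (w / w0) ** 2) for w in weights)

# A near-zero weight contributes almost nothing, a large one ~1, so
# minimising (error + lambda * penalty) eliminates weights that carry
# no signal while leaving the strong connections intact.
p_small = weight_elimination_penalty([0.01])
p_large = weight_elimination_penalty([100.0])
```

Minimising this penalty alongside the prediction error is what yields the "minimum memory requirements" behaviour discussed below: connections whose weights carry no signal are driven to zero and can be removed.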

DISCUSSION
The combined FCIN and WEA algorithms made use of the classification property of the developed modified interpolation-net function and the weight-elimination property of the WEA algorithm. This enabled a fast, flexible and accurate simulation of head poses with minimum memory requirements (Choi and Kim, 2008; Dornaika and Davoine, 2008; Jenab and Rashidi, 2009; Yogameena et al., 2010; Harkouss et al., 2010).
From Tables 1 and 2 and Fig. 10-13, the following is realized:
• Adding the extra term and modifying the original RBF function resulted in the mapping of head poses
• The correlation between the input to the actuators and the driver's head position detected by the sensing system is achieved
• Differences in the angled position of the driver's head are related to the accumulative pixel concentration per coordinate, which is mapped by the sensing and processing algorithms
• A full spectrum of sequences can be constructed, with each sequence representing a type of image resulting from a different head position
• The FCIN algorithm is based on the Energy Band Model, which assumes that image pixel values and their distribution across energy levels are modified and modulated

CONCLUSION
The combined FCIN-WEA algorithm unifies, through conversion, the information extracted from the irrelevant background and correlates the obtained data to initiate an action based on the driver's head position. The known difficulty in interpreting the input data is also resolved by the algorithm. This successful approach to head-pose classification is further supported by its ability to correlate various input technologies.