Data Extraction from Computer-Acquired Images of a Given 3D Environment for Enhanced Computer Vision and its Applications in the Kinematic Design of Robos

Problem statement: The literature has mainly aimed at the recognition of objects by the computer and at making explicit the information that is implicit in the attributes of 3D objects and their relative positioning in the 3D Environment (3DE) as seen in 2D images. However, a quantitative estimate of the position of objects in the 3DE in terms of their x, y and z coordinates has not been touched upon. This issue assumes importance in areas like the Kinematic Design of Robos (KDR), where the Robo must negotiate the z field or Depth Field (DF). Approach: The existing methods, such as pattern matching, used by Robos for Depth Visualization (DV) through a set of external commands were reviewed in detail. A methodology was developed in this study to enable the Robo to quantify the depth by itself, instead of relying on external commands. Results: The results are presented and discussed, and the major conclusions drawn from them are listed. Conclusion: The major contribution of the present study consists of computing the depth (D1) corresponding to the depth (d) measured from the photographic image of a 3DE. It is concluded that there is excellent agreement between the computed depth D1 and the corresponding actual depth (D): the percent deviation of D1 from D (DP) lies within ±2 over the entire region of the DF. Through suitable interfacing of the developed equation with the kinematic design of Robos, the Robo can generate its own commands for DF negotiations.


INTRODUCTION
Computer Vision (CV) includes various techniques for making the computer simulate the functions of human vision, through electronically perceiving and understanding 2D images (Sandyarani and Vaithyanathan, 2009a; Pridmore and Hales, 1995; Keium, 2000).
Giving the computer the ability to see is not an easy task. The basic limitation of CV is that the computer has to analyze 2D images, which entail an enormous loss of information (Sonaka et al., 1999). Further, CV basically lacks widely applicable and generally modifiable knowledge of the real world (Rao, 1996). Controlling and analyzing the inputs poses yet another problem in CV: effects such as lighting, shadows and lens focus make it impossible to guarantee that two digitized images of the same scene will be identical (Rosenfeld, 1988).
However, all the above techniques merely enable the computer to understand the 3DE represented by the corresponding 2D images/photographs. A quantitative estimate of the x, y and z coordinates of a point in a given 3DE has remained untouched by researchers. Some additional data must be extracted from the 2D images used for comprehending the 3D objects, helping the computer to quantitatively estimate the coordinates (Sandyarani and Vaithyanathan, 2008; 2009b). An attempt is made in the present study to address this issue.

MATERIALS AND METHODS
The DF is assumed to be a line AB of length 1 m situated behind the Picture Plane (PP), above the Ground Plane (GP), parallel to the GP and perpendicular to the PP. Line ApBp is the photographic image of the depth field AB. Let D represent the actual depth (length AB) and let d represent the length as measured on the photographic image (length ApBp). An attempt is made to establish an equation between D1 and d of the form:

D1 = c0 + c1·d + c2·d²

Where:
D1 = The computed depth for a measured d on the photograph
c0, c1, c2 = Constants obtained by numerical methods from a set of known values of D (actual depth) and the corresponding measured depths (d) on the photograph, over the entire region of the depth field

Present study: In the actual experimental setup, a pure black string of 1000 mm length, having knots at intervals of 50 mm, was chosen to represent the DF. Its photograph is shown in Fig. 2. The different sets of known D and corresponding d are shown in Table 2. In Fig. 1, line AB represents the depth field, which is perpendicular to the picture plane and parallel to the ground plane. The observer (camera lens) is in front of the picture plane. The length ApBp as seen on the picture plane is the perspective projection of the depth field; this can be considered the photographic image of the depth field seen in Fig. 2. It is seen from Table 2 that there is an excellent agreement between D1 and D: the percent deviation of D1 from D is within ±2.0. Hence the proposed equation can be used for unknown depth computations. Through suitable interfacing of the proposed equation in the kinematic design of the Robo, the Robo can generate the commands for depth by itself (Rao and Vaithyanathan, 2008a; 2008b). The values of D, d, c0, c1, c2, D1 and DP are tabulated in Table 1 and 2.
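The determination of the constants c0, c1 and c2 by numerical methods can be sketched as a least-squares fit. The (d, D) pairs below are synthetic stand-ins (the actual values belong to Table 2 and are not reproduced here); D is generated from a known quadratic purely so that the recovered constants can be verified, but the same call fits any set of measured pairs:

```python
import numpy as np

# Synthetic (d, D) pairs: d is the depth measured on the photograph (mm),
# D the actual depth (mm). Here D is built from a known quadratic only so
# the fit can be checked; real data would come from Table 2 of the paper.
d = np.linspace(5.0, 30.0, 11)
c0_true, c1_true, c2_true = 12.0, 8.0, 0.9
D = c0_true + c1_true * d + c2_true * d ** 2

# Linear least-squares fit of D1 = c0 + c1*d + c2*d**2.
# np.polyfit returns coefficients highest order first: [c2, c1, c0].
c2, c1, c0 = np.polyfit(d, D, deg=2)

# Computed depth D1 and percent deviation DP of D1 from D.
D1 = c0 + c1 * d + c2 * d ** 2
DP = 100.0 * (D1 - D) / D
```

Because the synthetic data are exactly quadratic, the fit recovers the constants and DP is essentially zero; with real measurements the residual DP is what Table 2 reports as lying within ±2.0.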

RESULTS
• Table 2 reveals that there exists a wide variation between the depth measured on the photograph (d) and the actual depth (D)
• The above variation is more pronounced for depths nearer to the observer
• Results 1 and 2 justify the need for the present research
• As seen from Table 2, the percent deviation (DP) of the computed depth D1 from the actual depth D for a given d on the photograph is within ±2.0

DISCUSSION
Various methods are reported in the literature for DF computation, such as using the intensity of pixels and parallel projection methods. The justification for choosing the proposed method for DF computation is that photographic techniques offer better accuracy and greater ease of handling of the computed data:
• The proposed equation offers a minimum variation (±2.0%) of D1 from D and hence can be effectively used for depth computations
• The existing method of depth sensing for Robos is through external commands. The current method, through suitable interfacing, can help the Robo generate its own commands for depth sensing

CONCLUSION
The major contribution of the present study lies in proposing an equation between the computed depth D1 and the depth d measured on the photograph. There exists a good agreement between D1 and the actual depth D.