Using Both a Probabilistic Evolutionary Graph and the Evidence Theory for Color Scene Analysis

Abstract: In this research, we introduce a new color image segmentation algorithm. The color scene analysis method is based on growing a probabilistic evolutionary graph that represents the scene elements of an unsupervised segmented image. The graph evolves as the image is traversed, driven by the probabilities that the last built region belongs to the existing classes. The spatial composition matrix of the areas in each class is then produced. A spatial delimitation map of the regions is established by a new method of contour localization and refinement. Finally, the segmented image is obtained by classifying the pixels of the conflict regions using the Dempster-Shafer evidence theory. The effectiveness of the method is demonstrated on real images.


INTRODUCTION
Color information in images is often essential for analysis and decision-making as a robot moves through its environment. Many tasks require robots to move safely in scenarios with unknown obstacles. In this case, the motion strategies must rely on sensory information to compute movement trajectories according to unforeseen circumstances. These strategies use sensor-based motion planning methods. The challenge for these approaches is to deal with very cluttered, dense, and complex scenarios, which is usually the case in most robotic applications.
Image segmentation is a preprocessing step which improves the state of the information contained in the scene before the desired treatment and application. The objective is to separate, as faithfully as possible, the objects and the background which make up the image [1,2].
Image segmentation has applications in many practical fields, such as pattern recognition, object detection, medical image analysis, robotics [3], satellite imagery, and many others.
Several segmentation techniques are based on a preset number of classes fixed at the initial stage of the algorithm, which ensures the classification of each image pixel into its most probable class. These include region-based segmentation, contour-detection segmentation, threshold-based segmentation, and segmentation based on the k-means method. There is also classification by evidence theory, also called the Dempster-Shafer theory or the belief function theory. It makes it possible, on the one hand, to process uncertain data and, on the other hand, to combine information coming from several sources before applying decision rules for the selection of the assignment class [4,5,6]. We note also classification by hidden Markov chains [7,8], Bayesian classification, which is based on the determination of conditional probabilities to estimate the membership of each individual to each class [9], and other methods based on fuzzy classification [10].
In this research, we present a method for scene analysis which constitutes the first step in trajectory planning for robot environment identification. We present a new image segmentation method based on the evolution of a probabilistic graph, which traces the groupings of the occupants of the image.

MATERIALS AND METHODS
This research work arises from a project aimed at realizing an autonomous mobile robot. The objective of this work is to establish a fast color image analysis algorithm that interprets a robot environment scene representing a real-time intervention space. We implemented our algorithm on a PC. The images treated to test the effectiveness of the algorithm fall into three categories. The first is a pure synthetic image, used to check the detected class centers, which are known beforehand from the creation of the image. The second is a synthetic image strongly deteriorated by Gaussian noise, used to introduce non-null variances into the classes. The third is a real image captured by a video sensor, selected to match the real conditions of the desired application.
The principle of this analysis is divided into two main stages. The first phase consists in determining the number of classes contained in the image and locating the raw areas spatially. The method rests on a traversal of the scene image and on the construction of a probabilistic graph that evolves along the traversal according to the collected information. At the end of the traversal, the resulting graph describes the components of the scene to be analyzed: each node of the graph represents a class, and edges describe the vicinity between two classes. The second phase consists in using the belief function method to classify the pixels of the high-variance areas, which constitute the noise zones and the transitions. These zones can significantly deteriorate the result of the processing.

THE GRAPH SET UP
The method consists in detecting the pure areas of the image, which exhibit little disturbance, and in generating a graph that evolves according to the presence and dissimilarity of these areas. The zones of strong variance are omitted and left for the second phase of the processing. The first phase of the segmentation suggested in this article gathers the following stages. Course of the image: In order to avoid jumps in the sweeping, which could cause a change of area without detecting the transition, a path that scans the image continuously is selected (Fig. 1).

Fig.1: Image sweeping
Initial state: The starting region is represented by a single node, which forms the graph in its initial state. At each step, in order to measure the state of the path, a vector of attributes characterizing the current site is calculated. It is composed of the central pixel color, and the average and standard deviation of the 3 by 3 neighborhood window.
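As a sketch, the attribute vector of a site can be computed as follows; this assumes a NumPy image array, and the function and field names are ours, not the paper's:

```python
import numpy as np

def site_attributes(image, x, y):
    """Attribute vector of the current site of the sweep: the central
    pixel's color, plus the mean and standard deviation of the 3x3
    neighborhood window (edge sites get a truncated window)."""
    window = image[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    return {
        "color": image[y, x].astype(float),
        "mean": window.mean(axis=(0, 1)),
        "std": window.std(axis=(0, 1)),
    }
```

On a uniform patch the standard-deviation component is zero, which is exactly the "pure area" signature the first phase relies on.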

Detection of a transition:
In the presence of a relatively significant fluctuation of the standard-deviation attribute, it becomes very probable that the vicinity of the current site belongs to a contour, indicating the crossing of a new region border. A predetermined threshold indicates the existence of a transition.

First transition:
In order to minimize the probability of a decision error, the first transition from a border is memorized as the upstream contour limit. The decision concerning the assignment of the portion of area included between two borders is made on the basis of the information provided by all the pixels visited along that portion, after the detection of the downstream contour limit.

Second transition:
After the second contour detection, the portion of traversed area is identified and a decision is made either to assign the portion to an already existing class, or to create a new node in the graph, which reflects the presence of a new class.
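The decision step can be sketched as follows. The paper's exact probability expressions are not reproduced in the text, so we assume here an exponential form P(H_i) = α·exp(−β·d_i), with β the inverse of the largest distance from the center of R to the class centers, and P(H_0) = 1 − max_i P(H_i); these choices are ours:

```python
import numpy as np

def assign_region(center, class_centers, alpha=1.0):
    """Decide whether region R joins an existing class or founds a new
    graph node. Assumed model: P(H_i) = alpha * exp(-beta * d_i) with
    beta = 1 / max_i d_i, and P(H_0) = 1 - max_i P(H_i).
    Returns the index of the winning class, or None for a new node."""
    dists = np.linalg.norm(np.asarray(class_centers, dtype=float) - center, axis=1)
    beta = 1.0 / dists.max() if dists.max() > 0 else 1.0
    p = alpha * np.exp(-beta * dists)          # membership probabilities P(H_i)
    p0 = 1.0 - p.max()                         # non-membership hypothesis P(H_0)
    if p0 > p.max():
        return None                            # create a new class / graph node
    return int(p.argmax())                     # merge with the winning class
```

With this model, a region near an existing center merges with it, while a region roughly equidistant from all centers triggers the creation of a new node.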

Creation of a new node:
The classification of the traversed area R located between two transitions is carried out after the membership probabilities are calculated. The frame of discernment H is composed of the hypotheses H_i of membership in each class i already identified at this stage of the course, and the hypothesis H_0 of non-membership. The probability of membership of the area R in class i depends on a constant α_i, on β_i, the inverse of the maximum distance between the center of R and the centers of the existing classes, and on d_i, the distance between the center of R and the class center C_i. The probability allotted to the hypothesis H_0 measures the remoteness of the area R from the existing classes. The verdict on the assignment of R obeys the maximum probability criterion: R is merged with the set of pixels of the class of the elected hypothesis. If the hypothesis H_0 has the greatest probability, a new class is created and a new node appears in the graph.
The contour map: The contour map is a plan on which the separation curves of the various regions which constitute the image are plotted. Contours are the loci of significant variations of the gray-level information. Moreover, since the transitions are strict, a contour must be a chain of pixels of thickness 1. This restriction on the nature of contours is imposed in order to separate the regions well while preserving the shapes, so as to stay as close as possible to the real scene of the image. The transition from one region to another is detected by a relatively significant variation. The probability of a contour presence increases if the gradient is locally maximal or if the second derivative crosses zero. Initially, two plans are built. The first is the gradient norm plan, which expresses the rate of variation in the image. Its amplitude is given by:

G(s) = max(|s(i+1, j) − s(i−1, j)|, |s(i, j+1) − s(i, j−1)|)
The second matrix memorizes the direction of the local maximum of variation. This direction is determined by locating the minimum of variation in the variation vector V, which represents the direction of passage of the contour (Fig. 2). The required direction is perpendicular to the direction of the contour.
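A sketch of the gradient-amplitude plan, taking at each interior pixel the larger of the absolute horizontal and vertical central differences; border handling and naming are ours:

```python
import numpy as np

def gradient_plan(gray):
    """Gradient-amplitude plan G: at each interior pixel, the largest
    absolute central difference with the horizontal and vertical
    neighbors. Border pixels are left at zero."""
    g = np.zeros_like(gray, dtype=float)
    g[1:-1, 1:-1] = np.maximum(
        np.abs(gray[1:-1, 2:] - gray[1:-1, :-2]),   # horizontal variation
        np.abs(gray[2:, 1:-1] - gray[:-2, 1:-1]),   # vertical variation
    )
    return g
```

A locally maximal value of G along the direction perpendicular to the contour then marks the one-pixel-thick contour chain described above.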

CONFLICT REGIONS CLASSIFICATION
In this section, we briefly review a few basic concepts of the belief function theory and the belief partition.
Evidence theory: Introduced by Dempster [11], the evidence theory was later formalized by Shafer [12], who showed the interest of belief functions for modeling uncertain knowledge. The usefulness of belief functions, as an alternative to subjective probabilities, was subsequently demonstrated in an axiomatic manner by Smets [14,15] through the Transferable Belief Model. This theory can be considered as a generalization of probability theory. Two levels of information processing are used: the credal level, where beliefs are manipulated, and the pignistic level, for decision-making.

Credal level (knowledge modeling):
Within the context of evidence theory, there is a fixed set Ω = {ω1,…, ωk,…, ωK} of mutually exclusive and exhaustive elements, called the frame of discernment. The frame Ω defines the working space of the desired application, since it carries all the propositions for which the information sources can provide evidence. Information sources can distribute mass values over subsets of the frame of discernment. We define an elementary probability mass, called the belief mass, which characterizes the truthfulness of a hypothesis A ⊆ Ω for an information source S. The mass function m assigned by a source is defined by m: 2^Ω → [0, 1], and it has to fulfill the conditions m(∅) = 0 and Σ_{A⊆Ω} m(A) = 1. This function differs from a probability in that the total belief mass is distributed not only over the singleton hypotheses ωk, but also over the combined hypotheses. The model derived from the m function is called a set of masses. From the m function, we define respectively the credibility function Cr and the plausibility function Pl by Cr(A) = Σ_{B⊆A} m(B) and Pl(A) = Σ_{B∩A≠∅} m(B) = 1 − Cr(Ā), where Ā denotes the opposite event of the proposition A. The credibility Cr(A) measures the strength of belief in the truthfulness of the proposition. The plausibility Pl(A), dual function of the credibility, measures the intensity of absence of doubt about A. Two initial mass functions m1 and m2, representing the respective information of two distinct information sources, can be merged using Dempster's orthogonal combination rule. This commutative and associative rule is defined by (m1 ⊕ m2)(A) = (1/(1 − k)) Σ_{B∩C=A} m1(B) m2(C), where k is defined by k = Σ_{B∩C=∅} m1(B) m2(C). The coefficient k reflects the conflict existing between the two sources S1 and S2, and the quotient 1/(1 − k) is a normalization term.
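Dempster's orthogonal rule can be sketched directly on mass functions represented as dictionaries keyed by frozensets of hypotheses; the representation choice is ours:

```python
def dempster_combine(m1, m2):
    """Dempster's orthogonal combination rule for two mass functions
    given as {frozenset of hypotheses: mass}. The conflicting mass k is
    accumulated on empty intersections, and 1/(1-k) renormalizes."""
    combined, k = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                k += ma * mb                    # conflict between the sources
    return {a: v / (1.0 - k) for a, v in combined.items()}, k
```

Because the rule is commutative and associative, the masses of the n sources can be folded together in any order.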

Pignistic level (decision-making):
For decision-making, the aggregation step defined previously yields an exhaustive summary of the information in the form of a unique belief function m, which is used for the decision.
After combination, it remains to decide which hypothesis of Ω is the most probable. Among the most widely used decision rules is the maximum pignistic probability rule. This rule, presented by Smets [15], uses the pignistic transformation, which transforms m into a probability function BetP on Ω, called the pignistic probability function, defined for all ω∈Ω as BetP(ω) = Σ_{A⊆Ω, ω∈A} m(A)/|A|, where |A| is the cardinality of A⊆Ω. In this transformation, the belief mass m(A) is uniformly distributed among the elements of A. The decision then favors the element of Ω with the maximum pignistic probability.
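The pignistic transformation can be sketched as follows, with masses represented as a dictionary keyed by frozensets of hypotheses (the representation is ours):

```python
def pignistic(m):
    """Pignistic transformation BetP: each mass m(A) is shared uniformly
    among the |A| elements of A, yielding a probability on the frame."""
    bet = {}
    for a, v in m.items():
        share = v / len(a)                  # m(A) / |A|
        for w in a:
            bet[w] = bet.get(w, 0.0) + share
    return bet
```

The decision is then simply the hypothesis with the maximum pignistic probability.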

APPLICATION TO CLASSIFICATION IN THE CONFLICT ZONES
The objective of this stage is to refine the segmented image by classifying the pixels of the class-bordering areas, which are blurred by noise, and of the transition zones. For that, we use the belief function theory.
The classification decision for a pixel P is established by combining n mass functions m_i. The vicinity V(P) of the pixel P is taken as the source of information; each mass m_i relates to a pixel P_i ∈ V(P) and is defined on Ω = {ω1,…, ωk,…, ωK}. This frame of discernment reflects the classes of the pixels which make up the vicinity V(P). To each element ωk of Ω corresponds a class C_k. We define the belief function relating the pixel P_i to the class C_k, where n_k is the number of pixels belonging to the class C_k in the vicinity of the considered pixel P. If the mass allotted to the empty set, m(∅), is not negligible, this distribution of the conflict must be analyzed.
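A self-contained sketch of the whole decision chain for one conflict pixel follows. The per-neighbor mass m_i here puts (1 − discount) on the neighbor's own class and the remainder on the whole frame Ω; this simple discounting is our choice, not necessarily the paper's exact definition of m_i:

```python
def classify_conflict_pixel(neighbor_classes, frame, discount=0.2):
    """Classify a conflict-zone pixel from the classes of its vicinity
    V(P): build one simple mass per neighbor, merge them with Dempster's
    rule, and decide by maximum pignistic probability."""
    omega = frozenset(frame)
    combined = {omega: 1.0}
    for c in neighbor_classes:
        m_i = {frozenset({c}): 1.0 - discount, omega: discount}
        out, k = {}, 0.0
        for a, ma in combined.items():      # Dempster's orthogonal rule
            for b, mb in m_i.items():
                inter = a & b
                if inter:
                    out[inter] = out.get(inter, 0.0) + ma * mb
                else:
                    k += ma * mb            # conflict mass
        combined = {a: v / (1.0 - k) for a, v in out.items()}
    bet = {w: 0.0 for w in frame}           # pignistic transformation
    for a, v in combined.items():
        for w in a:
            bet[w] += v / len(a)
    return max(bet, key=bet.get)            # maximum pignistic probability
```

A majority of same-class neighbors dominates the fused mass, so isolated noisy neighbors are outvoted rather than flipping the decision.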

RESULTS AND DISCUSSION
First, we tested our segmentation algorithm on a strongly Gaussian-noised synthetic image, shown in Fig. 3 and 4. We illustrate in Fig. 5 the results of the conflict zone and edge detection. Figure 6 shows the segmented images after the classification of the conflict zone pixels. To be under real application conditions, we also tested our segmentation algorithm on real images (Fig. 7). Figure 8 shows the results of the conflict zone and edge detection. Figure 9 shows the yellow color class region map in the real image; the shapes of the regions are clearly visible, from which the subclasses can be obtained. Figure 10 presents the segmented images before the classification of the conflict zone pixels, and Figure 11 presents them after it. These results show a good segmentation representing the shapes of the parts in the scene. The results obtained are satisfactory: starting from the image of an unknown scene, even strongly noised, our algorithm detects the number of classes in the image and reproduces the same forms with a certain degree of refinement after the first step of segmentation, in which the conflict zones are omitted; they are treated in the final step to avoid bad decisions in region detection and classification.

CONCLUSION
In this work, we describe a new unsupervised segmentation approach. We use an original strategy which consists in growing a probabilistic evolutionary graph that composes the segmented image, detecting the number of classes and their localization. In the final phase of the segmentation, the conflict zones are classified to improve the image segmentation. This research is part of an application in the field of robotics, intended to assist a robot equipped with a camera as it moves. The aim is to locate the mobile robot and to plan the trajectory to be followed to reach a predetermined destination. The information source is a scene filmed by a camera; from the segmented image and the localization of the obstacles in a completely unknown environment, a trajectory is planned.