IMPLEMENTATION OF AUTONOMOUS NAVIGATION ALGORITHMS ON TWO-WHEELED GROUND MOBILE ROBOT

This study presents an effective navigation architecture that combines 'go-to-goal', 'avoid-obstacle' and 'follow-wall' controllers into a full navigation system. A MATLAB robot simulator is used to implement this navigation control algorithm. The robot in the simulator moves to a goal in the presence of convex and non-convex obstacles. Experiments are carried out using a ground mobile robot, Dr Robot X80SV, in a typical office environment to verify successful implementation of the navigation architecture algorithm programmed in MATLAB. The research paper also demonstrates algorithms to achieve tasks such as 'move to a point', 'move to a pose', 'follow a line', 'move in a circle' and 'avoid obstacles'. These control algorithms are simulated using Simulink models.


INTRODUCTION
The field of mobile robot control has attracted considerable attention of researchers in the areas of robotics and autonomous systems in the past decades. One of the goals in the field of mobile robotics is the development of mobile platforms that robustly operate in populated environments and offer various services to humans. Autonomous mobile robots need to be equipped with appropriate control systems to achieve the goals. Such control systems are supposed to have navigation control algorithms that will make mobile robots successfully 'move to a point', 'move to a pose', 'follow a path', 'follow a wall' and 'avoid obstacles (stationary or moving)'. Also, robust visual tracking algorithms to detect objects and obstacles in real-time have to be integrated with the navigation control algorithms.
A mobile robot is an automatic machine that is capable of movement in given environments; they are not fixed to one physical location. Wheeled Mobile Robots (WMRs) are increasingly present in industrial and service robotics, particularly when flexible motion capabilities are required on reasonably smooth grounds and surfaces. Several mobility configurations (wheel number and type, their location and actuation and singleor multi-body vehicle structure) can be found in different applications (De Luca et al., 2001). The most common for single-body robots are differential drive and synchro drive (both kinematically equivalent to a unicycle), tricycle or car-like drive and omnidirectional steering (De Luca et al., 2001).
The main focus of the research is the navigation control algorithm that has been developed to enable a Differential Drive Wheeled Mobile Robot (DDWMR) to accomplish its assigned task of moving to a goal free from any risk of collision with obstacles. In order to develop this navigation system a low-level planning is used, based on a simple model whose input can be calculated using a PID controller or transformed into the actual robot input. The research also presents control algorithms that make mobile robots 'move to a point', 'move to a pose', 'follow a line', 'follow a circle' and 'avoid obstacles', taken from the literature (Corke, 2011; Egerstedt, 2013).
A MATLAB robot simulator is used to implement the navigation control algorithm and the individual control algorithms were simulated using Simulink models. For the navigation control algorithm, the robot simulator is able to move to a goal in the presence of convex and non-convex obstacles. Also, several experiments are performed using a ground robot, Dr Robot X80SV, in a typical office environment to verify successful implementation of the navigation architecture algorithm programmed in MATLAB.
Possible applications of WMR include security robots, land mine detectors, planetary exploration missions, Google autonomous car, autonomous vacuum cleaners and lawn mowers, toxic cleansing, tour guiding, personal assistants to humans, etc. (Jones and Flynn, 1993).
The remainder of this study is organized as follows: In section II the kinematic model of the DDWMR is shown. The control algorithms are presented in section III. In section IV simulations and experiments performed in MATLAB/Simulink are explained. Simulation and experimental results are summarized in section V. Concluding remarks and future work is presented in section VI.

KINEMATIC MODEL OF THE DDWMR
The DDWMR setup used for the presented study is shown in Fig. 1 (top view). The mobile robot is made up of a rigid body and non-deforming wheels and it is assumed that the vehicle moves on a plane without slipping, i.e., there is a pure rolling contact between the wheels and the ground.
The configuration of the vehicle is represented by the generalized coordinates q = (x, y, θ), where (x, y) is the position and θ is the orientation (heading) of the center of the axle of the wheels, C, with respect to a global inertial frame {O, X, Y}. Let {O_V, X_V, Y_V} be the vehicle frame. The vehicle's velocity is by definition ν in the vehicle's x-direction, L is the distance between the wheels, R is the radius of the wheels, ν_r is the right wheel angular velocity, ν_l is the left wheel angular velocity and ω is the heading rate. The kinematic model of the DDWMR based on the stated coordinates is given by Equation 1:

ẋ = (R/2)(ν_r + ν_l) cos θ
ẏ = (R/2)(ν_r + ν_l) sin θ
θ̇ = (R/L)(ν_r − ν_l)

For the purpose of implementation the kinematic model of a unicycle is used, which corresponds to a single upright wheel rolling on the plane, with the equation of motion given by Equation 2:

ẋ = ν cos θ
ẏ = ν sin θ
θ̇ = ω

The inputs in (Equation 1 and 2) are ν_r, ν_l, ν and ω. These inputs are related as Equation 3:

ν = R(ν_r + ν_l)/2,  ω = R(ν_r − ν_l)/L

A Simulink model, shown in Fig. 2, has been developed that implements the unicycle kinematic model. The velocity input has a rate limiter to model finite acceleration, and limiters on the velocity and the heading (turn) rate.
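The kinematics above can be sketched in a few lines. The paper's implementation is a Simulink model, so the following Python snippet is only an illustrative stand-in; the Euler integration step and the example wheel parameters (the X80SV's R = 0.085 m, L = 0.26 m given later in the paper) are choices made here, not part of the model itself.

```python
import math

def diffdrive_to_unicycle(v_r, v_l, R, L):
    """Convert wheel angular velocities to body speed and heading rate (Eq. 3)."""
    v = R * (v_r + v_l) / 2.0
    w = R * (v_r - v_l) / L
    return v, w

def unicycle_step(x, y, theta, v, w, dt):
    """One Euler integration step of the unicycle kinematic model (Eq. 2)."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)
```

Equal wheel speeds give ω = 0 (straight-line motion), while a wheel speed difference produces a turn, matching Equation 3.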

CONTROL ALGORITHMS
Control of the unicycle model inputs is about selecting the appropriate input, u = (ν ω)^T, and applying the traditional PID feedback controller, given by Equation 4:

ω = K_P e + K_I ∫ e dt + K_D (de/dt)

where e, defined for each task below, is the error between the desired value and the output value, K_P is the proportional gain, K_I is the integral gain, K_D is the derivative gain and t is time. The control gains used in this research were obtained by tweaking the various values until satisfactory responses were obtained. If the vehicle is driven at a constant velocity, ν = ν_0, then the control input varies only with the angular velocity ω, thus Equation 5:

u = (ν_0  ω)^T, with ω computed from the PID law in (Equation 4)
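The PID law of Equation 4 can be illustrated with a minimal discrete-time sketch. The paper realizes it in MATLAB/Simulink; the rectangular integration and backward-difference derivative used here are implementation choices of this sketch, not the paper's.

```python
class PID:
    """Discrete-time PID controller for the heading rate omega (Eq. 4)."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integral = 0.0       # running rectangular integral of e
        self.prev_e = None        # previous error, for the derivative term

    def __call__(self, e):
        self.integral += e * self.dt
        deriv = 0.0 if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.Kp * e + self.Ki * self.integral + self.Kd * deriv
```

With K_I = K_D = 0 this reduces to the pure proportional steering used in several of the tasks below.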

Developing Individual Controllers
This section presents control algorithms that make mobile robots 'move to a point', 'move to a pose', 'follow a line', 'follow a circle' and 'avoid obstacles'.

Moving to a Point
Consider a robot moving toward a goal point, (x_g, y_g), from a current position, (x, y), in the xy-plane, as depicted in Fig. 3. The desired heading (robot's relative angle), θ_g, is determined as Equation 6:

θ_g = atan2(y_g − y, x_g − x)

and the error, e, is defined as Equation 7:

e = θ_g − θ

To ensure e ∈ [−π, π], a corrected error, e′, is used instead of e, as shown in Equation 8:

e′ = atan2(sin e, cos e)

Thus ω can be controlled using (Equation 5). If the robot's velocity is also to be controlled, a proportional controller gain, K_ν, is applied to the distance from the goal (Corke, 2011), Equation 9:

ν = K_ν √((x_g − x)² + (y_g − y)²)
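A compact sketch of Equations 6-9 (Python for illustration; the default gains K_ν = 0.5 and K_h = 4.0 are the values reported later in the paper's simulations, and the proportional-only steering is a simplification of the full PID in Equation 5):

```python
import math

def move_to_point_control(x, y, theta, xg, yg, Kv=0.5, Kh=4.0):
    """Proportional 'move to a point' controller (Eqs. 6-9)."""
    theta_g = math.atan2(yg - y, xg - x)       # desired heading (Eq. 6)
    e = theta_g - theta                        # heading error (Eq. 7)
    e = math.atan2(math.sin(e), math.cos(e))   # wrap into [-pi, pi] (Eq. 8)
    v = Kv * math.hypot(xg - x, yg - y)        # speed proportional to distance (Eq. 9)
    w = Kh * e                                 # proportional steering
    return v, w
```

The angle wrap in Equation 8 is what keeps the robot from spinning the long way around when the raw error crosses ±π.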

Moving to a Pose
The above controller can drive the robot to a goal position, but the final orientation depends on the starting position. In order to control the final orientation, (Equation 5) is rewritten in matrix form as (Corke, 2011) Equation 10:

(ẋ, ẏ, θ̇)^T = (cos θ, 0; sin θ, 0; 0, 1)(ν, ω)^T

Equation 10 is then transformed into polar coordinate form using the notation shown in Fig. 4. Applying the change of variables Equation 11:

ρ = √((x_g − x)² + (y_g − y)²)
α = atan2(y_g − y, x_g − x) − θ
β = −θ − α

results in Equation 12:

ρ̇ = −ν cos α
α̇ = (ν sin α)/ρ − ω
β̇ = −(ν sin α)/ρ

which assumes the goal {G} is in front of the vehicle. The linear control law Equation 13:

ν = K_ρ ρ,  ω = K_α α + K_β β

drives the robot to the unique equilibrium (ρ, α, β) = (0, 0, 0). The intuition behind this controller is that the terms K_ρ ρ and K_α α drive the robot along a line toward {G}, while the term K_β β rotates the line so that β → 0 (Corke, 2011). The closed-loop system Equation 14:

ρ̇ = −K_ρ ρ cos α
α̇ = K_ρ sin α − K_α α − K_β β
β̇ = −K_ρ sin α

is stable so long as K_ρ > 0, K_β < 0 and K_α − K_ρ > 0 (Corke, 2011). For the case where the goal is behind the robot, that is α ∉ (−π/2, π/2], the vehicle is driven backward by negating ν in the control law.
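Corke's pose controller (Equations 11-13) can be sketched as follows; the gain values below are hypothetical examples chosen only to satisfy the stability conditions K_ρ > 0, K_β < 0, K_α − K_ρ > 0.

```python
import math

def move_to_pose_control(x, y, theta, xg, yg, thetag,
                         Krho=1.0, Kalpha=4.0, Kbeta=-2.0):
    """Polar-coordinate 'move to a pose' controller (Eqs. 11-13)."""
    dx, dy = xg - x, yg - y
    rho = math.hypot(dx, dy)                     # distance to the goal
    alpha = math.atan2(dy, dx) - theta           # goal direction w.r.t. heading
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))
    beta = -(theta + alpha - thetag)             # orientation error at the goal
    v = Krho * rho                               # Eq. 13
    w = Kalpha * alpha + Kbeta * beta
    return v, w
```

Note the controller is only valid while the goal is in front of the vehicle (α ∈ (−π/2, π/2]), as stated above.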

Obstacle Avoidance
In a real environment robots must avoid obstacles in order to get to a goal. Depending on the positions of the goal and the obstacle(s) relative to the robot, the robot needs to move to the goal using θ_g from a 'pure go-to-goal' behavior or by blending the 'avoid-obstacle' and the 'go-to-goal' behaviors. In pure obstacle avoidance the robot drives away from the obstacle and moves in the opposite direction. The possible values of θ_g that can be used in the control law discussed in section III-A.1 are shown in Fig. 5, where θ_obst is the obstacle heading.

Following a Line
Another useful task for a mobile robot is to follow a line on the plane defined by ax + by + c = 0. This requires two controllers to adjust the heading. The first controller steers the robot to minimize its normal distance from the line, given by Equation 15:

d = (ax + by + c)/√(a² + b²)

The proportional controller Equation 16:

α_d = −K_d d

turns the robot toward the line. The second controller adjusts the heading angle to be parallel to the line, whose direction is Equation 17:

θ* = atan2(−a, b)

using the proportional controller Equation 18:

α_h = K_h(θ* − θ)

The combined control law Equation 19:

ω = −K_d d + K_h(θ* − θ)

steers the robot toward the line and then drives it along the line (Corke, 2011).
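Equations 15-19 combine into a single steering law; here is a minimal sketch at constant speed, with the default gains K_d = 0.5 and K_h = 1.0 taken from the paper's simulation section.

```python
import math

def follow_line_control(x, y, theta, a, b, c, Kd=0.5, Kh=1.0, v=1.0):
    """Steer toward and then along the line ax + by + c = 0 (Eqs. 15-19)."""
    d = (a * x + b * y + c) / math.hypot(a, b)   # signed normal distance (Eq. 15)
    theta_star = math.atan2(-a, b)               # direction of the line (Eq. 17)
    e = theta_star - theta
    e = math.atan2(math.sin(e), math.cos(e))     # wrapped heading error
    w = -Kd * d + Kh * e                         # combined law (Eq. 19)
    return v, w
```

When the robot sits on the line with its heading parallel to it, both terms vanish and ω = 0, so it simply drives along the line.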

Following a Circle
Instead of a straight line, the robot can follow an arbitrary path on the xy-plane; in this section the robot follows a circle. This problem is very similar to the control problem presented in section III-A.1, except that this time the point is moving. The robot maintains a distance d_d behind the pursuit point, and an error, e, can be formulated as (Corke, 2011) Equation 20:

e = √((x_g − x)² + (y_g − y)²) − d_d

which is regulated to zero by controlling the robot's velocity using a PI controller, Equation 21:

ν = K_p e + K_I ∫ e dt

The integral term is required to provide a finite velocity demand ν_d when the following error is zero. The second controller steers the robot toward the target, which is at the relative angle given by (Equation 6), using the controller given by (Equation 18).
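The PI pursuit law of Equations 20-21 can be sketched as a small stateful controller (Python for illustration; the sample time and gain values are assumptions of this sketch):

```python
import math

class PursuitController:
    """PI speed control to hold a distance d_d behind a moving point (Eqs. 20-21)."""
    def __init__(self, d_d, Kp=1.0, Ki=0.5, dt=0.05):
        self.d_d, self.Kp, self.Ki, self.dt = d_d, Kp, Ki, dt
        self.integral = 0.0

    def speed(self, x, y, xt, yt):
        e = math.hypot(xt - x, yt - y) - self.d_d   # following error (Eq. 20)
        self.integral += e * self.dt
        return self.Kp * e + self.Ki * self.integral  # PI law (Eq. 21)
```

The accumulated integral is what keeps ν nonzero once the error has settled to zero, which is why a plain P controller would not work here.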

Developing Navigation Control Algorithm
This section describes how the navigation architecture, which consists of the go-to-goal, follow-wall and avoid-obstacle behaviors, was developed. In order to develop the navigation system a low-level planning was used, starting with a simple model whose input can be calculated using a PID controller or transformed into the actual robot input, as depicted in Fig. 6 (Egerstedt, 2013). For this simple planning a desired motion vector, x, is picked and set equal to the input, u, Equation 22:

ẋ = u
This selected system is controllable, as compared to the unicycle system, which is non-linear and not controllable even after it has been linearized. This layered architecture makes the DDWMR act like the point-mass model shown in (Equation 22) (Egerstedt, 2013).

Go-To-Goal (GTG) Mode
Consider the point mass moving toward a goal point, x_g, with current position x in the xy-plane. The error, e = x_g − x, is controlled by the input u = Ke, where K is a gain matrix.

Since ė = −ẋ = −u = −Ke, the system is asymptotically stable if K > 0. An appropriate K is selected to obey the saturation function shown in Fig. 7a, where a and b are constants to be selected; in this way the robot will not go faster the farther it is from the goal (Egerstedt, 2013).
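A sketch of the GTG input with a saturating gain of the kind Fig. 7a suggests. The exact gain shape used below, K = v0(1 − e^(−a|e|²))/|e|, and its constants are hypothetical stand-ins for the paper's a and b; the point is only that |u| stays bounded by v0 however far away the goal is.

```python
import math

def gtg_control(x, y, xg, yg, v0=0.5, a=2.0):
    """Go-to-goal input u = K(e)*e with a gain shaped so |u| saturates at v0."""
    ex, ey = xg - x, yg - y
    dist = math.hypot(ex, ey)
    if dist < 1e-9:
        return 0.0, 0.0                      # already at the goal
    K = v0 * (1.0 - math.exp(-a * dist * dist)) / dist
    return K * ex, K * ey                    # magnitude <= v0 for any dist
```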

Obstacle Avoidance (AO) Mode
Let the obstacle position be x_0, and let e = x_0 − x be the vector from the robot to the obstacle. The input u = −Ke, with K > 0, then drives the robot directly away from the obstacle.

Blending AO and GTG Modes
In a 'pure GTG' mode, u_GTG, or a 'pure AO' mode, u_AO (what are termed hard switches), performance can be guaranteed but the ride can be bumpy and the robot can encounter the Zeno phenomenon (Egerstedt, 2013). A control algorithm for blending the u_GTG and u_AO modes is given by (Equation 23). This algorithm ensures a smooth ride but does not guarantee performance (Egerstedt, 2013):

where ∆ is a constant distance to the obstacle/boundary and α is the blending function to be selected, given appropriately as an exponential function of the distance to the obstacle, where β is a constant to be selected.
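The blending of Equation 23 might be sketched as below. The exponential blending function α = e^(−β(d−∆)) used here is one plausible choice consistent with the text (α → 1 as the robot closes to the boundary distance ∆), not necessarily the paper's exact form; ∆ and β values are examples.

```python
import math

def blended_control(u_gtg, u_ao, d_obst, delta=0.3, beta=2.0):
    """Blend go-to-goal and avoid-obstacle inputs (Eq. 23).

    alpha -> 1 near the obstacle (pure AO), alpha -> 0 far away (pure GTG)."""
    alpha = math.exp(-beta * max(d_obst - delta, 0.0))
    return (alpha * u_ao[0] + (1 - alpha) * u_gtg[0],
            alpha * u_ao[1] + (1 - alpha) * u_gtg[1])
```

Because α varies continuously with distance, the commanded input never jumps, which is exactly what removes the bumpy ride of the hard switches.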

Follow-Wall (FW) Mode
As pointed out in section III-B.2, in a pure obstacle avoidance mode the robot drives away from the obstacle and moves in the opposite direction, but this is overly cautious in a real environment where the task is to get to a goal. The robot should be able to avoid obstacles by going around their boundaries; this situation leads to what is termed the follow-wall, or induced (sliding), mode, u_FW, between the u_GTG and u_AO modes, which is needed for the robot to negotiate complex environments (Egerstedt, 2013).
The FW mode maintains the distance ∆ to the obstacle/boundary as if following it, and the robot can move in two different directions, clockwise (c) and counter-clockwise (cc), along the boundary (Fig. 8). This is achieved by rotating u_AO by π/2 and −π/2 to obtain u_FW^cc and u_FW^c respectively, and then scaling by δ to obtain a suitable induced mode (Equation 25-27), where R(φ) is a rotation matrix (Egerstedt, 2013). The direction in which the robot follows the boundary is determined by the direction of u_GTG, using the dot product of u_GTG and u_FW, as shown in (Equation 28 and 29) (Egerstedt, 2013). Another issue to be addressed is when the robot should release u_FW, that is, when to stop sliding. The robot stops sliding when "enough progress" has been made and there is a "clear shot" to the goal, as shown in (Equation 30-32), where τ is the time of the last switch (Egerstedt, 2013).
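The rotation and direction-selection logic of Equations 25-29 can be sketched as follows; for brevity this sketch omits the δ scaling that maintains the distance ∆, so it only illustrates the choice between the clockwise and counter-clockwise slides.

```python
import math

def rotate(u, phi):
    """Apply the rotation matrix R(phi) to the 2-D vector u."""
    c, s = math.cos(phi), math.sin(phi)
    return (c * u[0] - s * u[1], s * u[0] + c * u[1])

def follow_wall_control(u_gtg, u_ao):
    """Rotate u_AO by +/- pi/2 (Eqs. 25-27) and pick the slide direction
    that best agrees with u_GTG via the dot product (Eqs. 28-29)."""
    u_cc = rotate(u_ao, math.pi / 2)    # counter-clockwise slide
    u_c = rotate(u_ao, -math.pi / 2)    # clockwise slide
    dot = u_gtg[0] * u_cc[0] + u_gtg[1] * u_cc[1]
    return u_cc if dot > 0 else u_c
```

The dot-product test simply asks which of the two tangent directions makes forward progress toward the goal.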

Implementation of the Navigation Algorithms
The behaviors or modes discussed above are put together to form the navigation architecture shown in Fig. 9. The robot starts at the state x_0 and arrives at the goal x_g, switching between the three different operation modes; this system of navigation is termed a hybrid automaton, where the navigation system is described using both continuous dynamics and discrete switch logic (Egerstedt, 2013).
An illustration of this navigation system is shown in Fig. 10, where the robot avoids a rectangular block as it moves to a goal.

Tracking and Transformation of the 'Simple' Model Input
The simple planning model input, u = (u_1 u_2)^T, can be tracked using a PID controller, or a clever transformation can be used to convert it into the unicycle model input, u = (ν ω)^T (Egerstedt, 2013). These two approaches are discussed below.

Method 1: Tracking Using a PID Controller
Let the output from the planning model be u = (u_1 u_2)^T and the current position of the point mass be x = (x y)^T, as in Fig. 11a. The input, u = (ν ω)^T, to the unicycle model can then be determined by driving at the speed ν = ||u|| and steering toward the desired heading θ_d = atan2(u_2, u_1) using a PID controller on the heading error (Egerstedt, 2013).

Method 2: Transformation
In this clever approach a new point of interest, (x_n, y_n), is selected on the robot at a distance k from the center of mass, (x, y), as shown in Fig. 11b (Egerstedt, 2013), where x_n = (x_n y_n)^T. Setting ẋ_n = u and differentiating x_n = x + k cos θ, y_n = y + k sin θ gives the unicycle input ν = u_1 cos θ + u_2 sin θ and ω = (−u_1 sin θ + u_2 cos θ)/k.
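The transformation can be written out directly; a minimal sketch (the look-ahead distance k = 0.1 m is an arbitrary example value):

```python
import math

def simple_to_unicycle(u1, u2, theta, k=0.1):
    """Transform the point-mass input u = (u1, u2) into unicycle input (v, w)
    by controlling a point at distance k ahead of the wheel axis."""
    v = u1 * math.cos(theta) + u2 * math.sin(theta)
    w = (-u1 * math.sin(theta) + u2 * math.cos(theta)) / k
    return v, w
```

A desired motion along the current heading maps to pure forward speed, while a desired motion perpendicular to the heading maps to pure rotation, scaled by 1/k.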

Simulations of Individual Controllers
Simulink models that implement the control algorithms discussed in section III, developed by modifying similar models in (Corke, 1993-2011), are presented in Fig. 12-15. These models are based on the unicycle model in Fig. 2.

Simulations of the Navigation System
A MATLAB robot simulator introduced in (MATLAB Robot Simulator (Software), 2013) was used to simulate the navigation architecture control algorithms presented in section III; the control algorithm combines the GTG, AO and FW controllers into a full navigation system for the robot. The robot simulator mimics the Khepera III (K3) mobile robot, whose model is based on the unicycle model presented in section II. The K3 is equipped with 11 Infrared (IR) range sensors, of which nine are located in a ring around it and two are located on the underside of the robot. The IR sensors are complemented by a set of five ultrasonic sensors (Corke, 2011). The K3 has a two-wheel differential drive with a wheel encoder for each wheel.
The MATLAB algorithm that controls the simulator implements a Finite State Machine (FSM) to solve the full navigation problem. The FSM uses a set of if/elseif/else statements that first check which state (or behavior) the robot is in and then, based on whether an event (condition) is satisfied, switch to another state or stay in the same state, until the robot reaches its goal (Egerstedt, 2013).
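The FSM's if/elseif/else switching might look like the sketch below. The event flags and the switching thresholds here are hypothetical placeholders, since the paper's exact conditions (Equations 28-32) live in its MATLAB code.

```python
def navigation_fsm(state, d_obst, at_goal, clear_shot, progress, delta=0.3):
    """One step of the navigation FSM: switch between go-to-goal (GTG),
    avoid-obstacle (AO) and follow-wall (FW) based on event flags.

    d_obst:    distance to the nearest obstacle
    at_goal:   True once the goal region is reached
    clear_shot: True when there is an unobstructed line to the goal
    progress:  True when "enough progress" has been made since the last switch
    """
    if at_goal:
        return 'STOP'
    if state == 'GTG':
        if d_obst <= delta:
            return 'FW'            # too close: start sliding along the boundary
    elif state == 'FW':
        if progress and clear_shot:
            return 'GTG'           # release u_FW: enough progress + clear shot
        if d_obst < delta / 2:
            return 'AO'            # dangerously close: drive directly away
    elif state == 'AO':
        if d_obst >= delta:
            return 'GTG'           # safe distance restored
    return state
```

Each call checks the current state first and then the events, exactly the if/elseif/else pattern described above.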

Fig. 16. (a-h) Sequence showing the MATLAB robot simulator implementing the navigation system
Figure 16a-h shows a sequence of movement of the MATLAB robot simulator implementing the navigation system. The robot navigates around a cluttered, complex environment, reaching its goal location successfully without colliding with any obstacles.

Experiments Using Dr Robot X80SV
The control algorithm built for the navigation architecture presented in section III was tested on the Dr Robot X80SV, programmed through a MATLAB GUI, in an office environment. The Dr Robot X80SV can be made to move to a goal while avoiding obstacles in front of it.
The Dr Robot X80SV, shown in Fig. 17a, is a fully wirelessly networked robot. It uses two quadrature encoders on each wheel for measuring its position, and seven IR and three ultrasonic range sensors for collision detection. It has a 2.6× high-resolution Pan-Tilt-Zoom CCD camera with two-way audio capability, two 12 V motors with over 22 kg·cm of torque each and two pyroelectric human motion sensors.
The Dr Robot X80SV has dimensions of 38 cm (length) × 35 cm (width) × 28 cm (height), a maximum payload of 10 kg (optionally 40 kg) and a robot weight of 3 kg. Its 12 V 3700 mAh battery pack provides three hours of nominal operation per recharge, and the robot can drive at a maximum speed of 1.0 m s⁻¹. The distance between the wheels is 26 cm and the radius of the wheels is 8.5 cm.
The PID feedback system depicted in Fig. 17b shows how the DC motor system of the robot is controlled. Figure 18 shows the setup used for the experiments. After a connection is established between the host PC and the robot through the wireless router, the MATLAB program receives and sends the motion/sensor signals using ActiveX control. The program also exchanges multimedia data directly with the Pan-Tilt-Zoom camera through an ActiveX control.
A screenshot of the main MATLAB interface used for the Dr Robot X80SV control is shown in Fig. 19. The interface was developed by mimicking a similar interface developed in C# by Dr Robot Inc. The motivation for using MATLAB instead of building upon the provided C# interface is to take advantage of the ease of simulation, the speed and ease of developing GUIs, and the built-in control strategy libraries in MATLAB for this research and future studies.
The main GUI interface has three sections: Information about the robot settings and sensors, multimedia and the vision and control. The robot settings information includes the IP addresses of the robot and the camera, the robot wheel radius, distance between the robot wheels and the encoder count per revolution. The sensors information, updated in real-time, includes the IR, ultrasonic, motor, human, temperature, battery and the position of the robot.
The multimedia section includes a real-time video stream from the robot, which can be controlled using pan and tilt tools. The section also has tools for capturing images and recording the video stream, as well as a tool for capturing live audio from the robot.
The vision and control section has tools for performing 'Basic Motion Control', 'Individual Motion Control (PID and MPC)', 'Navigation System (PID and MPC)' and 'Object Recognition and Tracking'.

Simulation of Individual Controllers Results
The time-domain Simulink simulations were carried out over a 10 s duration for each model (refer to the simulation setups in Fig. 12-15). The trajectories in Fig. 21 were obtained using proportional gains of 0.5 and 4.0 for K_ν and K_h respectively; the final goal point was (4.9920, 5.0036), compared to the desired goal of (5, 5), for the initial state (5, 9, π).
The trajectories in Fig. 23 were obtained with K_d = 0.5 and K_h = 1.0, driving the robot at a constant speed of 1.0. The trajectory in Fig. 24 was obtained using K_h = 5 and PID controller gains of K_p = 1.0, K_I = 0.5 and K_D = 0.0; the goal was a point moving around the unit circle with a frequency of 0.2 Hz.

Simulation of Navigation System Results
The trajectory shown in Fig. 25a (refer to the simulation setup in Fig. 16) was obtained using PID controller gains of K_p = 5.0, K_I = 0.01 and K_D = 0.1, α = 0.6, ε = 0.05, ν = 0.1 m s⁻¹, an initial state of (0, 0, 0) and a desired goal of (1, 1, π/2). The final goal point associated with the simulation was (1.0077, 0.9677, 1.1051) and the average stabilization time was about 35 s.

Experimental Results
The trajectory shown in Fig. 25b (refer to a similar experimental setup in Fig. 20) was obtained using PID controller gains of K_p = 1000, K_I = 1000 and K_D = 5 for the position control and K_p = 10, K_I = 0 and K_D = 1 for the velocity control, with ε = 0.01, ν = 0.5 m s⁻¹, an initial state of (0, 0, 0) and a desired pose of (2, 0, 0). The final pose associated with the experiment was (2.00197, 0.0266, −0.0096) and the average stabilization time was about 50 s.
Even though there were steady-state errors in the values obtained, the results were encouraging. Possible causes of the errors are friction between the robot wheels and the floor, imperfections in the sensors and/or unmodeled factors (e.g., friction and backlash) in the mechanical parts of the DC motors. Moreover, despite the apparent simplicity of the kinematic model of a WMR, the existence of nonholonomic constraints (due to state or input limitations) turns the design of PID feedback stabilizing control laws into a considerable challenge; by Brockett's condition (Brockett, 1983), a continuously differentiable, time-invariant stabilizing feedback control law cannot be obtained.
Note that during the experiments the robot sometimes got lost or wandered around before arriving at the desired pose. This is because the navigation system is not robust: it was built using low-level planning based on a simple point-mass model and the application of a linear PID controller.