EFFECTIVENESS OF SECOND BEST PARTICLE INFORMATION FOR PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) represents the potential solutions of an optimization problem as particles, which move through the search space in order to find better solutions. The particle positions are updated from the personal best and the global best particle positions that have been found so far. This research focuses on the use of the second personal best and the second global best particle positions in order to improve the search performance of the original PSO algorithm. In the present algorithms, the second global best or the second personal best particle position is randomly used for updating the particle positions. The algorithms are compared with the original PSO algorithm on five test functions. The results reveal that the use of the second global best and the second personal best particle positions can improve the search performance of the original PSO, although the basic idea is simple.


INTRODUCTION
Gradient-type algorithms such as the Newton and steepest-descent methods are very popular for solving optimization problems. However, they sometimes reach a local optimum rather than the global optimum. Therefore, researchers have been studying algorithms that do not require gradient information of the function, such as the Genetic Algorithm (GA) by Holland (1975) and Goldberg (1989), Simulated Annealing (SA) by Kirkpatrick et al. (1983) and Particle Swarm Optimization (PSO) by Kennedy and Eberhart (1995); Kennedy (1997) and Shi and Eberhart (1998). PSO, presented in 1995 by Kennedy and Eberhart (1995), is based on a metaphor of social interaction such as bird flocking and fish schooling. PSO, which is also a population-based optimization algorithm, is applicable to various function optimization problems and industrial applications (He et al., 2009; Lapizco-Encinas et al., 2009; Liu et al., 2006; Poli, 2008; Qarouni-Fard et al., 2007; Zhao et al., 2008). Qarouni-Fard et al. (2007) presented timetable design using Particle Swarm Optimization. Poli (2008) applied Particle Swarm Optimization to the analysis of publications. He et al. (2009); Lapizco-Encinas et al. (2009); Liu et al. (2006) and Zhao et al. (2008) presented applications of Particle Swarm Optimization to packing problems.
In the original PSO algorithm, the potential solutions of the optimization problem are defined as particles whose position vectors denote the design vectors of the candidate solutions. The particle positions are updated from the personal and the global best particle positions. The personal and global best particle positions denote the best position each particle has ever found and the best position all particles have ever found, respectively. One of the basic drawbacks of PSO is the premature convergence problem. Premature convergence means that the population of potential solutions converges too early, yielding a local (sub-)optimal solution rather than the global optimal solution.
This study focuses on the use of the second global and second personal best particle positions for improving the search performance of the original PSO algorithm. The PSO algorithms employing the second global and the second personal best particle positions are named present algorithm 1 and present algorithm 2, respectively. The present algorithms are compared with the original PSO algorithm on five test functions.
The remaining part of this study is organized as follows. The PSO algorithms and the numerical results are explained in sections 2 and 3, respectively. Finally, the conclusions are summarized in section 4.

Optimization Problem
The optimization problem is defined by the objective function and the design variables if the constraint conditions are negligible.
The design variable vector is defined as follows Equation (1):

x = (x_1, x_2, …, x_D)^T (1)

The parameters x_i and D denote the design variable and the total number of design variables, respectively.
The objective function to be minimized is defined as a function of the design variables Equation (2):

F = F(x_1, x_2, …, x_D) → min (2)

In the evolutionary computation, the satisfaction of a particle with respect to the design objective is estimated by the fitness function f(x), which is maximized, Equation (3). For a minimization problem, the fitness can be defined, for example, as:

f(x) = −F(x) → max (3)

Search Process
In the PSO algorithm, the particles represent potential solutions of the optimization problem and the swarm of particles moves through the solution space in order to find better solutions. A particle in the swarm has a position vector x_i(t) and a velocity vector v_i(t) in the search space at time t. Each particle has memory and hence can remember the best position it has ever visited in the search space. The position at which a particle attains its best fitness value is known as the personal best particle position vector x_i^p(t) and the overall best position out of all particles in the swarm is the global best particle position vector x^g(t). The particle position vector x_i(t) and the velocity vector v_i(t) are updated by the personal and global best particle position vectors.
The original PSO algorithm is summarized as follows (Fig. 1):

1. Initialize iteration number: The iteration number t is initialized as t←0
2. Initialize particle position and velocity vectors: For i = 1, …, N, the particle position vector x_i(t) and velocity vector v_i(t) are initialized with uniformly distributed random vectors
3. Initialize best particle position vectors: The global best particle position vector x^g(t) and the personal best particle position vectors x_i^p(t) are initialized with zero vectors; x^g(t) = 0 and x_i^p(t) = 0
4. Evaluate fitness function: For i = 1, …, N, the fitness function f(x_i(t)) is evaluated
5. Check the convergence criterion: If the criterion is satisfied, the process goes to the next step. Otherwise, the process goes to step 7
6. Output results: The results are output and the process is terminated
7. Update particle position vectors: The particle velocity vector v_i(t) is updated and then the particle position vector x_i(t) is updated (the update algorithm is described in the next section)
8. Update iteration number: The iteration number is updated so that t←t+1 and then the process goes to step 4

Update Algorithm
In the original PSO algorithm, the position and the velocity vectors of particle i (i = 1, …, N) are updated according to the following rules Equation (4 and 5):

v_i(t+1) = w v_i(t) + c_1 r_1 (x_i^p(t) − x_i(t)) + c_2 r_2 (x^g(t) − x_i(t)) (4)

x_i(t+1) = x_i(t) + v_i(t+1) (5)

The parameter w is the inertia weight. The parameters c_1 and c_2 are acceleration coefficients and t is the iteration time-step. The variables r_1 and r_2 are random numbers in the range [0,1]. The parameter N is the swarm size or the total number of particles in the swarm. The inertia weight governs how much of the velocity is retained from the previous time step to the next. The inertia weight is updated by the following self-adapting formula Equation (6):

w = w_max − (w_max − w_min) t / t_max (6)

The parameters w_max and w_min denote the maximum and minimum inertia weights, respectively. The parameters t and t_max are the iteration step and the maximum number of iteration steps in the simulation, respectively.
The parameters c 1 and c 2 determine the relative pull of x i p (t) and x g (t). According to the recent work done by Clerc (1999), the parameters are given as follows: The update algorithm of the particle position is summarized as follows: • Update the particle velocity vector: The particle velocity vector v i (t+1) is calculated by Equation (4) • Update the particle position vector: The particle position vector x i (t+1) is calculated by Equation (5) • Update global best particle position vector: The set is defined as follows: The global best particle position vector is updated as follows: x t 1 arg max f S + ← • Update personal best particle position vector: For i = 1,…,N, the set is defined as follows: The personal best particle position vector is updated as follows:

Present Algorithm 1

Search Process
The search process of the present algorithm 1 is almost the same as that of the original PSO algorithm except for the use of the second global best particle position vector x^g2(t). This algorithm uses the personal best particle position vector x_i^p(t), the global best particle position vector x^g(t) and the second global best particle position vector x^g2(t) for updating the particle position and velocity vectors.
The present algorithm 1 is summarized as follows:

1. Initialize iteration number: The iteration number t is initialized as t←0
2. Initialize particle position and velocity vectors: For i = 1, …, N, the particle position vector x_i(t) and velocity vector v_i(t) are initialized with uniformly distributed random vectors
3. Initialize best particle position vectors: The global best particle position vector x^g(t) and the personal best particle position vectors x_i^p(t) are initialized with zero vectors; x^g(t) = 0 and x_i^p(t) = 0
4. Initialize second global best particle position vector: The second global best particle position vector x^g2(t) is initialized with a zero vector; x^g2(t) = 0
5. Evaluate fitness function: For i = 1, …, N, the fitness function f(x_i(t)) is evaluated
6. Check the convergence criterion: If the criterion is satisfied, the process goes to the next step. Otherwise, the process goes to step 8
7. Output results: The results are output and the process is terminated
8. Update particle position vectors: The particle velocity vector v_i(t) is updated and then the particle position vector x_i(t) is updated (the update algorithm is described in the next section)
9. Update iteration number: The iteration number is updated so that t←t+1 and then the process goes to step 5

Update Algorithm with Second Global Best Particle
The original PSO has no mechanism for avoiding local optima except for the use of x_i^p(t). In the present algorithm 1, each particle can remember the second global best particle position vector x^g2(t) in addition to the global best particle position vector x^g(t) and the personal best particle position vector x_i^p(t). The use of x^g2(t) can reduce the chance of local optimum convergence of PSO. In this algorithm, the particle velocity vector is updated by the following equation:

v_i(t+1) = w v_i(t) + c_1 r_1 (x_i^p(t) − x_i(t)) + c_2 r_2 (x^g(t) − x_i(t)) + c_3 r_3 (x^g2(t) − x_i(t)) (8)

The parameter w is the inertia weight. The parameters c_1, c_2 and c_3 are acceleration coefficients and t is the iteration time. Besides, r_1, r_2 and r_3 are random numbers uniformly distributed in the range [0,1]. The parameters c_1 and c_2 take the same values as in the original PSO; c_1 = c_2 = 1.5. The effect of the parameter c_3 on the search performance is discussed in the numerical examples.
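The velocity update of Equation (8) can be sketched as follows; the function name and default coefficient values are illustrative assumptions, not the authors' code.

```python
import random

# Sketch of the algorithm-1 velocity update (Eq. 8): a third attraction term
# pulls each particle toward the second global best position x^g2(t).
def velocity_update_second_gbest(v, x, pbest, gbest, gbest2,
                                 w=0.7, c1=1.5, c2=1.5, c3=5.0, rng=random):
    r1, r2, r3 = rng.random(), rng.random(), rng.random()
    return [w * vd
            + c1 * r1 * (pb - xd)    # pull toward personal best
            + c2 * r2 * (gb - xd)    # pull toward global best
            + c3 * r3 * (gb2 - xd)   # pull toward second global best
            for vd, xd, pb, gb, gb2 in zip(v, x, pbest, gbest, gbest2)]
```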
The update rule (8) has been already presented in the paper. The numerical discussions and the applications were not described in the reference. Therefore, in this study, it is discussed in numerical examples.
The update algorithm of the particle position vector is summarized as follows:

• Generate a uniformly distributed random number: The uniformly distributed random number r is generated in the range [0,1]
• Update the particle velocity vector: If r ≥ 0.5, the particle velocity vector v_i(t+1) is calculated by Equation (4). Otherwise, the vector v_i(t+1) is calculated by Equation (8)
• Update the particle position vector: The particle position vector x_i(t+1) is calculated by Equation (5)
• Update the global best particle position vector: The set S^g is defined as S^g = {x^g(t), x_1(t+1), …, x_N(t+1)}. The global best particle position vector is updated as x^g(t+1) ← arg max_{x∈S^g} f(x)
• Update the second global best particle position vector: The set S^g2 is defined as the set S^g from which x^g(t+1) is excluded; S^g2 = S^g \ {x^g(t+1)}. The second global best particle position vector is updated as x^g2(t+1) ← arg max_{x∈S^g2} f(x)
• Update the personal best particle position vectors: For i = 1, …, N, the set S_i^p is defined as S_i^p = {x_i^p(t), x_i(t+1)} and the personal best particle position vector is updated as x_i^p(t+1) ← arg max_{x∈S_i^p} f(x)
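The selection of the global best and second global best from the candidate set can be sketched as follows (illustrative names; fitness is maximized, as in the text):

```python
# Sketch of ranking the candidate set S^g to obtain both the global best and
# the second global best positions (the latter is the best of S^g with the
# global best excluded).
def best_and_second_best(candidates, fitness):
    """Return (x_g, x_g2): the two top-ranked positions by fitness."""
    ranked = sorted(candidates, key=fitness, reverse=True)  # maximize fitness
    return ranked[0], ranked[1]
```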

Present Algorithm 2

Search Process
The search process of the present algorithm 2 is almost the same as that of the original PSO algorithm except for the use of the second personal best particle position vector x_i^p2(t). This algorithm uses the personal best particle position vector x_i^p(t), the global best particle position vector x^g(t) and the second personal best particle position vector x_i^p2(t) for updating the particle position and velocity vectors.
The present algorithm 2 is summarized as follows:

1. Initialize iteration number: The iteration number t is initialized as t←0
2. Initialize particle position and velocity vectors: For i = 1, …, N, the particle position vector x_i(t) and velocity vector v_i(t) are initialized with uniformly distributed random vectors
3. Initialize best particle position vectors: The global best particle position vector x^g(t) and the personal best particle position vectors x_i^p(t) are initialized with zero vectors; x^g(t) = 0 and x_i^p(t) = 0
4. Initialize second personal best particle position vectors: For i = 1, …, N, the second personal best particle position vector x_i^p2(t) is initialized with a zero vector; x_i^p2(t) = 0
5. Evaluate fitness function: For i = 1, …, N, the fitness function f(x_i(t)) is evaluated
6. Check the convergence criterion: If the criterion is satisfied, the process goes to the next step. Otherwise, the process goes to step 8
7. Output results: The results are output and the process is terminated
8. Update particle position vectors: The particle velocity vector v_i(t) is updated and then the particle position vector x_i(t) is updated (the update algorithm is described in the next section)
9. Update iteration number: The iteration number is updated so that t←t+1 and then the process goes to step 5

Update Algorithm with Second Personal Best Particle
The present algorithm 1 uses the second global best particle position vector x^g2(t) for avoiding local optima. On the other hand, the present algorithm 2 uses the second personal best particle position vector x_i^p2(t) instead of the second global best particle position vector x^g2(t).
In the present algorithm 1, the second global best particle position vector x^g2(t) has an identical effect on all particles. The second personal best particle position vector x_i^p2(t), by contrast, has a different effect on each particle. Therefore, the particles in the present algorithm 2 tend to search a wider region than those in the present algorithm 1.
In the present algorithm 2, each particle can remember the global best particle position vector x^g(t), the personal best particle position vector x_i^p(t) and the second personal best particle position vector x_i^p2(t). In this algorithm, the particle velocity vector is updated by the following equation:

v_i(t+1) = w v_i(t) + c_1 r_1 (x_i^p(t) − x_i(t)) + c_2 r_2 (x^g(t) − x_i(t)) + c_4 r_4 (x_i^p2(t) − x_i(t)) (9)

The parameter w is the inertia weight. The parameters c_1, c_2 and c_4 are acceleration coefficients and t is the iteration time. Besides, r_1, r_2 and r_4 are random numbers uniformly distributed in the range [0,1]. The parameters c_1 and c_2 take the same values as in the original PSO; c_1 = c_2 = 1.5. The effect of the parameter c_4 on the search performance is also discussed in the numerical examples.
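The velocity update of Equation (9) differs from Equation (8) only in the third attractor, which is now the particle's own second personal best. A sketch (illustrative names and default coefficients, not the authors' code):

```python
import random

# Sketch of the algorithm-2 velocity update (Eq. 9): the third attraction
# term pulls each particle toward its own second personal best x_i^p2(t),
# so the extra pull differs from particle to particle.
def velocity_update_second_pbest(v, x, pbest, gbest, pbest2,
                                 w=0.7, c1=1.5, c2=1.5, c4=5.5, rng=random):
    r1, r2, r4 = rng.random(), rng.random(), rng.random()
    return [w * vd
            + c1 * r1 * (pb - xd)    # pull toward personal best
            + c2 * r2 * (gb - xd)    # pull toward global best
            + c4 * r4 * (pb2 - xd)   # pull toward second personal best
            for vd, xd, pb, gb, pb2 in zip(v, x, pbest, gbest, pbest2)]
```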
The present algorithm 2 shares the information of x_i^p(t), x^g(t) and x_i^p2(t). Obviously, x_i^p2(t) is worse than x_i^p(t). If only Equation (9) were used for updating the particle velocity vector, the result would be worse than that of the original PSO. Therefore, the update rules (4) and (9) are employed alternately. The update algorithm of the present algorithm 2 is summarized as follows:

• Generate a uniformly distributed random number: The uniformly distributed random number r is generated in the range [0,1]
• Update the particle velocity vector: If r ≥ 0.5, the particle velocity vector v_i(t+1) is calculated by Equation (4). Otherwise, the vector v_i(t+1) is calculated by Equation (9)

• Update the particle position vector: The particle position vector x_i(t+1) is calculated by Equation (5)
• Update the global best particle position vector: The set S^g is defined as S^g = {x^g(t), x_1(t+1), …, x_N(t+1)}. The global best particle position vector is updated as x^g(t+1) ← arg max_{x∈S^g} f(x)
• Update the personal best particle position vectors: For i = 1, …, N, the set S_i^p is defined as S_i^p = {x_i^p(t), x_i^p2(t), x_i(t+1)}. The personal best particle position vector is updated as x_i^p(t+1) ← arg max_{x∈S_i^p} f(x)
• Update the second personal best particle position vectors: For i = 1, …, N, the set S_i^p2 is defined as the set S_i^p from which x_i^p(t+1) is excluded; S_i^p2 = S_i^p \ {x_i^p(t+1)}. The second personal best particle position vector is updated as x_i^p2(t+1) ← arg max_{x∈S_i^p2} f(x)
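The stochastic alternation step at the top of this list can be sketched as follows; `update_eq4` and `update_eq9` are illustrative placeholders for the two velocity-update rules.

```python
import random

# Sketch of the alternation used by algorithm 2: with probability 0.5 the
# original rule (Eq. 4) is applied, otherwise the second-personal-best rule
# (Eq. 9). Algorithm 1 alternates between Eqs. (4) and (8) the same way.
def alternate_update(update_eq4, update_eq9, rng=random):
    r = rng.random()                  # uniform random number in [0, 1]
    return update_eq4() if r >= 0.5 else update_eq9()
```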

Sphere Function
Sphere function is defined as follows Equation (10):

F_1(x) = Σ_{i=1}^{n} x_i^2 (10)

The vector x is defined as follows Equation (11):

x = (x_1, x_2, …, x_n)^T (11)

The sphere function for n = 2 is shown in Fig. 2a.

Rosenbrock Function
Rosenbrock function is defined as follows Equation (12):

F_2(x) = Σ_{i=1}^{n−1} [100 (x_{i+1} − x_i^2)^2 + (1 − x_i)^2] (12)

The Rosenbrock function for n = 2 is shown in Fig. 2b.

Rastrigin Function
Rastrigin function is a multi-modal function defined as follows Equation (13):

F_3(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2π x_i) + 10] (13)

The Rastrigin function for n = 2 is shown in Fig. 2c. A lot of local optimal solutions exist around the global optimal solution.

Griewank Function
Griewank function is defined as follows Equation (14):

F_4(x) = 1 + (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i / √i) (14)

The Griewank function for n = 2 is shown in Fig. 2d.

Schaffer's F6 Function
Schaffer's f6 function is defined as follows Equation (15):

F_5(x) = 0.5 + (sin^2 |x| − 0.5) / (1 + 0.001 |x|^2)^2 (15)

The notation |x| denotes the absolute value (Euclidean norm) of the vector x. Schaffer's f6 function for n = 2 is shown in Fig. 2e.
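The five benchmark functions (Eqs. 10-15) can be sketched in Python as follows; all are minimized, with global minimum value 0 at the origin (for Rosenbrock, at x_i = 1).

```python
import math

def sphere(x):
    # Eq. (10): sum of squares
    return sum(xi * xi for xi in x)

def rosenbrock(x):
    # Eq. (12): narrow curved valley, minimum at x_i = 1
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # Eq. (13): many local minima around the global one
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def griewank(x):
    # Eq. (14): sum term plus an oscillating product term
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1.0 + s - p

def schaffer_f6(x):
    # Eq. (15): |x| is the Euclidean norm, so |x|^2 is the squared norm
    r2 = sum(xi * xi for xi in x)
    return 0.5 + (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2
```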
The dimension of the functions is n = 2 for Schaffer's f6 function and n = 30 for the other functions. The threshold for each function optimization is shown in Table 2. In the minimization of a function, it is concluded that the global minimum has been found when the function value is smaller than the threshold value. The simulation parameters are summarized in Table 1. According to the work done by Clerc (1999), the parameters c_1 and c_2 are specified as c_1 = c_2 = 1.5.
The results are compared by the estimation value, which is defined as the quotient of the average search time and the success rate as follows Equation (16):

Estimation value = (average search time) / (success rate) (16)

The average search time denotes the average iteration number at which the global optimum could be found. The success rate denotes the percentage of simulations, out of the total number of simulations, in which the minimum solution could be found. The threshold for finding the optimal solution is shown in Table 2. When a solution smaller than the threshold is found, the simulation is concluded to have terminated successfully.
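Equation (16) can be computed as follows (a minimal sketch with an assumed calling convention: the list holds the iteration counts of the successful runs only):

```python
# Estimation value (Eq. 16): average search time divided by success rate.
# Lower is better, since it penalizes both slow and unreliable searches.
def estimation_value(search_times, n_runs):
    success_rate = len(search_times) / n_runs
    avg_time = sum(search_times) / len(search_times)
    return avg_time / success_rate
```

For example, two successful runs of 100 and 200 iterations out of four total runs give an average time of 150, a success rate of 0.5 and hence an estimation value of 300.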

Effect of c 3 on Present Algorithm 1
Simulations are performed 20 times from different initial populations with the present algorithm 1. The results are shown in Table 3. The results show that the best value of the parameter c_3 depends on the function to be solved. Comparison of the estimation values shows that the best values of the parameter are c_3 = 5 for the Sphere, Rosenbrock, Griewank and Schaffer's f6 functions and c_3 = 2.5 or 5 for the Rastrigin function. It is concluded that c_3 = 5 is good for all functions.

Effect of c 4 on Present Algorithm 2
Simulations are performed 20 times from different initial populations with the present algorithm 2. The results are shown in Table 4. The results show that the best value of the parameter c_4 depends on the function. The best values of the parameter are c_4 = 5.5 for the Sphere, Rosenbrock, Rastrigin and Griewank functions and c_4 = 5 for Schaffer's f6 function. It is concluded that c_4 = 5.5 is good for all functions.

Comparison with Other Studies
The swarm size and the maximum iteration number are 30 and 10000, respectively. According to the work done by Clerc (1999), the parameters c_1 and c_2 are specified as c_1 = c_2 = 1.5. The best results of the present algorithms are compared with the results in the studies by Eberhart and Shi (2000) and Trelea (2003). The results are shown in Table 5. The results of the present algorithms are better than those in the references. Comparison of the present algorithms 1 and 2 shows that the present algorithm 1 is better for the Rosenbrock and Griewank functions and the present algorithm 2 is better for the other functions.

CONCLUSION
This study describes the use of the second best particle positions for improving the original PSO. In the original PSO, the particle position vectors are updated from the personal best and the global best position vectors that the particles have found so far. This research focuses on the use of the second global best and the second personal best particle positions in order to improve the search performance of the original PSO. In the present algorithms, the second global best and the second personal best particle positions are randomly used for updating the particle position vectors.
The present algorithms are compared with the original PSO algorithm on five test functions. The results revealed that the use of the second best positions can improve the search performance of the original PSO. In all cases, the success rate is greater than or equal to 0.9 and the estimation value, which is defined as the quotient of the average search time and the success rate, is also better than those in the previous studies. The results of the present algorithms were better than those in the references.
In the near future, we would like to discuss the applicability of the present algorithms to actual engineering applications.