NEW BINARY PARTICLE SWARM OPTIMIZATION WITH IMMUNITY-CLONAL ALGORITHM

Particle Swarm Optimization (PSO) was designed to solve continuous problems and has been shown to perform well on them; however, its binary version still suffers from several problems. To address these problems, a new technique called New Binary Particle Swarm Optimization with Immunity-Clonal Algorithm (NPSOCLA) is proposed. This algorithm introduces a new strategy for updating the position vector in Binary Particle Swarm Optimization (BPSO) and further combines it with an Immunity-Clonal Algorithm to improve its optimization ability. To investigate the performance of the new algorithm, multidimensional 0/1 knapsack problems are used as test benchmarks. The experimental results demonstrate that NPSOCLA found the optimum solution for 53 of the 58 multidimensional 0/1 knapsack problems.


INTRODUCTION
James Kennedy and Russell Eberhart introduced Particle Swarm Optimization (PSO) in 1995 (Eberhart and Kennedy, 1995; Kennedy et al., 2001) by simulating a bird swarm. PSO depends on three steps, which are repeated until some stopping condition is met: first, evaluate the fitness of each particle; then specify the individual best position and the global best position; and finally update the velocity and position of each particle using the following equations:

v_i(t+1) = ω v_i(t) + c1 r1 (pbest_i(t) - p_i(t)) + c2 r2 (gbest_i(t) - p_i(t))   (1)

p_i(t+1) = p_i(t) + v_i(t+1)   (2)

where (i) is the index of the particle and (t) is the time.
In Equation (1), the velocity (v) of particle (i) at time (t + 1) is calculated using three terms. The first term (ω v_i(t)) is called the inertia effect; it is responsible for keeping the particle flying in the same direction, where (ω) is the inertia factor, which usually decreases linearly during the run (Shi and Eberhart, 1998). A higher value of (ω) encourages exploration while a lower value encourages exploitation, and v_i(t) is the velocity of particle (i) at time (t).
The second term (c1 r1 (pbest_i(t) - p_i(t))) is called the cognitive effect. It allows the particle to return to the best position it has achieved so far by calculating the distance between the current position p_i(t) and the best position pbest_i(t), where (c1) is a cognitive coefficient, usually close to 2, that affects the size of the step the particle takes toward (pbest), and (r1) is a random value between 0 and 1 that causes the particle to move in a semi-random direction toward (pbest) (Eberhart and Kennedy, 1995; Kennedy et al., 2001).

JCS
The third term (c2 r2 (gbest_i(t) - p_i(t))) is called the social effect; it is responsible for allowing the particle to follow (gbest), the best position the swarm has found so far, where (c2) is a social coefficient, usually close to 2, that affects the size of the step the particle takes toward (gbest), and (r2) is a random value between 0 and 1 that causes the particle to move in a semi-random direction toward (gbest). Once the velocity is calculated, the position is updated by Equation (2).
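The three-term update described above can be sketched in a few lines (a minimal illustration; the function name and the coefficient defaults are our choices, not from the paper):

```python
import random

def pso_step(p, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update (Equations 1 and 2) for a single
    particle, applied per dimension."""
    new_v, new_p = [], []
    for d in range(len(p)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]                          # inertia effect
              + c1 * r1 * (pbest[d] - p[d])     # cognitive effect
              + c2 * r2 * (gbest[d] - p[d]))    # social effect
        new_v.append(vd)
        new_p.append(p[d] + vd)                 # Equation (2)
    return new_p, new_v
```

Note that when the particle already sits at both its personal best and the global best with zero velocity, all three terms vanish and the particle stays put.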

Related Work
PSO was designed for continuous problems and cannot deal with discrete ones. Because many optimization problems occur in spaces featuring discrete variables, a new version of PSO called Binary Particle Swarm Optimization (BPSO) was introduced by Kennedy and Eberhart (1997) to be applied to discrete binary variables. The position in BPSO is represented as a binary vector while the velocity is still a floating-point vector; however, the velocity is used to determine the probability of a bit changing from 0 to 1 or from 1 to 0 when updating the position of a particle.
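In Kennedy and Eberhart's scheme, each velocity component is squashed by a sigmoid into the probability that the corresponding bit is set to 1. A minimal sketch (the function name is ours):

```python
import math, random

def bpso_position_update(v):
    """Kennedy-Eberhart BPSO position update: each velocity component
    gives, via the sigmoid, the probability that the bit becomes 1."""
    return [1 if random.random() < 1.0 / (1.0 + math.exp(-vd)) else 0
            for vd in v]
```

A strongly positive velocity component drives its bit toward 1; a strongly negative one drives it toward 0; a zero velocity leaves the bit at a 50/50 coin flip.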
There are some differences between PSO and BPSO, which may lead to the following problems.
Firstly, the behavior of velocity clamping in BPSO differs from that in PSO. The velocity in PSO is responsible for exploration, whereas the velocity in BPSO encourages exploitation (Engelbrecht, 2005). This problem leads to the phenomenon of premature convergence, in which the search process is likely to become trapped in a region containing a non-global optimum; in short, it means loss of diversity.
Secondly, the value of (ω) in PSO usually decreases linearly; in BPSO, however, there are some difficulties in choosing a proper value for (ω) to control exploration and exploitation, as discussed in (Engelbrecht, 2005).
Thirdly, the position in BPSO is updated using only the velocity, so the new position seems to be independent of the current position, whereas the position in PSO is updated using the current position, and the velocity determines only the movement of the particle in the space (Khanesar et al., 2007). Because of these difficulties, much research has been devoted to solving these problems (Khanesar et al., 2007; Mohamad et al., 2011; Gherboudj et al., 2012; Gherboudj and Chikhi, 2011). Ye et al. (2006) introduced a new binary Particle Swarm Optimization technique by introducing some new operators to be used in the velocity and position update equations.
In this technique, each potential solution (particle) is represented with a position and a velocity of n-bit binary strings and updated according to the following equations:

v_i(t+1) = ω v_i(t) or α(pbest_i(t) xor p_i(t)) or β(gbest_i(t) xor p_i(t))   (3)

p_i(t+1) = p_i(t) xor v_i(t+1)   (4)

A particle moves to nearer or farther corners of the hypercube depending on which bits are flipped in the position vector. The cognitive term of Equation (1) is replaced by (pbest_i(t) xor p_i(t)), where the (xor) operator sets a bit to 1 when the corresponding bits in (p_i) and (pbest_i) differ and to 0 otherwise; for example, if p_i = 10011 and pbest_i = 00011, the distance between (p_i) and (pbest_i) is 10000. The social term of the velocity equation and the position update equation are calculated in the same way. In Equation (3), the three terms, inertia, cognitive and social, are combined into one vector using the (or) operator, and the parameters (α) and (β) are used to control the convergence speed of the algorithm.
In this approach, the velocity and position of each particle are generated randomly at the first iteration as n-bit binary strings; then the best position of each particle and the global best position are obtained by evaluating the fitness of each one, after which the particle's velocity and position are updated using Equations (3) and (4). It is not clear how (ω), (α) and (β) in Equation (3) actually work, as they are presented in (Ye et al., 2006) as parameters where the value of (ω) is generally set to less than 1.0 and the values of (α) and (β) are both equal to 1.94.
As mentioned above, the (or) operator is used to combine the three terms of Equation (3) into one vector. One of these terms is the inertia effect, which depends essentially on the value of the velocity. If the velocity value at iteration (t) has more ones than zeros, it may cause the new value of the velocity to contain only ones, making the velocity constant in all subsequent iterations and thus without effect. For example, let v = a + b + c, with a = 1110, b = 1010 and c = 1001. Then v(t + 1) = 1111. Also, let p(t) = 0001. Then p(t + 1) = 0001 xor 1111 = 1110. When calculating v(t + 2), it remains equal to 1111 because of the (or) operator, so p(t + 2) = 1110 xor 1111 = 0001. This means the new position has only two possible values, 0001 or 1110; in short, it means loss of diversity, and the new position does not actually depend on all the terms of the velocity equation.
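A few lines of code reproduce the example above and show why the or-combined velocity saturates: once a bit of the velocity is 1, no further OR can ever clear it.

```python
def or_combine(*terms):
    """Bitwise OR of equal-length bit lists (the 'or' in Equation (3))."""
    return [int(any(bits)) for bits in zip(*terms)]

def xor(a, b):
    """Bitwise XOR of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

a, b, c = [1, 1, 1, 0], [1, 0, 1, 0], [1, 0, 0, 1]
v = or_combine(a, b, c)        # 1111: the OR of the three terms
p = xor([0, 0, 0, 1], v)       # p(t+1) = 0001 xor 1111 = 1110
v2 = or_combine(v, a, b)       # still 1111: OR can never clear a set bit
p2 = xor(p, v2)                # p(t+2) = 1110 xor 1111 = 0001
```

From this point on the position can only oscillate between 0001 and 1110, which is exactly the loss of diversity described in the text.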

Immune Clonal Selection
The Artificial Immune System is inspired by the natural immune system, in which human beings and animals are protected (using antibodies) from intrusion by substances (antigens). Clonal Selection is a type of adaptive immune response which is directed against a specific antigen and involves two major types of lymphocytes: B-cells (white blood cells responsible for producing antibodies) and T-cells (white blood cells carrying receptors responsible for detecting antigens), which take part in the process of identifying and removing antigens. The basic idea of Clonal Selection, as shown in Fig. 1 (De Castro and Timmis, 2002), is based on the proliferation of the activated B-cells that best match a specific antigen. Those B-cells can be changed in order to achieve a better match. A clonal selection algorithm takes into consideration the maintenance of a memory set, the death of cells that cannot recognize an antigen or match it poorly and the relation between re-selection of the clones and their affinity. The main features of the Clonal Selection theory are (Burnet, 1978):

• The new cells are copies of their parents, exposed to a mutation mechanism with high rates
• Newly differentiated lymphocytes carrying self-reactive receptors are eliminated
• Proliferation and differentiation occur when mature cells come into contact with antigens

NEW PARTICLE SWARM OPTIMIZATION WITH IMMUNITY CLONAL SELECTION ALGORITHM
This section presents the new Binary Particle Swarm Optimization with Immunity-Clonal Algorithm (NPSOCLA). The algorithm combines a modified Binary Particle Swarm Optimization algorithm, the clonal selection algorithm and a subset of a random population with the aim of achieving a balance between exploration and exploitation. The proposed algorithm is explained in two parts as follows.

New Binary Particle Swarm Optimization (NBPSO)
In NBPSO, the position is updated without using the velocity. The particle's step size toward its best position and the global best position is controlled using logical operators. The NBPSO works as follows.

Representation
The population is initialized randomly, where each particle (p_i) is represented as a binary position vector.

Position Update Equation
The particle's position is updated by the following equation:

p_i(t+1) = c1 r1 (diff1) or/and c2 r2 (diff2)   (5)

Where:
diff1 = (pbest_i xor p_i) = The differing bits between the particle's best position and the particle's position, obtained by the xor operator
diff2 = (gbest_i xor p_i) = The differing bits between the global best position and the particle's position, obtained by the xor operator
r1 = A random number between zero and one, used to apply a single-point mutation to diff1
r2 = A random number between zero and one, used to apply a single-point mutation to diff2
c1 = (No. of ones in diff1)/n1 = The step size the particle takes toward its best position, where n1 is a number between zero and the number of ones in diff1

c2 = (No. of ones in diff2)/n2 = The step size the particle takes toward the global best position, where n2 is a number between zero and the number of ones in diff2
or/and = A logical operator used to combine the two terms of Equation (5) into one binary vector (the new position); the choice between (or) and (and) depends on the type of the problem
Pseudo code of the new binary particle swarm algorithm:

1. Initialize the position for each particle in the swarm
2. While stopping criteria not met do
3.   Calculate fitness value
…
7.   Choose the best of all best positions as gbest
…
13.  For i = 1 …
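One possible reading of the position update in Equation (5) can be sketched as follows. How the c and r factors select a subset of the differing bits is our interpretation of the paper's description, and the single-point mutation driven by r1 and r2 is omitted for brevity:

```python
import random

def step_term(p, best, n):
    """One term of Equation (5): xor marks the bits where p and best
    differ (diff); a random subset of those bits, of size
    (ones in diff) // n, is kept as the step toward `best`."""
    diff = [pi ^ bi for pi, bi in zip(p, best)]
    ones = [i for i, d in enumerate(diff) if d]
    term = [0] * len(p)
    for i in random.sample(ones, len(ones) // n):
        term[i] = 1
    return term

def nbpso_update(p, pbest, gbest, n1=2, n2=2, use_or=True):
    """Combine the two step terms with OR (or AND, depending on the
    problem) to form the new position vector, as Equation (5) states."""
    t1 = step_term(p, pbest, n1)
    t2 = step_term(p, gbest, n2)
    return [(a | b) if use_or else (a & b) for a, b in zip(t1, t2)]
```

With n1 = n2 = 2 (the values used in the experiments), roughly half of the differing bits are taken at each step; when a particle coincides with both pbest and gbest, both diff vectors are all zeros.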

Clonal Selection Algorithms
In the new proposed Binary Particle Swarm Optimization with Immunity-Clonal Algorithm, the Clonal Selection Algorithm (CSA) is applied to the best-fit particles when the global best position does not change for (m) iterations. If the initial population size is P, then CSA is applied to the N = 10*P/100 best-fit particles (pbests). The number of clones generated is given by the following Equation (6):

N_C = round(β × α)   (6)

Where:
N_C = Total number of particles to be cloned from the current particle
α = Fitness ratio of each particle
β = Cloning index; by varying this parameter, the number of clones can be regulated
The new set C of N_C cloned particles is then put through a mutation process in such a way that the best-fit clone undergoes the least mutation. This is done by the following Equation (7):

M = round(C × (1 - α))   (7)

Where:
M = Number of bits to be flipped in the cloned particle
α = Fitness ratio of each particle
C = Mutating index; by varying this parameter, the amount of mutation can be regulated
Cloning and mutation are applied to the best-fit particles of NBPSO to increase the exploration potential of the algorithm in the vicinity of the fittest particles and in regions distant from the less fit particles in the search space.

Pseudo Code of Clonal Selection Algorithm:

1. Create a population of the best pbests (best particles) in the swarm population.
2. Create n clones from each particle, where n is proportional to the fitness of the particle.
3. Mutate each clone inversely proportionally to its fitness.
4. Calculate the fitness of the cloned particles.
5. Pick the best of them to be nominated into the next generation without redundancy.
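The cloning and mutation steps can be sketched as follows. The concrete forms used for Equations (6) and (7), clones proportional to the fitness ratio and bit flips inversely proportional to it, are our reading of the paper's description, and all names are illustrative:

```python
import random

def clone_and_mutate(particles, fitnesses, beta=22, mut_index=7):
    """Clone each best-fit particle in proportion to its fitness ratio
    (alpha) and mutate each clone inversely to that ratio, so the
    fittest clones change least. Fitnesses are assumed positive."""
    best = max(fitnesses)
    out = []
    for p, f in zip(particles, fitnesses):
        alpha = f / best                                 # fitness ratio in (0, 1]
        n_clones = max(1, round(beta * alpha))           # Equation (6), our reading
        n_flips = round(mut_index * (1.0 - alpha))       # Equation (7), our reading
        for _ in range(n_clones):
            clone = p[:]
            for i in random.sample(range(len(clone)), min(n_flips, len(clone))):
                clone[i] ^= 1                            # flip the selected bits
            out.append(clone)
        out.extend([])
    return out
```

The fittest particle has alpha = 1, so its clones receive zero bit flips and survive unchanged, exactly the "least mutation for the best-fit clone" behavior described above.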

New Particle Swarm Optimization with Clonal Selection Algorithm Outline (NPSOCLA)
This section shows how the new particle swarm optimization, the clonal selection algorithm and a subset of a new random population are combined.
Pseudo code of NPSOCLA:

1. Initialize each particle with random position (initialize population of size P)
2. Initialize max-n iterations
3. While n < max-n iterations
4. {
5.   For i = 1 : population size
6.   {
7.     Calculate fitness value
8.   }
9.   Obtain pbests and gbest
10.  If gbest doesn't change for n times
11.  {
12.    Go to the clonal selection algorithm with the best pbests in the swarm population without redundancy
13.    New population (size P) = select the best individuals from (swarm population + newly generated random population + the clonal selection population)
14.    Go to the new binary particle swarm algorithm
15.  }
16.  Else
17.    Update each particle position according to Equation (5)
18. }
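Step 13 of the outline, rebuilding the population when gbest stagnates, can be sketched as below (the function name and signature are illustrative; the clonal output is assumed to come from the clonal selection algorithm above):

```python
import random

def refresh_population(swarm, clones, fitness, pop_size, n_random, n_bits):
    """When gbest stagnates, the next population is the best pop_size
    individuals drawn from the current swarm, the clonal-selection
    output and a batch of freshly generated random particles."""
    randoms = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(n_random)]
    pool = swarm + clones + randoms
    pool.sort(key=fitness, reverse=True)  # best individuals first
    return pool[:pop_size]
```

Injecting random particles alongside the mutated clones is what restores diversity: the clones explore near the fittest positions while the random individuals probe distant regions of the search space.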

EXPERIMENTAL RESULTS
To validate the feasibility and effectiveness of the proposed approach, the proposed algorithm was applied to several instances of the 0/1 Multidimensional Knapsack Problem (0/1 MKP) found in (Beasley, 2012). The 0/1 Multidimensional Knapsack Problem is an NP-hard problem (Garey and Johnson, 1979). It can be defined as follows: there are m knapsacks with maximum capacities C_i. All knapsacks have to be filled with the same n objects. Each object has a profit P_j and a weight w_ij that differs from one knapsack to another. The goal is to maximize the profit without violating the constraints. The 0/1 MKP can be formulated as Equations (8) and (9):

maximize  Σ_{j=1..n} P_j x_j   (8)

subject to  Σ_{j=1..n} w_ij x_j ≤ C_i,  i = 1, …, m;  x_j ∈ {0, 1}   (9)

Solutions to the 0/1 MKP represented as binary vectors may be infeasible, because one of the knapsack constraints may be violated in the following two cases:

• When initializing the population with random solutions (random positions)
• When updating the solutions with Equation (5)

So, each solution must satisfy the m constraints of the knapsack to be accepted as a feasible solution. In our new algorithm, the following technique, based on ideas from the greedy algorithm (Kohli et al., 2004) and the Check and Repair Operator (CRO) (Labed et al., 2011), is used to convert an infeasible solution into a feasible one:

1. Calculate the profit ratio R_ij = P_i/W_ij for every item in every knapsack.
2. Compute the maximum value of the profit ratio R_i = max{P_i/W_ij} for every item.
3. Sort the items in ascending order of R_i.
4. Remove the item with the lowest value of R_i from the item set (i.e., the corresponding bit which is 1 becomes 0).

5. Repeat Step 4 until a feasible solution is achieved.
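The repair steps above can be sketched as follows (a minimal illustration assuming positive weights; `weights[j][i]` denotes the weight of item i in knapsack j):

```python
def repair(x, profits, weights, capacities):
    """Greedy CRO-style repair: while any knapsack constraint is
    violated, drop the selected item with the lowest max profit ratio
    R_i = max_j(P_i / W_ji)."""
    m, n = len(capacities), len(profits)

    def infeasible(sol):
        # True if any knapsack's total weight exceeds its capacity
        return any(sum(weights[j][i] * sol[i] for i in range(n)) > capacities[j]
                   for j in range(m))

    # Step 2: the best profit ratio each item achieves over all knapsacks
    ratio = [max(profits[i] / weights[j][i] for j in range(m)) for i in range(n)]
    x = x[:]
    while infeasible(x):
        # Steps 3-4: remove the selected item with the lowest ratio
        worst = min((i for i in range(n) if x[i]), key=lambda i: ratio[i])
        x[worst] = 0
    return x
```

Because items are dropped in ascending order of their best profit ratio, the repair sacrifices the least profitable weight first, which mirrors the greedy intuition cited above.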
The proposed algorithm was implemented using Visual Studio 2010 (.NET 4) with the following parameters: 100 particles, 100 iterations, n1 = n2 = 2, a cloning index (β) equal to either 55 or 22 and a mutating index (C) equal to 22 or 7 respectively with the values of (β). Two sets of experiments were performed. First, the proposed algorithm was tested with the logical operator in Equation (5) set to (or) and then to (and). In the second set of experiments, the obtained results were compared with the solutions reported in (Khuri et al., 1994; Hembecker et al., 2007). Tables 1 and 2 show the experimental results of the proposed algorithm on some instances taken from OR-Library (Beasley, 2012). The first column indicates the name of the problem; the second, the number of knapsacks (M); the third, the number of objects (N); the fourth, the best-known solution; the fifth, the best result obtained by the New Particle Swarm Optimization with Clonal Selection algorithm; the sixth, the number of times the algorithm reaches the best-known solution (#max); and the seventh, the average obtained over all 100 runs by NPSOCLA.

We can deduce from Tables 1 and 2 that NPSOCLA found the optimum solution for 53 of the 58 test cases. It should be noted that there are five problems (sento2, knap50, weish22, weing7, weing8) for which it does not reach the optimum solution but comes very close to it. Table 3 and Fig. 2 show a comparison, in terms of best solution, between the exact (optimal) solutions, the proposed algorithm and the PSO algorithm of (Hembecker et al., 2007). They show that NPSOCLA outperforms the PSO algorithm. Table 4 shows a comparison between a GA from (Khuri et al., 1994) and the New Particle Swarm Optimization with Clonal Selection Algorithm (NPSOCLA). The first two columns (problem instance) report the name of the problem and the maximum obtainable benefit. The following groups of columns report the results achieved by the GA in (Khuri et al., 1994) and by NPSOCLA, respectively. We show the average profit obtained over all 100 runs and, in the column #max, the number of times the best solution is reached. The results show that NPSOCLA outperforms the GA on knap15, knap20 and knap28, while the GA outperforms the proposed algorithm on knap39. On knap50, the GA reaches the optimal solution once, but its average is lower than that of NPSOCLA, which does not reach the optimal solution.

CONCLUSION
In this study, a new binary particle swarm optimization method using a clonal selection algorithm is proposed. The performance of the proposed algorithm was evaluated and compared with PSO and GA on a number of benchmark multidimensional knapsack problem instances. The experimental results show that the proposed algorithm (NPSOCLA) performs well. On the other hand, the difficult task in the proposed algorithm is choosing proper parameters, because the best parameter settings can differ from one problem to another; our future work is therefore directed toward designing a self-adaptive method to control the parameter settings.