Hybridization of Genetic Algorithm with Parallel Implementation of Simulated Annealing for Job Shop Scheduling

Problem statement: The Job Shop Scheduling Problem (JSSP) is regarded as one of the most difficult NP-hard combinatorial problems. The problem consists of determining the most efficient schedule for jobs that are processed on several machines. Approach: In this study a Genetic Algorithm (GA) integrated with a parallel version of the Simulated Annealing (SA) algorithm is applied to the job shop scheduling problem. The proposed algorithm is implemented in a distributed environment using the Remote Method Invocation concept. A new genetic operator and a parallel simulated annealing algorithm are developed for solving job shop scheduling. Results: The implementation examines the convergence and effectiveness of the proposed hybrid algorithm. The proposed system is tested on well-known JSSP benchmark problems to measure the quality of its solutions. Conclusion/Recommendations: The empirical results show that the proposed genetic algorithm with simulated annealing achieves better solutions than the individual genetic or simulated annealing algorithms.


INTRODUCTION
Meta-heuristics are used to solve computationally hard optimization problems. A metaheuristic consists of a high-level algorithm that guides the search using other, more specific methods. Metaheuristics can be used as standalone approaches for solving hard combinatorial optimization problems, but this standalone view has changed drastically and the attention of researchers has shifted to another type of high-level algorithm, namely hybrid algorithms. At least two issues have to be considered when combining more than one metaheuristic: (a) how to choose the meta-heuristic methods and (b) how to combine the chosen methods into a new hybrid approach. Unfortunately, there are no theoretical foundations for these issues. For the former, different classes of search algorithms can be considered for hybridization, such as exact methods, simple heuristic methods and metaheuristics. Moreover, meta-heuristics themselves are classified into local search based methods, population based methods and other classes of nature-inspired meta-heuristics. Therefore, in principle, one could combine any methods from the same class or from different classes. Our hybrid approach combines Genetic Algorithms (GAs) and Simulated Annealing (SA): roughly, the hybrid algorithm runs the GA as the main algorithm and calls an SA procedure to improve individuals of the population.
The rest of the paper is organized as follows. The description of the JSSP follows this introduction, and a discussion of the literature review comes next. In the fourth part, the GA and SA methodologies for job shop scheduling are given. Finally, the implementation of HGAPSA for the JSSP is presented together with the algorithm, the experimental results, a discussion of the proposed method, and the conclusion and future enhancements.

Problem description: Let O be the set of operations, where each operation u must run without interruption on a designated machine M(u) for a processing time λ(u). On O define A, a binary relation representing precedence between operations: if (v, u)∈A then u has to be performed before v. A schedule is a function S: O→IN∪{0} that defines a start time S(u) for each operation u. A schedule S is feasible if Eq. 1-3 hold:

S(u) ≥ 0 for all u∈O (1)
S(v) ≥ S(u) + λ(u) for all (v, u)∈A (2)
S(v) ≥ S(u) + λ(u) or S(u) ≥ S(v) + λ(v) for all u ≠ v with M(u) = M(v) (3)

The length of a schedule S is given by Eq. 4:

len(S) = max{S(u) + λ(u) | u∈O} (4)

The goal is to find an optimal schedule: a feasible schedule of minimum length, min(len(S)).
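As a minimal sketch (not the paper's code), the feasibility conditions of Eq. 1-3 and the schedule length of Eq. 4 can be checked directly, assuming start times, processing times and machine assignments are held in dictionaries keyed by operation:

```python
def is_feasible(S, lam, M, A):
    """Check Eq. 1-3 for schedule S with processing times lam,
    machine assignment M and precedence pairs A = [(v, u), ...]."""
    ops = S.keys()
    # Eq. 1: no operation starts before time zero
    if any(S[u] < 0 for u in ops):
        return False
    # Eq. 2: if (v, u) in A, u must finish before v starts
    if any(S[v] < S[u] + lam[u] for (v, u) in A):
        return False
    # Eq. 3: operations sharing a machine must not overlap
    for u in ops:
        for v in ops:
            if u != v and M[u] == M[v]:
                if not (S[v] >= S[u] + lam[u] or S[u] >= S[v] + lam[v]):
                    return False
    return True

def makespan(S, lam):
    # Eq. 4: schedule length = latest completion time
    return max(S[u] + lam[u] for u in S)
```

For example, two operations on the same machine with the second starting only after the first completes form a feasible schedule, while overlapping them violates the machine capacity constraint.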
An instance of the JSS problem can be represented by means of a disjunctive graph G = (O, A, E). Here O is the vertex set representing the operations, A is the set of conjunctive arcs representing precedence between operations and the edges in E = {(u, v) | u, v∈O, u≠v, M(u) = M(v)} represent the machine capacity constraints. Each vertex u has a weight equal to its processing time λ(u). Consider a benchmark JSSP instance with four jobs, each with three different operations, and three different machines. The operation sequence, machine assignment and processing times are given in Table 1.
Based on the above benchmark problem, we create a matrix G, in which row i gives the processing order of the operations of job J_i, and a matrix P, in which row i gives the processing times of J_i for its operations. The operation of job i on machine j is denoted O_ij and its processing time λ_ij. If O_ik → O_ij, i.e., O_ik immediately precedes O_ij within job i, then C_ij = C_ik + λ_ij gives the completion time of O_ij. The main objective is to minimize C_max, calculated as Eq. 5:

C_max = max{C_ij} (5)

The disjunctive graph of the above benchmark job scheduling problem is shown in Fig. 1, in which vertices represent the operations. Precedence among operations of the same job is represented by conjunctive arcs, drawn as dotted directed lines; constraints among operations of different jobs on the same machine are represented by disjunctive arcs, drawn as undirected solid lines. Two additional vertices S and E represent the start and end of the schedule.
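As an illustration of the completion-time recursion, a semi-active schedule can be built from the two matrices by dispatching jobs in a fixed priority order; the dispatching rule and data below are illustrative only, not the paper's benchmark:

```python
def schedule_makespan(G, P, priority):
    """G[i][k]: machine of the k-th operation of job i;
    P[i][k]: its processing time; priority: job dispatch order."""
    n_jobs = len(G)
    job_ready = [0] * n_jobs          # completion time of each job's last op
    mach_ready = {}                   # time each machine becomes free
    for i in priority:                # dispatch whole jobs in priority order
        for k in range(len(G[i])):
            m = G[i][k]
            start = max(job_ready[i], mach_ready.get(m, 0))
            job_ready[i] = start + P[i][k]   # C_ij = start + processing time
            mach_ready[m] = job_ready[i]
    return max(job_ready)             # C_max = latest completion time
```

For a toy 2-job, 2-machine instance with G = [[0, 1], [1, 0]] and P = [[2, 3], [2, 2]], dispatching job 0 first yields a makespan of 9.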
The Gantt chart of the above benchmark job scheduling problem is shown in Fig. 2. A Gantt chart is a simple graphical representation technique for job scheduling: it displays the schedule and makes it easy to evaluate makespan, idle time, waiting time and machine utilization.
Literature review: Many researchers have worked on the job shop scheduling problem. Garey et al. (1976) were the first to introduce job shop scheduling problems and proved that they are NP-hard; added flexibility increases the complexity beyond the classical job shop. Some researchers, such as Brandimart (1993) and Paulli (1995), have used dispatching rules for solving flexible job shop scheduling problems. Ram et al. (1996) applied a parallel simulated annealing to job shop scheduling, but the same temperature is maintained on all the machines. Bozejko et al. (2009) proposed a parallel simulated annealing for job shop scheduling, but the same sequential algorithm is run on more than one machine in parallel. Ramkumar et al. (2012) proposed real-time fuzzy logic for the job shop scheduling problem; the objective of the JSSP is to find the optimal schedule with minimum makespan, but this result is not clearly shown by the authors. Thamilselvan and Balasubramanie (2011) used various crossover strategies for a genetic algorithm for the JSSP and an integration of the genetic algorithm with Tabu Search; these two methods were efficient for small JSSP instances. Mohamed (2011) proposed a genetic algorithm for the JSSP, but it is efficient only for a small number of jobs. A ratio scheduling algorithm to solve the allocation of jobs on the shop floor was proposed by Hemamalini et al. (2010); it is most efficient on benchmark instances whose due date is less than half of the total processing time.

Genetic algorithm:
Genetic algorithms are probabilistic meta-heuristic techniques that may be used to solve computationally hard optimization problems. They are based on the genetic processes of biological chromosomes. Over many generations, natural populations evolve according to the principles of natural selection, i.e., survival of the fittest, first clearly stated by Charles Darwin in The Origin of Species. The process starts from an initial population filled with chromosomes in different orders. A chromosome consists of a collection of genes; each gene represents a job, and the position of the gene determines the job's sequence in the schedule. The GA uses crossover and mutation operations to generate a new population; through crossover, the GA explores the neighborhood for new feasible solutions.
A typical genetic algorithm is illustrated in Fig. 3. It first creates an initial population consisting of randomly generated collections of genes. New solutions are generated by applying genetic operations such as crossover, mutation and selection, and each individual in the population is then evaluated. The best solutions are carried into the next generation, and these steps are repeated until the termination condition is satisfied: a GA terminates after a certain number of iterations or when a certain fitness value has been reached. The structure of a genetic algorithm for the scheduling problem can be divided into four parts: the choice of representation of the individuals in the population; the determination of the fitness function; the design of the genetic operators; and the determination of the probabilities controlling the genetic operators. Yamada and Nakano (1997) implemented a GA for job shop scheduling problems.
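The loop of Fig. 3 can be sketched as follows; the permutation encoding, truncation selection and operators here are simple illustrative choices, not the paper's exact operators:

```python
import random

def genetic_algorithm(cost, n_genes, pop_size=20, generations=100,
                      p_mut=0.1, seed=None):
    """Minimise cost over permutations of n_genes items."""
    rng = random.Random(seed)
    # initial population of random permutations
    pop = [rng.sample(range(n_genes), n_genes) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # selection: keep better half
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)   # one-point order crossover
            child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
            if rng.random() < p_mut:          # swap mutation
                i, j = rng.sample(range(n_genes), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)
```

The crossover keeps a prefix of one parent and fills the rest in the other parent's relative order, so every child remains a valid permutation.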

Sequential Simulated Annealing (SSA):
SSA belongs to the class of local search algorithms (Eglese, 1990). The SA algorithm is inspired by the metal annealing process, in which the temperature is gradually reduced to reach an optimal state. SA searches the neighborhood of the current solution for a better one and has been applied to many combinatorial problems; researchers such as Fattahi et al. (2007) and Zandieh et al. (2008) used SA in the flexible job shop environment. The SA algorithm generates an initial solution randomly. A neighbor of this solution is then generated by a suitable mechanism and the change in the cost function is calculated. If a decrease in the cost function is obtained, the current solution is replaced by the generated neighbor. If the cost fun of the neighbor is greater, the newly generated neighbor replaces the current solution with the acceptance probability given in Eq. 6:

P(accept S[j]) = exp(-(fun(S[j]) - fun(S[i]))/T) (6)

where S[j] and S[i] are the generated state and the present state respectively, fun is the cost function and T is the temperature controlling the annealing process. The equation implies that small increases in fun are more likely to be accepted than large increases and that, when T is high, most newly generated neighbors are accepted, whereas most cost-increasing transitions are rejected as T approaches zero. Initially the temperature is kept high; the algorithm then generates a certain number of neighbors at each temperature while the temperature parameter is gradually dropped. This procedure leads to a near-optimal solution. The typical SSA procedure is shown in Fig. 4.
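The acceptance rule of Eq. 6 and the cooling loop can be sketched as follows; cost and neighbour are assumed to be problem-specific callables supplied by the caller, and the parameter values are illustrative:

```python
import math
import random

def accept(delta, T, rng):
    # always accept improvements; accept a worsening move with exp(-delta/T)
    return delta <= 0 or rng.random() < math.exp(-delta / T)

def simulated_annealing(cost, neighbour, s0, T_start=100.0, T_end=0.1,
                        alpha=0.95, iters_per_T=50, seed=None):
    rng = random.Random(seed)
    s, best = s0, s0
    T = T_start
    while T > T_end:
        for _ in range(iters_per_T):
            cand = neighbour(s, rng)
            if accept(cost(cand) - cost(s), T, rng):
                s = cand
                if cost(s) < cost(best):
                    best = s               # track best state seen so far
        T *= alpha                         # geometric cooling schedule
    return best
```

At high T nearly every move is accepted, so the search wanders freely; as T falls, the loop degenerates into greedy descent, matching the behaviour described above.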
For SSA, the initial schedule for the job shop scheduling problem is generated from a disjunctive graph G. The Giffler and Thompson (1960) algorithm is used to find the initial schedule. The algorithm builds a schedule over all n operations and m machines, the criteria employed being the earliest starting time and the processing time of each operation: at each stage, among the operations not yet included in the partial schedule, the one with the minimum time is chosen. When all operations are included, the partial schedule becomes a complete schedule, which can be represented as a digraph.
After obtaining the digraph with all the operations, the earliest and latest start times of each operation are calculated; the critical path is used to find them. The earliest (equivalently, latest) start time of the last operation is the makespan, which is the cost of the schedule (Krishna et al., 1995).
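The earliest/latest start-time computation on the schedule digraph can be sketched as a forward longest-path pass followed by a backward pass from the makespan; the names below are illustrative, and the operations are assumed to be supplied in topological order with successor lists:

```python
def critical_path_times(nodes, succ, lam):
    """nodes: operations in topological order; succ: successor lists;
    lam: processing times. Returns earliest/latest starts, makespan
    and the critical (zero-slack) operations."""
    earliest = {u: 0 for u in nodes}
    for u in nodes:                                   # forward pass
        for v in succ.get(u, []):
            earliest[v] = max(earliest[v], earliest[u] + lam[u])
    makespan = max(earliest[u] + lam[u] for u in nodes)
    latest = {u: makespan - lam[u] for u in nodes}
    for u in reversed(nodes):                         # backward pass
        for v in succ.get(u, []):
            latest[u] = min(latest[u], latest[v] - lam[u])
    critical = [u for u in nodes if earliest[u] == latest[u]]
    return earliest, latest, makespan, critical
```

Operations whose earliest and latest start times coincide have zero slack and together form the critical path used to build the CB neighborhood.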
The critical path in the digraph is obtained after evaluating the cost of the schedule; it can be defined as a set of edges forming a longest path from the first vertex to the last vertex, so that every operation on it has zero slack. The neighborhood of a schedule S is the set of schedules obtained by applying a transition operator to S (Li-Ning et al., 2009). The transition operator permutes the order of operations in a critical block by moving an operation to the beginning or end of the critical block, thus forming the CB neighborhood. In this neighborhood, the distance between S and an element of N(S) varies with the position of the moved operation. It has been shown experimentally (Yamada et al., 1994) that SA using the CB neighborhood is more powerful than SA using the AS neighborhood; the CB neighborhood may therefore as well be investigated in the GA context. Figure 5 illustrates how the two transition operators work.

Parallel simulated annealing: For solving the job shop scheduling problem, two approaches are adopted in the SA algorithm. The first approach assigns the operations to machines in sequential order. The second approach processes the problem in two levels to reduce its complexity: in the first level the operations are assigned to machines and in the second level the operations are scheduled on the machines. The second approach is known as the parallel implementation of job shop scheduling.
The main objective of the proposed algorithm is to minimize the makespan, and we use the second approach for solving the job shop scheduling problem. The procedure SA_Parallel() generates an initial schedule S[i]; the algorithm then runs in parallel on different machines. Let N be the number of iterations at each temperature of the SA algorithm, S[j] the neighborhood of S[i], B_c the best known solution cost and B_s the best known schedule of the JSSP. The scheduling algorithm mentioned earlier is used to schedule operations on machines: the generated S[j] is its input, and the algorithm computes the cost of S[j] as C_S[j]. The cost of the new schedule is compared with the cost of the initial schedule as the algorithm proceeds. The implementation uses a server machine and a set of client machines. The server node generates the initial schedule S[i], the processing times of all the operations and the machine order for each job; it then sends the initial schedule and a different temperature range to each of the client machines on the network. Each client machine has its own maximum temperature T_s and minimum temperature T_e. The client machines execute the algorithm over their temperature ranges and send their solutions to the server machine, which selects the best solution with the minimal makespan.
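The server/client scheme can be sketched with a thread pool standing in for the RMI setup; sa_client and sa_server are illustrative names rather than the paper's code, and cost and neighbour are assumed problem-specific callables:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def sa_client(cost, neighbour, s0, T_start, T_end, alpha=0.9, seed=None):
    """One client: anneal s0 over the temperature band [T_end, T_start]."""
    rng = random.Random(seed)
    s, best = s0, s0
    T = T_start
    while T > T_end:
        for _ in range(30):                   # N moves per temperature
            cand = neighbour(s, rng)
            delta = cost(cand) - cost(s)
            if delta <= 0 or rng.random() < math.exp(-delta / T):
                s = cand
                if cost(s) < cost(best):
                    best = s
        T *= alpha
    return best

def sa_server(cost, neighbour, s0, bands, seed=0):
    """Server: dispatch one client per (T_start, T_end) band, keep the best."""
    with ThreadPoolExecutor(max_workers=len(bands)) as pool:
        futures = [pool.submit(sa_client, cost, neighbour, s0,
                               hi, lo, 0.9, seed + k)
                   for k, (hi, lo) in enumerate(bands)]
        results = [f.result() for f in futures]
    return min(results, key=cost)             # minimal-cost schedule wins
```

Giving each client a different temperature band means high-temperature clients explore broadly while low-temperature clients refine locally, and the server simply keeps the best of their answers.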

Hybrid Genetic Algorithm with Parallel Simulated Annealing (HGAPSA): Parallel implementation of SA generates a better solution with faster convergence. Initially, n client machines process the SA algorithm with different initial schedules. After a fixed number of iterations, the client machines exchange their results with the server machine to obtain the best schedule.
In the genetic algorithm, an initial population consisting of a set of schedules is selected and the schedules are evaluated. Relatively more effective schedules are selected to have more offspring, which are in some way related to the original solutions. The performance of the genetic algorithm depends on the crossover operation: if it is properly chosen, the final population will contain better solutions, which the simulated annealing algorithm then aims to refine. For the parallel SA implementation, we need good initial solutions for fast convergence, and the GA produces a good set of initial solutions. The operator used for generating offspring in job shop scheduling works on the processing order of jobs on the different machines of the two parent solutions. We introduce a new crossover strategy, named Unordered Subsequence Exchange Crossover (USXX), in which children inherit subsequences on each machine as far as possible from their parents. USXX creates new children even when a subsequence of parent P1 does not appear in the same order in parent P2. The algorithm for USXX is as follows:

Step 1: Generate two random parent individuals, P1 and P2, each a sequence of all operations.
Step 2: Generate two child individuals, C1 and C2.
Step 3: Select a random subset of operations (genes) from P1 and copy it into C1.
Step 4: Starting from the first crossover point of P1, locate the elements in P2 that have already been copied into C1.
Step 5: The remaining operations of P2 that are not in the subset are filled into C1 so as to maintain their relative ordering.

The above GA and PSA algorithms are implemented in a network of one server and five workstations. The server node uses the GA to generate n initial schedules and assigns them to the n client machines as initial solutions. The genetic algorithm starts with an initial schedule and performs the USXX crossover operation to update the population, repeating this process a number of times. Each client machine uses the SA_Parallel() procedure to find its best schedule, which is then sent to the server machine.
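The steps above can be sketched for one child (the second child swaps the parents' roles); this is an illustrative reading of USXX, not the authors' code:

```python
import random

def usxx_child(p1, p2, rng):
    """Build one USXX child: copy a random subsequence of p1, then fill
    the remaining positions from p2 in p2's relative order."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))    # random crossover points
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]              # Step 3: copy subset of P1
    taken = set(child[i:j + 1])
    fill = [g for g in p2 if g not in taken]  # Steps 4-5: rest from P2,
    k = 0                                     # preserving relative order
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child
```

Because the fill list preserves P2's ordering, the child inherits a subsequence from each parent even when the copied genes appear in a different order in P2, and it always remains a valid permutation of the operations.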

RESULTS AND DISCUSSION
The performance of the proposed HGAPSA algorithm is compared with the Genetic Algorithm (GA), the Sequential Simulated Annealing (SSA) algorithm and Parallel Simulated Annealing (PSA) on standard JSSP test instances: the Lawrence (1984) instances LA30-LA40 and the Storer et al. (1992) instances SWV11-SWV20. Table 2 compares the makespan values produced by the different algorithms for problem instances LA30-LA40 (Lawrence, 1984). Column 1 gives the problem instance, Column 2 the number of jobs, Column 3 the number of machines and Column 4 the optimal value for each problem; Columns 5, 6, 7 and 8 give the results from GA, SSA, PSA and HGAPSA respectively. The table shows that the proposed hybrid algorithm succeeds in obtaining the optimal solutions for all the problems. Figure 6 shows the average makespan values generated by GA, SSA, PSA and HGAPSA for the Lawrence (1984) problem instances; SSA produces the worst results and HGAPSA is better than the other algorithms. Figure 7 compares the Average Relative Error of all the methods and clearly shows that the Average Relative Error for HGAPSA is 0.13. Table 3 compares the makespan values produced by the different algorithms for problem instances SWV11-SWV20 (Storer et al., 1992), with the same column layout as Table 2; again the proposed hybrid algorithm obtains the optimal solutions for all the problems. Figure 8 shows the average makespan values generated by GA, SSA, PSA and HGAPSA for the Storer et al. (1992) problem instances.
It also shows that SSA produces the worst results and that HGAPSA is better than the other algorithms. Figure 9 compares the Average Relative Error of all the methods and clearly shows that the Average Relative Error for HGAPSA is 0.17.
Typical runs of the GA, SSA, PSA and HGAPSA on problem instance LA30 (Lawrence, 1984) are illustrated in Fig. 10. The graph shows that the proposed HGAPSA reaches the optimal solution faster than the other methods. For LA30, GA, SSA and PSA never produce the best known solution, whereas HGAPSA produces the optimal solution within 2000 iterations; even at 5000 iterations the other algorithms do not produce an optimal solution.
Typical runs of the GA, SSA, PSA and HGAPSA on problem instance SWV15 (Storer et al., 1992) are illustrated in Fig. 11. The graph shows that the proposed HGAPSA reaches the optimal solution faster than the other methods. For SWV15, GA, SSA and PSA never produce the best known solution, whereas HGAPSA produces the optimal solution within 2500 iterations; even at 6000 iterations the other algorithms do not produce an optimal solution. Table 4 shows the computational times of all the above-mentioned problems, given in brackets with the makespan; for all the problems the proposed algorithm takes the minimum time to reach the optimal value. The average makespan and computational time for LA30-LA40 and SWV11-SWV20 are shown in Table 5 and in Fig. 12, which clearly show that the proposed algorithm produces the minimum makespan with the least computational time.

CONCLUSION
In this study, an integration of the genetic algorithm with a parallel simulated annealing algorithm is implemented for job shop scheduling. The implementation has been carried out in a client-server environment with well-known benchmark problems, and the results of the proposed algorithm are compared with other standard meta-heuristic algorithms. The comparison shows that the proposed algorithm is quite successful on large problems relative to the other algorithms. Although the proposed algorithm produces better results, there is ambiguity in the choice of population size; this problem needs to be addressed in the future. Also, more than two meta-heuristic algorithms may be integrated to improve the exploration of the solution space.