Typical swarm intelligence schemes include Particle Swarm Optimization (PSO), Ant Colony System (ACS), Stochastic Diffusion Search (SDS), Bacteria Foraging (BF), the Artificial Bee Colony (ABC), and so on. Besides applications to conventional optimization problems, swarm intelligence (SI) can be used in controlling robots and unmanned vehicles, predicting social behaviors, enhancing telecommunication and computer networks, etc. Indeed, swarm optimization can be applied to a variety of fields in engineering and the social sciences. In this paper, we review some popular algorithms in the field of swarm intelligence for optimization problems. Overviews and experiments for PSO, ACS, and ABC are given.
Enhanced versions of these algorithms are also introduced. In addition, some comparisons are made between these algorithms.

Keywords: Swarm intelligence (SI), Particle Swarm Optimization (PSO), Ant Colony System (ACS), Artificial Bee Colony (ABC).
1 Introduction

People learn a lot from Mother Nature. Applying the analogy to biological systems with many individuals, or swarms, we are able to handle challenges in algorithms and applications with optimization techniques. In this paper, we focus on an overview of several popular swarm intelligence algorithms, pointing out their concepts, and proposing some enhancements of the algorithms based on the results of our research group. Swarm intelligence, according to [1], is the emergent collective intelligence of groups of simple agents. With swarm intelligence, the developed algorithms need to be flexible to internal and external changes, to be robust when some individuals fail, and to be decentralized and self-organized [2]. In the rest of the paper, we address several popular algorithms based on these concepts, including the particle swarm, ant colony, and bee colony algorithms in Sec. 2
, and we present the improvements of these algorithms based on our existing works in Sec. 3. Selected simulation results and comparisons are provided in Sec. 4. Finally, we conclude this paper in Sec. 5.
2 Swarm Intelligence Algorithms

In this section, we introduce the concept and implementation of several popular algorithms for swarm intelligence optimization, including the particle swarm optimization (PSO), ant colony system (ACS), and artificial bee colony (ABC) algorithms.

2.1 Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [3,4]. It is a relatively new stochastic optimization technique that simulates the swarm behavior of birds flocking.
In PSO, an individual in the swarm, called a particle, represents a potential solution. Each particle has a fitness value and a velocity, and it learns from the experiences of the swarm to search for the global optimum [5]. Traditional PSO is depicted in Fig. 1.
The steps include (1) particle initialization, (2) velocity updating, (3) particle position updating, (4) memory updating, and (5) termination checking. These steps are described as follows.

Fig. 1. Procedures for particle swarm optimization

Initialization. We first decide how many particles are used to solve the problem.
Every particle has its own position, velocity, and best solution. If we use M particles, their positions, best solutions, and velocities can be represented as

X = {x_0, x_1, ..., x_{M-1}}, B = {b_0, b_1, ..., b_{M-1}}, V = {v_0, v_1, ..., v_{M-1}}.

Velocity updating. This step is shown in Eq. (4), where c_1 and c_2 are constants, r_1 and r_2 are random variables in the range from 0 to 1, b_i(t) is the best solution of the i-th particle up to the t-th iteration, and G(t) is the best solution of all particles:

v_i(t+1) = v_i(t) + c_1 r_1 (b_i(t) - x_i(t)) + c_2 r_2 (G(t) - x_i(t)). (4)

To prevent the velocity from becoming too large, we set a maximum value to limit its range as -V_max <= v_i <= V_max.

Position updating, which is processed by Eq. (5):

x_i(t+1) = x_i(t) + v_i(t+1). (5)

Memory updating. If we find a better solution than G(t) in G(t+1), G(t) will be replaced by G(t+1).
Otherwise, there is no change to G(t). These recursive steps continue until we reach the termination condition. From the descriptions above, we can observe that the solutions in PSO are influenced by both the global and the local characteristics during the training procedure.
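Since the text gives no code, the following minimal Python sketch shows how Eqs. (4) and (5), the velocity clamp, and the memory updating fit together. The swarm size, bounds, and the common choice c_1 = c_2 = 2.0 are our assumptions, and the sphere function in the usage line is only an example.

```python
import random

def pso(fitness, dim, m=20, iters=100, v_max=1.0, lo=-5.0, hi=5.0):
    """Minimal PSO sketch following Eqs. (4)-(5) with velocity clamping.
    The coefficients c1 = c2 = 2.0 are assumed common defaults."""
    c1 = c2 = 2.0
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(m)]
    v = [[0.0] * dim for _ in range(m)]
    b = [xi[:] for xi in x]                    # best solution b_i(t) per particle
    g = min(b, key=fitness)[:]                 # global best G(t)
    for _ in range(iters):
        for i in range(m):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (4): learn from the particle's own best and the global best
                v[i][j] += c1 * r1 * (b[i][j] - x[i][j]) + c2 * r2 * (g[j] - x[i][j])
                v[i][j] = max(-v_max, min(v_max, v[i][j]))  # clamp to [-V_max, V_max]
                x[i][j] += v[i][j]             # Eq. (5): position update
            if fitness(x[i]) < fitness(b[i]):  # memory updating for b_i(t)
                b[i] = x[i][:]
        if fitness(min(b, key=fitness)) < fitness(g):       # memory updating for G(t)
            g = min(b, key=fitness)[:]
    return g

# Example usage: minimize the 2-D sphere function.
print(pso(lambda p: sum(t * t for t in p), dim=2))
```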
2.2 Ant Colony Systems

Inspired by the food-seeking behavior of real ants, the ant system [6,7] is a cooperative population-based search algorithm. As each ant constructs a route from nest to food by stochastically following the quantities of pheromone, the intensity of the pheromone laid down biases the path-choosing decisions of subsequent ants. The operation of the ant system can be illustrated by the classical traveling salesman problem (TSP). The TSP seeks a round route covering all cities with minimal total distance. Suppose there are n cities and m ants.
The entire algorithm starts with the initial pheromone intensity set to τ_0 on all edges. In every subsequent ant system cycle, or episode, each ant begins its tour from a randomly selected starting city and is required to visit every city once and only once. The experience gained in this phase is then used to update the pheromone intensity on all edges. The algorithm of the ant system for the TSP is described as follows [7,8]:

(1) Randomly select the initial city for each ant.
The initial pheromone level between any two cities is set to a small positive constant τ_0. Set the cycle counter to 0.

(2) Calculate the transition probability from city r to city s for the k-th ant as

P_k(r, s) = [τ(r, s)] · [η(r, s)]^β / Σ_{u ∈ J_k(r)} [τ(r, u)] · [η(r, u)]^β, if s ∈ J_k(r); 0, otherwise, (6)

where r is the current city, s is the next city, τ(r, s) is the pheromone level between cities r and s, η(r, s) = δ(r, s)^{-1} is the inverse of the distance δ(r, s) between cities r and s, J_k(r) is the set of cities that remain to be visited by the k-th ant positioned on city r, and β is a parameter determining the relative importance of pheromone level versus distance. Select the next visited city s for the k-th ant with probability P_k(r, s).
(3) Repeat Step (2) for each ant until the ants have toured all cities.

(4) Update the pheromone level between cities as

τ(r, s) ← (1 - α) · τ(r, s) + Σ_{k=1}^{m} Δτ_k(r, s), (7)

Δτ_k(r, s) = 1/L_k, if (r, s) ∈ route done by ant k; 0, otherwise, (8)

where Δτ_k(r, s) is the pheromone level laid down between cities r and s by the k-th ant, L_k is the length of the route visited by the k-th ant, m is the number of ants, and 0 < α < 1 is a pheromone decay parameter.

(5) Increment the cycle counter. Move the ants to their originally selected cities and continue Steps (2)-(4) until the behavior stagnates or the maximum number of cycles is reached. Stagnation is indicated when all ants take the same route.
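As a concrete reading of Eq. (6), the sketch below computes the transition weights for one ant and samples the next city by roulette wheel. The matrices tau and delta and the choice β = 2.0 are illustrative assumptions, not values prescribed by the text.

```python
import random

def next_city(r, unvisited, tau, delta, beta=2.0):
    """Sample the next city for an ant at city r per Eq. (6).
    tau[r][u] is the pheromone level, delta[r][u] the distance;
    beta = 2.0 is an assumed, not prescribed, setting."""
    # Weight for each remaining city u: tau(r,u) * eta(r,u)^beta, eta = 1/delta.
    weights = [tau[r][u] * (1.0 / delta[r][u]) ** beta for u in unvisited]
    pick = random.uniform(0.0, sum(weights))   # roulette-wheel draw
    acc = 0.0
    for u, w in zip(unvisited, weights):
        acc += w
        if pick <= acc:
            return u
    return unvisited[-1]                       # guard against rounding error
```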
From Eq. (6) it is clear that the ant system (AS) needs a high level of computation to find the next visited city for each ant. In order to improve the search efficiency and lower the computational complexity, the ant colony system (ACS) was proposed [8,9]. ACS is based on AS, but it updates the pheromone level before moving to the next city (local updating rule), and it updates the pheromone level for the shortest route only after each ant completes its route (global updating rule):

τ(r, s) ← (1 - α) · τ(r, s) + α · (L_gb)^{-1}, if (r, s) ∈ global best route, (9)

where L_gb is the length of the shortest route and α is a pheromone decay parameter.
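The two ACS rules can be sketched as follows. The global rule follows Eq. (9); the local rule is not spelled out in the text, so the form τ(r, s) ← (1 - ρ)·τ(r, s) + ρ·τ_0 used here is an assumption borrowed from the standard ACS literature, with illustrative parameter values.

```python
def local_update(tau, r, s, rho=0.1, tau0=0.01):
    """ACS local updating rule (assumed standard form): applied as soon
    as an ant crosses edge (r, s), before it moves to the next city."""
    tau[r][s] = (1.0 - rho) * tau[r][s] + rho * tau0

def global_update(tau, best_route, best_len, alpha=0.1):
    """ACS global updating rule, Eq. (9): reinforce only the edges on
    the shortest (global best) round route found so far."""
    for r, s in zip(best_route, best_route[1:] + best_route[:1]):
        tau[r][s] = (1.0 - alpha) * tau[r][s] + alpha / best_len
```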
2.3 Artificial Bee Colony (ABC) Optimization

Karaboga proposed the artificial bee colony (ABC) algorithm [10] in 2005, based on inspecting the behaviors of real bees in finding nectar and sharing the information of food sources with the bees in the nest. Three kinds of bees are defined as the artificial agents, and every kind plays a different and important role in the optimization process. The artificial agents are called the employed bee, the onlooker, and the scout, each with distinct responsibilities. The employed bee stays on a food source, which represents a spot in the solution space, and provides its coordinates for the onlookers in the hive for reference. The onlooker bee receives the locations of food sources and selects one of the food sources to gather nectar from.
The scout bee moves in the solution space to discover new food sources. The process of ABC optimization is described as follows:

(1) Initialization. Spray e·n percent of the population into the solution space randomly, and then calculate their fitness values, called the nectar amounts, where e represents the ratio of employed bees to the total population. Once these populations are positioned in the solution space, they are called the employed bees. Evaluate the fitness of the employed bees and take the fitness to be their amount of nectar.

(2) Move the onlookers.
We calculate the probability of selecting a food source by Eq. (10) first,

P_i = F(θ_i) / Σ_{k=1}^{S} F(θ_k), (10)

where θ_i denotes the position of the i-th employed bee, F(·) is the fitness function, S represents the number of employed bees, and P_i is the probability of selecting the i-th employed bee. Then we select a food source to move to by roulette wheel selection, and the onlookers are moved by Eq. (11),

x_{ij}(t+1) = θ_{ij}(t) + φ · (θ_{ij}(t) - θ_{kj}(t)), (11)

where x_i denotes the position of the i-th onlooker bee, t denotes the iteration number, θ_k is a randomly chosen employed bee, j represents the dimension of the solution, and φ(·) produces a series of random variables in the range [-1, 1].

(3) Update the best food source found so far. We record the best fitness value and the position, which are found by the bees.

(4) Move the scouts.
If the fitness values of the employed bees are not improved within a consecutive number of iterations, called the "limit", those food sources are abandoned, and these employed bees become scouts. The scouts are moved by Eq. (12),

θ_{ij} = θ_j^{min} + r · (θ_j^{max} - θ_j^{min}), (12)

where r is a random number in [0, 1].

(5) Termination checking. If the number of iterations satisfies the termination condition, we terminate the program and output the results; otherwise, go back to Step (2).
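The onlooker and scout movements, Eqs. (10)-(12), admit a short sketch. The helper names and the per-dimension bounds are illustrative assumptions; the fitness values are assumed non-negative so the roulette wheel of Eq. (10) is well defined.

```python
import random

def select_source(fit):
    """Roulette-wheel selection over employed bees per Eq. (10);
    fit[i] = F(theta_i), assumed non-negative."""
    pick, acc = random.uniform(0.0, sum(fit)), 0.0
    for i, f in enumerate(fit):
        acc += f
        if pick <= acc:
            return i
    return len(fit) - 1

def move_onlooker(theta_i, theta_k, j):
    """Eq. (11): perturb dimension j of source i using a random source k."""
    phi = random.uniform(-1.0, 1.0)
    x = theta_i[:]
    x[j] += phi * (theta_i[j] - theta_k[j])
    return x

def move_scout(lo, hi):
    """Eq. (12): re-initialize an abandoned source uniformly within the
    per-dimension bounds lo and hi."""
    return [l + random.random() * (h - l) for l, h in zip(lo, hi)]
```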
2.4 Related Algorithms in SI

Due to the limited space in this paper, readers who are interested in related algorithms are suggested to refer to the following: Bacteria Foraging (BF) [11], Cat Swarm Optimization (CSO) [12,13], and Stochastic Diffusion Search (SDS) [14].

3 The Enhanced Swarm Intelligence Algorithms

3.1 Parallel PSO

A parallel computer consists of a large number of processing elements which can be dedicated to solving a single problem at a time. Pipeline processing and data parallelism are two popular parallel processing methods. Pipeline processing separates the problem into a cascade of tasks, where each task is executed by an individual processor, while data parallelism distributes the data amongst all processors, each of which then executes the same procedure on its subset of the data. The latter is the motivation for the parallel PSO.
The computation time of the PSO in Sec. 2.1 can be reduced with a parallel structure. Parallel processing aims at producing the same results achievable with multiple processors, with the goal of reducing the run time. The steps for describing the parallel PSO (PPSO) are the same as those in Sec. 2.1.
But in Step (1), users must define how many groups are needed in the process, because the swarm can be designed to be divided into 2^n sets. The number of groups should be chosen carefully with reference to the size of the problem to be solved. In general, we use four groups for the evolution, and we use eight groups only if the problem is quite large. After several iterations, each group exchanges information in accordance with an explicit rule chosen by the user.
The parallel particle swarm optimization (PPSO) method [15] gives every group of particles the chance to have the global best and the local best solutions of other groups. It increases the chance for particles to find a better solution and to leap out of local optima.

Fig. 2. Communication strategies for PPSO: (a) Strategy 1, with loose correlation; (b) Strategy 2, with strong correlation; (c) Strategy 3, for general-purpose usage.

Three communication strategies for PPSO are presented. The first one deals with loosely correlated parameters in functions. After all the particles have evolved for a period of time (described here as R_1 iterations), communication strategy 1 exchanges the local best solution from one group to the others. Communication strategy 1 is shown in Fig. 2(a). The second communication strategy is a self-adjustment within each group, intended for strongly correlated parameters. In each group, the best particle migrates to its neighborhood to supplant some particles, replacing a deteriorated solution with the group's best one. For ease of implementation, the number of groups in the parallel structure is set to a power of two.
Thus, neighborhoods are defined by a one-bit difference in the binary format of the group index, as sketched below. The second communication strategy is applied every R_2 iterations to replace the poorly performing particles. Communication strategy 2 is shown in Fig. 2(b).
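Because the group count is a power of two, the one-bit neighborhoods can be enumerated by flipping single bits of the group index; the snippet below is our illustrative reading of that rule.

```python
def one_bit_neighbors(g, num_groups):
    """Indices of groups whose binary representation differs from g in
    exactly one bit; num_groups is assumed to be a power of two."""
    bits = num_groups.bit_length() - 1          # e.g. 8 groups -> 3 bits
    return [g ^ (1 << b) for b in range(bits)]

# Example: with 8 groups, group 5 (0b101) neighbors 4 (0b100), 7 (0b111), 1 (0b001).
print(one_bit_neighbors(5, 8))                  # -> [4, 7, 1]
```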
The third communication strategy of PPSO is the combination of communication strategies 1 and 2. The particles must be separated into at least four groups. In communication strategy 3, all particles are separated into two subgroups. Within the subgroups, communication strategies 1 and 2 are imitated. Thus, communication strategy 3 is a general communication strategy for PPSO. The process is shown in Fig. 2(c).
Based on these observations, if the parameters are loosely correlated or independent, communication strategy 1 obtains good results quite quickly. On the contrary, communication strategy 2 achieves higher performance if the parameters are tightly correlated. However, these communication strategies may perform poorly if they are applied in the wrong situation.
Consequently, when the correlation between parameters is unknown, communication strategy 3 is the better choice to apply.
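To make the scheduling of the three strategies concrete, a dispatcher might look like the sketch below. The Group class, the replace-worst behavior, and the periods R1 and R2 are hypothetical stand-ins for the operations described above, not the paper's exact rules; one_bit_neighbors is reused from the earlier snippet.

```python
class Group:
    """Toy stand-in for a PSO subswarm (hypothetical helper)."""
    def __init__(self, particles, fitness):
        self.particles, self.fitness = particles, fitness
    def best(self):
        return min(self.particles, key=self.fitness)
    def replace_worst(self, newcomer):
        worst = max(self.particles, key=self.fitness)
        self.particles[self.particles.index(worst)] = newcomer[:]

def communicate(groups, t, strategy, R1=20, R2=10):
    """Dispatch the three PPSO communication strategies at iteration t."""
    if strategy == 1 and t % R1 == 0:
        # Strategy 1: hand each group's local best to every other group.
        bests = [g.best()[:] for g in groups]
        for i, g in enumerate(groups):
            for j, b in enumerate(bests):
                if i != j:
                    g.replace_worst(b)
    elif strategy == 2 and t % R2 == 0:
        # Strategy 2: each group's best migrates to its one-bit neighbors,
        # supplanting their poorly performing particles.
        for i, g in enumerate(groups):
            for n in one_bit_neighbors(i, len(groups)):
                groups[n].replace_worst(g.best())
    elif strategy == 3:
        # Strategy 3: split into two subgroups and apply strategies 1 and 2.
        half = len(groups) // 2
        communicate(groups[:half], t, 1, R1, R2)
        communicate(groups[half:], t, 2, R1, R2)
```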
3.2 Parallel ACS

Similar to PPSO, we apply the idea of data parallelism to the ant colony system (ACS) in Sec. 2.2 to reduce the running time and obtain better solutions. The parallel ant colony system (PACS), based on the traveling salesman problem, is described as follows:

(1) Initialization. Generate N_j artificial ants for the j-th group, j = 0, 1, ..., G - 1, where G is the number of groups.