Abstract

The artificial bee colony (ABC) algorithm performs well at discovering optimal solutions to difficult optimization problems, but it has weak local search ability and easily becomes trapped in local optima. In this paper, we introduce the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopt the principle of moving particles toward the best solutions from particle swarm optimization to improve the global search ability of the onlooker bees, obtaining a hybrid artificial bee colony (HABC) algorithm. To reach a global optimal solution efficiently, HABC converges rapidly in the early stages of the search process, and its search range contracts dynamically during the late stages. Experimental results on 16 benchmark functions from CEC 2014 show that HABC achieves significant improvements in accuracy and convergence rate compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC algorithms.

1. Introduction

In recent years, a series of traditional methods have been proposed for optimization problems, such as linear programming and dynamic programming. Limited by their huge time complexity, these methods are not suitable for large-scale problems. With the development of biotechnology, it was found that when individuals work together on a complex task, the capability of the group is not a simple sum of the individuals' abilities but a far more complex collective behavior. Consequently, many swarm-intelligence-based optimization methods have been proposed. Inspired by the law of survival of the fittest, Tang et al. [1] studied the genetic algorithm (GA). Inspired by the foraging behavior of ant colonies, Dorigo et al. [2] proposed ant colony optimization (ACO). Inspired by the social behavior of bird flocking, Kennedy and Eberhart [3] proposed particle swarm optimization (PSO). Li et al. [4] proposed the artificial fish swarm algorithm (AFSA). Yang [5] proposed the firefly algorithm (FA), and Fister et al. [6] summarized several improvements of chaos-based firefly algorithms. Motivated by the behaviors of honeybee swarms, the artificial bee colony (ABC) algorithm was first proposed by Karaboga in 2005 [7]. The ABC algorithm has been widely used in many fields, for example, determining the optimal size and locations of shunt capacitors by El-Fergany and Abdelaziz [8], solving the economic lot scheduling problem by Bulut and Tasgetiren [9], segmenting SAR images by Ma et al. [10], enhancing image contrast by Draa and Bouaziz [11], and solving the Leaf-Constrained Minimum Spanning Tree (LCMST) problem by Singh [12].

Despite the success of the ABC algorithm in many applications, it still has several drawbacks. Strong capabilities in both exploration and exploitation are important for population-based optimization algorithms. Although the standard ABC algorithm performs well in exploration, it is weak in exploitation. To improve its performance, researchers have been modifying the ABC algorithm and integrating it with other evolutionary optimization methods. For example, Bansal et al. proposed a self-adaptive ABC algorithm, which adaptively updates the step size and search parameters according to the current fitness values, thereby giving the solutions more chances to improve [13]. Wang et al. presented a multistrategy ensemble ABC algorithm that exploits different characteristics of solution search equations to construct a strategy pool; during the search process, the strategy for each food source is changed dynamically to achieve better performance [14]. Kang et al. presented the Rosenbrock ABC algorithm [15] by combining the standard ABC with Rosenbrock's rotational direction method (RM) for local search. To improve global search capability by escaping local solutions, Alatas [16] adjusted the ABC parameters using random numbers generated from different chaotic systems. Xiang et al. proposed a particle-swarm-inspired multielitist ABC algorithm [17], which updates the solutions using the global best solution and an elitist randomly selected from an elitist archive. To solve constrained optimization problems efficiently, Li and Yin [18] presented a self-adaptive constrained ABC algorithm by introducing a feasibility rule and multiobjective optimization methods.

In this paper, we propose a hybrid ABC algorithm to improve the performance of the standard ABC algorithm. To enhance local search and exploitation, we apply the chemotactic behavior of the Bacterial Foraging Optimization algorithm [19] to the employed bees and adopt the global-best-guided search equation of the PSO algorithm [20, 21] for the onlooker bees. Moreover, we use an inertia weight [22, 23], similar to the contraction-expansion coefficient in PSO, to balance exploration and exploitation dynamically. Finally, our algorithm is evaluated using the mean values and standard deviations (SD) on 16 CEC 2014 benchmark functions.

This paper is organized as follows. Section 2 describes the standard ABC algorithm. Section 3 introduces the HABC algorithm. The experimental results are shown and discussed in Section 4. Section 5 presents the conclusions.

2. Standard ABC Algorithm

When solving optimization problems, the ABC algorithm represents food sources as feasible solutions. The process of the artificial bee colony seeking high-quality food sources simulates the process of finding the global optimal solution. The honeybee swarm is divided into three groups: employed, onlooker, and scout bees, where the number of employed bees equals the number of food sources. The job of the employed bees is to find quality food sources and then share the location information with the onlooker bees. Once the onlooker bees obtain the food source information, they search near the selected food sources according to a probability distribution: higher-quality food sources have a higher probability of being selected. When a food source found by an employed bee is identified as a low-quality one, the corresponding employed bee turns into a scout bee, which searches for a new food source randomly. The algorithm can be divided into four steps: initialization, the employed bee phase, the onlooker bee phase, and the scout bee phase.

2.1. Initialization

In the initialization step, food sources are initialized with random positions given by the following equation:

$$x_{ij} = x_j^{\min} + \mathrm{rand}(0,1)\,\bigl(x_j^{\max} - x_j^{\min}\bigr), \quad i = 1, 2, \ldots, SN,\ j = 1, 2, \ldots, D. \tag{1}$$

Every solution $x_i$ is a $D$-dimensional vector, $SN$ is the number of food sources, which equals the number of employed bees or onlooker bees, and $D$ denotes the number of optimization parameters. $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds of the food source positions at dimension $j$, respectively.
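For concreteness, the initialization in (1) can be sketched in a few lines of Python. This is our own illustration rather than the paper's code; the per-dimension bound vectors are assumed to be supplied by the problem.

import numpy as np

def init_food_sources(sn, dim, lower, upper, rng=np.random.default_rng()):
    # Equation (1): x_ij = x_j_min + rand(0,1) * (x_j_max - x_j_min)
    lower = np.asarray(lower, dtype=float)   # lower bounds, shape (dim,)
    upper = np.asarray(upper, dtype=float)   # upper bounds, shape (dim,)
    return lower + rng.random((sn, dim)) * (upper - lower)

# Example: 50 food sources in a 30-dimensional space bounded by [-100, 100]
foods = init_food_sources(50, 30, [-100.0] * 30, [100.0] * 30)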

2.2. Behavior of Employed Bees

Starting from the initial locations, employed bees search the surrounding areas for better food sources. The algorithm assumes that employed bees can record all the food source locations that the colony has reached; thus an employed bee can move a random distance toward another food source. The updated location is calculated as follows:

$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \tag{2}$$

where $v_{ij}$ is the new candidate food source location, $i \in \{1, 2, \ldots, SN\}$ ($SN$ is the population of the swarm), $j \in \{1, 2, \ldots, D\}$ ($D$ represents the dimension); $x_{ij}$ is the previous food source location; $\phi_{ij}$ is a random number in $[-1, 1]$; and $x_{kj}$ is another food source location, with $k \in \{1, 2, \ldots, SN\}$ and $k \neq i$.

The fitness of a solution can be calculated from its objective function value $f_i$ by using the following equation:

$$\mathrm{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \geq 0, \\ 1 + \lvert f_i \rvert, & f_i < 0. \end{cases} \tag{3}$$
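A minimal Python sketch of the candidate generation in (2) and the fitness mapping in (3) is given below; it is an illustration only, not the authors' implementation, and the objective value $f_i$ is assumed to come from the problem being solved.

import numpy as np

def employed_bee_candidate(foods, i, rng=np.random.default_rng()):
    # Equation (2): v_ij = x_ij + phi_ij * (x_ij - x_kj), in one random dimension j
    sn, dim = foods.shape
    j = rng.integers(dim)                              # random dimension
    k = rng.choice([m for m in range(sn) if m != i])   # another food source, k != i
    phi = rng.uniform(-1.0, 1.0)                       # phi_ij in [-1, 1]
    v = foods[i].copy()
    v[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])
    return v

def fitness(f_value):
    # Equation (3): map an objective value to a fitness value
    return 1.0 / (1.0 + f_value) if f_value >= 0 else 1.0 + abs(f_value)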

2.3. Behavior of Onlooker Bees

When the bee colony is foraging for food, the onlooker bees stay around the hive and search locally for high-quality food sources according to the information about food sources carried by the employed bees. This information exchange is an important reflection of the intelligent behavior of bee colonies in searching for food. The employed bees share the food source information with the onlooker bees after returning to the hive. An onlooker bee decides whether to update a food source using greedy selection, and the probability of selecting the $i$th food source is

$$p_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{SN} \mathrm{fit}_n}, \tag{4}$$

where $\mathrm{fit}_i$ is the fitness value of the $i$th food source. If the new fitness value is better, the onlooker bee updates its position using (2); thus, the bee colony gradually moves toward better-quality food sources.
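This fitness-proportional (roulette-wheel) selection can be sketched as follows; again, this is an illustration only, not the paper's code.

import numpy as np

def select_food_source(fitness_values, rng=np.random.default_rng()):
    # Equation (4): p_i = fit_i / sum(fit_n); draw a food source index with those probabilities
    fit = np.asarray(fitness_values, dtype=float)
    p = fit / fit.sum()
    return rng.choice(len(p), p=p)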

2.4. Behavior of Scout Bees

If a food source cannot be improved after a predefined number of iterations, it is abandoned. The corresponding employed bee becomes a scout bee, which searches for a new feasible food source randomly across a wide range using (1), as in initialization.
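A hedged sketch of this abandonment mechanism is shown below; the per-source trial counter and the limit parameter are standard ABC bookkeeping assumed for the illustration, not defined in the text above.

import numpy as np

def scout_bee_phase(foods, trials, limit, lower, upper, rng=np.random.default_rng()):
    # Re-initialize, via Equation (1), every food source whose trial counter exceeds the limit
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    for i in range(len(foods)):
        if trials[i] > limit:
            foods[i] = lower + rng.random(foods.shape[1]) * (upper - lower)
            trials[i] = 0
    return foods, trials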

3. Hybrid Artificial Bee Colony Algorithm

3.1. For Employed Bees

For the employed bees, we adopt the chemotactic behavior of Bacterial Foraging Optimization (BFO) to help them escape local optima and gradually enter a relatively large search space. Chemotaxis means that bacteria gather in a more favorable environment and move away from a noxious one; it includes two operators: tumble and swim. A unit walk in a random direction represents the tumble operator. After a tumble, if the fitness value does not improve, the bacterium tumbles again in another random direction; if the fitness value improves, the bacterium keeps moving in the same direction for several unit walks until the fitness value stops improving or the maximum number of swim steps is reached, which represents the swim operator. In the iterative procedure, the standard ABC algorithm implements global search through the employed bees. As shown in (2), $j$ is a randomly chosen dimension of an individual, so the standard ABC algorithm updates only one randomly selected dimension of a solution in the employed bee phase, which introduces some redundancy into the search. Compared with the standard ABC algorithm, our proposed algorithm increases the frequency and range of the neighborhood search in the employed bee phase. Each time an employed bee forages, the fitness value of the new solution is calculated. If the fitness value improves, the new position replaces the old one; otherwise, the employed bee stays at the old position. When the number of swim steps (denoted $n$) reaches the limit (denoted $N_s$) or the fitness value no longer improves, the employed bee stops foraging in the current dimension and tumbles to another dimension. These operations improve the convergence speed of the employed bees and increase the number of potential solutions. After all employed bees have tumbled in all dimensions, the phase finishes. As a result, the bee colony enters a larger search space and avoids plunging into local optima. In the HABC algorithm, the positions of the employed bees are updated as follows:

$$v_{ij} = x_{ij} + n\,\phi_{ij}\,(x_{ij} - x_{kj}), \tag{5}$$

where $i$, $j$, $\phi_{ij}$, and $k$ are the same as in (2) and $n$ is the number of advances, analogous to swim steps in the Bacterial Foraging Optimization algorithm. The improvement is illustrated in Figures 1 and 2.

The pseudocode for the employed bee phase of the HABC algorithm is given in Pseudocode 1.

(1) Set the food source position x_i and produce a new solution v_i.
(2) for i (as a counter) from 1 to colony size
(3)   j = rand(1, D) (as a random dimension of the food source position)
(4)   k = rand(1, colony size) & k ≠ i (as another randomly selected food source)
(5)   set temp = rand, n = 1
(6)   for dim (as a counter) from 1 to the maximum dimension
(7)    while n ≤ N_s
(8)     produce a new solution v_i in dimension dim by (5)
(9)     if the new fitness value fit(v_i) is better than the fitness value fit(x_i)
(10)     then x_i = v_i, n = n + 1
(11)    else
(12)     set n = N_s + 1
(13)    end if
(14)   end while
(15)  end for
(16) end for
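To make the control flow of Pseudocode 1 concrete, the following Python sketch mirrors the chemotaxis-style employed bee phase described above. It is a hedged illustration under our reading of (5); the swim counter n, the step limit Ns, and the helper objective function are assumptions of the example, not the authors' code.

import numpy as np

def habc_employed_bee_phase(foods, obj_values, objective, Ns=3, rng=np.random.default_rng()):
    # In each dimension, keep "swimming" (repeating the move with a growing advance
    # count n) while the fitness improves, up to Ns steps; otherwise tumble onward.
    sn, dim = foods.shape
    for i in range(sn):
        for d in range(dim):
            n = 1
            while n <= Ns:
                k = rng.choice([m for m in range(sn) if m != i])
                phi = rng.uniform(-1.0, 1.0)
                v = foods[i].copy()
                # Our reading of Equation (5): the advance count n scales the perturbation
                v[d] = foods[i, d] + n * phi * (foods[i, d] - foods[k, d])
                f_new = objective(v)
                if f_new < obj_values[i]:        # greedy selection: keep the better solution
                    foods[i], obj_values[i] = v, f_new
                    n += 1                       # swim: continue in the same dimension
                else:
                    break                        # tumble: stop swimming in this dimension
    return foods, obj_values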
3.2. For Onlooker Bees

In the standard ABC algorithm, both employed bees and onlooker bees update their positions by searching, in a randomly selected dimension of the current position, toward a randomly chosen food source. The purpose of the employed bees is global search across a relatively large space, while the purpose of the onlooker bees is local search in a neighboring area; in other words, the employed bees provide the strongest exploration ability and the onlooker bees provide the fastest convergence. Together, they can locate the global optimal solution. In the standard ABC algorithm, both the step size and the dimension are chosen randomly in the onlooker bee phase, so good-quality and bad-quality neighbor food sources are equally likely to guide the search, which leads to low robustness.

As is well known, the distance between the global optimal solution and suboptimal solutions is usually short, so the current best solution can drive new candidate solutions toward the best direction. We therefore adopt the idea of tracing the current best particle from the PSO algorithm to enhance the global search capability of the standard ABC algorithm. At the same time, to strike a balance between exploration and exploitation, we introduce an inertia weight into our algorithm. In the PSO algorithm, the inertia weight represents the extent to which particles inherit their previous velocity; it was first introduced into PSO by Shi and Eberhart in 1998 [22]. Analysis shows that a relatively large inertia weight favors global search, whereas a relatively small inertia weight favors local search. In the HABC algorithm, we adopt a linearly decreasing inertia weight (LDIW), which yields smaller step sizes over time and makes it harder to become trapped in a local optimum. In the initial stages, to avoid trapping into a local optimal solution, a relatively large inertia weight is used to spread the onlooker bees over a larger search space. In the final stages, a relatively small inertia weight is used to prevent the onlooker bees from disturbing the current best solution. This approach enhances the overall convergence speed and makes the algorithm more efficient at obtaining the global optimal solution. The LDIW is given as follows:

$$\omega = \omega_{\max} - \frac{(\omega_{\max} - \omega_{\min})\,\mathrm{iter}}{\mathrm{iter}_{\max}}, \tag{6}$$

where $\omega_{\max}$ is the initial inertia weight, $\omega_{\min}$ is the inertia weight when the iteration count reaches its maximum, $\mathrm{iter}$ is the current iteration, and $\mathrm{iter}_{\max}$ is the maximum number of iterations. In our algorithm, based on experience, we set $\omega_{\max}$ to 0.9 and $\omega_{\min}$ to 0.4.
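As a quick illustration of (6), the inertia weight simply decreases linearly from ω_max to ω_min over the run:

def ldiw(iteration, max_iterations, w_max=0.9, w_min=0.4):
    # Equation (6): linearly decreasing inertia weight
    return w_max - (w_max - w_min) * iteration / max_iterations

# Example: ldiw(0, 1000) gives 0.9 at the first iteration; ldiw(1000, 1000) gives 0.4 at the last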

In the HABC algorithm, the positions of the onlooker bees are updated as follows:

$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}) + \omega\,(x_{\mathrm{best},j} - x_{ij}), \tag{7}$$

where $i$, $j$, $\phi_{ij}$, and $k$ are the same as in (2), $x_{\mathrm{best},j}$ is dimension $j$ of the current best solution, and $\omega$ is the LDIW.

The pseudocode for the onlooker bee phase is given in Pseudocode 2.

(1) Set the food source position x_i, produce a new solution v_i, the maximum number of iterations iter_max, and the current iteration iter.
(2) for i (as a counter) from 1 to colony size
(3)  calculate the selective probability p_i by (4)
(4)  j = rand(1, D) (as a random dimension of the food source position)
(5)  k = rand(1, colony size) & k ≠ i (as another randomly selected food source)
(6)  set ω by (6)
(7)  if the selective probability p_i > rand()
(8)   for dim (as a counter) from 1 to the maximum dimension
(9)    produce a new solution v_i in dimension dim by (7)
(10)  end for
(11)  if the new fitness value fit(v_i) is better than the fitness value fit(x_i)
(12)   then x_i = v_i
(13)  end if
(14) end if
(15) end for
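The following sketch mirrors Pseudocode 2 under our reading of (7); the selection probabilities from (4), the LDIW ω from (6), and the helper objective function are inputs assumed for the illustration rather than the authors' code.

import numpy as np

def habc_onlooker_bee_phase(foods, obj_values, objective, probs, omega,
                            rng=np.random.default_rng()):
    # Onlooker bee phase guided by the current best solution and the inertia weight omega
    sn, dim = foods.shape
    best = foods[np.argmin(obj_values)].copy()       # current best food source
    for i in range(sn):
        if rng.random() < probs[i]:                  # fitness-proportional selection, Equation (4)
            k = rng.choice([m for m in range(sn) if m != i])
            v = foods[i].copy()
            for d in range(dim):
                phi = rng.uniform(-1.0, 1.0)
                # Our reading of Equation (7): random neighbor term plus an
                # omega-weighted pull toward the current best solution
                v[d] = (foods[i, d] + phi * (foods[i, d] - foods[k, d])
                        + omega * (best[d] - foods[i, d]))
            f_new = objective(v)
            if f_new < obj_values[i]:                # greedy selection
                foods[i], obj_values[i] = v, f_new
    return foods, obj_values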

4. Experimental Comparison and Analysis

4.1. Benchmark Functions Used

In this section, algorithms are used to find the global optimum values of 16 benchmark functions from the CEC 2014 competition [24]. The details of the functions are as follows.

Unimodal functions include the following:
(1) Rotated High Conditioned Elliptic Function.
(2) Rotated Bent Cigar Function.
(3) Rotated Discus Function.

Multimodal functions include the following:
(4) Shifted and Rotated Rosenbrock's Function.
(5) Shifted and Rotated Ackley's Function.
(6) Shifted and Rotated Weierstrass Function.
(7) Shifted and Rotated Griewank's Function.
(8) Shifted Rastrigin's Function.
(9) Shifted and Rotated Rastrigin's Function.
(10) Shifted Schwefel's Function.
(11) Shifted and Rotated Schwefel's Function.
(12) Shifted and Rotated Katsuura Function.
(13) Shifted and Rotated HappyCat Function.
(14) Shifted and Rotated HGBat Function.
(15) Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function.
(16) Shifted and Rotated Expanded Scaffer's F6 Function.

4.2. Parameter Settings

To test the performance of HABC, the experimental results are compared with those of the standard ABC, best-so-far ABC (BSABC) [25], directed ABC (dABC) [26], Gaussian ABC (GABC) [27], improved ABC (IABC) [28], and memetic ABC (MABC) [29]. In the HABC algorithm, the maximum number of swim steps $N_s$ is set to 3. For all algorithms, the colony size is set to 50, and the stopping criterion is a budget of $10000 \times D$ function evaluations (FEs). Moreover, the search space for all functions is $[-100, 100]^D$. Each experiment is repeated 51 times independently with random seeds, and the means and standard deviations on the benchmark functions are recorded.

4.3. Comparisons between HABC and ABC Variants

All algorithms are coded in MATLAB 7.9.0, and all experiments were run on a Windows XP operating system with an Intel Pentium Dual-Core CPU E5300 at 2.6 GHz and 2 GB of RAM. Experimental results are shown in Tables 1–3, which also include the results of the Wilcoxon signed-rank test.

It can be seen that no algorithm reaches the global optimum of the F1 function and that all algorithms perform equally on the F8 function. From Table 1, our proposed HABC is better than the other algorithms on the F3, F4, F5, F6, F7, F9, F10, and F11 functions, and the MABC algorithm is slightly better than the other algorithms on the F2, F6, F7, and F12 functions. According to Table 2, our proposed HABC algorithm shows the best performance on most of the functions, except F10 and F5. From Table 3, our proposed HABC also performs better than the other algorithms on most of the benchmark functions, except the F2, F5, and F10 functions. The MABC algorithm is slightly better than the other algorithms on the F2 and F5 functions, and the IABC algorithm is slightly better than the other algorithms on the F10 function.

In the last column of Tables 1–3, we give the Wilcoxon signed-rank test results. A "+" indicates that the HABC algorithm is statistically superior to the compared algorithm, whereas a "−" indicates that the HABC algorithm is statistically worse than the compared algorithm. A "=" indicates that the HABC algorithm is statistically equivalent to the compared algorithm, and "NA" indicates that the test is not applicable because the two algorithms produced identical results. We can conclude that the HABC algorithm is statistically better than the other algorithms on most of the benchmark functions.

4.4. Time Complexity Analysis

The improvement in computational precision in the HABC algorithm usually comes at the cost of increased time complexity. Because the number of neighborhood searches in the employed bee phase is not constant, in order to analyze the time complexity properly, we measured the computational cost of each algorithm by recording the average time needed to reach a given precision. All algorithms were run 51 times independently with random seeds. The mean time is measured in seconds. Results are listed in Tables 4–6, with the best shown in bold.

It can be seen from Tables 4–6 that HABC finds the global optimal solution in less time on most of the benchmark functions, owing to its faster convergence.

5. Conclusion

In this paper, a hybrid ABC (HABC) algorithm is proposed by introducing the chemotactic behavior of Bacterial Foraging Optimization into the employed bees and adopting the principle of moving particles toward the best solutions from PSO to improve the global search ability of the onlooker bees. Experimental results show that the HABC algorithm has better solution accuracy and a higher convergence speed and reaches the global optimal solution in less time compared with the standard ABC, best-so-far ABC, directed ABC, Gaussian ABC, improved ABC, and memetic ABC.

In our future work, we will investigate better strategies for searching around the current best solution to further improve the performance of HABC.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research is supported in part by the Program for New Century Excellent Talents in University (NCET-12-0881), the National Science and Technology Support Program of China (no. 2015BAD17B02), the China Agriculture Research System (CARS-49), and the Fundamental Research Funds for the Central Universities (JUSRP51410B).