Abstract

To address the shortcomings of the Harris hawks optimization algorithm (HHO), namely poor initial population diversity, slow convergence, weak local search ability, and a tendency to fall into local optima, a Harris hawks optimization algorithm integrating multiple mechanisms (CCCHHO) is proposed. First, population diversity is enhanced by chaotic initialization. Second, a cosine function is used to model the periodic change of the prey's energy during its repeated contests with the hawk group, which better balances the algorithm's exploration and exploitation. Third, Cauchy mutation is applied to the optimal individual in the exploration phase; the heavy-tailed Cauchy distribution increases population diversity and effectively prevents the algorithm from falling into local optima. Fourth, in the exploitation phase, a chaotic local search around the optimal individual exploits the ergodicity of the chaotic system, strengthening local search ability and helping the algorithm escape after it falls into a local optimum. Finally, the elite individuals of the population guide the position updates of the remaining individuals, so that information is fully exchanged with the dominant individuals and convergence is accelerated. Simulation experiments on 11 different benchmark functions show that CCCHHO outperforms the gray wolf optimization algorithm (GWO), the salp swarm algorithm (SSA), the ant lion optimization algorithm (ALO), and three improved HHO algorithms in convergence speed and optimization accuracy, on both unimodal and multimodal benchmark functions. The experimental results show that CCCHHO has excellent efficiency and robustness.

1. Introduction

In recent years, meta-heuristic algorithms have attracted increasing attention from scholars. Because meta-heuristic algorithms are simple to implement, do not depend on gradient information, and can jump out of local optima, they have been used to solve many different problems [1–4].

Meta-heuristic algorithms can be divided into four categories: physics-based, evolution-based, population-based, and human-based. Physics-based algorithms mainly simulate physical rules in the universe; common examples include the gravitational search algorithm (GSA) [5], central force optimization (CFO) [6], and the black hole optimization algorithm (BH) [7]. Evolution-based methods are inspired by biological evolution and include classical evolutionary algorithms such as the genetic algorithm (GA) [8], the differential evolution algorithm (DE) [9], and the biogeography-based optimizer (BBO) [10]. Population-based algorithms are inspired by the collective behavior of biological populations and include classical swarm intelligence algorithms such as ant colony optimization (ACO) [11], the particle swarm optimization algorithm (PSO) [12], the artificial bee colony optimization algorithm (ABC) [13], and the monarch butterfly optimization algorithm [14]. More recent population-based algorithms include the gray wolf optimization algorithm (GWO) [15], the wolf pack optimization algorithm (WPA) [16], the dragonfly algorithm (DA) [17], the whale optimization algorithm (WOA) [18], the ant lion optimization algorithm (ALO) [19], and the salp swarm algorithm (SSA) [20]. Human-based algorithms are inspired by human behavior, such as brain storm optimization (BSO) [21].

The Harris hawks optimization algorithm (HHO) [22] is a swarm intelligence optimization algorithm proposed in recent years, derived from the cooperative hunting behavior of Harris hawks. Because HHO has a simple structure, is easy to implement, and performs well, it has attracted a large number of researchers since it was proposed. Researchers have improved the basic HHO algorithm in different respects, and some of the improved algorithms have been applied to different fields. ElSayed et al. [23] combined HHO with sequential quadratic programming (SQP) and applied it to the optimal coordination of directional overcurrent relays incorporating distributed generation. Abbasi et al. [24] used chaotic methods, Gaussian mutation, differential evolution, and other techniques to improve the basic HHO and applied the improved algorithm to the fatigue life analysis of tapered roller bearings. Jouhari et al. [25] introduced the salp swarm algorithm (SSA) into the basic HHO, selecting the position update mechanism through a parameter, and applied the improved algorithm to scheduling problems. Chen et al. [26] introduced a chaotic drift mechanism into the basic HHO and applied the improved algorithm to the parameter identification of photovoltaic cells and modules. Jia et al. [27] used dynamic parameters to adjust the prey escape energy factor together with a mutation mechanism to improve the basic HHO and applied the improved algorithm to satellite image segmentation. Qu et al. [28] enhanced the information exchange between population individuals and introduced an escape energy factor with chaotic disturbances to improve the basic HHO algorithm.

Although different scholars have proposed different improved HHO algorithms, none of them suits all optimization problems. It therefore makes sense to develop more efficient and accurate algorithms. In this article, a CCCHHO algorithm that integrates multiple strategies is proposed. The algorithm initializes the population with a chaotic map, using the ergodicity of the chaotic system to enhance population diversity. It uses the periodicity of the cosine function to update the prey's escape energy, which balances exploration and exploitation more effectively. It introduces a mutation strategy that enhances the global exploration ability and prevents the algorithm from falling into local optima. A chaotic local search lets the algorithm jump out after falling into a local optimum while also improving its local search ability. Finally, an elite individual guidance mechanism updates the individuals' positions, and a greedy mechanism retains the better individuals as the initial population of the next iteration, which enhances population diversity and accelerates convergence. To verify the performance of the proposed algorithm, 11 benchmark test functions are used for simulation analysis, and the results are compared with other meta-heuristic algorithms and several improved HHO algorithms.

The structure of this article is as follows. Section 2 introduces the basic HHO algorithm. Section 3 introduces the proposed CCCHHO algorithm. Section 4 presents the simulation experiments and analyzes the results. Section 5 states the conclusions and future work.

2. Harris Hawks Optimization Algorithm (HHO)

The Harris hawks optimization algorithm is a meta-heuristic algorithm inspired by the cooperative hunting of Harris hawks. HHO contains multiple search mechanisms, depending on the strategies adopted by the hawks at different phases. A detailed description of these search mechanisms is given below.

2.1. Exploration Phase

At this phase, the Harris hawks update their positions with two different strategies, selected according to the probability q. The formulas are as follows:

X(t + 1) = X_rand(t) − r1 |X_rand(t) − 2 r2 X(t)|,  q ≥ 0.5
X(t + 1) = (X_rabbit(t) − X_m(t)) − r3 (LB + r4 (UB − LB)),  q < 0.5        (1)

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t)        (2)

where X(t) and X(t + 1) represent the position vector of a hawk in iterations t and t + 1, respectively, X_rabbit(t) is the position of the rabbit, X_m(t) is the average position of the current population of hawks, X_rand(t) is a randomly selected individual from the current population, q, r1, r2, r3, and r4 are random numbers between 0 and 1, LB and UB are the lower and upper bounds of the search space, respectively, and N is the population size.
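The exploration-phase update in Equation (1) can be sketched as a small NumPy helper. This is a minimal illustration, not the authors' implementation; the function and variable names are chosen here for clarity:

```python
import numpy as np

def exploration_step(X, i, X_rabbit, lb, ub, rng):
    """One exploration-phase update (Equation (1)) for hawk i.

    X: (N, D) population matrix; X_rabbit: best position found so far;
    lb, ub: scalar (or per-dimension) search-space bounds.
    """
    N, D = X.shape
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch relative to a randomly selected family member.
        X_rand = X[rng.integers(N)]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # Perch relative to the rabbit and the population's mean position.
    X_m = X.mean(axis=0)
    return (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))
```

Both branches return a new D-dimensional position; out-of-bounds components would be clipped to [lb, ub] in a full implementation.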

2.2. Transition from Exploration to Exploitation

HHO realizes the transition between exploration and exploitation through the escape energy factor E. The formula is as follows:

E = 2 E0 (1 − t/T)        (3)

where E0 is a random number between −1 and 1, T is the maximum number of iterations, and t is the current iteration number. When |E| ≥ 1, HHO performs the global search; otherwise, local exploitation is performed.
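The transition rule above, E = 2·E0·(1 − t/T) with E0 redrawn uniformly from (−1, 1) at every iteration, can be sketched as follows (an illustrative helper, not the reference implementation):

```python
import numpy as np

def escape_energy(t, T, rng):
    """Linearly decaying escape energy of the basic HHO.

    E0 is redrawn from (-1, 1) each iteration, so |E| is bounded
    by the decaying envelope 2 * (1 - t/T).
    """
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)
```

In the main loop, |E| ≥ 1 would route an individual to the exploration update and |E| < 1 to one of the four besiege strategies.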

2.3. Exploitation Phase

During the exploitation phase, HHO uses four different strategies to update positions and decides which strategy to use by the escape energy factor E and a random number r, where r indicates the chance of the prey escaping before the surprise pounce: when r ≥ 0.5, the prey cannot escape; when r < 0.5, it can.

2.4. Soft Besiege

When |E| ≥ 0.5 and r ≥ 0.5, the prey has enough energy to try to escape by jumping, but ultimately it cannot. The hawks' position update formula is as follows:

X(t + 1) = ΔX(t) − E |J X_rabbit(t) − X(t)|        (4)
ΔX(t) = X_rabbit(t) − X(t)
J = 2 (1 − r5)

where ΔX(t) is the difference between the rabbit's position and the hawk's current position, J represents the random jump strength of the prey as it escapes, and r5 is a random number between 0 and 1.

2.5. Hard Besiege

When |E| < 0.5 and r ≥ 0.5, the prey is captured by the hawks with its lower energy. The hawks' position update formula is as follows:

X(t + 1) = X_rabbit(t) − E |ΔX(t)|        (5)

2.6. Soft Besiege with Progressive Rapid Dives

When |E| ≥ 0.5 and r < 0.5, the prey has enough energy to ensure a successful escape, so the soft besiege strategy with rapid dives is implemented. The position update formulas are as follows:

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|        (6)
Z = Y + S × LF(D)        (7)
X(t + 1) = Y, if F(Y) < F(X(t));  Z, if F(Z) < F(X(t))        (8)

where D is the problem dimension, S is a 1 × D random row vector, F(·) is the fitness function, and LF is the Lévy flight function:

LF(x) = 0.01 × (u × σ) / |v|^(1/β),  σ = [Γ(1 + β) sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β)        (9)

where u and v are random numbers between 0 and 1, and β is 1.5.
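The Lévy flight function (Equation (9)) is commonly implemented with normally distributed u and v, as in the widely used reference code for HHO; the sketch below follows that convention (names are illustrative):

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, rng=None):
    """Levy flight step of dimension `dim` (Equation (9)).

    u ~ N(0, sigma^2) and v ~ N(0, 1), with sigma computed from beta;
    common HHO implementations draw u and v from normal distributions.
    """
    if rng is None:
        rng = np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)
```

The heavy-tailed steps occasionally produce long jumps, which is what makes the rapid-dive strategies effective at probing beyond the current besiege region.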

2.7. Hard Besiege with Progressive Rapid Dives

When |E| < 0.5 and r < 0.5, the prey's energy is lower, but escape is still possible. A hard besiege strategy with rapid dives is implemented to reduce the average distance to the prey. The position update formulas are as follows:

X(t + 1) = Y, if F(Y) < F(X(t));  Z, if F(Z) < F(X(t))        (10)
Y = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)|        (11)
Z = Y + S × LF(D)        (12)

where D is the problem dimension, S is a 1 × D random row vector, LF is the Lévy flight function (Equation (9)), X_rabbit(t) is the position of the rabbit, and X_m(t) is the average position of the current population of hawks.

3. Harris Hawks Optimization Algorithm with Multiple Strategies (CCCHHO)

Aiming at the problems of the basic HHO algorithm, this article improves HHO in several respects. The Logistic map replaces the random initialization of the basic HHO, using the characteristics of the chaotic system to generate a more diverse population. In the exploration phase, a Cauchy mutation strategy strengthens the global search ability and helps the algorithm jump out of local optima. A new formula updates the energy factor to better balance global search and local exploitation. In the exploitation phase, a chaotic local search strategy enhances the local search ability. Finally, an elite guidance strategy is introduced, which uses the dominant group to guide the updates of the population's individuals; a greedy strategy then retains the better individuals to accelerate convergence.

3.1. Chaotic Maps

Chaotic systems have the characteristics of randomness and ergodicity. These characteristics can be used to generate more diverse populations, thereby improving performance and speeding up convergence. Kaur et al. [29] used a chaotic map to generate the initial population instead of generating it randomly, which made the improved whale optimization algorithm more likely to jump out of local optima and improved its optimization performance. At present, many different chaotic maps are used in the optimization field [30]. This article uses the Logistic map to generate the initial population. The Logistic map is defined as follows:

x_{n+1} = μ x_n (1 − x_n),  n = 1, 2, …, N        (13)

where μ is 4, x_1 is a random number between 0 and 1, and N is the number of individuals in the population.
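Chaotic initialization with the Logistic map (Equation (13)) can be sketched as follows. The chaotic sequence lies in (0, 1) and is scaled into the search bounds; the function and parameter names are illustrative:

```python
import numpy as np

def logistic_init(N, dim, lb, ub, mu=4.0, rng=None):
    """Initialize an N x dim population with the Logistic map.

    Each row is one chaotic iterate of the previous row's seed values,
    scaled from (0, 1) into [lb, ub].
    """
    if rng is None:
        rng = np.random.default_rng()
    # Seed away from the map's fixed points (0, 0.25, 0.5, 0.75, 1).
    x = rng.uniform(0.05, 0.95, dim)
    pop = np.empty((N, dim))
    for i in range(N):
        x = mu * x * (1.0 - x)        # chaotic iteration, stays in (0, 1)
        pop[i] = lb + x * (ub - lb)   # scale into the search space
    return pop
```

Compared with uniform random sampling, successive chaotic iterates tend to spread over the interval without clustering, which is the ergodicity property the section relies on.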

Compared with random generation, the initial population generated by the Logistic map is distributed more uniformly in the search space, which increases the diversity of the population and expands the hawks' search range. To a certain extent, this mitigates HHO's tendency to fall into local optima.

3.2. Cauchy Mutation

Cauchy mutation originates from the Cauchy distribution, whose standard probability density is as follows:

f(x) = 1 / (π (1 + x²))        (14)

Figure 1 shows that, in the horizontal direction, the probability density of the Cauchy distribution stays closer to the horizontal axis and changes more slowly, so its effective range can be regarded as infinite. In probabilistic terms, the Cauchy distribution therefore has a wider distribution range [31]. This means that using random numbers generated by the Cauchy distribution as the perturbation factor in the optimization process yields a relatively broad search space, which helps prevent the algorithm from falling into a local optimum during the exploration phase and makes it easier to jump out after it does. After obtaining the optimal solution of the current population, the current global optimal solution is updated with the following formula:

X_best(t + 1) = X_best(t) + X_best(t) × Cauchy(0, 1)        (15)

where Cauchy(0, 1) is the standard Cauchy distribution.
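The Cauchy mutation of Equation (15) can be sketched as below, assuming the perturbation is applied elementwise (an illustrative helper, not the authors' code):

```python
import numpy as np

def cauchy_mutation(X_best, rng=None):
    """Perturb the global best with standard Cauchy noise:
    X_new = X_best + X_best * Cauchy(0, 1), applied per dimension."""
    if rng is None:
        rng = np.random.default_rng()
    return X_best + X_best * rng.standard_cauchy(X_best.shape)
```

In practice the mutated candidate would be kept only if its fitness improves on the current best, matching the greedy acceptance used elsewhere in the algorithm.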

3.3. Nonlinear Energy Factor E Based on Cosine Strategy

In the HHO algorithm, the energy factor E is an important parameter balancing the exploration and exploitation phases. The larger |E|, the more the HHO algorithm tends toward exploration; conversely, the smaller |E|, the more it tends toward exploitation. In the basic HHO, the energy factor decreases linearly, which does not effectively describe the real situation of Harris hawks rounding up prey in nature. In the multiround contest between the hawks and the prey, the prey's energy cannot be captured by a simple linear change: it should change periodically and eventually reach zero, at which point the prey is captured. During each round, the prey gets a short rest and recovers a small amount of energy, but over time the recovered energy gradually decreases until it reaches zero. In this article, the cosine function is used to describe this periodic change: the linearly decreasing term of Equation (3) is modulated by a periodic cosine term, so that the escape energy oscillates within a decaying envelope and finally reaches zero (Equation (16)).
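The exact form of Equation (16) is not reproduced above, so the sketch below assumes one plausible cosine-modulated form, E = 2·E0·cos(ω·π·t/T)·(1 − t/T), where the oscillation frequency ω is a hypothetical parameter introduced here for illustration only:

```python
import numpy as np

def cosine_escape_energy(t, T, omega=10.0, rng=None):
    """Periodically oscillating escape energy with a decaying envelope.

    ASSUMED form for illustration: the article's Equation (16) defines
    the exact formula; omega here is a hypothetical frequency parameter.
    """
    if rng is None:
        rng = np.random.default_rng()
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * np.cos(omega * np.pi * t / T) * (1.0 - t / T)
```

Any formula of this shape reproduces the behavior described in the text: |E| rises and falls periodically while its envelope shrinks to zero at t = T.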

Figure 2 shows the change of the escape energy factor over the iterations. As can be seen, the escape energy oscillates periodically with decreasing amplitude until it reaches zero, which describes well the energy change of the prey while it is being rounded up.

3.4. Chaotic Local Search

Local search is one of the effective methods for preventing an algorithm from stagnating in a local optimum. In many cases, the solution lies near a local optimum, yet the algorithm cannot reach it. After the algorithm falls into a local optimum, searching the neighbourhood of the local optimal solution can effectively jump out of it and improve performance. However, a plain local search does not always produce ideal results. The ergodicity of chaotic systems can be exploited instead: starting from an initial state and following its own dynamics, a chaotic system visits every state point in its attraction space without repetition, given enough time. If a chaotic sequence of length m is superimposed on the optimal individual of the current population, this is equivalent to carrying out m non-repeating local searches in the neighbourhood of that individual. The chaotic local search strategy [32] therefore uses the ergodicity of the chaotic system to effectively prevent the algorithm from stalling in a local region while offering better search efficiency and a wider search range than a plain local search. The pseudocode of the chaotic local search is given in Algorithm 1.

Initialize the number of chaotic searches m
Generate a chaotic sequence of length m by using the Logistic map
Get the best individual X_best in the current population
Set the chaotic search counter k = 1
While k ≤ m
 Superimpose the k-th item of the chaotic sequence on a random dimension of X_best to form a new individual X_new
 Calculate the fitness value of X_new
 if F(X_new) < F(X_best)
  Update the optimal individual and fitness value of the current population
 end if
 k = k + 1
end while
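Algorithm 1 can be sketched in Python as follows. This is a minimal sketch with illustrative names, assuming the Logistic map as the chaos source and greedy acceptance of improvements:

```python
import numpy as np

def chaotic_local_search(X_best, f, m, lb, ub, mu=4.0, rng=None):
    """Chaotic local search around the current best individual.

    Superimposes m Logistic-map values, one at a time, on a randomly
    chosen dimension of the best individual, alternating the search
    direction with the counter's parity, and keeps improvements greedily.
    """
    if rng is None:
        rng = np.random.default_rng()
    z = rng.uniform(0.05, 0.95)               # chaotic seed
    best, f_best = X_best.copy(), f(X_best)
    for k in range(1, m + 1):
        z = mu * z * (1.0 - z)                # next chaotic value in (0, 1)
        cand = best.copy()
        j = rng.integers(cand.size)           # random dimension to perturb
        cand[j] = np.clip(cand[j] + (-1) ** k * z, lb, ub)
        fc = f(cand)
        if fc < f_best:                       # greedy acceptance
            best, f_best = cand, fc
    return best, f_best
```

For example, on the sphere function f(x) = Σx², the returned fitness is never worse than that of the starting individual, since candidates are accepted only on improvement.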

In this article, the Logistic map of Equation (13) produces the chaotic sequence used by the chaotic local search. The sequence values generated by the Logistic map lie between 0 and 1; if a value were superimposed directly on a dimension of the optimal individual, the search would proceed in only one direction, greatly reducing the local search performance. To solve this problem, this article uses the parity of the chaotic search counter k to periodically reverse the search direction. The position update formula is as follows:

X_new = X_best + (−1)^k × z_k        (17)

where X_best is the optimal individual in the current population, X_new is the new individual after the chaotic local search, k is the chaotic search counter, and z_k is the k-th value of the chaotic sequence generated by the Logistic map.

3.5. Elite Individual Guidance Mechanism

The gray wolf optimization algorithm proposes that the pack's positions follow the guidance of the three best individuals in the group, the α wolf, the β wolf, and the δ wolf, which effectively improves search efficiency. Based on this, this article introduces an elite individual guidance mechanism. The three best individuals in the current population are selected as leaders to guide the position updates of the other individuals, strengthening the information exchange between ordinary individuals and the better individuals and thereby enhancing the diversity of the population. The hawks' position update formulas are as follows:

X_1 = X_α − A_1 |C_1 X_α − X(t)|
X_2 = X_β − A_2 |C_2 X_β − X(t)|
X_3 = X_δ − A_3 |C_3 X_δ − X(t)|
X(t + 1) = (X_1 + X_2 + X_3) / 3
A = 2 a r − a,  C = 2 r

where X(t) is the current individual position, X(t + 1) is the updated position, X_α is the optimal solution in the current population, X_β is the suboptimal solution, X_δ is the third-best solution, a decreases linearly from 2 to 0 during the iterations, and r is a random number between 0 and 1.
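A sketch of the elite guidance step, assuming the standard GWO-style update formulas (the coefficients A and C are redrawn per leader and per dimension; names are illustrative):

```python
import numpy as np

def elite_guided_update(X, elites, a, rng=None):
    """Move hawk X toward the three best individuals (GWO-style).

    elites: sequence [best, second best, third best] positions.
    a: control parameter, decreased linearly from 2 to 0 over the run.
    """
    if rng is None:
        rng = np.random.default_rng()
    parts = []
    for leader in elites:                    # alpha-, beta-, delta-like leaders
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        A = 2.0 * a * r1 - a                 # A = 2*a*r - a
        C = 2.0 * r2                         # C = 2*r
        parts.append(leader - A * np.abs(C * leader - X))
    return np.mean(parts, axis=0)            # average of the three guides
```

The guided position would then be compared with the original one under the greedy mechanism, keeping whichever has the better fitness.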

Based on the greedy mechanism, the better of the original and the guided position is kept as the new individual, and the rabbit position and rabbit energy are updated at the same time.

3.6. The Pseudocode of CCCHHO

The pseudocode in Algorithm 2 describes the details of CCCHHO.

Initialize the population size N and the maximum number of iterations T
Use Equation (13) to initialize the population with the chaotic strategy (X_i, i = 1, 2, …, N)
Set the current number of iterations t = 1
while t < T
 Calculate the fitness values of the hawks
 Set the best position as the position of the prey
 for each individual
  Update the escape energy E by using Equation (16) and the jump strength J
  if |E| ≥ 1
   Update the location by using Equation (1)
   Get the optimum of the current population and use Equation (15) to carry out the Cauchy mutation
  end if
  if |E| < 1
   if |E| ≥ 0.5 and r ≥ 0.5
    Update the location by using Equation (4)
   else if |E| < 0.5 and r ≥ 0.5
    Update the location by using Equation (5)
   else if |E| ≥ 0.5 and r < 0.5
    Update the location by using Equation (8)
   else if |E| < 0.5 and r < 0.5
    Update the location by using Equation (10)
   end if
   Carry out the chaotic local search
  end if
 end for
 Carry out the elite individual guidance
 t = t + 1
end while

4. Results and Discussion

In order to verify the performance of the proposed CCCHHO algorithm, this article tests it on 11 benchmark functions and presents the experimental results and analysis. The test functions fall into two groups: F1–F5 are unimodal benchmark functions, whose unique global optimum makes them suitable for evaluating the local exploitation ability of different optimization algorithms, and F6–F11 are multimodal benchmark functions, which evaluate global exploration and the ability to avoid local optima. According to the best, mean, worst, and standard deviation of the results, the proposed CCCHHO is compared with other optimization algorithms, such as GWO, SSA, and ALO, as well as with other improved HHO algorithms. The information on the benchmark functions F1–F11 is shown in Table 1.

4.1. Experimental Settings

All algorithms were run in MATLAB R2019b on a computer with 16 GB of memory and an AMD 4800U processor. The population size of all algorithms is set to 30, the maximum number of iterations to 500, and each algorithm is run 30 times independently.

4.2. Comparison of CCCHHO and Other Optimization Algorithms

This section compares CCCHHO with four optimization algorithms: HHO, GWO, SSA, and ALO. Record the optimal value (best), mean value (avg), worst value (worst), and standard deviation (std) of each algorithm. The results of each algorithm are shown in Table 2.

From Table 2, it can be seen that the performance of the CCCHHO algorithm proposed in this article is greatly improved compared with the other swarm intelligence optimization algorithms. For functions F1–F5, the results of CCCHHO in all four tested dimensions are far better than those of the other four algorithms; in particular, the results on F1, F4, and F5 all reach the minimum value of the function. At the same time, the stability of CCCHHO is also far better than that of the other four algorithms. For the six functions F6–F11, the results on F8 and F9 are better than those of the other four algorithms, and the results on F7 are the same as the HHO algorithm and better than GWO, SSA, and ALO. The mean value on F6 is the same as that of HHO but with better stability, and it is also better than GWO, SSA, and ALO. The mean values on F10 and F11 are slightly superior, with better stability as well.

4.3. Comparison of CCCHHO and Other Improved HHO Algorithms

This section compares CCCHHO with THHO [33], MHHO [34], and OBLHHO [35]. We record the optimal value (best), mean value (avg), worst value (worst), and standard deviation (std) of each algorithm, and the results of each algorithm are shown in Table 3.

As can be seen from Table 3, the results of the CCCHHO algorithm on the seven benchmark functions F1–F5, F8, and F9 are better than those of the other improved HHO algorithms in all four tested dimensions; among them, F1, F4, and F5 reach the minimum value of the function. The mean values of the three functions F6, F10, and F11 are slightly better than those of the other improved algorithms, and the results on F7 are the same as those of HHO and the improved HHO algorithms. At the same time, the CCCHHO algorithm has the advantage in the standard deviation of all functions except F7. The results show that CCCHHO outperforms the other improved algorithms in terms of both accuracy and stability.

The convergence curves of CCCHHO, HHO, and other improved HHO algorithms in each benchmark function are shown in Figures 3 and 4.

Figure 3 shows the convergence curves of each algorithm on the unimodal benchmark functions. Among the five improved algorithms, the CCCHHO proposed in this article performs best. On F1–F3, CCCHHO converges the fastest and reaches the highest accuracy. On F4 and F5, it can be seen that CCCHHO fell into local optima several times during the iterations: in the early phase it escaped through the Cauchy mutation strategy, in the later phase it escaped through the chaotic local search strategy, and it finally found a better solution.

Figure 4 shows the convergence curves of each algorithm on the multimodal benchmark functions. For the F6 function, CCCHHO converges relatively fast, and the final results are basically the same across algorithms. On the F7 function, the convergence speed of CCCHHO is second only to OBLHHO. For the F8 and F9 functions, the other improved HHO algorithms fell into local optima very early and never escaped, whereas CCCHHO jumped out of local optima several times through the Cauchy mutation, chaotic local search, and elite individual guidance strategies and finally found a better solution. On F10 and F11, the final results of CCCHHO and the other improved algorithms are similarly good, and the convergence speed of CCCHHO is relatively fast.

5. Conclusions

In this article, a new multistrategy Harris hawks optimization algorithm (CCCHHO) is proposed, which improves the optimization performance of the basic HHO by introducing chaotic population initialization, Cauchy mutation, a nonlinear escape energy factor based on the cosine function, chaotic local search, and an elite individual guidance mechanism. To verify the performance of the proposed algorithm, 11 benchmark functions of different types were used to analyze the exploration ability, exploitation ability, and convergence behavior of CCCHHO. The experimental results show that the exploration, exploitation, and convergence speed of CCCHHO are better than those of the basic HHO, three improved HHO algorithms, and three other swarm intelligence optimization algorithms. In future work, higher-dimensional problems will be tested and evaluated, and CCCHHO will be applied to practical engineering problems, such as parameter optimization and shop scheduling.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported by the National Key R&D Program of China (2019YFB1706302).