Abstract

A multimodal multiobjective optimization problem (MMOP) is a special kind of multiobjective optimization problem (MOP) with multimodal characteristics, where multiple different Pareto optimal sets (PSs) map to the same Pareto optimal front (PF). To handle MMOPs, a decomposition-based harmony search algorithm (called MOEA/D-HSA) is devised. In MOEA/D-HSA, multiple individuals assigned to the same weight vector form a subpopulation for finding multiple different PSs. Then, an environmental selection method based on greedy selection is designed to dynamically adjust the subpopulation scale to keep the population diversity. Finally, a modified harmony search algorithm and an elite learning strategy are utilized to balance the diversity and convergence of the population. Experimental results on the CEC 2019 test suite reveal that MOEA/D-HSA performs better than several state-of-the-art algorithms.

1. Introduction

MOPs, in which multiple objectives must be optimized simultaneously, frequently occur in real-world applications [1, 2]. In general, an MOP is defined as follows [3]:

minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), subject to x ∈ Ω, (1)

where Ω refers to the search space and x = (x_1, x_2, …, x_n) denotes an n-dimensional decision variable; f_j indicates the j-th (j = 1, 2, …, m) objective, and m refers to the number of objectives; U_j and L_j are the upper and lower bounds of x_j, respectively. In an MOP, a solution x_a is said to dominate another solution x_b if and only if f_j(x_a) ≤ f_j(x_b) for all j ∈ {1, …, m} and f_j(x_a) < f_j(x_b) for at least one j. If there exists no solution that dominates x, then x is called a Pareto optimal solution [4, 5]. The set of all Pareto optimal solutions is called the PS [6]. The image of the PS in the objective space is termed the PF [7].
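The dominance relation above can be sketched in a few lines of Python (a minimal illustration for minimization; the function name and the sample vectors are ours, not from the paper):

```python
def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b:
    no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f_a, f_b))
    strictly_better = any(a < b for a, b in zip(f_a, f_b))
    return no_worse and strictly_better

# Minimizing both objectives:
print(dominates([1.0, 2.0], [2.0, 3.0]))  # True: better in both objectives
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False: a trade-off, so incomparable
```

A solution is Pareto optimal exactly when no member of the search space dominates it under this predicate.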

A large number of MOPs with multimodal properties exist in practical applications [8], such as the scheduling problem [9], the epsilon-efficient solutions problem [10], the rocket engine problem [11], and the 0/1 knapsack problem [12]. This category of problems is termed MMOPs in [13]. In MMOPs, multiple different PSs map to the same PF. Figure 1 depicts an MMOP with two different Pareto regions (i.e., Region1 and Region2) in the decision space. All Pareto regions map to the same Pareto front in the objective space; for example, A1 ∈ Region1 and A2 ∈ Region2 correspond to the same point A on the PF, although A1 ≠ A2. All PSs should be discovered when handling MMOPs.

In the past few years, numerous multimodal multiobjective evolutionary algorithms (MMOEAs) have been developed to tackle MMOPs, which can be grouped into the following types:

(i) Pareto-based MMOEAs: DN-NSGA-II [13], Niching-CMA [14], and Omni-optimizer [15] are three typical algorithms in this class. DN-NSGA-II adopts a niching method to deal with MMOPs, in which the crowding distance is used to find multiple PSs. Unlike DN-NSGA-II, Omni-optimizer calculates the crowding distance in both the decision and objective spaces to locate the equivalent PSs. In Niching-CMA, a niche function is constructed to replace the crowding distance of Omni-optimizer by summing the distances of the individuals in the two spaces; experimental results indicated that Niching-CMA can improve the diversity in the decision space. Subsequently, a method similar to Omni-optimizer, named MO_Ring_PSO_SCD [16], was proposed to deal with MMOPs, where a ring topology and a diversity measure are utilized to keep the population diversity. In addition, Liu et al. [17] proposed a penalty density strategy (called CPDEA) for solving MMOPs, in which individual density is calculated from transformed Euclidean distances and used as the selection criterion to search for the equivalent PSs. They later proposed an MMOEA based on archives and a recombination strategy to find multiple PSs (called TriMOEA-TA&R) [18]. Fan et al. [19] devised a zoning search approach (ZS) to tackle MMOPs, where the whole decision space is divided into multiple subspaces and existing MMOEAs are executed in each subspace; the results revealed that ZS helps to improve the performance of MMOEAs in solving MMOPs. Inspired by this, Fan et al. proposed a zoning search approach with adaptive resource allocation [20] for balanced and imbalanced MMOPs. Lin et al. [21] presented a clustering-based MMOEA that is able to efficiently locate local PSs. Zhang et al. [22] devised a knee-based approach to deal with MMOPs, where a multicriteria decision strategy is introduced to search for equivalent PS regions. Afterwards, the repulsive force between isotropic magnetic individuals was introduced into MMOPs by Zhang et al. [23, 24] to locate different PSs. In addition, inspired by online learning [25, 26], Li et al. [27] devised a differential evolution algorithm based on reinforcement learning to handle MMOPs. However, these algorithms generally adopt Pareto-based methods to search for equivalent PSs, so they cannot deal with large-scale MMOPs.

(ii) Decomposition-based MMOEAs: like MOEA/D [28], decomposition-based MMOEAs decompose an MMOP into a number of simple subproblems through a group of evenly distributed weight vectors. In [29], a diversity maintenance strategy combining the penalty boundary intersection technique [28] with two distance metrics is introduced into the decision space to locate equivalent PSs. Later, a similar concept was suggested to handle large-scale optimization problems in [30]. Tanabe et al. [31] designed a modified MOEA/D for MMOPs, which employs addition and deletion operators to search for equivalent PSs. In [32], this concept was expanded into a framework to increase the diversity of MMOEAs in the decision space. In [33], an evolutionary algorithm based on the graph Laplacian is presented to tackle MMOPs, which employs the decomposition technique in both spaces to obtain equivalent Pareto solutions. Although the above algorithms can obtain multiple different PSs, they are unable to balance the population diversity in the decision and objective spaces well.

(iii) Indicator-based MMOEAs: indicator-based approaches utilize chosen indicators to guide the population to converge to the PF. Li et al. [34] proposed a weighted-indicator-based MMOEA (termed MMEA-WI), which devised an indicator to keep the diversity in the decision space and introduced a convergence archive for approximating the true PF. In [35], a niching indicator is designed to solve MMOPs, where individuals' fitness values are calculated to keep the diversity of PSs. In summary, indicator-based MMOEAs usually adopt a fitness indicator to keep the diversity in the decision space, but the calculation of the indicator consumes a large amount of computing resources, especially for large-scale MMOPs.

To sum up, although decomposition-based MMOEAs can find multiple equivalent PSs when dealing with MMOPs, they are unable to balance the diversity in the decision and objective spaces well; in other words, they increase the diversity in the decision space at the cost of performance in the objective space. To deal with this issue, a decomposition-based harmony search algorithm (named MOEA/D-HSA) is presented. In MOEA/D-HSA, after an MMOP is decomposed, multiple individuals that are assigned to the same weight vector form a subpopulation. Then, an environmental selection method guides the individuals in each subpopulation to find the equivalent PSs. Finally, a modified harmony search algorithm is utilized to generate offspring, which helps to balance diversity and convergence; at the same time, an elite learning strategy is employed to prevent premature convergence. Test results on the CEC 2019 test suite validate the performance of MOEA/D-HSA.

2. Materials and Methods

2.1. Motivation

The diversity in the decision and objective spaces is crucial in solving MMOPs. In the following, we take SYM-PART rotated, which has nine distinct PSs, as an example to explain the importance of diversity in these two spaces. As depicted in Figures 2(a) and 2(b), all PSs are located in the decision space, but only three points are found on the PF; the reason is that the algorithm pays more attention to the diversity in the decision space. In Figures 2(c) and 2(d), although the obtained PF has good diversity and convergence, only one PS is located; this situation may appear when an algorithm is concerned only with the diversity in the objective space. Therefore, the following issues need to be considered to prevent the situations in Figure 2.

(1) How to preserve the diversity in the objective space? This problem has been widely studied in multiobjective evolutionary algorithms (MOEAs). Classic methods include the weighted Tchebycheff approach [28], the nondominated sorting method [36], and fitness indicator technology [37, 38]. The weighted Tchebycheff approach is selected to deal with this problem because it divides the whole objective space into multiple subspaces by a group of evenly distributed weight vectors, and it can thus keep the diversity in this space. Besides, to balance the exploration and exploitation of the algorithm, a modified harmony search algorithm is adopted to generate offspring, where the algorithm parameters, such as the harmony memory size (HMS), harmony memory consideration rate (HMCR), and pitch adjusting rate (PAR), are dynamically adjusted.

(2) How to preserve the diversity in the decision space? To handle this problem, a subpopulation framework is adopted to search for different PSs, i.e., each weight vector corresponds to multiple individuals instead of one.
As shown in Figure 3, an MMOP is decomposed into a few simple subproblems by a group of evenly distributed weight vectors, and each weight vector is given to three distinct individuals to form a subpopulation (i.e., the red, yellow, and blue dots indicate distinct individuals assigned to the same weight vector). In this way, the individuals corresponding to the same weight vector search toward the equivalent PSs. Furthermore, preserving the diversity of each subpopulation is vital in MMOEAs; therefore, an environmental selection method is designed to separate individuals belonging to the same PS within the subpopulation.
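The weighted Tchebycheff approach mentioned in issue (1) can be sketched as follows (a minimal Python illustration; the weight vectors, ideal point, and sample objective values are toy data of ours, not from the paper):

```python
def tchebycheff(f, w, z_star):
    """Weighted Tchebycheff aggregation:
    g^te(x | w, z*) = max_j w_j * |f_j(x) - z*_j|  (smaller is better)."""
    return max(wj * abs(fj - zj) for wj, fj, zj in zip(w, f, z_star))

# Two subproblems (weight vectors) and the ideal point z* = (0, 0).
weights = [(0.9, 0.1), (0.1, 0.9)]
z_star = (0.0, 0.0)
f_x = (0.2, 0.8)  # objective vector of some solution x
# x scores much better on the subproblem that emphasizes f1.
scores = [tchebycheff(f_x, w, z_star) for w in weights]
print(scores)  # [0.18, 0.72]
```

Because each weight vector defines its own subproblem, minimizing g^te per vector spreads the population across the objective space, which is exactly why this decomposition preserves objective-space diversity.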

2.2. The Proposed Algorithm

A decomposition-based harmony search algorithm for MMOPs, termed MOEA/D-HSA, is presented; Algorithm 1 depicts its framework. First, the population P, the weight vectors W, and the subpopulations are initialized (lines 1–6). Then, the multiple individuals assigned to the same weight vector form a subpopulation (lines 7–9). Next, an offspring is generated by the modified harmony search algorithm and the elite learning strategy (line 13). Lastly, each subpopulation is dynamically adjusted by the environmental selection method, which helps to preserve the diversity in the decision space (line 14).

Input: population size NP, decision variable dimension n, algorithm parameters HMS, HMCR, PAR, bandwidth (BW), pr, t, and the maximum number of function evaluations MaxFES.
Output: the set TP.
(1)Initialize population P and weight vector set W;
(2)Calculate the fitness Fit and the ideal point z*;
(3)Set FES = NP;
(4)for i = 1:NP do
(5) Set subpopulation SP_i = [];
(6)end for
(7)for i = 1:NP do
(8) Assign individual x_i of P to the subpopulation of its weight vector;
(9)end for
(10)while FES < MaxFES do
(11)  Set T = {1, 2, …, NP};
(12)  for i = 1:|T| do
(13)   Generate offspring v by Algorithm 2 and the elite learning strategy;
(14)   Update the ideal point z* and the subpopulation by Algorithm 3;
(15)   FES = FES + 1;
(16)  end for
(17)end while
(18)TP ← all Pareto optimal solutions in the union of all subpopulations;
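The subpopulation formation of lines 4–9 can be sketched as follows (a minimal Python illustration; the angle-based assignment rule and all names are our assumptions for demonstration, since the paper's exact assignment criterion is not reproduced here):

```python
import math

def assign_subpopulations(objs, weights):
    """Group individuals into one subpopulation per weight vector by
    assigning each individual to the weight direction with the smallest
    angle to its objective vector (one plausible assignment rule)."""
    def angle(f, w):
        dot = sum(a * b for a, b in zip(f, w))
        nf = math.sqrt(sum(a * a for a in f)) or 1e-12  # guard zero vector
        nw = math.sqrt(sum(b * b for b in w))
        return math.acos(max(-1.0, min(1.0, dot / (nf * nw))))

    subpops = {i: [] for i in range(len(weights))}
    for idx, f in enumerate(objs):
        k = min(range(len(weights)), key=lambda i: angle(f, weights[i]))
        subpops[k].append(idx)  # store individual indices per weight vector
    return subpops

objs = [(0.1, 0.9), (0.9, 0.1), (0.15, 0.85), (0.8, 0.2)]
weights = [(1.0, 0.0), (0.0, 1.0)]
print(assign_subpopulations(objs, weights))  # {0: [1, 3], 1: [0, 2]}
```

Each key of the returned dictionary plays the role of one SP_i in Algorithm 1: several individuals share the same weight vector and can therefore converge to different equivalent PSs.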
2.3. Reproduction

To balance the diversity and convergence of the population, a modified harmony search algorithm [39], shown in Algorithm 2, is used to generate offspring. In Algorithm 2, the algorithm parameters (i.e., HMS, HMCR, PAR, and BW) are first initialized. Then, the parameters HMCR, PAR, and BW are updated adaptively with the search progress according to equations (2)–(4) [40, 41], where MaxFES is the maximum number of function evaluations and FES is the current number of function evaluations. With the help of these adaptive parameters, the proposed algorithm has extensive exploration ability in the early phase of evolution and good exploitation ability in the late phase, meaning that MOEA/D-HSA can balance diversity and convergence well. Next, the harmony memory containing HMS individuals is constructed. Finally, the offspring is generated.

Input: archive EXA, individual x_i, algorithm parameters HMS, HMCR, PAR, and BW, the maximum number of function evaluations MaxFES, the current number of function evaluations FES, and the decision variable dimension n.
Output: an offspring v.
(1)Update the parameters HMCR, PAR, and BW using equations (2)–(4);
(2)Select the HMS nearest individuals to x_i in EXA and put them into the harmony memory HM;
(3)for j = 1:n do
(4) if rand < HMCR, then
(5)  v_j = HM_{r,j}, where r is a random index in {1, …, HMS};
(6)  if rand < PAR, then
(7)   v_j = v_j + (2 · rand − 1) · BW;
(8)  end if
(9) else
(10)  v_j = L_j + rand · (U_j − L_j);
(11) end if
(12)end for
(13)Evaluate the offspring v;
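The improvisation loop of Algorithm 2 can be sketched in Python as follows (a minimal sketch; the linear parameter schedule and all names are our assumptions, since the paper's exact update rules in equations (2)–(4) are not reproduced here):

```python
import random

def improvise(harmony_memory, bounds, hmcr, par, bw, rng=random):
    """Generate one offspring by harmony search improvisation: each variable
    is taken from a random memory member (prob. HMCR) and optionally
    pitch-adjusted within +/- BW (prob. PAR), or redrawn uniformly."""
    child = []
    for j, (lo, hi) in enumerate(bounds):
        if rng.random() < hmcr:
            xj = rng.choice(harmony_memory)[j]         # memory consideration
            if rng.random() < par:
                xj += (2.0 * rng.random() - 1.0) * bw  # pitch adjustment
        else:
            xj = lo + rng.random() * (hi - lo)         # random selection
        child.append(min(hi, max(lo, xj)))             # clip to bounds
    return child

def adapted(p_min, p_max, fes, max_fes, increasing=True):
    """Linear schedule between p_min and p_max over the run (an assumed
    stand-in for the paper's adaptive updates)."""
    t = fes / max_fes
    return p_min + (p_max - p_min) * (t if increasing else 1.0 - t)

hm = [[0.2, 0.4], [0.6, 0.8]]                 # toy harmony memory (HMS = 2)
bounds = [(0.0, 1.0), (0.0, 1.0)]
hmcr = adapted(0.7, 0.99, fes=4000, max_fes=80000)  # grows over the run
child = improvise(hm, bounds, hmcr=hmcr, par=0.3, bw=0.05)
print(child)  # a 2-D offspring within bounds (stochastic)
```

Raising HMCR and shrinking BW as FES grows shifts the operator from exploration toward exploitation, matching the behavior described above.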
2.4. The Elite Learning Strategy

To avoid premature convergence, the elite learning strategy is employed to generate the offspring, which increases the global search ability and helps the algorithm jump out of locally optimal locations:

v_j = x_best,j + (U_j − L_j) · Gaussian(0, σ), (5)

where Gaussian(0, σ) is a Gaussian distribution with mean 0 and standard deviation σ, x_best,j is the j-th dimension variable of the best individual, and U_j and L_j are the upper and lower bounds of the j-th variable. In the early phase of the algorithm, a larger σ value gives the offspring a greater mutation range, which is helpful for global search; in the late phase, a small mutation range is helpful for local search. Therefore, σ is updated as follows:

σ = σ_max − (σ_max − σ_min) · FES/MaxFES, (6)

where MaxFES is the maximum number of function evaluations, FES is the current number of function evaluations, σ_max and σ_min are the upper and lower bounds of σ, and σ decreases as the iteration count increases.
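The elite learning strategy can be sketched as follows (a minimal sketch under the reconstruction above; the range-scaled Gaussian form, the linear σ schedule, and all names are our assumptions):

```python
import random

def elite_learning(x_best, bounds, sigma, rng=random):
    """Perturb the best individual with Gaussian noise scaled by the
    variable range: v_j = x_best_j + (U_j - L_j) * N(0, sigma)."""
    v = []
    for xj, (lo, hi) in zip(x_best, bounds):
        vj = xj + (hi - lo) * rng.gauss(0.0, sigma)
        v.append(min(hi, max(lo, vj)))  # keep the offspring feasible
    return v

def sigma_schedule(sigma_min, sigma_max, fes, max_fes):
    """sigma decreases linearly from sigma_max to sigma_min, shrinking
    the mutation range as the run proceeds."""
    return sigma_max - (sigma_max - sigma_min) * fes / max_fes

bounds = [(0.0, 1.0)] * 3
sigma = sigma_schedule(0.1, 1.0, fes=40000, max_fes=80000)  # 0.55 at mid-run
print(elite_learning([0.5, 0.5, 0.5], bounds, sigma))  # stochastic offspring
```

Early in the run the large σ produces jumps that can escape a local PS region; late in the run the perturbation degenerates into a fine local search around the elite.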

2.5. Environmental Selection

For MOEA/D-HSA, it is vital to keep the diversity of subpopulations in the decision space. To reach this goal, the environmental selection method presented in Algorithm 3 is employed to separate individuals belonging to the same PS within the subpopulation. In Algorithm 3, when the subpopulation scale exceeds the subpopulation size limit, the individual in the subpopulation that is nearest to the offspring is compared with the offspring by the weighted Tchebycheff approach, and the better of the two is saved; otherwise, the offspring directly enters the next generation. In the following, we take one subpopulation as an example to explain the working mechanism of environmental selection. As depicted in Figure 4, an MMOP has three equivalent PSs, and all individuals within each triangle are Pareto optimal solutions. In Figure 4, the subpopulation consists of four different individuals (three existing members and the offspring). When the subpopulation scale is larger than the limit, the individual nearest to the offspring is compared with it, and the preferable one is preserved; otherwise, the offspring is inserted into the subpopulation. In this way, MOEA/D-HSA can find multiple different PSs.

Input: offspring v, subpopulation SP, and the subpopulation size limit.
Output: subpopulation SP.
(1)if |SP| + 1 > the subpopulation size limit, then
(2) Select the individual x_near in SP nearest to v;
(3) if v is better than x_near under the weighted Tchebycheff approach, then
(4)  Remove x_near from SP and add v to SP;
(5) end if
(6)else
(7) Add v to SP;
(8)end if
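The greedy environmental selection of Algorithm 3 can be sketched as follows (a minimal sketch; the parameter name `ns`, the Euclidean nearest-neighbor rule, and the toy aggregation function standing in for the weighted Tchebycheff value are our assumptions):

```python
import math

def environmental_selection(subpop, offspring, ns, agg):
    """Greedy subpopulation update: if the subpopulation is full, the
    offspring replaces its nearest neighbor (in decision space) only when
    it has a better (smaller) aggregation value; otherwise it is added."""
    if len(subpop) + 1 > ns:
        near = min(range(len(subpop)),
                   key=lambda i: math.dist(subpop[i], offspring))
        if agg(offspring) < agg(subpop[near]):
            subpop = subpop[:near] + subpop[near + 1:] + [offspring]
    else:
        subpop = subpop + [offspring]
    return subpop

# Toy aggregation: distance to the origin stands in for g^te.
agg = lambda x: math.dist(x, (0.0, 0.0))
sp = [[0.9, 0.9], [0.2, 0.1]]
sp = environmental_selection(sp, [0.15, 0.1], ns=2, agg=agg)
print(sp)  # [[0.9, 0.9], [0.15, 0.1]]
```

Because only the nearest neighbor competes with the offspring, individuals sitting on distant equivalent PSs are never displaced, which is what preserves decision-space diversity.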
2.6. Complexity Analysis

MOEA/D-HSA consists of the following components: subpopulation initialization, the reproduction operation, and environmental selection. The complexity of subpopulation initialization and environmental selection is linear in the population size NP and the subpopulation size, while the complexity of the reproduction operation is O(NP · n) per generation, where n is the decision variable dimension. In summary, since n is typically larger than the subpopulation size, the complexity of MOEA/D-HSA is dominated by the reproduction operation.

3. Results and Discussion

3.1. Performance Metric and Benchmark Test Functions

In our experiments, the 22 MMOPs from the CEC 2019 competition [42] are utilized to examine the performance of MMOEAs. In addition, the inverted generational distance in the objective space (IGDF) [43, 44] and the Pareto set proximity (PSP) [16] are utilized to assess the performance of the algorithms. The larger the PSP, the better the performance of the algorithm; for IGDF, it is just the opposite. They are defined as follows:

(1) IGDF = (Σ_{x∈PF*} min_{y∈PF} d(x, y)) / |PF*|, where d(x, y) denotes the Euclidean distance between x and y, and PF* and PF refer to the true PF and the obtained PF, respectively. IGDF assesses the diversity and convergence of the obtained solutions in the objective space.

(2) PSP = CR / IGDX, with IGDX = (Σ_{x∈PS*} min_{y∈PS} d(x, y)) / |PS*|, where IGDX is the inverted generational distance in the decision space, PS* and PS represent the true PS and the obtained PS, respectively, CR denotes the cover ratio between PS* and PS (the details refer to [16]), and d(x, y) is the Euclidean distance between x and y. PSP reflects the convergence and diversity of the population in the decision space.
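The inverted generational distance underlying both metrics can be computed as follows (a minimal sketch; the toy reference and obtained fronts are our own data):

```python
import math

def igd(reference, obtained):
    """Inverted generational distance: average, over the reference set, of
    the distance to the nearest obtained point. Applied to the PF it gives
    IGDF; applied to the PS in decision space it gives IGDX."""
    total = 0.0
    for r in reference:
        total += min(math.dist(r, u) for u in obtained)
    return total / len(reference)

true_pf = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.0, 1.0), (1.0, 0.0)]  # the middle point is missing
print(round(igd(true_pf, approx), 4))  # 0.2357, i.e. sqrt(0.5)/3
```

PSP would then be obtained as the cover ratio CR divided by this IGD value computed in the decision space, so a missing PS region inflates IGDX and drags PSP down.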

3.2. Competitive Algorithms and Parameter Setup

To test the efficiency of the algorithm, MOEA/D-HSA is compared with five up-to-date MMOEAs: DN-NSGA-II [13], TriMOEA-TA&R [18], LORD [33], MOEA/D [28], and MO_Ring_PSO_SCD [16]. For a fair comparison, the population size is set to 800, all algorithms are independently run 25 times, and the maximum number of function evaluations is set to 80000 [16]. In addition, the parameter settings of all competitors are consistent with their original references. For MOEA/D-HSA, the algorithm parameters (including HMS, HMCR, PAR, BW, and the subpopulation size) are set according to [40, 41, 45].

Furthermore, Wilcoxon's rank-sum test [46] and Friedman's test [47] are utilized to draw statistically reliable conclusions at a significance level of 0.05. The symbols "+," "−," and "=" denote that MOEA/D-HSA is distinctly better than, worse than, and not significantly different from the corresponding competitor, respectively.

3.3. Comparison with Other Algorithms

The experimental results for PSP are shown in Table 1. As indicated in Table 1, MOEA/D-HSA achieves the best PSP on 13 test functions and defeats DN-NSGA-II and MOEA/D on all test functions. MO_Ring_PSO_SCD obtains the maximum PSP on six of the 22 test functions, LORD acquires the best performance on two test functions, and TriMOEA-TA&R obtains the maximum PSP on one test function. Although MOEA/D-HSA is not the best on some test functions, the differences are small and acceptable. Therefore, the experimental results suggest that MOEA/D-HSA can find multiple PSs with good distribution and convergence.

The experimental results for IGDF are listed in Table 2. As shown in Table 2, MOEA/D-HSA obtains the smallest IGDF on six test functions and defeats TriMOEA-TA&R on most test functions. Moreover, MO_Ring_PSO_SCD is the best on six of the 22 test functions, DN-NSGA-II achieves the best IGDF on four test functions, and MOEA/D and LORD obtain the minimum IGDF on three test functions each. In conclusion, MOEA/D-HSA not only outperforms the competitors on most test functions in terms of IGDF but also approximates the true PF well.

3.4. Visual Comparison

The final population distributions of all algorithms are shown in Figure 5 for MMF4 and Figure 6 for SYM_PART rotated. As illustrated in Figure 5, every algorithm except MOEA/D is able to locate all PS regions, but TriMOEA-TA&R has poor convergence in the decision space, and DN-NSGA-II obtains the fewest Pareto optimal solutions on each PS region. In addition, compared with the other algorithms, MOEA/D-HSA not only locates more Pareto optimal solutions on each PS but also obtains PSs with good diversity and convergence.

Besides, for SYM_PART rotated with its nine equivalent PSs, Figure 6 shows that all algorithms have good convergence in the decision space, but MOEA/D and DN-NSGA-II cannot locate all PSs. Furthermore, MO_Ring_PSO_SCD and MOEA/D-HSA behave similarly on SYM_PART rotated. Compared with the remaining competitors, MOEA/D-HSA not only finds more Pareto optimal solutions on each PS but also shows good convergence. In summary, the visual comparison confirms that MOEA/D-HSA can find multiple PSs with good distribution and convergence.

To further verify the effectiveness of each component, two MOEA/D-HSA variants are tested. SS-variant-1 denotes MOEA/D-HSA without the elite learning strategy; SS-variant-2 denotes a variant of MOEA/D-HSA that deletes the individual with the largest Tchebycheff value in the subpopulation instead of using the environmental selection operation. Table 3 records the average PSP rankings obtained by MOEA/D-HSA and its variants on all test functions. As shown in Table 3, MOEA/D-HSA achieves the best ranking. Therefore, it can be concluded that each component of the proposed algorithm collaborates with the others to enhance its overall performance.

3.5. Parameter Analysis

All parameters except the subpopulation size are adaptively adjusted, so only the subpopulation size is discussed in this section. MOEA/D-HSA with different subpopulation sizes is tested on all problems, and Table 4 records the corresponding Friedman test rankings. As can be seen in Table 4, the performance of MOEA/D-HSA is best when the subpopulation size is set to 5; therefore, the subpopulation size is set to 5 in the proposed algorithm.

3.6. Analysis of Running Time

To compare running time, all algorithms were run 25 times on each test function, and the normalized wall-clock times are given in Table 5. As revealed in Table 5, TriMOEA-TA&R and MOEA/D have the shortest running time on 16 and 6 test functions, respectively, while LORD takes the longest running time on all test functions. Compared with the remaining algorithms, MOEA/D-HSA achieves the shortest running time on most test functions. It can be concluded that although the running time of MOEA/D-HSA is not the shortest, the difference is acceptable.

4. Conclusions

A simple but effective method, MOEA/D-HSA, is developed to tackle MMOPs. In MOEA/D-HSA, first, multiple individuals are assigned to the same weight vector to form a subpopulation for finding equivalent PSs. Then, an environmental selection method based on greedy selection is used to dynamically adjust the subpopulation size. Finally, a modified harmony search algorithm and an elite learning strategy are utilized to balance the diversity and convergence of the population. The experimental results show that MOEA/D-HSA is superior to the selected popular algorithms with respect to the IGDF and PSP values.

Besides, in this article, the dimension of the test problems is low. Developing MOEA/D-HSA for more complicated problems or real applications [48, 49], such as neural architecture search problems, is a meaningful direction for future study.

Data Availability

Data and code can be made available upon request to [email protected].

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61772391 and 62106186, in part by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2022JQ-670, and in part by the Fundamental Research Funds for the Central Universities under Grant QTZX22047.