Abstract

This paper uses an augmented Lagrangian method based on an inexact exponential penalty function to solve constrained multiobjective optimization problems. Two algorithms are proposed: the first uses the projected gradient method, while the second uses the steepest descent method. Both algorithms generate a set of nondominated points that approximate the Pareto optimal solutions of the initial problem. Theoretical convergence proofs are also provided for two different criteria on the set of generated Pareto-stationary points. In addition, we compared our method with NSGA-II and the augmented Lagrangian cone method on test problems from the literature. A numerical analysis of the obtained solutions indicates that our method is competitive on the test problems used for the comparison.

1. Introduction

In multiobjective optimization, the goal is to minimize and/or maximize several objective functions simultaneously. In most cases, however, there is no single point that optimizes all objective functions at the same time. Many concepts have therefore been developed, including Pareto optimality conditions, in order to characterize solutions in the multiple-objective case. Multiobjective optimization modeling has been used to solve many real-life problems, in fields such as physics, economics, and transportation [1-5].

Many methods have been proposed for solving optimization problems. These methods can be classified into two groups: exact methods and metaheuristic methods. In general, exact methods yield the exact Pareto optimal solutions of a given problem, but they are unsuitable for problems with numerous variables and/or a large number of objective or constraint functions. Metaheuristic methods aim to provide good approximations of the true Pareto optimal solutions. These methods are subjected to performance tests that evaluate characteristics such as computational time, convergence of the algorithms to Pareto optimal solutions, and distribution of the provided solutions over the Pareto front [6-10]. Many methods in the metaheuristic family are inspired by natural phenomena, and no theorems or propositions establish the optimality or convergence of their algorithms.

The methods in these two classes are mostly iterative, meaning that the search for a solution begins at an initial point. At each iteration, the current solution is improved in order to approach the optimal solution, and it is sometimes hard to know in advance how many iterations are needed. These methods, however, rest on a mathematical foundation, with convergence properties established through theorems and propositions. Among the most widely used and well-known methods are the steepest descent, projected gradient, and Newton methods. Recent works on these topics include the following: steepest descent methods for multicriteria optimization were proposed by Fliege and Svaiter [11]; projected gradient methods have been studied in numerous works [6, 8, 9, 12, 13], with variants such as the extension of the Hager-Zhang conjugate gradient method by Gonçalves and Prudente [14] and the nonlinear conjugate gradient method proposed by Lucambio Pérez and Prudente [15]; Newton methods were proposed in [16, 17], with variants such as the quasi-Newton methods presented in [18-20].

In practice, to solve constrained multiobjective optimization problems, penalty functions are used to transform the initial problem into an unconstrained one before the optimization process begins. In recent years, several approaches based on penalty functions have therefore been proposed for optimization problems subject to inequality constraints, such as the augmented Lagrangian function described in [21-33]. This approach has been extensively employed for solving single-objective optimization problems. More recently, Cocchi et al. [34, 35] developed an extension of this approach to the multiobjective case. Additionally, Upadhayay et al. [36] proposed a method based on the cone method, which transforms the initial multiobjective problem into a single-objective problem and then applies the augmented Lagrangian method.

In this paper, we extend the augmented Lagrangian method to multiobjective optimization problems using an inexact exponential penalty function. The most recent related version is the work of Echebest et al. [7] on the augmented Lagrangian with an exponential penalty function for single-objective optimization. We propose two algorithms sharing key characteristics with metaheuristics: they are stochastic, produce a population of solutions in a single run, and provide a good approximation of the Pareto optimal solutions. One is based on the steepest descent method, and the other on the projected gradient method. A theoretical convergence study is carried out for both algorithms. Our results have been compared with those of the NSGA-II method on test problems from the literature; on the problems considered, our algorithms perform better than NSGA-II. Furthermore, we also conducted a comparison with the augmented Lagrangian cone method, as it also relies on the augmented Lagrangian.

The remainder of the paper is structured as follows. Section 2 presents some preliminary concepts of multiobjective optimization. Section 3 details the proposed method through its algorithms and the theoretical and numerical performance studies. Section 4 gives our conclusions and perspectives on this work.

2. Preliminaries

In the rest of our work, we will use the following notations: is the set of positive reals, is the set of column vectors of dimension , and the image space of a matrix will be denoted by . The unit vector of dimension will be denoted . For any vectors , , we adopt the following conventions for equalities and inequalities: (i) for all ; (ii) for all ; (iii) for all ; (iv) with ; (v) with .

Without loss of generality, we will consider the multiobjective programming problem defined as follows: where ; are continuous and differentiable functions. is a nonempty convex subset of . Let us denote the admissible space of the problem (1) defined by .

Since it is not certain that a solution exists that simultaneously minimizes all objective functions, we state the following classical definitions of optimality in the Pareto sense.

Definition 1. A point is Pareto optimal for problem (1) if there exists no other such that

The set of Pareto optimal points is thus given by . Definition 1 states an important property of Pareto optimality; we now present the following definition, which proposes conditions that are simpler to verify in applications.

Definition 2. A point is weakly Pareto optimal for problem (1) if there exists no other such that
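For illustration, the dominance tests underlying Definitions 1 and 2 can be sketched as follows (a minimal sketch assuming minimization; the function names are ours):

```python
# Dominance tests behind Definitions 1 and 2 (minimization assumed).

def dominates(fa, fb):
    """fa Pareto-dominates fb: no worse everywhere, strictly better somewhere."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def strictly_dominates(fa, fb):
    """fa strictly dominates fb in every component. A point is weakly Pareto
    optimal if no feasible point strictly dominates its objective vector."""
    return all(a < b for a, b in zip(fa, fb))
```

A point is Pareto optimal when no feasible point `dominates` its objective vector, and weakly Pareto optimal when no feasible point `strictly_dominates` it.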

The set of weakly Pareto optimal points is thus given by . A relation between and is that is larger and contains , i.e., . We say that is a local Pareto optimum (resp., a local weak Pareto optimum) if there exists a neighborhood such that is Pareto optimal (resp., weakly Pareto optimal) for restricted to . Here, we use a partial order induced by , ; this implies that a necessary, but generally not sufficient, condition for weak Pareto optimality is given by the following relation: where denotes the Jacobian matrix of . A point is stationary for if it satisfies relation (4). A necessary optimality condition in terms of stationarity is given by the following definition.

Definition 3. A point is said to be Pareto-stationary for problem (1) if, for all ,

Note also that if is not Pareto-stationary, there exists a feasible direction such that . Posing , we can see that is continuous but does not admit a unique solution. Thus, as in [11], we can define a problem that is well defined, i.e., that has a unique solution, given by the following relation:

Now, denoting by the function giving the optimal value of problem (6) and by the one giving its optimal solution, we have for all , and a point is Pareto-stationary if and only if .
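For the bicriteria case, problem (6) admits a closed-form solution through its dual: the optimal direction is the negative of a convex combination of the two gradients, with the weight obtained by minimizing the norm of that combination over [0, 1]. The sketch below is our own illustration of this special case, not the general solver used in the paper:

```python
# Steepest-descent direction for two objectives (subproblem (6)):
#   min_d  max_i <grad_i, d> + ||d||^2 / 2.
# Via duality, d* = -(lam*g1 + (1-lam)*g2) with lam in [0, 1] minimizing
# ||lam*g1 + (1-lam)*g2||^2, and the optimal value is -||d*||^2 / 2.

def steepest_direction_2obj(g1, g2):
    """Return (d, theta); theta == 0 exactly at a Pareto-stationary point."""
    diff = [a - b for a, b in zip(g1, g2)]
    denom = sum(t * t for t in diff)
    if denom == 0.0:
        lam = 0.5  # identical gradients: any convex combination works
    else:
        # Unconstrained minimizer of the quadratic, clipped to [0, 1].
        lam = max(0.0, min(1.0,
                  sum(b * (b - a) for a, b in zip(g1, g2)) / denom))
    d = [-(lam * a + (1.0 - lam) * b) for a, b in zip(g1, g2)]
    theta = -sum(t * t for t in d) / 2.0
    return d, theta
```

For example, with opposing gradients the returned value is zero, signaling Pareto stationarity.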

Now, we can give the following lemma, which proposes a well-known equivalent characterization of a Pareto-stationary point from the point of view of the projection.

Lemma 4. A point is said to be Pareto-stationary for problem (1) if, for all , where denotes the operator of projection onto the convex set .

Now, taking into account Definition 3 and Lemma 4, which characterize a Pareto-stationary point, we can give two equivalent definitions of an -Pareto-stationary point.

Definition 5. -Approximate Pareto-stationary point (APSP1). Let . We say that is an -Pareto-stationary point for problem (1) if

Definition 6. -Approximate Pareto-stationary point (APSP2). Let . We say that is an -Pareto-stationary point for problem (1) if

The methods we propose are built on the strategies of the MOPG and MOSD methods, presented in Algorithm 1 and Algorithm 2, respectively. In both algorithms, Armijo's rule is used to find the descent step; it is given by Algorithm 3. The principle of Armijo's rule is to determine, in a finite number of iterations, a real such that the values of the objective function always decrease in the componentwise partial order.

  Data: ; .
1 k=1
2 while is not Pareto-stationary do
3   Compute
       
     ;
4   
5   ;
6 end
  Data: ; .
1 k=1
2 while is not Pareto-stationary do
3   Compute
       
     ;
4   
5   ;
6 end
  Data: ; ;   ; ; .
1 ;;
2 while do
3 ;
4 end
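For illustration, the componentwise backtracking performed by Armijo's rule (Algorithm 3) can be sketched as follows; the parameter names are ours:

```python
# Componentwise Armijo backtracking for multiobjective descent: shrink the
# step t until every objective satisfies the sufficient-decrease condition
#   f_i(x + t*d) <= f_i(x) + sigma * t * <grad f_i(x), d>   for all i.

def armijo_multiobjective(fs, grads, x, d, sigma=1e-4, shrink=0.5, t0=1.0,
                          max_iter=50):
    """fs: objective functions; grads: their gradient functions; d: descent
    direction. Returns a step t with componentwise sufficient decrease."""
    slopes = [sum(gi * di for gi, di in zip(g(x), d)) for g in grads]
    fx = [f(x) for f in fs]
    t = t0
    for _ in range(max_iter):
        x_new = [xi + t * di for xi, di in zip(x, d)]
        if all(f(x_new) <= fxi + sigma * t * s
               for f, fxi, s in zip(fs, fx, slopes)):
            return t
        t *= shrink
    return t
```

Because the decrease condition must hold for every objective simultaneously, the accepted step is valid in the componentwise partial order, matching the principle stated above.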

Now, we present some main results proving that the MOPG and MOSD algorithms produce solutions in a finite number of iterations. We start with the following lemma [11], which shows that Algorithm 3 is well-defined.

Lemma 7. Let , , and such that . Then, there exists such that for all .

The following lemma [11, 34] shows that the MOPG and MOSD algorithms are well-defined, i.e., that these algorithms stop in a finite number of iterations.

Lemma 8. Let be the sequence generated by Algorithm 1 and Algorithm 2. If has bounded level sets in the sense that is compact, then each limit point of is a Pareto-stationary point.

3. Main Results

3.1. Algorithms

In this section, we present the augmented Lagrangian method with an inexact exponential penalty function, which transforms the constrained multiobjective programming problem into an unconstrained one. The augmented Lagrangian function established from an inexact exponential penalty function is given by the following formula: for all , where is the Lagrange multiplier, a penalty parameter, and a unit vector of . It is important to note that the augmented Lagrangian function established from an exponential penalty function is differentiable. The technique is the same as in the quadratic case, but adapted to the exponential case.
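To fix ideas, the sketch below uses the classical exponential multiplier form, in which the i-th Lagrangian component is f_i(x) + (1/rho) * sum_j lam_j * (exp(rho * g_j(x)) - 1) for constraints g_j(x) <= 0, with the multiplicative dual update lam_j <- lam_j * exp(rho * g_j(x)). This particular scaling is an assumption; the paper's exact formula may differ.

```python
import math

# One classical exponential-penalty augmented Lagrangian for constraints
# g_j(x) <= 0 (assumed form; the paper's exact scaling may differ):
#   L_i(x, lam, rho) = f_i(x) + (1/rho) * sum_j lam_j * (exp(rho*g_j(x)) - 1)

def exp_lagrangian(f_i, gs, x, lam, rho):
    """Value of the i-th component of the augmented Lagrangian at x."""
    penalty = sum(l * (math.exp(rho * g(x)) - 1.0)
                  for l, g in zip(lam, gs)) / rho
    return f_i(x) + penalty

def update_multipliers(gs, x, lam, rho):
    """Multiplicative dual update: lam_j <- lam_j * exp(rho * g_j(x)).
    Multipliers grow at infeasible points (g_j > 0) and shrink at strictly
    feasible ones (g_j < 0), unlike the additive quadratic-penalty update."""
    return [l * math.exp(rho * g(x)) for l, g in zip(lam, gs)]
```

Note that at a strictly feasible point the update drives the corresponding multiplier toward zero, mirroring the behavior of the penalty term discussed later in the text.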

The gradient of the component of the augmented Lagrangian based on an inexact exponential penalty is given by

Adapted to the augmented Lagrangian subproblem established from an inexact exponential penalty function, Definition 5 and Definition 6 read as follows:

We can now define -Pareto-stationarity in two different ways, expressed by the following two definitions.

Definition 9. Let . A point is an -approximate Pareto-stationary point (APSP1) for the Lagrangian based on an inexact exponential penalty if, for every feasible direction , it holds that

Definition 10. Let . A point is an -approximate Pareto-stationary point (APSP2) for the Lagrangian based on an inexact exponential penalty if, for every feasible direction , it holds that

Thus, based on the ideas of [7, 34], we propose an augmented Lagrangian algorithm based on an inexact exponential penalization for solving multiobjective optimization programs.

  Data: ; ; ; ; ;
     ; such that ; a list of feasible
     nondominated points for the original problem.
1 for do
2  Let the current Augmented Lagrangian using the Exponential Penalty
   Function defined as:
         
   Set ;
3  for do
4    if
     then
5       Set ;
6       Set ;
7       Set ;
8       if then
9         Set ;
10       end
11    end
12  end
13  Set ;
14  for do
15    Set ;
16    Set ;
17    Set ;
18    Set ;
19  end
20  if then
21    Set ;
22  else
23    Set ;
24  end
25 end
  Data: ; ; ; ; ;
     ; such that ; a list of feasible
     nondominated points for the original problem.
1 for do
2  Let the current Augmented Lagrangian using the Exponential Penalty
   Function defined as:
         
   Set ;
3 for do
4    if
     then
5       Set ;
6       Set ;
7       Set ;
8       if then
9         Set ;
10       end
11    end
12  end
13  Set ;
14  for do
15    Set ;
16    Set ;
17    Set ;
18    Set ;
19  end
20  if then
21    Set ;
22  else
23    Set ;
24  end
25 end

A detailed description of the two algorithms is as follows. As input, we define a set of nondominated points of the initial problem under the bound constraints only (without the other constraints). This set serves as the set of reference points from which the Pareto optimal solutions are sought. At each iteration, the Lagrangian function established from an exponential penalty function is used with a penalty parameter and Lagrange multipliers . The multiplier estimate for each point is given by the relation which is multiplicative, unlike the quadratic-penalty form, where the dual update is additive. In the equation in line 17 of both algorithms, the parameter measures progress in terms of infeasibility and complementarity. In line 4 of Algorithm 4 and Algorithm 5, each is used for exploration. If for Algorithm 5, , or for Algorithm 4, where is the optimal value of the problem, then the point is used to generate a new descent direction and a descent step obtained by . A new point is then determined by the MOPG or MOSD algorithm; it is an -Pareto-stationary point, where varies and converges to zero over the iterations. This new point is used to filter the points of : if dominates points of the set , then we delete those points and add to the set. For the update of the Lagrange multipliers, note that for a such that , the penalty term for infeasible points (i.e., ) and tends to for feasible points (i.e., ) [10].
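The filtering step just described (delete from the archive every point dominated by the new point, and insert the new point only if it is itself nondominated) can be sketched as follows; the function names are ours:

```python
# Archive filtering: a new candidate enters the nondominated set only if no
# stored point dominates it; any stored point it dominates is removed.

def dominates(fa, fb):
    """Componentwise Pareto dominance test (minimization)."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def update_archive(archive, new_point):
    """archive: list of objective vectors; returns the filtered archive."""
    if any(dominates(p, new_point) for p in archive):
        return archive  # the candidate is dominated: archive unchanged
    kept = [p for p in archive if not dominates(new_point, p)]
    kept.append(new_point)
    return kept
```

Applying this update at every iteration keeps the set free of dominated points throughout the run.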

We now proceed to the convergence analysis of the two algorithms, under assumptions such as the convexity of the objective and constraint functions and the nonemptiness of the admissible space.

3.2. Convergence Analysis

In this section, we present convergence results for Algorithm 4 and Algorithm 5. As usual in the scalar case, we also assume that the objective functions are convex, as indicated by the assumptions below.

Assumption 11. The set is closed and convex. The set is not empty.

Assumption 12. The objective function has bounded level sets in the multiobjective sense, i.e., the set is compact.

Assumption 13. The sequence is such that .

Note that under Assumption 11, Assumption 12, and Proposition 14 from the work of Cocchi and Lapucci [34], we can deduce that the MOPG and MOSD subroutines of Algorithms 4 and 5, respectively, are well-defined, i.e., they stop in a finite number of iterations. Regarding the step size, the well-definedness of Algorithm 3 follows from Lemma 7, a main result proving that the step size is determined in a finite number of iterations.

The following proposition characterizes the solutions of the sets generated by Algorithms 4 and 5.

Proposition 14. Let be the sequence of set generated by Algorithm 4 and Algorithm 5. Then, for each and for each , is an -Pareto-stationary point and is not dominated by any other point in with respect to equation (10).

Now, for the convergence results for each point , we start by giving the following technical result.

Lemma 15. Let be a sequence of sets generated by Algorithm 4 or Algorithm 5. Let be a sequence such that, for all , . Assume that is admissible. Then, for all such that , we have for all sufficiently large.

Proof. Let , for all . From the instructions of the algorithms, we consider the following two cases: (a) The sequence is bounded: by definition, since , for sufficiently large we get . (b) bounded: according to line (25) of the instructions of Algorithm 4 or Algorithm 5, there exists a such that for all , . We obtain for sufficiently large . Thus, , which implies that . By the definition of , we get the result.

We now prove that the points generated by the ALEXPMO1 and ALEXPMO2 algorithms are feasible, based on the following propositions.

Proposition 16. Feasibility for -approximate ALEXPMO1. Let be the sequence of sets generated by Algorithm 4 with APSP1. Let be any sequence of points such that for all . Then, each cluster point of is a feasible point of problem (1), i.e., .

Proof. The proof follows from the very definition of Algorithm 4 and from Proposition 5 in the work of Drummond and Iusem [12].

Proposition 17. Feasibility for -approximate ALEXPMO2. Let be the sequence of sets generated by Algorithm 5 with APSP2. Let be any sequence of points such that for all , with . Let , for all and for all . Then, is a feasible point of problem (1), i.e., .

Proof. Let be an infinite subset such that

Consider the following two cases: (i) the sequence is bounded; (ii) the sequence is unbounded, i.e.,

Case 1. Since is bounded, from the instructions of the algorithm, there must exist such that, for all , we have . This means that , i.e., . Since by assumption for all and , it has to be . Thus, we obtain , which implies that . But ; hence,

Case 2. From the instruction of Algorithm 5, we have at each iteration

Letting and , we obtain for all

Since is nonempty, we can choose . Using the convexity of the , we can bound the last term as follows:

This inequality is satisfied if the s are convex (i.e., ), and is nonnegative since .

Using equation (23) and by dividing by , we obtain

Now, suppose by contradiction that .

Given that, , is bounded, and the are continuous; for sufficiently large , we have

In addition, we have

Let us set . Thus, using the equation (25), we have For sufficiently large , we obtain which is absurd. Thus, the set , i.e., is feasible.

Finally, we prove that a limit point of the sequence generated by the algorithm is a Pareto-optimal point.

Proposition 18. Optimality for -approximate ALEXPMO1. Let be the sequence of sets generated by Algorithm 4 with APSP1. Let be any sequence of points such that for all . Suppose that the sequence is bounded. Then, every cluster point of is a Pareto-stationary point of problem (1).

Proof. Let be an infinite subset such that According to Proposition 16, we have . Suppose by contradiction that is not Pareto-stationary for problem (1). Since, by definition, is convex, there exists such that and By the instructions of Algorithm 1, posing , we get at each iteration that Using the properties of the projection, we have for all that By adding and subtracting and rearranging, we get Let us set Using the convexity of , the last two terms can be bounded as follows: Now, considering the term , we have Recalling that , we have Let us set . Now, replacing the different transformations in equation (35), we obtain Passing to the limit for sufficiently large , since and are continuous, , the sequence is bounded, and , we obtain which contradicts our initial hypothesis.

Proposition 19. Optimality for -approximate ALEXPMO2. Let be the sequence of sets generated by Algorithm 5 with APSP2. Let be any sequence of points such that for all . Then, every cluster point of is a Pareto-stationary point of problem (1).

Proof. Let be an infinite subset such that According to Proposition 17, we have . Suppose by contradiction that is not Pareto-stationary for problem (1). Since, by definition, is convex, there exists such that and By posing and using the instructions of Algorithm 5, we have at each iteration Now consider the term : since by definition the constraints are convex, using the properties of convexity, we get which implies that Using the fact that by definition, we have Equation (25) becomes Thus, considering the term , if , for sufficiently large, we have Recalling that for , , and , we obtain which contradicts our initial assumption.

3.3. Numerical Experiments

In this section, we apply Algorithms 4 and 5 to problems with bound constraints and with linear and nonlinear constraints. We first compared the two methods, which we named ALEXPMO1 and ALEXPMO2, respectively; then, as our methods are nonscalar, we compared them with a well-known nonscalar method for solving multiobjective optimization programs, namely, the NSGA-II method. As a reminder, the NSGA-II method is a genetic algorithm based on a nondomination strategy. The code of the NSGA-II method is available at https://www.mathworks.com/matlabcentral/fileexchange/49806-matlab-code-for-constrained-nsga-ii-dr-s-baskar-s-tamilselvi-and-p-r-varshini.

In order to compare the different methods, we use the performance profiles developed by Dolan and Moré [37] and later used in many works [34, 35, 38-41], with respect to the purity metric and the spread metrics (-spread and -spread). The purity metric measures the quality of the Pareto front generated by an algorithm: it gives the percentage of nondominated solutions generated by the method [41]. The purity metric is given by the following formula: with the solutions generated by a solver for a problem , where is the set of solvers and is the set of test problems. represents the set of solutions generated by all solvers for problem , without the dominated points.
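For illustration, the purity computation can be sketched as follows, assuming minimization and the usual componentwise dominance test:

```python
# Purity of a solver's front: the fraction of its points that belong to the
# nondominated reference front built from all solvers' outputs.

def dominates(fa, fb):
    """Componentwise Pareto dominance test (minimization)."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def purity(front, all_points):
    """front: one solver's points; all_points: union over all solvers."""
    reference = [p for p in all_points
                 if not any(dominates(q, p) for q in all_points)]
    if not front:
        return 0.0
    return sum(1 for p in front if p in reference) / len(front)
```

A purity of 1.0 means every point the solver produced survives in the combined nondominated front.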

The spread metrics used are -spread and -spread. The -spread metric measures the maximum gap between the solutions generated by a solver [41]. It is given by the following formula: where represents the number of solutions generated by the solver, is the number of objective functions, and the values of are arranged in ascending order. The -spread metric measures the distribution of the solutions generated by a solver [41]. It is computed by the following formula: where is the average of the with . and represent the extreme points, indexed by and . We used the technique proposed in [41] to compute the extreme points of problems that do not have an analytic front: we first removed the dominated points from the union of all these fronts; then, for each component of the objective function, we selected the pair corresponding to the largest pairwise distance measured by .
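A simplified sketch of the two spread metrics is given below; it omits the extreme-point handling of [41] and sorts by the first objective for the second metric, so it should be read as an illustration rather than the exact formulas:

```python
# Simplified spread metrics for a list of objective vectors.

def gamma_spread(front):
    """Largest gap between consecutive sorted values, over all objectives."""
    worst = 0.0
    for j in range(len(front[0])):
        vals = sorted(p[j] for p in front)
        for a, b in zip(vals, vals[1:]):
            worst = max(worst, b - a)
    return worst

def delta_spread(front):
    """Uniformity of consecutive gaps after sorting by the first objective:
    0 means perfectly even spacing; larger values mean uneven spacing.
    Requires at least two points."""
    pts = sorted(front)
    dists = [sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
             for p, q in zip(pts, pts[1:])]
    mean = sum(dists) / len(dists)
    return sum(abs(d - mean) for d in dists) / sum(dists)
```

An evenly spaced front yields a small value for both metrics; isolated clusters inflate them.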

We then use the performance profiles proposed in [37, 40] to assess performance with respect to the four metrics presented above; we refer the reader to the articles cited above for more information on performance profiles. Recall that a performance profile is presented as a diagram of a cumulative distribution function defined as follows: with . Since performance profiles apply to metrics whose lowest value indicates better performance, for the purity metric we pose as proposed in [40]. For more information on the metrics, we refer the reader to the references cited above.
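The cumulative distribution underlying the performance profiles can be sketched as follows, where the ratio of a solver on a problem is its metric value divided by the best value achieved by any solver on that problem:

```python
# Performance profile: for each solver s, the fraction of problems on which
# its performance ratio (metric value / best value over all solvers) is at
# most tau.

def performance_profile(times, tau):
    """times[s][p]: metric value of solver s on problem p (lower is better)."""
    n_problems = len(next(iter(times.values())))
    best = [min(times[s][p] for s in times) for p in range(n_problems)]
    return {s: sum(1 for t, b in zip(ts, best) if t / b <= tau) / n_problems
            for s, ts in times.items()}
```

Evaluating this function over a grid of tau values produces the cumulative curves shown in the figures.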

We implemented Algorithm 4 and Algorithm 5 in MATLAB. The search directions and the optimal value are computed by solving subproblem (6). Algorithm 4 and Algorithm 5 are run with the following parameters: , , , , , and . Since we use Armijo's rule to determine the descent step, we use and . For the NSGA-II method, we used the default parameters except for the number of generations, which was set to 20,000, since the number of generations and the number of executions for each problem make it possible to reduce the sensitivity of its genetic operators.

As presented previously in the section devoted to the convergence analysis, Algorithms 4 and 5 converge for convex problems. We therefore tested their performance on convex problems from the literature. Table 1 presents the set of test problems that we used, 70 in total. The first column gives the name of each problem, the second the number of variables, the third the number of objectives, the fourth the bound constraints, and the last column the source of the problem. Since the algorithms run on problems with bound constraints, we defined search domains of the form for problems that do not have bound constraints; the names of these problems are preceded by the letter M in Table 1 to indicate that they have been modified. The bound constraints are transformed into linear constraints in the following way: given that the domain is defined as , we obtain and , whose number is , with the number of variables. For Algorithm 4 and Algorithm 5, the set of initial nondominated points is determined in the space , and the projection in Algorithm 4 is onto the domain. We used an HP EliteBook laptop equipped with an Intel Core i7-3687U processor with a base frequency of 2.10 GHz (up to 2.60 GHz) and 4 GB of RAM to test our algorithms.
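The transformation of bound constraints into linear inequality constraints described above can be sketched as follows (each box l <= x <= u yields the 2n constraints x_i - u_i <= 0 and l_i - x_i <= 0):

```python
# The box l <= x <= u is rewritten as 2n inequality constraints g(x) <= 0:
# x_i - u_i <= 0 and l_i - x_i <= 0, for i = 1, ..., n.

def box_to_linear_constraints(lower, upper):
    """Return the 2n constraint functions induced by the bounds."""
    cons = []
    for i, (l, u) in enumerate(zip(lower, upper)):
        cons.append(lambda x, i=i, u=u: x[i] - u)  # upper bound: x_i <= u_i
        cons.append(lambda x, i=i, l=l: l - x[i])  # lower bound: x_i >= l_i
    return cons
```

A point is inside the box exactly when all returned functions are nonpositive at it.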


We begin our analysis by comparing the performance of the algorithms ALEXPMO1, ALEXPMO2, and NSGA-II in terms of computation time, the purity metric, and the spread metrics.

Figure 1 shows that ALEXPMO1 and ALEXPMO2 are competitive and that ALEXPMO2 has the highest probability of being the best method. Compared with NSGA-II, ALEXPMO2 is in the lead up to an interest factor of 8, with a probability of 0.50. Beyond an interest factor of 8, however, NSGA-II becomes the best method, with a probability of 0.90 at an interest factor of 10.

Figure 2 examines the performance of the algorithms in terms of purity. Figure 2(a) shows no significant difference between ALEXPMO2 and ALEXPMO1. However, Figures 2(b) and 2(c) reveal that ALEXPMO2 and ALEXPMO1 surpass NSGA-II, with a probability of about of being the best methods.

Figure 3 focuses on the performance of the algorithms in terms of -spread. According to Figure 3(a), ALEXPMO1 is better than ALEXPMO2 for an interest factor of less than 21. However, ALEXPMO2 is better than ALEXPMO1 for an interest factor greater than 21. As for Figures 3(b) and 3(c), ALEXPMO2 is better than NSGA-II and ALEXPMO1 for an interest factor less than 51 with a probability of about 0.30 and remains competitive for an interest factor greater than 51.

Figure 4 gives the performance of the algorithms in terms of -spread. Figures 4(a)-4(c) show that ALEXPMO1 and ALEXPMO2 are competitive with NSGA-II in terms of the uniform distribution of solutions over the Pareto front.

We also added a comparative study on the MLF1, BNH1, M-LAP2, and M-JOS problems. First, we present the Pareto optimal fronts of these problems obtained with our two algorithms and with the NSGA-II algorithm.

Here, we give the values of the performance metrics presented above, namely, the computational time, purity, and spread (-spread and -spread) of the three algorithms, reported in Tables 2-6.

We examined the Pareto fronts for different reference problems: MLF1 (a one-variable multimodal problem), BNH1 (a two-variable constrained problem), M-LAP2 (30 and 100 variables), and M-JOS1 (100 variables). Figures 5(a) and 5(b), 6(a) and 6(b), and 7 show the results obtained with 100 solutions. The results indicate that the ALEXPMO1 and ALEXPMO2 methods are superior to NSGA-II according to the purity, -spread, and -spread metrics. For the MLF1 problem, the solutions proposed by ALEXPMO1 and ALEXPMO2 are closer to the global front than those generated by NSGA-II.

3.4. Comparisons of ALEXPMO1 with another Lagrangian Method

In this subsection, we compare the ALEXPMO1 method with another well-established method called “augmented Lagrangian cone method” (ALCM), developed by Upadhayay et al. [36]. The purpose of this comparison is to evaluate the relative performances of both approaches, namely, ALEXPMO1 and ALCM.

We selected six representative test problems that were successfully solved by ALCM to conduct this comparison. The parameters of our method, ALEXPMO1, were fixed as defined in the previous sections. For the ALCM method, we set the parameter to 30 for all problems, chose as a random number generated within the interval (0,1), set to a value of , fixed at 2.5, set to 10, and set the bounds of the Lagrange multipliers, and , to 0 and 1, respectively.

To solve the subproblem, we set the parameters to 0.95, to 0.80, initialized with random values within the interval (0, 1), and set to 0.

By conducting a comprehensive comparative analysis of the performances of both methods on the six test problems, including the problem, problems with 50, 100, and 500 variables, the problem, and the problem, we found that ALEXPMO1 outperformed ALCM in terms of solution distribution on the Pareto front. Although ALCM may be competitive for specific problems, it is important to highlight that, with the chosen parameters, ALEXPMO1 manages to identify solutions on the global Pareto front that are superior to those obtained by ALCM for the multimodal problem .

To support our results, we conducted a visual comparison of the Pareto fronts generated by both methods, illustrated in Figures 8(a)-8(c) and 9(a)-9(c). These in-depth and objective analyses provide a better understanding of the advantages and limitations of each method, thus contributing to the advancement of research in the field of multiobjective optimization.

Table 7 presents the comparative study of ALEXPMO1 and ALCM, while Figures 8 and 9 illustrate the Pareto fronts generated by both methods for each problem.

4. Conclusion

In this study, we presented a new approach for solving multiobjective optimization problems, combining an inexact exponential penalty function with the augmented Lagrangian technique. To solve the subproblems arising from this approach, we used the projected gradient and steepest descent methods, which yielded Algorithms 4 and 5, respectively, for general convex multiobjective optimization problems. The convergence properties of both methods were examined under assumptions such as convexity and boundedness of level sets. Our numerical experiments indicate that the two proposed algorithms are competitive with existing methods in the literature.

Data Availability

The data used to support the conclusion of the study are included in the paper.

Conflicts of Interest

The authors declare no competing interest.

Authors’ Contributions

Appolinaire Tougma and Kounhinir Some contributed equally to this work.

Acknowledgments

The authors wish to thank the anonymous referees for their remarks that contributed to improve the presentation.