Abstract

The curriculum sequencing problem is crucial to e-learning systems; it is an NP-hard optimization problem and is commonly solved by swarm intelligence. As a form of swarm intelligence, particle swarm optimization (PSO) is widely used in various kinds of optimization problems. However, PSO is found to be ineffective in complex optimization problems. The main reason is that PSO is ineffective in diversity preservation, leading to a high risk of being trapped by local optima. To solve this problem, a novel hybrid PSO algorithm is proposed in this study. First, a competitive-genetic crossover strategy is proposed for PSO to balance convergence and diversity. Second, an adaptive polynomial mutation is introduced into PSO to further improve its diversity preservation ability. Furthermore, a curriculum scheduling model is proposed, where several constraints are taken into consideration to ensure the practicability of the curriculum sequencing. The numerical comparison experiments show that the proposed algorithm is effective in solving function optimization in comparison to several popular PSO variants; furthermore, for the optimization of the designed curriculum sequencing problem, the proposed algorithm shows significant advantages over the compared algorithms with respect to the degree of satisfaction of the objectives, i.e., 20, 14, and 5 percentage points higher, respectively.

1. Introduction

With the rapid development of communication technologies, e-learning is growing into an important teaching method. However, traditional e-learning systems commonly ignore reusability and are not adaptive [1]. Furthermore, traditional e-learning systems lack flexibility and provide only fixed curriculum paths rather than adaptive learning for students [2, 3]. Since the problem is NP-hard, individualizing paths for each student is costly to providers, as teachers typically perform the sorting process manually. Therefore, it is of great value to automate curriculum sequencing to reduce the cost of high-quality e-learning. To address this issue, many kinds of models and optimization methods have been proposed for curriculum sequencing problems [4–11], where particle swarm optimization (PSO) [12] is widely adopted as the optimizer.

As a form of swarm intelligence, PSO builds on a population-based searching technique, where each particle (a feasible solution in the swarm) includes two attributes, i.e., the position and the velocity. The former represents the values of the decision variables of the particle, and the latter is the evolutionary direction of the particle. Based on equations (1) and (2), the swarm keeps updating and traverses the decision space to find better solutions:

v_{i,d}^{t+1} = w · v_{i,d}^{t} + c_1 · r_1 · (p_{i,d}^{t} − x_{i,d}^{t}) + c_2 · r_2 · (g_{d}^{t} − x_{i,d}^{t}),   (1)

x_{i,d}^{t+1} = x_{i,d}^{t} + v_{i,d}^{t+1},   (2)

where x_{i,d}^{t} is the d-th dimension of the position of the i-th particle in generation t, g^{t} is the best particle of the current swarm, p_{i}^{t} is the best position searched by the i-th particle so far, v_{i,d}^{t} is the velocity of the particle, w, c_1, and c_2 are the coefficients, and r_1 and r_2 are two random numbers generated within [0, 1]. As one can see, PSO is of high simplicity.
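As a concrete illustration, equations (1) and (2) can be implemented in a few lines. The coefficient values below are common defaults from the PSO literature, not parameters taken from this study:

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7298, c1=1.49618, c2=1.49618):
    """One canonical PSO update following equations (1) and (2).

    positions, velocities, pbest: lists of per-particle coordinate lists;
    gbest: the best position found by the swarm so far.
    """
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()  # r1, r2 ~ U(0, 1)
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))  # eq. (1)
            positions[i][d] += velocities[i][d]                            # eq. (2)
    return positions, velocities
```
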

However, PSO is found to be ineffective in complex problems, such as multimodal and large-scale optimization problems [13, 14]. The main reason is that PSO lacks global searching ability and is prone to being trapped by local optima due to its poor ability to balance exploration and exploitation.

To this end, a great deal of effort has been devoted, which can be mainly classified into three categories: improvements on the parameters [15–19], different kinds of topologies [20–22], and hybridizations with other kinds of techniques and algorithms [23, 24]. Nevertheless, the existing algorithms still cannot obtain the global optima when solving complex problems. The main reason is that the current PSO and its variants are still inefficient in diversity preservation. For instance, CSO tries to enhance its diversity preservation ability by guiding the updated particles with the mean position of the whole swarm. However, the mean position is shared by all the updated particles, resulting in a risk of premature convergence.

To improve the diversity preservation ability of PSO, this study proposes a novel hybrid particle swarm optimization algorithm. The main contributions of this study are listed as follows:
(1) A competitive-genetic crossover operation is proposed, which combines the competitive mechanism, the genetic crossover, and the elite strategy to improve the diversity of the exemplars and ensure a comprehensive exploitation ability of PSO.
(2) An adaptive polynomial mutation is introduced into PSO to further enhance its diversity preservation ability, leading to the proposed hybrid particle swarm optimization algorithm.
(3) A curriculum sequencing model is proposed, and the proposed algorithm is applied to solving the curriculum sequencing problem to improve the efficiency of e-learning systems.

The rest of this study is organized as follows. Section 2 presents the literature review for the mainstream of the improvements on PSO. Section 3 provides a detailed description of the proposed algorithm. The experiments and the case study are conducted in Section 4, and we conclude this study in Section 5.

2. Related Work

As mentioned above, the current work on improving PSO can be mainly categorized into the following three classes.

2.1. Control of the Parameters

Different parameters in PSO serve different functions: the inertia weight mainly focuses on keeping the particles moving according to the historical information for global optimization, while the parameters in the social components are related to convergence. Shi and Eberhart propose a linear control method to dynamically adjust the inertia weight [25]. The main goal of their work is to enhance the diversity preservation ability of PSO in the early optimization stage and emphasize the convergence ability of PSO in the later stage. Furthermore, they propose a fuzzy adjustment strategy for the inertia weight [26]. Ratnaweera et al. propose HPSO-TVAC to dynamically adjust the acceleration coefficients during the run [16]. Zhan et al. propose the adaptive particle swarm optimization algorithm, changing the acceleration coefficients according to a predesigned evolutionary state estimation strategy [18]. Piotrowski et al. put forward a study to investigate the influence of the swarm size on the performance of PSO [27]. Tian and Shi propose a PSO variant, where a logistic map and a sigmoid-like inertia weight are utilized to initialize the swarm and to balance exploration and exploitation, respectively [28].

2.2. Modifications on the Topologies

The main idea of the methods in this category is to design new information-exchanging topologies that improve the diversity preservation ability and thereby enhance the search ability of PSO. Mendes et al. investigate a fully informed topology, where particles learn from their neighbors instead of the global best particle in the swarm [20]. Liang et al. propose the comprehensive learning strategy, which allows each particle’s historical personal best position to be the exemplar for others [21]. Chen et al. propose ALCPSO, which dynamically replaces the global best particle with another individual according to several predefined rules to guide the updated particles [29]. Cheng et al. propose FBE, where the whole swarm is divided into two subswarms and a competitive mechanism is designed to select exemplars for particles [30]. The weak particles in the competition learn from the best particle in the internal subswarm and from randomly selected particles of the external subswarm, while a mutation operation is executed on the strong particles. Furthermore, Cheng and Jin propose the competitive swarm optimizer (CSO), which only updates half of the particles and adopts the strong particles in the competition to guide the weak particles [22]. Inspired by social learning behaviors, Cheng and Jin propose SLPSO, which allows particles to learn from all the particles that are better than themselves [31]. Yang et al. propose DLLSO [32], which divides particles into different levels based on their fitness values. Afterwards, each particle chooses two different exemplars from superior levels. Zhang et al. propose a modified PSO by introducing a dynamic neighborhood-based learning strategy to enhance the diversity preservation ability of PSO [33]. Zeng et al. propose DNSPSO, where a novel velocity updating mechanism adjusts the personal best position and the global best position based on a designed distance-based dynamic neighborhood to enhance information sharing among the particles [34].

2.3. Hybridization with Other Techniques

Hybridizing PSO with other techniques makes it possible to exploit the advantages of both PSO and the other techniques. Van et al. propose CCPSO-SK and CCPSO-HK by integrating the cooperative coevolutionary framework with PSO to solve large-scale optimization problems [35, 36]. Li and Yao put forward CCPSO2, where Gaussian and Cauchy mutations are adopted to update individuals to balance diversity and convergence [37]. Qin et al. propose a PSO variant that divides the whole swarm into learned and learning subswarms at each generation; afterwards, the learning subswarm learns from the learned subswarm according to a random probability [38]. Li et al. propose a hybrid algorithm by incorporating the update mechanism of PSO into the biogeography-based optimization algorithm to improve the exploration ability of the algorithm [12]. The genetic learning particle swarm optimization put forward by Gong et al. adopts crossover and mutation operators to enhance the exploration ability of PSO [39]. Similarly, Chen et al. design two kinds of crossover operations to breed promising exemplars [23]. Chen et al. propose HPSOSSM, which uses a logistic map sequence to enhance the diversity and adopts a spiral-shaped mechanism to improve the convergence speed [40].

3. Proposed Algorithm

As discussed above, the main issue in PSO is its ineffectiveness in diversity preservation. To address this issue, this study puts forward two improvements for PSO: a competitive-genetic crossover operator and an adaptive mutation operator. The details are presented as follows.

3.1. Competitive-Genetic Crossover Operator

The competitive mechanism-based learning strategy proposed in [22] has been demonstrated to be effective in both convergence and diversity preservation. The reason is that it not only can be used to select exemplars for the particles for convergence but also benefits diversity preservation by updating only half of the particles. Meanwhile, the genetic algorithm (GA) [41] potentially has better convergence than PSO, since the promising information in the population of GA can be directly copied to other solutions and kept in the offspring, while PSO updates its particles according to stochastic strategies. Motivated by this, a competitive-genetic crossover operator (CGCO) is proposed to promote convergence and improve the diversity preservation ability of PSO.

First, the competitive mechanism is conducted to determine the strong particles’ set (the winners in the competition) and the weak particles’ set (the losers in the competition) at each generation, as shown in Figure 1. To be specific, (i) the whole swarm is randomly divided into N/2 subswarms, i.e., each subswarm includes two particles; (ii) the two particles in each subswarm compete according to their fitness to determine the strong particle and the weak particle, i.e., the winner and the loser in the competition, respectively.

Second, a multipoint crossover is conducted on each pair of a strong particle and the corresponding weak particle. Different from the mechanism in GA, only the strong particle’s information is copied into the weak particle in the proposed crossover operator. In addition, the historical personal best positions of the particles are further adopted in the crossover operation. The crossover operation for a weak particle can be formulated as equation (3): the segments of the loser’s position delimited by four randomly generated crossover indexes r_1, r_2, r_3, and r_4 are replaced by the corresponding segments of the winner’s position and of the winner’s historical personal best position, where x_l is the position of the particle in the losers’ group, x_w is the corresponding winner, and p_w is the historical personal best position of x_w.

Finally, an elite strategy is adopted: an offspring is kept to the next generation if it improves upon its parent (the corresponding loser); otherwise, the offspring is kept only if a random number generated within [0, 1] is less than 0.5.

In summary, the pseudocode of CGCO is shown in Algorithm 1.

Input: swarm P, fitness value vector F, and swarm size N
Output: the offspring set O
(1) O ← ∅
(2) W, L ← conduct the competition mechanism on P using F
(3) for i = 1 to N/2 do
(4)  x_w, x_l ← select the i-th winner in W and the corresponding loser in L
(5)  x′ ← conduct the crossover operation between x_w and x_l
(6)  if x′ is better than x_l then
(7)   O ← O ∪ {x′}
(8)  else
(9)   if rand < 0.5 then
(10)   O ← O ∪ {x′}
(11)  else
(12)   O ← O ∪ {x_l}
(13)  end
(14) end
(15) end
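The steps above can be sketched in Python as follows. This is a minimal illustration for a real-valued minimization problem; the pairing scheme, data layout, and all names are our own assumptions rather than the authors’ implementation:

```python
import random

def cgco(swarm, fitness, pbest):
    """Competitive-genetic crossover operator (CGCO): a sketch.

    swarm: list of positions (lists of floats); fitness: fitness(x) -> float
    (smaller is better); pbest: historical personal best position per particle.
    """
    idx = list(range(len(swarm)))
    random.shuffle(idx)                      # random pairing into N/2 subswarms
    offspring = [x[:] for x in swarm]        # winners survive unchanged
    for a, b in zip(idx[::2], idx[1::2]):
        win, lose = (a, b) if fitness(swarm[a]) < fitness(swarm[b]) else (b, a)
        # multipoint crossover: splice winner / winner's pbest segments into the loser
        d = len(swarm[lose])
        r1, r2, r3, r4 = sorted(random.sample(range(d + 1), 4))
        child = (swarm[lose][:r1] + swarm[win][r1:r2] +
                 swarm[lose][r2:r3] + pbest[win][r3:r4] + swarm[lose][r4:])
        # elite strategy: keep the child if it improves on the loser,
        # otherwise keep it with probability 0.5
        if fitness(child) < fitness(swarm[lose]) or random.random() < 0.5:
            offspring[lose] = child
    return offspring
```

Note that only half of the particles (the losers) are replaced per call, mirroring the diversity-preserving behavior described above.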
3.2. Adaptive Mutation Operator

Although CGCO benefits diversity by updating only half of the particles, it only exchanges existing information between particles. Therefore, there still exists a high risk that the swarm will be trapped by local optima. To solve this problem, an adaptive mutation operator (AMO) is proposed.

First, each particle is initialized with a counter in the swarm initialization stage.

Second, in the optimization stage, the mutation rate for the i-th particle is computed according to

p_m(i) = p_max × (1 − counter(i) / t),   (4)

where counter(i) records how many times the i-th particle has been successfully updated, t is the number of generations consumed so far, and p_max is the predefined maximum of the mutation rate.

With this mutation rate, each particle conducts the polynomial mutation to improve the diversity of the swarm. One can find that the better a particle, the smaller its corresponding mutation rate. This is reasonable, since the information of promising particles should be kept with a large likelihood for convergence.
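A hedged sketch of the operator follows. The polynomial-mutation formula below is the standard one from the evolutionary computation literature, and the adaptive rate follows our reading of the description above (fewer successful updates lead to a larger rate); `lb`, `ub`, and `eta` are illustrative assumptions:

```python
import random

def adaptive_polynomial_mutation(x, counter_i, t, lb, ub, p_max=0.05, eta=20.0):
    """Adaptive mutation operator (AMO): a sketch.

    counter_i: times this particle was successfully updated; t: generations so
    far; lb/ub: variable bounds; eta: polynomial-mutation distribution index.
    """
    p_m = p_max * (1.0 - counter_i / max(t, 1))   # adaptive mutation rate
    y = x[:]
    for d in range(len(y)):
        if random.random() < p_m:
            u = random.random()
            # standard polynomial-mutation perturbation
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[d] = min(max(y[d] + delta * (ub - lb), lb), ub)  # clamp to bounds
    return y
```
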

3.3. HPSO-GA

To sum up, a competitive-genetic crossover operator and an adaptive mutation strategy are proposed to promote convergence and diversity preservation for PSO. By integrating these two strategies, a novel hybrid PSO is obtained, which is referred to as HPSO-GA in the following for simplicity. The procedure of HPSO-GA can be found in Figure 2.

4. Experiments and Results

In this section, numerical comparisons are first conducted to test the performance of the proposed HPSO-GA; second, a curriculum sequencing model is put forward. Afterwards, HPSO-GA and three other algorithms are applied to solving the proposed curriculum sequencing problem to test the reliability of the proposed algorithm.

4.1. Numerical Comparisons
4.1.1. Compared Algorithms

To test the numerical optimization performance of HPSO-GA, five popular PSO variants are selected for the comparisons, including ALCPSO [29], LIPS [20], CSO [22], HPSO-TVAC [16], and DLLSO [32]; the benchmark suite of CEC 2013 is adopted to test the algorithms.

4.1.2. Experimental Settings

For a fair comparison, all the compared algorithms adopt the suggested parameter settings in the corresponding references; the swarm size for the compared algorithms and the dimensionality of the benchmarks are set to 100; the maximum number of the fitness evaluations is adopted as the termination criterion, which is set to . The parameter settings for HPSO-GA are set as follows: the crossover rate is set to 1 and the maximum mutation rate is set to 0.05; each algorithm is run 31 times on each benchmark. The Wilcoxon rank sum test is adopted for the statistical analysis between the peer algorithms and HPSO-GA, where the significance level is set to 0.05.

4.1.3. Results

Table 1 shows the results of the comparison, where the means of the corresponding results are recorded. The symbols “+,” “−,” and “=” at the bottom of the table indicate that the results of HPSO-GA are significantly better than, significantly worse than, or statistically similar to the results obtained by the corresponding peer algorithms, respectively. The best average results are highlighted in gray.

As shown by the results, HPSO-GA wins on 16 out of the 28 benchmarks with respect to the mean results. Taking a deeper look, for the five unimodal functions f1 to f5, HPSO-GA wins 3 times; for the basic multimodal functions f6 to f20, HPSO-GA wins 8 times; and for the composition functions f21 to f28, HPSO-GA wins 5 times. The statistical analysis results also indicate the competitive performance of HPSO-GA: it significantly outperforms the peer algorithms 25, 18, 23, 24, and 20 times, respectively.

Therefore, one can see that the proposed algorithm is competitive in large-scale optimization (functions with a dimensionality of 100). This can be explained as follows: (i) the proposed CGCO is more suitable for convergence than PSO’s position update strategy, since the information of promising solutions can be directly fed into the updated particles; (ii) CGCO enhances the diversity preservation ability of the proposed algorithm by diversifying the exemplars; (iii) CGCO only updates half of the particles at each generation, which also results in good diversity preservation; and (iv) the proposed adaptive mutation operator further improves the diversity preservation ability of the proposed algorithm.

4.2. Curriculum Sequencing Optimization

In this section, a curriculum sequencing model is proposed. Afterwards, the proposed HPSO-GA and three algorithms are tested on the proposed model.

4.2.1. Problem Description

Traditional e-learning systems are neither reusable nor adaptive [3]. Furthermore, current e-learning systems commonly fix the learning path for each student and are ineffective in providing adaptive learning schemes. A main reason is that the curriculum sequencing problem is an NP-hard optimization problem, making it difficult for e-learning providers to properly schedule the courses for students. To address this issue, various kinds of curriculum sequencing models and optimization algorithms have been proposed [42, 43].

The curriculum sequencing problem can be commonly formulated as a triple (S, F, C), where S represents the students, each with three properties, i.e., the student’s available time, the student’s ability, and the student’s objectives; F is a mapping function which maps a student s ∈ S to a finite ordered set of learning objectives (LOs) that can be assigned to s; and C is the set of constraints. The goal of curriculum sequencing is to schedule courses for the students while satisfying all the constraints as much as possible.

In this study, we build the curriculum sequencing model by taking the following constraints into consideration:
(1) various learning objectives for students;
(2) international cooperation teaching;
(3) two-side satisfaction, i.e., both the requirements of teachers and students should be satisfied;
(4) suitable strength, i.e., students should be assigned courses that do not exceed their ability level.

Following the above concerns, the proposed model is constructed as equations (5)–(9), which define five constraints and the overall objective. The first two constraints are the time constraints for students, which examine whether the total time of the assigned courses and the time of each individual course fit the student’s total and sectional available time, respectively. The third constraint examines whether the number of students assigned to a course exceeds the maximum number of students that the course can accommodate. The fourth constraint examines whether the courses assigned to a student fit the difficulty level that the student is at, and the fifth examines whether the courses assigned to a student meet his/her learning goals, i.e., whether the objectives involved in the assigned courses cover the learning objectives of the student. A matching function is used throughout, which returns 1 if the two conditions fit each other and 0 otherwise. Finally, the goal of the curriculum sequencing problem is to maximize the sum of the satisfaction degrees (%) of the five constraints.
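To make the objective concrete, a hedged sketch of computing the overall satisfaction degree is given below. The data layout (dictionary fields such as `total_time`, `slots`, `ability`, `goals`, `time`, `slot`, `level`, and `objective`) is our own illustrative assumption, not the paper’s notation:

```python
def satisfaction(schedule, students, courses, capacity=30):
    """Sum of the satisfaction degrees (%) of the five constraints (a sketch).

    schedule: student id -> list of assigned course ids;
    students[sid]: {"total_time", "slots", "ability", "goals"};
    courses[cid]: {"time", "slot", "level", "objective"}.
    """
    n = len(students)
    deg = [0.0] * 5
    for sid, cids in schedule.items():
        s = students[sid]
        cs = [courses[c] for c in cids]
        deg[0] += sum(c["time"] for c in cs) <= s["total_time"]     # total time fits
        deg[1] += all(c["slot"] in s["slots"] for c in cs)          # each slot fits
        deg[3] += all(c["level"] <= s["ability"] for c in cs)       # suitable strength
        deg[4] += set(s["goals"]) <= {c["objective"] for c in cs}   # goals covered
    # capacity constraint is evaluated per course, not per student
    load = {}
    for cids in schedule.values():
        for c in cids:
            load[c] = load.get(c, 0) + 1
    cap_ok = sum(v <= capacity for v in load.values()) / max(len(load), 1)
    degrees = [100.0 * deg[0] / n, 100.0 * deg[1] / n, 100.0 * cap_ok,
               100.0 * deg[3] / n, 100.0 * deg[4] / n]
    return sum(degrees)   # in [0, 500]; the optimizer maximizes this sum
```
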

With the above designs, the proposed model ensures that (i) the satisfaction of the students’ requirements can be maximized and (ii) the time availability of different students (e.g., students living in different countries have different time availability) is taken into consideration. Note that all the above constraints are built on a 10-day period; such a design is flexible for students, who can dynamically change their learning goals and course difficulty level according to their improvements.

4.2.2. Simulation and Results

To test the proposed algorithm and model, comparison experiments are conducted with the following settings.

(1) Model Settings. First, to build the dataset, a real e-learning dataset from an information technology diploma program is considered, from which 1000 students are randomly selected. The dataset is obtained from the corresponding student affairs system. Each student is represented by an ID, a set of learning objectives, an ability indicator, and total/sectional available time as follows: S = {ID: integer, objective: integer vector, ability: integer ranging between 1 and 5, total/sectional available time: integer, integer vector}. Here, we randomly select 30% of the students as foreign students with a time difference of 12 hours, while the maximum number of students that a course can accommodate is set to 30. Each course is represented by its ID, difficulty level, required time, and covered learning objectives as follows: Course = {ID: integer, difficulty level: integer ranging between 1 and 5, time required: integer, objective: integer vector}. Each solution of the optimization algorithms is encoded as an integer string with a fixed dimensionality (the number of students multiplied by the maximum number of courses that a student can take); the maximum number of courses that a student can take during 10 days is set to 80. The time period adopted to conduct the experiments is 60 days. Furthermore, we randomly select only three learning objectives for each student in the main comparison and separately test the scalability of the algorithms with respect to different numbers of learning objectives.
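The integer encoding described above can be sketched as follows; the helper names and the use of 0 as a “no course” gene are illustrative assumptions:

```python
import random

def random_solution(num_students, max_courses, course_ids):
    """A flat integer string of num_students * max_courses genes, where gene
    (i * max_courses + j) holds the id of the j-th course assigned to student i
    (0 meaning no course assigned to that slot)."""
    return [random.choice([0] + course_ids)
            for _ in range(num_students * max_courses)]

def decode(solution, num_students, max_courses):
    """Turn the flat integer string back into a per-student schedule."""
    return {i: [c for c in solution[i * max_courses:(i + 1) * max_courses] if c]
            for i in range(num_students)}
```
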

(2) Compared Algorithms. Three algorithms are selected in the comparison: genetic algorithm, SwarmRW, and SwarmRW-rnd [3]. For the genetic algorithm, the crossover rate is set to 1 and the mutation rate is set to 0.001; for the two swarm-based algorithms, the suggested parameter settings in the corresponding references are adopted; for the proposed HPSO-GA, the crossover rate is set to 1 and the maximum mutation rate is set to 0.05. The swarm size for each algorithm is set to 200 due to the large number of decision variables; the maximum FEs is set to . The Wilcoxon rank sum test is adopted to conduct the statistical test, where the significance level is set to 0.05.

(3) Results. Figure 3 shows the comparisons between HPSO-GA and the peer algorithms with respect to the overall satisfaction degree. As one can see, HPSO-GA outperforms GA, SwarmRW, and SwarmRW-rnd by more than 26%, 11%, and 7%, respectively. The corresponding Wilcoxon rank sum test results show that HPSO-GA significantly outperforms each of the three compared algorithms.

Furthermore, the runtime of the algorithms is shown in Figure 4. One can see that SwarmRW is the most computationally efficient, followed by GA. HPSO-GA and SwarmRW-rnd are comparable with respect to runtime. In general, however, the runtime of HPSO-GA is acceptable in comparison to GA and SwarmRW, given the improvements it brings in the optimization results.

In addition, the satisfaction of the individual constraints obtained by the different algorithms is shown in Table 2. As shown by the results, HPSO-GA obtains the best results on 4 out of the 5 constraints, followed by SwarmRW-rnd, which wins on one constraint. The results demonstrate that the proposed HPSO-GA is not only promising with respect to the overall goal but also competitive on the individual constraints.

Finally, the scalability of the algorithms with respect to the variation of the number of students’ learning objectives is tested; the results are shown in Figure 5. It can be observed that all the algorithms perform worse as the number of learning objectives increases. This can be explained by the fact that covering all the learning objectives of the students becomes harder as their number grows, posing higher challenges to the searching ability of the algorithms. Nevertheless, the proposed HPSO-GA always outperforms the compared algorithms under different numbers of learning objectives, which demonstrates the satisfactory scalability of the proposed algorithm.

In summary, one can find that (i) the proposed curriculum sequencing model can be combined with real-world datasets and (ii) the proposed HPSO-GA is effective in solving the proposed model and providing proper curriculum paths for students.

5. Conclusions

Targeting the modeling and optimization of curriculum sequencing, this study first proposes a novel hybrid PSO algorithm with new particle update strategies, where a competitive-genetic crossover operator and an adaptive mutation operator are proposed for convergence and diversity preservation, respectively. In the experiments, the numerical comparisons show that the proposed algorithm is competitive against several peer algorithms, indicating the effectiveness of the proposed strategies. In addition, several constraints are taken into consideration to model the curriculum sequencing problem, where the sectional available time extends the applicability of the proposed model to international e-learning systems.

Furthermore, the period-based modeling strategy is potentially beneficial for students who wish to dynamically change their learning strategies, which is a direction of our future work.

Data Availability

The data are not available due to the nature of this research as participants of this study did not agree for their data to be shared publicly.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the Study on the Mechanism of Industrial-Education Co-Cultivation for Interdisciplinary Technical and Skilled Personnel in Chinese Intelligent Manufacturing Industry (Planning project for the 14th Five-Year Plan of National Education Sciences (BJA210093)).