Applied Computational Intelligence and Soft Computing The latest articles from Hindawi Publishing Corporation © 2016, Hindawi Publishing Corporation. All rights reserved. Application of Bipolar Fuzzy Sets in Graph Structures Wed, 10 Feb 2016 13:10:24 +0000 A graph structure is a useful tool for solving combinatorial problems in different areas of computer science and computational intelligence systems. In this paper, we apply the concept of bipolar fuzzy sets to graph structures. We introduce certain notions, including bipolar fuzzy graph structure (BFGS), strong bipolar fuzzy graph structure, bipolar fuzzy -cycle, bipolar fuzzy -tree, bipolar fuzzy -cut vertex, and bipolar fuzzy -bridge, and illustrate these notions with several examples. We study -complement, self-complement, strong self-complement, and totally strong self-complement in bipolar fuzzy graph structures, and we investigate some of their interesting properties. Muhammad Akram and Rabia Akmal Copyright © 2016 Muhammad Akram and Rabia Akmal. All rights reserved. Prediction of Defective Software Modules Using Class Imbalance Learning Sun, 07 Feb 2016 07:13:28 +0000 Software defect predictors are useful for maintaining the high quality of software products effectively. The early prediction of defective software modules can help software developers allocate the available resources to deliver high-quality software products. The objective of a software defect prediction system is to find as many defective software modules as possible without affecting the overall performance. The learning process of a software defect predictor is difficult due to the imbalanced distribution of software modules between the defective and nondefective classes. Misclassifying a defective software module generally incurs a much higher cost than misclassifying a nondefective one.
Therefore, considering the misclassification cost issue, we have developed a software defect prediction system using the Weighted Least Squares Twin Support Vector Machine (WLSTSVM). This system assigns a higher misclassification cost to the data samples of the defective class and a lower cost to the data samples of the nondefective class. Experiments on eight software defect prediction datasets have demonstrated the validity of the proposed defect prediction system. The significance of the results has been tested via statistical analysis performed using the nonparametric Wilcoxon signed rank test. Divya Tomar and Sonali Agarwal Copyright © 2016 Divya Tomar and Sonali Agarwal. All rights reserved. Constrained Fuzzy Predictive Control Using Particle Swarm Optimization Sun, 28 Jun 2015 09:09:59 +0000 A fuzzy predictive controller using a particle swarm optimization (PSO) approach is proposed. The aim is to develop an efficient algorithm that can handle a relatively complex optimization problem with minimal computational time. This can be achieved using a reduced population size and a small number of iterations. In this algorithm, instead of using the uniform distribution as in the conventional PSO algorithm, the initial particle positions are distributed according to the normal distribution, within the area around the best position. The radius limiting this area is adaptively changed according to the tracking error values. Moreover, the choice of the initial best position is based on prior knowledge about the search space landscape and the fact that in most practical applications the changes in the dynamic optimization problem are gradual. The efficiency of the proposed control algorithm is evaluated by considering the control of the model of a 4 × 4 Multi-Input Multi-Output industrial boiler. This model is characterized as nonlinear with high interactions between its inputs and outputs, nonminimum phase behaviour, instabilities, and time delays.
The obtained results are compared to those of control algorithms based on the conventional PSO and on the linear approach. Oussama Ait Sahed, Kamel Kara, and Mohamed Laid Hadjili Copyright © 2015 Oussama Ait Sahed et al. All rights reserved. Hiding Information in Reversible English Transforms for a Blind Receiver Sun, 31 May 2015 11:55:24 +0000 This paper proposes a new technique for hiding secret messages in ordinary English text. The proposed technique exploits the redundancies existing in some English language constructs. Redundancies result from the flexibility in maneuvering certain statement constituents without altering the statement's meaning or correctness. For example, one can say “she went to sleep, because she was tired” or “Because she was tired, she went to sleep.” The paper provides a number of such transformations that can be applied concurrently, while keeping the overall meaning and grammar intact. The proposed data hiding technique is blind, since the receiver does not keep a copy of the original uncoded text (cover). Moreover, it can hide more than three bits per statement, which is higher than that achieved in prior work. A secret key that is a function of the various transformations used is proposed to protect the confidentiality of the hidden message. Our security analysis shows that even if the attacker knows how the transforms are employed, the secret key provides enough security to protect the confidentiality of the hidden message. Moreover, we show that the proposed transformations do not affect the inconspicuousness of the transformed statements, and thus they are unlikely to draw suspicion. Salma Banawan and Ibrahim Kamel Copyright © 2015 Salma Banawan and Ibrahim Kamel. All rights reserved. A Software Tool for Assisting Experimentation in Dynamic Environments Wed, 22 Apr 2015 11:43:10 +0000 In the real world, many optimization problems are dynamic, which means that their model elements vary with time.
These problems have received increasing attention over time, especially from the viewpoint of metaheuristic methods. In this context, experimentation is a crucial task because of the stochastic nature of both the algorithms and the problems. Currently, there are several software technologies in which methods, problems, and performance measures can be implemented. However, most of them lack certain features that make the experimentation process easy, such as statistical analysis of the results and a graphical user interface (GUI) that allows easy management of the experimentation process. Bearing in mind these limitations, in the present work we present DynOptLab, a software tool for experimental analysis in dynamic environments. DynOptLab has two main components: (1) an object-oriented framework to facilitate the implementation of new proposals and (2) a graphical user interface for experiment management and statistical analysis of the results. With the aim of verifying the benefits of DynOptLab's main features, a typical case study on experimentation in dynamic environments was carried out. Pavel Novoa-Hernández, Carlos Cruz Corona, and David A. Pelta Copyright © 2015 Pavel Novoa-Hernández et al. All rights reserved. Concise and Accessible Representations for Multidimensional Datasets: Introducing a Framework Based on the D-EVM and Kohonen Networks Sun, 01 Mar 2015 11:22:05 +0000 A new framework is described for representing and segmenting multidimensional datasets that results in low spatial complexity requirements and appropriate access to their contained information. Two steps are taken into account. The first step is to specify ()D hypervoxelizations, , as Orthogonal Polytopes whose th dimension corresponds to color intensity. Then, the D representation is concisely expressed via the Extreme Vertices Model in the -Dimensional Space (D-EVM).
Some examples are presented which, under our methodology, have storage requirements smaller than those demanded by their original hypervoxelizations. In the second step, 1-Dimensional Kohonen Networks (1D-KNs) are applied in order to segment the datasets, taking into account their geometrical and topological properties and providing a nonsupervised way to compact the proposed -Dimensional representations even further. The application of our framework achieves compression ratios, for our set of study cases, in the range 5.6496 to 32.4311. Summarizing, the contribution combines the power of the D-EVM and 1D-KNs to produce very concise dataset representations. We argue that the new representations also provide appropriate segmentations, by introducing some error functions such that our 1D-KN classifications are compared against classifications based only on color intensities. Throughout the work, the main properties and algorithms behind the D-EVM are introduced for the purpose of interrogating the final representations in such a way that useful geometrical and topological information is obtained efficiently. Ricardo Pérez-Aguila and Ricardo Ruiz-Rodríguez Copyright © 2015 Ricardo Pérez-Aguila and Ricardo Ruiz-Rodríguez. All rights reserved. On the Performance Improvement of Devanagari Handwritten Character Recognition Sun, 22 Feb 2015 06:06:49 +0000 The paper is about the application of minibatch stochastic gradient descent (SGD) based learning applied to a Multilayer Perceptron in the domain of isolated Devanagari handwritten character/numeral recognition. This technique reduces the variance in the estimate of the gradient and often makes better use of the hierarchical memory organization in modern computers. -weight decay is added to minibatch SGD to avoid overfitting. The experiments are conducted first on the direct pixel intensity values as features. After that, the experiments are performed on the proposed flexible zone based gradient feature extraction algorithm.
The results are promising on most of the standard datasets of Devanagari characters/numerals. Pratibha Singh, Ajay Verma, and Narendra S. Chaudhari Copyright © 2015 Pratibha Singh et al. All rights reserved. Cascade Support Vector Machines with Dimensionality Reduction Thu, 15 Jan 2015 06:18:45 +0000 Cascade support vector machines have been introduced as an extension of classic support vector machines that allows fast training on large data sets. In this work, we combine cascade support vector machines with dimensionality-reduction-based preprocessing. The cascade principle allows fast learning based on the division of the training set into subsets and the union of cascade learning results based on the support vectors in each cascade level. The combination with dimensionality reduction as preprocessing results in a significant speedup, often without loss of classifier accuracy, while considering the high-dimensional pendants of the low-dimensional support vectors in each new cascade level. We analyze and compare various instantiations of dimensionality reduction preprocessing and cascade SVMs with principal component analysis, locally linear embedding, and isometric mapping. The experimental analysis on various artificial and real-world benchmark problems includes various cascade-specific parameters like intermediate training set sizes and dimensionalities. Oliver Kramer Copyright © 2015 Oliver Kramer. All rights reserved. Towards Scalable Distributed Framework for Urban Congestion Traffic Patterns Warehousing Tue, 06 Jan 2015 08:26:19 +0000 We put forward the architecture of a framework for the integration of data from moving objects related to an urban transportation network. Most of this research refers to GPS outdoor geolocation technology and uses a distributed cloud infrastructure with a big data NoSQL database. A network of intelligent mobile sensors, distributed over the urban network, produces congestion traffic patterns.
Congestion predictions are based on an extended simulation model. This model provides traffic indicator calculations, which are fused with the GPS data to allow estimation of traffic states across the whole network. The discovery process for congestion patterns uses the semantic trajectories metamodel given in our previous works. The challenge of the proposed solution is to store traffic patterns, with the aim of ensuring surveillance and intelligent real-time network control to reduce congestion and avoid its consequences. The fusion of real-time data from GPS-enabled smartphones with data provided by existing traffic systems improves traffic congestion knowledge, as well as generating new information for soft operational control and providing intelligent added value for transportation systems deployment. A. Boulmakoul, L. Karim, M. Mandar, A. Idri, and A. Daissaoui Copyright © 2015 A. Boulmakoul et al. All rights reserved. Testing Automation of Context-Oriented Programs Using Separation Logic Mon, 29 Dec 2014 09:35:53 +0000 Context-oriented programming (COP) is a new approach to programming that enables switching among contexts of commands during program execution. This technique is more structured and modular than object-oriented and aspect-oriented programming and hence more flexible. For context-oriented programming, as implemented in COP languages such as ContextJ* and ContextL, this paper introduces accurate operational semantics. The language model of this paper uses Java concepts and is equipped with layer techniques for the activation/deactivation of layer contexts. This paper also presents a logical system for COP programs. This logic, an extension of separation logic, is necessary for automating the testing, development, and validation of partial correctness specifications for COP programs. A mathematical soundness proof for the logical system against the proposed operational semantics is presented in the paper. Mohamed A.
El-Zawawy Copyright © 2014 Mohamed A. El-Zawawy. All rights reserved. Effect of Population Structures on Quantum-Inspired Evolutionary Algorithm Wed, 24 Dec 2014 06:34:22 +0000 The quantum-inspired evolutionary algorithm (QEA) has been designed by integrating some quantum mechanical principles into the framework of evolutionary algorithms. QEAs have been successfully employed as a computational technique for solving difficult optimization problems. It is well known that QEAs provide a better balance between exploration and exploitation compared to conventional evolutionary algorithms. The population in QEA is evolved by variation operators, which move each Q-bit towards an attractor. A modification for improving the performance of QEA was proposed by changing the selection of attractors, namely, versatile QEA. The improvement attained by versatile QEA over QEA indicates the impact of population structure on the performance of QEA and motivates further investigation into employing a fine-grained model. The QEA with a fine-grained population model (FQEA) is similar to QEA with the exception that every individual is located at a unique position on a two-dimensional toroidal grid and has four neighbors, amongst which it selects its attractor. Further, FQEA does not use migrations, which are employed by some QEAs. This paper empirically investigates the effect of the three different population structures on the performance of QEA by solving well-known discrete benchmark optimization problems. Nija Mani, Gursaran Srivastava, A. K. Sinha, and Ashish Mani Copyright © 2014 Nija Mani et al. All rights reserved. Developing Programming Tools to Handle Traveling Salesman Problem by the Three Object-Oriented Languages Tue, 23 Dec 2014 08:18:06 +0000 The traveling salesman problem (TSP) is one of the most famous combinatorial optimization problems. Many applications and programming tools have been developed to handle the TSP.
However, it seems essential to provide easy-to-use programming tools based on state-of-the-art algorithms. Therefore, we have collected and programmed new, easy-to-use tools in three object-oriented languages. In this paper, we first present the ADTs (abstract data types) of the developed tools; then we analyze their performance through experiments. We also design a hybrid genetic algorithm (HGA) using the developed tools. Experimental results show that the proposed HGA is comparable with recent state-of-the-art applications. Hassan Ismkhan and Kamran Zamanifar Copyright © 2014 Hassan Ismkhan and Kamran Zamanifar. All rights reserved. Long Term Solar Radiation Forecast Using Computational Intelligence Methods Thu, 11 Dec 2014 00:10:29 +0000 Point prediction quality is closely related to the model that explains the dynamics of the observed process. Sometimes the model can be obtained from simple algebraic equations, but in the majority of physical systems the relevant reality is too hard to model with simple ordinary differential or difference equations. This is the case for systems with nonlinear or nonstationary behaviour, which require more complex models. The discrete time-series problem, obtained by sampling the solar radiation, can be framed in this type of situation. By observing the collected data it is possible to distinguish multiple regimes. Additionally, due to atmospheric disturbances such as clouds, the temporal structure between samples is complex and is best described by nonlinear models. This paper reports solar radiation prediction using a hybrid model that combines the support vector regression paradigm and Markov chains. The hybrid model's performance is compared with that obtained by other methods such as autoregressive (AR) filters, Markov AR models, and artificial neural networks. The results obtained suggest an increased prediction performance of the hybrid model with regard to both the prediction error and dynamic behaviour.
João Paulo Coelho and José Boaventura-Cunha Copyright © 2014 João Paulo Coelho and José Boaventura-Cunha. All rights reserved. Lyapunov-Based Controller for a Class of Stochastic Chaotic Systems Wed, 10 Dec 2014 00:10:19 +0000 This study presents a general control law based on Lyapunov's direct method for a group of well-known stochastic chaotic systems. Since real chaotic systems have undesired random-like behaviors, which are further deteriorated by environmental noise, the chaotic systems are modeled by exciting a deterministic chaotic system with white noise obtained from the derivative of a Wiener process, which eventually generates an Ito differential equation. The proposed controller not only asymptotically stabilizes these systems in the mean-square sense against their undesired intrinsic properties, but also exhibits good transient response. Simulation results highlight the effectiveness and feasibility of the proposed controller in stabilizing stochastic chaotic systems. Hossein Shokouhi-Nejad, Amir Rikhtehgar Ghiasi, and Saeed Pezeshki Copyright © 2014 Hossein Shokouhi-Nejad et al. All rights reserved. Script Identification from Printed Indian Document Images and Performance Evaluation Using Different Classifiers Sun, 07 Dec 2014 14:16:50 +0000 Identification of the script in document images is an active area of research within document image processing for a multilingual/multiscript country like India. In this paper, the real-life problem of printed script identification from official Indian document images is considered and the performances of different well-known classifiers are evaluated. Two important evaluation parameters, namely, AAR (average accuracy rate) and MBT (model building time), are computed for this performance analysis. The experiment was carried out on 459 printed document images with 5-fold cross-validation. The Simple Logistic model shows the highest AAR of 98.9% among all.
The BayesNet and Random Forest models have average accuracy rates of 96.7% and 98.2%, respectively, with the lowest MBT of 0.09 s. Sk Md Obaidullah, Anamika Mondal, Nibaran Das, and Kaushik Roy Copyright © 2014 Sk Md Obaidullah et al. All rights reserved. A Comparative Study of EAG and PBIL on Large-Scale Global Optimization Problems Sun, 07 Dec 2014 07:26:40 +0000 Estimation of Distribution Algorithms (EDAs) use global statistical information effectively to sample offspring, disregarding the location information of the locally optimal solutions found so far. The Evolutionary Algorithm with Guided Mutation (EAG) combines global statistical information and location information to sample offspring, with the aim that this hybridization improves the search and optimization process. This paper discusses a comparative study of Population-Based Incremental Learning (PBIL), a representative of EDAs, and EAG on large-scale global optimization problems. We implemented PBIL and EAG to build an experimental setup upon which simulations were run. The performance of these algorithms was analyzed in terms of solution quality and computational cost. We found that EAG performed better than PBIL in attaining a good quality solution, but the latter performed better in terms of computational cost. We also compared the performance of EAG and PBIL with MA-SW-Chains, the winner of CEC'2010, and found that the overall performance of EAG is comparable to MA-SW-Chains. Imtiaz Hussain Khan Copyright © 2014 Imtiaz Hussain Khan. All rights reserved. Identification of a Multicriteria Decision-Making Model Using the Characteristic Objects Method Thu, 27 Nov 2014 00:10:02 +0000 This paper presents a new, nonlinear, multicriteria decision-making method: the characteristic objects method (COMET). This approach, which can be characterized as a fuzzy reference model, determines a measurement standard for decision-making problems.
This model is distinguished by a constant set of specially chosen characteristic objects that are independent of the alternatives. After identifying a multicriteria model, this method can be used to compare any number of decisional objects (alternatives) and select the best one. In COMET, in contrast to other methods, the rank-reversal phenomenon is not observed. Rank reversal is a paradoxical feature of decision-making methods that is caused by determining the absolute evaluations of the considered alternatives on the basis of the alternatives themselves. In the Analytic Hierarchy Process (AHP) method and similar methods, when a new alternative is added to the original alternative set, the evaluation base and the resulting evaluations of all objects change. A great advantage of COMET is its ability to identify not only linear but also nonlinear multicriteria models of decision makers. This identification is based not on a ranking of the component criteria of the multicriterion but on a ranking of a larger set of characteristic objects (characteristic alternatives) that are independent of the small set of alternatives analyzed in a given problem. As a result, COMET is free of the faults of other methods. Andrzej Piegat and Wojciech Sałabun Copyright © 2014 Andrzej Piegat and Wojciech Sałabun. All rights reserved. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization Tue, 25 Nov 2014 00:00:00 +0000 This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. First, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on physical principles.
Then, a General Reference Curve Measurement (GRCM) model is designed to transform these parameters into scalar values, which indicate the fitness of all parameters. Third, rough solutions are found by searching an individual target for every parameter, and reinitialization only around these rough solutions is executed. Next, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets. Lizhi Cui, Zhihao Ling, Josiah Poon, Simon K. Poon, Junbin Gao, and Paul Kwan Copyright © 2014 Lizhi Cui et al. All rights reserved. Frequent Pattern Mining of Eye-Tracking Records Partitioned into Cognitive Chunks Sun, 23 Nov 2014 09:20:09 +0000 Assuming that scenes are visually scanned by chunking information, we partitioned the fixation sequences of web page viewers into chunks, using isolated gaze point(s) as the delimiter. Fixations were coded in terms of the segments of a mesh imposed on the screen. The identified chunks were mostly short, consisting of one or two fixations. These were analyzed with respect to the within- and between-chunk distances in the overall records and the patterns (i.e., subsequences) frequently shared among the records. Although the two types of distances were both dominated by zero- and one-block shifts, the primacy of the modal shifts was less prominent between chunks than within them. The lower primacy was compensated for by longer shifts.
The patterns frequently extracted at three threshold levels were mostly simple, consisting of one or two chunks. The patterns revealed interesting properties as to segment differentiation and the directionality of the attentional shifts. Noriyuki Matsuda and Haruhiko Takeuchi Copyright © 2014 Noriyuki Matsuda and Haruhiko Takeuchi. All rights reserved. The Mixed Type Splitting Methods for Solving Fuzzy Linear Systems Tue, 18 Nov 2014 08:50:18 +0000 We consider a class of fuzzy linear systems (FLS) and demonstrate some of the existing methods that use the embedding approach for calculating the solution. The main aim of this paper is to design a class of mixed-type splitting iterative methods for solving FLS. Furthermore, the convergence analysis of the method is proved. A numerical example is presented to show the applicability of the methods and the efficiency of the proposed algorithm. H. Saberi Najafi, S. A. Edalatpanah, and S. Shahabi Copyright © 2014 H. Saberi Najafi et al. All rights reserved. Investigations on Incipient Fault Diagnosis of Power Transformer Using Neural Networks and Adaptive Neurofuzzy Inference System Thu, 13 Nov 2014 08:28:35 +0000 Continuity of power supply is of utmost importance to consumers and is only possible through the coordinated and reliable operation of power system components. The power transformer is a prime piece of equipment in the transmission and distribution system and needs to be continuously monitored for its well-being. Since ratio methods cannot provide a correct diagnosis due to borderline problems and the probability of the existence of multiple faults, artificial intelligence could be the best approach. Dissolved gas analysis (DGA) interpretation may provide insight into developing incipient faults and is adopted as the preliminary diagnosis tool.
In the proposed work, a comparison of the diagnosis ability of backpropagation (BP) and radial basis function (RBF) neural networks and the adaptive neurofuzzy inference system (ANFIS) has been investigated, and the diagnosis results in terms of error measure, accuracy, network training time, and number of iterations are presented. Nandkumar Wagh and D. M. Deshpande Copyright © 2014 Nandkumar Wagh and D. M. Deshpande. All rights reserved. A Novel Time Series Prediction Approach Based on a Hybridization of Least Squares Support Vector Regression and Swarm Intelligence Sun, 09 Nov 2014 11:16:44 +0000 This research aims at establishing a novel hybrid artificial intelligence (AI) approach, named firefly-tuned least squares support vector regression for time series prediction. The proposed model utilizes least squares support vector regression (LS-SVR) as a supervised learning technique to generalize the mapping function between the input and output of time series data. In order to optimize the LS-SVR's tuning parameters, the model incorporates the firefly algorithm (FA) as the search engine. Consequently, the newly constructed model can learn from historical data and carry out prediction autonomously, without any prior knowledge of parameter settings. Experimental results and comparisons have demonstrated that the model achieves a significant improvement in forecasting accuracy when predicting both artificial and real-world time series data. Hence, the proposed hybrid approach is a promising alternative for assisting decision-makers to better cope with time series prediction. Nhat-Duc Hoang, Anh-Duc Pham, and Minh-Tu Cao Copyright © 2014 Nhat-Duc Hoang et al. All rights reserved.
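The firefly-tuned LS-SVR entry above uses the firefly algorithm as a search engine over the regressor's tuning parameters. The abstract does not give the authors' implementation, so the following is only a minimal sketch of a generic firefly search: the two-dimensional quadratic objective, the bounds, and all parameter values (`alpha`, `beta0`, `gamma`) are illustrative stand-ins for an LS-SVR validation-error surface, not the paper's setup.

```python
import math
import random

def firefly_minimize(f, bounds, n_fireflies=15, n_iter=60,
                     alpha=0.2, beta0=1.0, gamma=0.01, seed=0):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones; attractiveness decays with squared distance."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    cost = [f(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:  # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [
                        min(max(xi + beta * (xj - xi)
                                + alpha * (rng.random() - 0.5), lo), hi)
                        for xi, xj, (lo, hi) in zip(pop[i], pop[j], bounds)
                    ]
                    cost[i] = f(pop[i])
        alpha *= 0.97  # shrink the random step to refine late-stage search

    best = min(range(n_fireflies), key=cost.__getitem__)
    return pop[best], cost[best]

# Stand-in objective: pretend the two coordinates are the LS-SVR
# regularisation and kernel-width parameters and the cost is a smooth
# validation-error surface with its minimum at (1, 2).
best_x, best_cost = firefly_minimize(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
    bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(best_x, best_cost)
```

In a real tuning loop, `f` would train LS-SVR with the candidate parameters and return a cross-validation error rather than a closed-form quadratic.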
Forward and Reverse Process Models for the Squeeze Casting Process Using Neural Network Based Approaches Mon, 27 Oct 2014 12:02:03 +0000 The present research work focuses on developing an intelligent system to establish the input-output relationship utilizing forward and reverse mappings of artificial neural networks. Forward mapping aims at predicting the density and secondary dendrite arm spacing (SDAS) from a known set of squeeze casting process parameters, such as time delay, pressure duration, squeeze pressure, pouring temperature, and die temperature. An attempt is also made to meet the industrial requirement of developing a reverse model to predict the recommended squeeze casting parameters for a desired density and SDAS. Two different neural network based approaches have been proposed to carry out this task, namely, a back propagation neural network (BPNN) and a genetic algorithm neural network (GA-NN). The batch mode of training, which requires a large amount of training data, is employed for both supervised learning networks. The required training data are generated artificially, at random, using a regression equation derived from real experiments carried out earlier by the same authors. The performances of the BPNN and GA-NN models are compared between themselves and with those of regression for ten test cases. The results show that both models are capable of making better predictions and can be effectively used on the shop floor for the selection of the most influential parameters for the desired outputs. Manjunath Patel Gowdru Chandrashekarappa, Prasad Krishna, and Mahesh B. Parappagoudar Copyright © 2014 Manjunath Patel Gowdru Chandrashekarappa et al. All rights reserved.
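The forward-mapping idea in the squeeze casting entry above — a supervised network trained on synthetically generated process-parameter/quality pairs — can be sketched with a tiny backpropagation network. This is not the authors' BPNN or their regression equation: `TinyMLP`, the two normalised inputs, and the synthetic target surface are all illustrative assumptions.

```python
import math
import random

class TinyMLP:
    """One-hidden-layer perceptron trained by gradient descent: a minimal
    stand-in for a BPNN forward model (process parameters in, quality out)."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
        self.b2 = [0.0] * n_out

    def forward(self, x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(self.w1, self.b1)]
        y = [sum(w * hi for w, hi in zip(row, h)) + b
             for row, b in zip(self.w2, self.b2)]
        return h, y

    def train(self, data, lr=0.05, epochs=400):
        for _ in range(epochs):
            for x, t in data:
                h, y = self.forward(x)
                dy = [yi - ti for yi, ti in zip(y, t)]  # output-layer error
                # backpropagate into the hidden layer (tanh' = 1 - h**2)
                dh = [(1 - hi * hi) * sum(self.w2[k][j] * dy[k]
                                          for k in range(len(dy)))
                      for j, hi in enumerate(h)]
                for k, row in enumerate(self.w2):
                    for j in range(len(row)):
                        row[j] -= lr * dy[k] * h[j]
                    self.b2[k] -= lr * dy[k]
                for j, row in enumerate(self.w1):
                    for i in range(len(row)):
                        row[i] -= lr * dh[j] * x[i]
                    self.b1[j] -= lr * dh[j]

# Hypothetical forward mapping: two normalised process parameters -> one
# quality measure, on a smooth synthetic surface standing in for density.
rng = random.Random(1)
data = []
for _ in range(100):
    p1, p2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
    data.append(([p1, p2], [0.5 * p1 - 0.3 * p2 * p2]))

net = TinyMLP(n_in=2, n_hidden=8, n_out=1)
net.train(data)
_, pred = net.forward([0.5, 0.5])
print(pred)  # should land near 0.5*0.5 - 0.3*0.25 = 0.175
```

The reverse model in the abstract is the same construction with inputs and outputs swapped: train on (density, SDAS) as input and the process parameters as targets.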
2-Layered Architecture of Vague Logic Based Multilevel Queue Scheduler Thu, 09 Oct 2014 10:07:54 +0000 In an operating system, the decisions the CPU scheduler makes regarding the sequence and length of time that tasks may run are not easy ones, as the scheduler has only a limited amount of information about the tasks. A good scheduler should be fair, maximize throughput, and minimize the response time of the system. A scheduler with multilevel queue scheduling partitions the ready queue into multiple queues. While assigning priorities, higher-level queues always get more priority than lower-level queues. Unfortunately, lower-priority tasks sometimes get starved, as the scheduler ensures that lower-priority tasks may be scheduled only after the higher-priority tasks. While making decisions, the scheduler is concerned with only one factor, namely priority, and ignores other factors that may affect the performance of the system. With this concern, we propose a 2-layered architecture for a multilevel queue scheduler based on vague set theory (VMLQ). The VMLQ scheduler handles the impreciseness of data and mitigates the starvation problem of lower-priority tasks. This work also optimizes the performance metrics and improves the response time of the system. The performance is evaluated through simulation using MATLAB. The simulation results prove that the VMLQ scheduler performs better than the classical multilevel queue scheduler and the fuzzy based multilevel queue scheduler. Supriya Raheja, Reena Dadhich, and Smita Rajpal Copyright © 2014 Supriya Raheja et al. All rights reserved. Merging Agents and Cloud Services in Industrial Applications Tue, 19 Aug 2014 00:00:00 +0000 A novel idea for combining agent technology and cloud computing for monitoring a plant floor system is presented. Cloud infrastructure has been leveraged as the main mechanism for hosting the data and processing needs of a modern industrial information system.
The cloud offers virtually unlimited storage and near real-time data processing. This paper presents a software-as-a-service (SaaS) architecture for augmenting industrial plant-floor reporting capabilities. The reporting capability is architected using networked agents, worker roles, and scripts to build a scalable data pipeline and analytics system. Francisco P. Maturana, Juan L. Asenjo, Neethu S. Philip, and Shweta Chatrola Copyright © 2014 Francisco P. Maturana et al. All rights reserved. Network Partitioning Domain Knowledge Multiobjective Application Mapping for Large-Scale Network-on-Chip Tue, 12 Aug 2014 13:10:15 +0000 This paper proposes a multiobjective application mapping technique targeted at large-scale network-on-chip (NoC). As the number of intellectual property (IP) cores in a multiprocessor system-on-chip (MPSoC) increases, finding the optimum core-to-topology mapping becomes more challenging, and the conflicting cost-performance trade-off makes multiobjective application mapping even more complex. This paper proposes an application mapping technique that incorporates domain knowledge into a genetic algorithm (GA). The initial population of the GA is seeded using network partitioning (NP), while the crossover operator is guided by knowledge of the communication demands. NP reduces the complexity of large-scale application mapping and provides the GA with a promising mapping search space. The proposed genetic operator is compared with state-of-the-art genetic operators in terms of solution quality. In this work, multiobjective optimization of energy and thermal balance is considered. Simulations show that the knowledge-based initial mapping yields a significantly better Pareto front than the widely used random initial mapping, and the proposed knowledge-based crossover also yields a better Pareto front than state-of-the-art knowledge-based crossover operators. Yin Zhen Tei, Yuan Wen Hau, N. Shaikh-Husin, and M.
N. Marsono Copyright © 2014 Yin Zhen Tei et al. All rights reserved. Individual Identification Using Linear Projection of Heartbeat Features Sun, 10 Aug 2014 12:46:58 +0000 This paper presents a novel method that uses the electrocardiogram (ECG) signal as a biometric for individual identification. The ECG characterization is performed using an automated approach combining analytical and appearance methods. The analytical method extracts fiducial features from heartbeats, while the appearance method extracts morphological features from the ECG trace. The extracted features are linearly projected into a lower-dimensional subspace using an orthogonal basis that represents the features most significant for distinguishing heartbeats among subjects. The results demonstrate that the proposed characterization of the ECG signal, and the eigenbeat features derived from it, are insensitive to signal variations and nonsignal artifacts. The proposed ECG biometric system achieves best identification rates of 85.7% for the subjects of the MIT-BIH arrhythmia database and 92.49% for the healthy subjects of our IIT (BHU) database. These results are significantly better than the classification accuracies of 79.55% and 84.9% reported using a support vector machine on the tested subjects of the MIT-BIH arrhythmia database and our IIT (BHU) database, respectively. Yogendra Narain Singh Copyright © 2014 Yogendra Narain Singh. All rights reserved. Image Enhancement under Data-Dependent Multiplicative Gamma Noise Sun, 01 Jun 2014 11:21:11 +0000 An edge enhancement filter is proposed for denoising and enhancing images corrupted with data-dependent noise that is observed to follow a Gamma distribution. The filter is equipped with three terms designed to perform three different tasks. The first term is an anisotropic diffusion term derived from a locally adaptive p-Laplacian functional.
The second term is an enhancement, or shock, term that imparts a shock effect at the edge points, sharpening them. The third term is a reactive term derived from the maximum a posteriori (MAP) estimator; it helps the diffusion term remove Gamma-distributed, data-dependent multiplicative noise from images and ensures that the deviation of the restored image from the original is minimal. The proposed filter is compared with state-of-the-art restoration models for data-dependent multiplicative noise. Jidesh Pacheeripadikkal and Bini Anattu Copyright © 2014 Jidesh Pacheeripadikkal and Bini Anattu. All rights reserved. Stateless Malware Packet Detection by Incorporating Naive Bayes with Known Malware Signatures Tue, 15 Apr 2014 07:15:36 +0000 Malware detection at the network infrastructure level is still an open research problem, considering the evolution of malware and the high detection accuracy needed to detect these threats. Content-based classification techniques have proven capable of detecting malware without matching malware signatures, but their performance depends on the observed training samples. In this paper, a new detection method that incorporates Snort malware signatures into Naive Bayes model training is proposed. Through experimental work, we show that the proposed approach yields a small feature search space for effective detection at the packet level. This paper also demonstrates the viability of detecting malware at the stateless level (using packets) as well as at the stateful level (using the TCP byte stream). The results show that it is feasible to detect malware at the stateless level with accuracy similar to the stateful level, thus requiring minimal resources for implementation on middleboxes.
Stateless detection can give better protection to end users by detecting malware on middleboxes, without having to reconstruct stateful sessions and before the malware reaches the end users. Ismahani Ismail, Sulaiman Mohd Nor, and Muhammad Nadzir Marsono Copyright © 2014 Ismahani Ismail et al. All rights reserved. Novel Adaptive Bacteria Foraging Algorithms for Global Optimization Tue, 25 Mar 2014 11:42:46 +0000 This paper presents improved versions of the bacterial foraging algorithm (BFA). The chemotaxis feature of bacteria, movement through random motion, is an effective strategy for exploring for the optimum point in a search area. Selecting a small step size for the bacteria motion leads to a highly accurate solution but slow convergence; conversely, a large step size provides faster convergence but the bacteria may be unable to locate the optimum point, reducing fitness accuracy. To overcome this trade-off, novel linear and nonlinear mathematical relationships based on the iteration index, bacterium index, and fitness cost are adopted to vary the step size of the bacteria movement dynamically. The proposed algorithms are tested on several unimodal and multimodal benchmark functions in comparison with the original BFA. Moreover, the application of the proposed algorithms to modelling a twin rotor system is presented. The results show that the proposed algorithms outperform the predecessor algorithm on all test functions and acquire a better model for the twin rotor system. Ahmad N. K. Nasir, M. O. Tokhi, and N. Maniha Abd. Ghani Copyright © 2014 Ahmad N. K. Nasir et al. All rights reserved.
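The step-size trade-off described above can be illustrated with a minimal chemotaxis-only sketch. This is a simplified illustration, not the paper's algorithm: the schedules, bounds, and benchmark below are invented for the example, and the paper's actual relationships also involve the bacterium index and fitness cost.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Unimodal benchmark: global minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

def bfa(step_schedule, n_bacteria=20, dim=5, n_chemo=300):
    """Minimal chemotaxis-only BFA: each bacterium tumbles in a random
    direction and keeps the move only if its cost improves."""
    pos = rng.uniform(-5, 5, (n_bacteria, dim))
    cost = sphere(pos)
    for j in range(n_chemo):
        step = step_schedule(j, n_chemo)
        d = rng.normal(size=(n_bacteria, dim))
        d /= np.linalg.norm(d, axis=1, keepdims=True)  # random unit directions
        cand = pos + step * d
        cand_cost = sphere(cand)
        better = cand_cost < cost
        pos[better] = cand[better]                      # greedy accept
        cost[better] = cand_cost[better]
    return cost.min()

fixed = lambda j, n: 0.1                          # classical BFA: constant step size
linear = lambda j, n: 0.5 * (1 - j / n) + 0.001   # adaptive: large early, small late

best_fixed = bfa(fixed)
best_adaptive = bfa(linear)
```

The linearly shrinking schedule spends early iterations exploring with large moves and later iterations refining with small ones, which is exactly the convergence-versus-accuracy tension the paper's adaptive step-size relationships are designed to resolve.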