References
 R. Bellman, "Dynamic Programming", Science, Vol. 153, No. 3731, 1966, pp. 34-37.
 W. Kuo, V. R. Prasad, F. A. Tillman, and C.-L. Hwang, Optimal Reliability Design: Fundamentals and Applications, Cambridge University Press, 2001.
 J. A. Snyman, Practical Mathematical Optimization. Springer, 2005.
 I. BoussaïD, J. Lepagnot, and P. Siarry, "A Survey on Optimization Metaheuristics", Information Sciences, Vol. 237, 2013, pp. 82-117.
 A. Sezavar, H. Farsi, and S. Mohamadzadeh, "A Modified Grasshopper Optimization Algorithm Combined with CNN for Content Based Image Retrieval", International Journal of Engineering, Vol. 32, No. 7, 2019, pp. 924-930.
 A. H. Hosseinian and V. Baradaran, "A Multi-Objective Multi-Agent Optimization Algorithm for the Community Detection Problem", Journal of Information Systems and Telecommunication, Vol. 6, No. 1, 2019, pp. 169-179.
 S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization By Simulated Annealing", Science, Vol. 220, No. 4598, 1983, pp. 671-680.
 J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, 1992.
 A. Walker, J. Hallam, and D. Willshaw, "Bee-Havior in a Mobile Robot: The Construction of a Self-Organized Cognitive Map and Its Use in Robot Navigation within a Complex, Natural Environment", IEEE International Conference on Neural Networks, 1993, pp. 1451-1456.
 F. Glover, "Tabu Search for Nonlinear and Parametric Optimization (With Links to Genetic Algorithms)", Discrete Applied Mathematics, Vol. 49, No. 1-3, 1994, pp. 231-255.
 J. Kennedy and R. Eberhart, "Particle Swarm Optimization", in Proceedings of ICNN'95-International Conference on Neural Networks, 1995, Vol. 4, pp. 1942-1948.
 K. M. Passino, "Biomimicry of Bacterial Foraging for Distributed Optimization and Control", IEEE Control Systems Magazine, Vol. 22, No. 3, 2002, pp. 52-67.
 D. Simon, "Biogeography-Based Optimization", IEEE Transactions on Evolutionary Computation, Vol. 12, No. 6, 2008, pp. 702-713.
 S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey Wolf Optimizer", Advances in Engineering Software, Vol. 69, 2014, pp. 46-61.
 S. Mirjalili, "The Ant Lion Optimizer", Advances in Engineering Software, Vol. 83, 2015, pp. 80-98.
 S. Mirjalili, "Moth-flame Optimization Algorithm: A Novel Nature-Inspired Heuristic Paradigm", Knowledge-Based Systems, Vol. 89, 2015, pp. 228-249.
 S. Mirjalili, "Dragonfly Algorithm: a New Meta-Heuristic Optimization Technique for Solving Single-Objective, Discrete, and Multi-Objective Problems", Neural Computing and Applications, Vol. 27, No. 4, 2016, pp. 1053-1073.
 S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, "Multi-Verse Optimizer: a Nature-Inspired Algorithm for Global Optimization", Neural Computing and Applications, Vol. 27, No. 2, 2016, pp. 495-513.
 S. Mirjalili, "SCA: a Sine Cosine Algorithm for Solving Optimization Problems", Knowledge-Based Systems, Vol. 96, 2016, pp. 120-133.
 S. Mirjalili and A. Lewis, "The Whale Optimization Algorithm", Advances in Engineering Software, Vol. 95, 2016, pp. 51-67.
 S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, and S. M. Mirjalili, "Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems", Advances in Engineering Software, Vol. 114, 2017, pp. 163-191.
 S. M. Almufti, "Historical Survey on Metaheuristics Algorithms", International Journal of Scientific World, Vol. 7, No. 1, 2019, p. 1.
 S. Shirke and R. Udayakumar, "Evaluation of Crow Search Algorithm (CSA) for Optimization in Discrete Applications", International Conference on Trends in Electronics and Informatics (ICOEI), 2019, pp. 584-589.
 M. Dorigo and G. Di Caro, "Ant Colony Optimization: a New Meta-Heuristic", in Proceedings of the Congress on Evolutionary Computation-CEC99, Vol. 2, 1999, pp. 1470-1477.
 M. Clerc, Particle Swarm Optimization. John Wiley & Sons, 2010.
 J. G. Digalakis and K. G. Margaritis, "On Benchmarking Functions for Genetic Algorithms", International Journal of Computer Mathematics, Vol. 77, No. 4, 2001, pp. 481-506.
 M. Molga and C. Smutnicki, "Test Functions for Optimization Needs", Test Functions for Optimization Needs, Vol. 101, 2005, p. 48.
 X.-S. Yang, "Firefly Algorithm, Stochastic Test Functions and Design Optimisation," International Journal of Bio-Inspired Computation, Vol. 2, No. 2, 2010, pp. 78-84.
 D. Molina, J. Poyatos, J. Del Ser, S. García, A. Hussain, and F. Herrera, "Comprehensive Taxonomies of Nature- and Bio-inspired Optimization: Inspiration Versus Algorithmic Behavior, Critical Analysis and Recommendations", Cognitive Computation, Vol. 12, No. 5, 2020, pp. 897-939.
Journal of Information Systems and Telecommunication
ISSN 2322-1437 / EISSN:2345-2773
Statistical Analysis and Comparison of the Performance of Meta-Heuristic Methods Based on their Powerfulness and Effectiveness
Mehrdad Rohani1*, Hassan Farsi1, Seyed Hamid Zahiri1
1. Department of Electronic and Computer Engineering, University of Birjand, Birjand, Iran
Received: 04 May 2021 / Revised: 24 Jul 2021 / Accepted: 24 Aug 2021
In this paper, the performance of meta-heuristic algorithms is compared using statistical analysis based on two new criteria: powerfulness and effectiveness. Due to the large number of meta-heuristic methods reported so far, choosing one of them has always been challenging for researchers; in practice, the user does not know which of these methods is able to solve his complex problem. In this paper, in order to compare the performance of several methods from different categories of meta-heuristic methods, new criteria are proposed. By using these criteria, the user is able to choose an effective method for his problem. For this reason, statistical analysis is conducted on each of these methods to clarify their applicability for users. Also, the powerfulness and effectiveness criteria are defined to compare the performance of the meta-heuristic methods and to provide a suitable basis and suitable quantitative parameters for this purpose. The results of these criteria clearly show the ability of each method for different applications and problems.
Keywords: Effectiveness; Meta-heuristic Algorithms; Optimization; Powerfulness; Statistical Analysis.
1- Introduction

Optimization is the process of finding the best acceptable answer subject to the limits and requirements of a problem. An optimization problem may have several candidate answers for the desired parameters; a function, called the goal function, is defined to compare these answers and choose the optimal one. An optimization method must be able to extract the optimal answer for this function. Advances in computing over the last five decades have led to improvements in optimization methods, and each method has a different ability to solve a given optimization problem. However, due to the great number of optimization methods, the most important question is which method is suitable and provides the best performance for solving a given problem. These methods can be classified into three broad categories, explained below: enumeration methods, calculus-based methods, and random methods.
Enumeration methods: In each iteration, only one point belonging to the answer space is examined. This category of methods is simpler than the others in terms of implementation but needs considerable computation. In these methods, no mechanism exists to decrease the scope of the search space, and the search space is very large. For instance, dynamic programming is an example of these methods, and it acts in a completely unintelligent, exhaustive manner [1, 2].
Calculus-based methods: These methods use a set of necessary and sufficient conditions that the answer to the problem must satisfy, and they usually use the gradients of the goal function to search. Sometimes, due to discontinuity of the goal function, its gradients cannot be calculated, so these methods also face challenges.
Random methods: A naive way to find the optimal answer for a problem is to consider all of the possible answers: the goal function is calculated for every possible answer and, at the end, the best answer is selected. This complete enumeration leads to the exact answer to the problem, but applying it to practical problems is not possible due to the vast range of possible answers. Given this issue, the effort is always to present methods that are able to reduce the search space. To this end, random search methods such as heuristic and meta-heuristic methods are presented. Unlike enumeration and calculus-based methods, this category of methods is able to present a proper answer, close to the optimal one, in a limited time. Random search methods are mainly based on enumeration, except that they use additional information to guide the search. These methods are completely general in terms of application area and are able to solve very complex problems.
The main problem of heuristic methods is that they get caught in local optimal points and converge prematurely to these points. Meta-heuristic methods are presented to solve this problem; in fact, meta-heuristic methods belong to the category of methods that have a mechanism for escaping local optimal points, and they can be used in a broad range of problems. Optimization methods are used in various fields [5, 6], and the user chooses one of the methods based on the application.
In this paper, we examine a set of meta-heuristic methods and present a statistical analysis of them. For this purpose, we determine the stability of each method and its real-time performance. Also, two new criteria, powerfulness and effectiveness, are introduced, and the performance of each method is examined using these two criteria.
In the next section, meta-heuristic methods from different categories are introduced. In the third section, the benchmarks as well as the conventional evaluation criteria (best fitness, average run time, and standard deviation) are introduced, together with the new criteria and the statistical analysis. In the fourth section, the tests and results are reviewed.
2- Meta-Heuristic Methods
During the last three decades, the introduction of new meta-heuristic methods, as well as their application in different fields, has increased considerably. In 1983, Kirkpatrick proposed the simulated annealing method [7]. In 1992, Koza introduced the genetic programming method [8]. After that, Walker et al. introduced the first algorithm based on bee colonies in 1993 [9]. In 1994, the term meta-heuristic was used by Glover when introducing the tabu search method [10]. In 1995, Kennedy and Eberhart introduced particle swarm optimization [11]. In 2002, Passino introduced the bacterial foraging-based method [12]. In 2008, a biogeography-based optimization algorithm was introduced by Simon [13]. Many methods in this area have been presented in the last decade. In 2014, the grey wolf algorithm was introduced by Mirjalili et al. [14]. In 2015, the Ant Lion Optimizer (ALO) was presented by Mirjalili [15]. In the same year, the moth-flame optimization algorithm was presented by Mirjalili [16]. In 2016, Mirjalili introduced the dragonfly (DA), multi-verse (MVO), sine cosine (SCA), and whale optimization (WOA) algorithms [17-20]. In 2017, the salp swarm algorithm (SSA) was presented by Mirjalili [21]. These algorithms have continued to develop at high speed in recent years.
These methods are able to escape local optimal answers and move toward the global optimal answer in a short period of time. The important factor in these methods is the dynamic balance between the exploration and exploitation strategies. Exploration enables a proper search of the answer space, while the exploitation strategy concentrates the search on more promising regions and prevents wasting time in regions where the probability of finding the optimal answer is low. Meta-heuristic methods are divided into several categories: single point-based versus population-based, nature-inspired versus not inspired by nature, with memory versus memoryless, and probabilistic versus deterministic. Memoryless meta-heuristic methods do not use the information obtained during the search, whereas methods with memory take advantage of this information. Single point-based methods modify one answer during the search process, while population-based methods consider a set of answers during the search. Generally, population-based methods are slower compared to single-point methods, but they are able to produce more desirable answers, and thanks to advances in computing power they have gained more importance. Among these methods, WOA, GWO, DA, and SSA can be pointed out. All of these methods belong to the category of nature-inspired, population-based methods with memory. The categorization of these methods is presented in figure 1.
2-1- Whale Optimization Algorithm (WOA)
This method is inspired by the social behavior of humpback whales, in particular how they move in groups and hunt. Humpback whales are able to identify the location of the prey [20]. In modeling the algorithm, the best current search agent is assumed to be close to the optimal solution, and after determining the best search agent, the other agents update their locations toward it. This behavior is modeled through Eq. (1) and Eq. (2). All the equations in this subsection are adopted from [20].

D = |C · X*(t) − X(t)|  (1)
X(t + 1) = X*(t) − A · D  (2)

Where t is the current iteration, A and C are the coefficient vectors, X*(t) is the location vector of the best answer obtained up to iteration t, and X(t) is the location vector of a search agent. X* must be updated in each iteration whenever a better answer is found. The coefficient vectors A and C are calculated with Eq. (3) and Eq. (4), where the components of a decrease linearly from 2 to 0 over the course of the iterations and r is a random vector in [0, 1].

A = 2a · r − a  (3)
C = 2r  (4)

Fig. 1. Classification of meta-heuristic algorithms
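As a rough illustration, the WOA updates described in this subsection (encircling the best agent, searching with a random agent, and the spiral bubble-net move) can be sketched in Python. This is a minimal sketch under standard assumptions about the algorithm, not the exact implementation compared in this paper; the function name and the 0.5 branching probability follow the common WOA description.

```python
import math
import random

def woa_update(positions, best, t, max_iter):
    """One simplified WOA iteration over a list of agent positions.

    positions: list of agents, each a list of coordinates.
    best: best position found so far (corresponds to X* in the text).
    """
    a = 2.0 - 2.0 * t / max_iter          # 'a' decreases linearly from 2 to 0
    new_positions = []
    for x in positions:
        if random.random() < 0.5:         # encircling / searching branch
            r = random.random()
            A = 2.0 * a * r - a           # Eq. (3)
            C = 2.0 * random.random()     # Eq. (4)
            # |A| < 1: exploit around the best agent; otherwise explore
            # toward a randomly chosen agent (search for prey).
            target = best if abs(A) < 1 else random.choice(positions)
            new_x = [target[j] - A * abs(C * target[j] - x[j])
                     for j in range(len(x))]
        else:                             # spiral bubble-net attack
            l = random.uniform(-1.0, 1.0)
            new_x = [abs(best[j] - x[j]) * math.exp(l)
                     * math.cos(2.0 * math.pi * l) + best[j]
                     for j in range(len(x))]
        new_positions.append(new_x)
    return new_positions
```

In a full optimizer, this update would be called once per iteration and the best position re-evaluated against the goal function after each call.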
Search for prey (exploration phase): In this method, a randomly selected search agent is used to update the other search agents, letting the algorithm perform a global search in the search space; this is modeled by Eq. (8) and Eq. (9), where X_rand is the location vector of a randomly selected whale.

D = |C · X_rand − X|  (8)
X(t + 1) = X_rand − A · D  (9)
2-2- Grey Wolf Optimizer (GWO)
This method is inspired by the leadership hierarchy and group hunting of grey wolves: the three best answers found so far (the alpha, beta, and delta wolves) guide the remaining search agents toward the prey [14].
Finally, after finishing the iterations, the location of the alpha wolf will be selected as the optimum point.
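Since the detailed derivation of the GWO equations is abbreviated in the text above, the canonical GWO position update can be sketched as follows. This is an illustrative sketch following the standard GWO formulation, in which each agent averages three moves guided by the alpha, beta, and delta wolves; it is not the exact implementation used in this paper.

```python
import random

def gwo_update(positions, alpha, beta, delta, t, max_iter):
    """One simplified GWO iteration: X(t+1) = (X1 + X2 + X3) / 3,
    where X1, X2, X3 are moves toward the alpha, beta, and delta wolves."""
    a = 2.0 - 2.0 * t / max_iter          # 'a' decreases linearly from 2 to 0

    def move_toward(leader_j, x_j):
        A = 2.0 * a * random.random() - a
        C = 2.0 * random.random()
        D = abs(C * leader_j - x_j)       # distance to the leading wolf
        return leader_j - A * D

    new_positions = []
    for x in positions:
        new_positions.append([
            (move_toward(alpha[j], x[j])
             + move_toward(beta[j], x[j])
             + move_toward(delta[j], x[j])) / 3.0
            for j in range(len(x))])
    return new_positions
```

As `a` shrinks toward zero in the final iterations, agents cluster around the alpha wolf, whose location is then reported as the optimum point.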
2-3- Dragonfly Optimization Algorithm (DA)
The dragonfly algorithm is a nature-inspired swarm intelligence method. The main idea is based on the behavior of dragonflies while hunting for food and prey [17]. The swarming behavior of dragonflies serves two purposes: hunting, which is called the static (feeding) swarm, and migration, which is called the dynamic swarm. These behaviors are modeled by the separation, alignment, and cohesion operators of Eq. (17). All the equations in this subsection are adopted from [17].

S_i = −Σ_{j=1}^{N} (X − X_j),  A_i = (Σ_{j=1}^{N} V_j) / N,  C_i = (Σ_{j=1}^{N} X_j) / N − X  (17)

Where S_i is separation, A_i is alignment, and C_i is cohesion. X is the current location, N is the number of neighbors, X_j is the location of the jth neighbor, and V_j is the speed of the jth neighbor. Attraction toward the food source (goal) and distraction from the enemy are modeled by Eq. (18) and Eq. (19).

F_i = X+ − X  (18)
E_i = X− + X  (19)

Where X is the location of the current search agent, X+ is the location of the food source, and X− is the location of the enemy. The step vector of the dragonflies is modeled by Eq. (20).

ΔX_{t+1} = (s S_i + a A_i + c C_i + f F_i + e E_i) + w ΔX_t  (20)

Where s, a, c, f, and e are the weight values for adjusting the exploration and exploitation processes, w is the inertia weight, and t is the iteration counter. The position of each search agent is then updated according to Eq. (21).

X_{t+1} = X_t + ΔX_{t+1}  (21)

To improve exploration in the search space and to model the static behavior of the dragonflies when an agent has no neighbors, a random (Lévy) walk is added to the algorithm to update the location of the dragonflies according to Eq. (22).

X_{t+1} = X_t + Lévy(d) × X_t  (22)

Finally, the location of the dragonflies is updated based on the two static and dynamic behaviors, and the best answer is selected as the optimal answer to the problem [17].
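The separation, alignment, cohesion, and step-vector updates described above can be sketched as small Python helpers. This is a minimal sketch; the helper names and the default weight values are illustrative assumptions, not the settings used in this paper's experiments.

```python
def swarm_factors(x, neighbors_x, neighbors_v):
    """Separation S_i, alignment A_i, and cohesion C_i for one dragonfly.

    x: current location; neighbors_x / neighbors_v: neighbor locations and speeds.
    """
    n, dim = len(neighbors_x), len(x)
    # S_i = -sum(X - X_j) over neighbors, i.e. push away from crowded areas
    S = [sum(nx[j] - x[j] for nx in neighbors_x) for j in range(dim)]
    # A_i = mean neighbor velocity
    A = [sum(nv[j] for nv in neighbors_v) / n for j in range(dim)]
    # C_i = mean neighbor position minus own position
    C = [sum(nx[j] for nx in neighbors_x) / n - x[j] for j in range(dim)]
    return S, A, C

def dragonfly_step(prev_step, S, A, C, F, E,
                   s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """Step vector: dX(t+1) = s*S + a*A + c*C + f*F + e*E + w*dX(t).
    The weight values here are illustrative placeholders."""
    return [s * S[j] + a * A[j] + c * C[j] + f * F[j] + e * E[j]
            + w * prev_step[j] for j in range(len(prev_step))]

def dragonfly_position(x, step):
    """Position update: X(t+1) = X(t) + dX(t+1)."""
    return [x[j] + step[j] for j in range(len(x))]
```

When an agent has no neighbors, the algorithm falls back to a random (Lévy) walk instead of this step-vector update.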
2-4- Salp Swarm Optimization Algorithm (SSA)
This method is inspired by the social behavior of salps. These creatures move through deep waters in groups known as a salp chain; some researchers believe this behavior helps them move better and reach food faster [21]. To model the behavior of the salps, they are divided into two groups: the leader and the followers. The leader is the first member of the salp chain and the others are called the followers. The food source, denoted F, is the target of the swarm, and the location of the leader is modeled by Eq. (23). All the equations in this subsection are adopted from [21].

x_j^1 = F_j + c1((ub_j − lb_j)c2 + lb_j) if c3 ≥ 0.5, and x_j^1 = F_j − c1((ub_j − lb_j)c2 + lb_j) otherwise  (23)

Where x_j^1 is the location of the first salp (leader) in the jth dimension, F_j is the location of the food in the jth dimension, ub_j is the upper bound of the jth dimension, and lb_j is the lower bound of the jth dimension. The parameters c2 and c3 are random numbers between 0 and 1, while the parameter c1 plays an important role in balancing exploration and exploitation in the search space and is modeled by Eq. (24).

c1 = 2e^{−(4l/L)^2}  (24)

Where l is the running iteration and L is the maximum number of iterations. The location of the follower salps is expressed by Eq. (25).

x_j^i = (x_j^i + x_j^{i−1}) / 2  (25)

Where i ≥ 2 and x_j^i is the location of the ith follower salp in the jth dimension. In this way, the SSA algorithm updates the locations of the leader and its followers in each iteration and introduces the best location found as the best answer in the last iteration [21].
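The leader and follower updates of SSA can be sketched as one iteration in Python. This is a minimal sketch under the standard SSA notation; the function name is an assumption, and the follower update uses each salp's previous-iteration positions for simplicity.

```python
import math
import random

def ssa_iteration(positions, food, lb, ub, l, L):
    """One simplified SSA iteration.

    positions: salp chain (first entry is the leader), each a list of coords.
    food: current food-source location F; lb/ub: per-dimension bounds.
    l: running iteration; L: maximum number of iterations.
    """
    c1 = 2.0 * math.exp(-((4.0 * l / L) ** 2))    # Eq. (24)
    dim = len(food)
    new = []
    # Leader (first salp) moves around the food source, Eq. (23).
    leader = []
    for j in range(dim):
        c2, c3 = random.random(), random.random()
        step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
        leader.append(food[j] + step if c3 >= 0.5 else food[j] - step)
    new.append(leader)
    # Followers average their own and the preceding salp's position, Eq. (25).
    for i in range(1, len(positions)):
        new.append([(positions[i][j] + positions[i - 1][j]) / 2.0
                    for j in range(dim)])
    return new
```

In a full optimizer, the food source F would be replaced by the best location found so far after each iteration.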
3- Stability Analysis of Meta-Heuristic Methods Based on Statistical Methods
So far, different approaches have been reported for evaluating the stability of meta-heuristic methods based on statistical analysis. In this section, we introduce the proposed method while reviewing the approaches reported so far in the literature.
3-1- Conventional Stability Analysis Methods
Stability analysis of the meta-heuristic methods is performed using two approaches: mathematical analysis and statistical analysis. Dorigo proved, by bounding the pheromone values, that the ant colony optimization (ACO) method converges to the optimal answer and that the method is stable [24]. Clerc performed statistical analysis for the particle swarm method and guaranteed convergence under some conditions on the parameters of the problem [25]. In this research, Monte Carlo evaluation is used to analyze the stability of the meta-heuristic methods. This analysis examines convergence on different optimization problems, and the ability of each method on each of these problems is determined with high confidence through many independent runs. The run time is also measured for each algorithm so that a user can choose a method based on online or offline applications. In addition, two new metrics, namely powerfulness and effectiveness, are introduced to examine the convergence and the power of a method in obtaining the global answer.
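The Monte Carlo evaluation described above can be sketched as repeating an optimizer many times and summarizing the conventional statistics (best fitness, mean, standard deviation, and run time). This is a hedged sketch: `random_search` and `sphere` are hypothetical stand-ins for the meta-heuristic methods and benchmark functions actually compared, and the powerfulness and effectiveness criteria themselves are defined separately in the text.

```python
import random
import statistics
import time

def sphere(x):
    """Illustrative objective: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(objective, iters=200, dim=2):
    """Stand-in optimizer used purely for illustration."""
    best = float("inf")
    for _ in range(iters):
        cand = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, objective(cand))
    return best

def monte_carlo_evaluate(optimizer, objective, runs=30):
    """Repeat an optimizer many times and summarize its behavior:
    best fitness, mean fitness, dispersion, and average run time."""
    fitnesses, times = [], []
    for _ in range(runs):
        start = time.perf_counter()
        fitnesses.append(optimizer(objective))
        times.append(time.perf_counter() - start)
    return {"best": min(fitnesses),
            "mean": statistics.mean(fitnesses),
            "std": statistics.stdev(fitnesses),
            "mean_time": statistics.mean(times)}
```

A small standard deviation across runs indicates a stable method, while the mean run time distinguishes methods suitable for online versus offline use.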
3-1-1- Benchmark Functions
Researchers face various problems with different complexities and use different meta-heuristic algorithms to solve and optimize them. We have used 23 benchmark functions with different complexities, including unimodal, multimodal, and fixed-dimension multimodal functions, and the performance of each algorithm has been tested by solving these 23 problems [26-28]. These functions are presented in tables 1-3, where Fmin indicates the optimum value in the search space. Table 1 presents the unimodal functions F1-F7, which have a single global optimum point and test the exploitation capability of a method in the search space. The benchmark functions F8-F13 are multimodal and are shown in table 2. This category has several local optimum points, and the number of local optima increases exponentially with the dimension; these functions are well suited to testing the exploration capability of a method in the search space. The third category is the fixed-dimension multimodal functions, shown in table 3. These functions also have one global optimum point and several local optimum points and, similar to the second category, analyze the effectiveness and powerfulness of a method. Figure 2 shows the 3-D representation of the benchmark functions' search space.
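As a concrete illustration of the two main categories, the unimodal sphere function and a Rastrigin-type multimodal function can be written as follows. This is a sketch: treating the sphere function as F1 and Rastrigin as one of the multimodal functions follows the common 23-function suite, though the exact definitions used here are given in tables 1-3.

```python
import math

def sphere(x):
    """Unimodal benchmark (sphere): single global minimum 0 at the origin,
    suitable for testing the exploitation capability of a method."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin-type multimodal benchmark: many local optima whose number
    grows with the dimension, global minimum 0 at the origin, suitable for
    testing the exploration capability of a method."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)
```

An optimizer that performs well on the sphere function but poorly on the Rastrigin-type function is exploiting well but exploring poorly.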
Table 1: Description of unimodal benchmark