Abstract
In recent years, with growing demands from industrial production, engineering design problems have become increasingly complex. Many novel, well-performing meta-heuristic algorithms have been developed to cope with this trend. Among them, the Spherical Evolution algorithm (SE) is a representative method proposed in recent years with admirable optimization performance. However, it tends to stagnate prematurely in local optima on some problems. Therefore, this paper proposes an SE variant, called CGSE, that integrates Cross-search Mutation (CSM) and the Gaussian Barebone Strategy (GBS). In this study, the CSM enhances the social learning ability of SE, improving its use of effective information, while the GBS cooperates with the original rules of SE to further improve its convergence. To objectively demonstrate the core advantages of CGSE, this paper designs a series of global optimization experiments based on IEEE CEC2017, and CGSE is also used to solve six constrained engineering design problems. The experimental results show that, compared with existing well-known methods, CGSE has a significant competitive advantage on global optimization tasks and practical value in real applications. The proposed CGSE is therefore a promising, first-rate algorithm with good potential in the field of engineering design.
1 Introduction
In the context of today's technologies and industries, numerous optimization challenges arise across many disciplines, and the scope and complexity of these problems, such as engineering design problems [1, 2], are progressively expanding. There is therefore an urgent need for more efficient and robust methods to solve problems in real production [3]. Optimization methods have established themselves as practical tools for navigating complex feature spaces, offering efficient solutions within a reasonable timeframe [4, 5]. However, their performance can be uncertain across a broad spectrum of scenarios, ranging from single-objective problems to multi-objective challenges [6, 7]. Moreover, they face the complexity of many-objective cost functions, which require a delicate equilibrium among numerous conflicting objectives and constraints [8]. Recognizing the limitations of conventional approaches, researchers have pioneered a new generation of single-objective evolutionary and meta-heuristic methods designed to tackle even more difficult problem domains [9]. Meta-heuristic algorithms have earned wide recognition in recent years and have become one of the most essential tools for solving real-world problems [10]. Compared with traditional optimization methods, they are probability-based global search methods that seek optimal solutions by simulating natural phenomena, human wisdom, and so on. They are characterized by simple structure, low computational effort, and comprehensibility, and they can explore the search space quickly and thus solve optimization problems efficiently. Inspired by social, natural, and physical phenomena, researchers have proposed many meta-heuristic algorithms based on different merit-seeking rules.
Examples include Differential Evolution (DE) [11], the Harris Hawks Optimizer (HHO) [12], the Particle Swarm Optimizer (PSO) [13], the Grey Wolf Optimizer (GWO) [14], the Slime Mould Algorithm (SMA) [15], the Whale Optimizer (WOA) [14], the Runge–Kutta Optimizer (RUN) [16], the Bat-inspired Algorithm (BA) [17], the Multi-verse Optimizer (MVO) [14], the Moth-flame Optimizer (MFO) [14], the Ant Colony Optimizer (ACO) [18], the Sine Cosine Algorithm (SCA) [14], and Hunger Games Search (HGS) [19].
However, these algorithms share general drawbacks, including slow or inaccurate convergence and difficulty escaping local optima [20, 21]. The convergence effect is mainly reflected in speed and accuracy, and these weaknesses are among the main reasons that some algorithms, such as the basic Genetic Algorithm (GA), struggle with diverse optimization problems [22]. Therefore, many researchers have modified existing meta-heuristic algorithms to alleviate these deficiencies. Examples include chaotic BA (CBA) [23], improved ACO (RCACO) [24], chaotic SCA [25], improved WOA (EWOA) [26], improved GWO (GWOCMALOL) [27], the random learning gradient-based optimizer (RLGBO) [28], MFO with sine cosine mechanisms (SMFO) [29], and the A-C parametric WOA (ACWOA) [30]. In addition, to satisfy the requirements of the engineering design field, many variant algorithms of the same class have been proposed in recent years. For example, Mohamed et al. [31] introduced a triangular mutation rule into DE to propose an enhanced algorithm known as NDE, which was used to tackle five frequently used constrained engineering design problems and five constrained mechanical design challenges. Khalilpourazari et al. [32] created an efficient method known as WCMFO, combining MFO with water cycle strategy traits, to address numerical and constrained engineering problems. Wang et al. [33] introduced a gravitational search into the Rider Optimization Algorithm to propose an improved variant known as GSSROA, which is effective in optimizing the front axle weight of a car under nonlinear constraints.
Han et al. [34] proposed an improved Crow Search Algorithm (ISCSA) using weight coefficients, optimal bootstrapping, and spiral search to solve four classical engineering design problems. Han et al. [35] developed an efficient variant, MSFWA, to tackle numerical and constrained engineering designs. Kamboj et al. [36] proposed an efficient hybrid algorithm (hHHO-SCA) for 11 constrained multidisciplinary engineering design problems by introducing the SCA into the exploration phase of the HHO. Abualigah et al. [37] proposed an efficient global optimization method (AOASC) that uses the SCA to improve the exploitation of the Arithmetic Optimization Algorithm (AOA), applied to five engineering problems. Qi et al. [38] proposed a new WOA version, LXMWOA, combining Lévy initialization, directed crossover, and mutation strategies with the original WOA mechanism, and validated it on three engineering challenges. Zhao et al. [39] proposed a modified ACO version, called ADNOLACO, and demonstrated its practical value on four engineering problems. Su et al. [40] proposed a more powerful Cuckoo Search Algorithm (CCFCS) using a cross-optimizer and a dispersed foraging strategy, successfully tested on five classical engineering cases. From the above, it is clear that engineering optimization approaches based on meta-heuristic algorithms are popular among researchers, and the associated applications are relatively mature.
The Spherical Evolution algorithm (SE) was proposed by Tang et al. in 2019 as a new high-performance algorithm based on the spherical search style [41], but it does not escape the common drawbacks of meta-heuristic algorithms. Therefore, many researchers have proposed optimization schemes to alleviate these drawbacks for practical problems with different characteristics and complexity. For example, Yang et al. [42] combined the wingsuit flying search (WFS) with SE to propose an effective and robust hybrid algorithm (CCWFSSE). Cai et al. [43] introduced the SCA into SE and proposed an enhanced variant of SE that utilizes the search space more efficiently. Weng et al. [44] combined Laplace cross search and the Nelder–Mead simplex method with SE to propose an efficient improved method (LCNMSE), which was used to solve the parameter extraction problem for solar cell and photovoltaic models. Li et al. [45] proposed a stronger hybrid algorithm for global optimization using the characteristics of SE and the Salp Swarm Algorithm (SSA). Zhang et al. [46] first integrated PSO and SE to propose a hybrid algorithm with significant advantages. Yang et al. [47] proposed a novel ladder descent strategy to optimize the update rule of SE, yielding a ladder spherical evolution search algorithm named LSE. Yang et al. [48] proposed an enhanced version of SE (CSE) by adding chaotic local search (CLS) to SE. Zhao et al. [49] proposed an enhanced adaptive spherical search algorithm with differential evolution (SSDE) based on an opposition-based learning strategy, a phased search mechanism, nonlinear adaptive parameters, and variational crossover methods. Zhou et al. [50] proposed an enhanced SE (DSCSE) based on a novel dynamic sine cosine mechanism for estimating key parameters of PV cells and modules under fixed and varying temperature and light conditions. Li et al. [51] innovatively designed a new lottery-based elite retention strategy to improve the optimization capability of SE, thus proposing a lottery elite SE (LESE).
The SE has been widely applied to various real-world problems because of its excellent convergence speed and accuracy. However, there is still room to enhance its convergence and exploitation and its ability to escape local optima on many challenges. As a result, this paper proposes CGSE, an improved structure of SE. For the first time, the Cross-search Mutation (CSM) is incorporated into SE to balance the exploration and exploitation processes, hence enhancing SE's convergence. Then, the Gaussian Barebone Strategy (GBS) is included in SE to increase population diversity in the vicinity of the current near-optimal solution, which considerably improves its exploitation ability and strengthens SE's potential for escaping local optima. In this study, a set of comparative validation tests are performed on 30 benchmark functions of the IEEE CEC 2017 to demonstrate the superiority of CGSE in global optimization. The analyses of the ablation experiment, the qualitative analysis experiment, and the stability test not only prove that the proposed CGSE achieves a better balance between exploration and exploitation than the original SE through the synergy of the two optimization components but also verify that CGSE is the best improved version proposed in this study. In addition, CGSE's results are compared to those of nine basic algorithms and nine peer variants, proving that CGSE has a very significant performance advantage over other algorithms. Finally, CGSE addresses six real-world engineering challenges, proving that it also has good practical value and excellent competitiveness in solving practical problems.
The following are the key works and contributions of this paper:
-
An enhanced SE algorithm, called CGSE, is proposed by introducing the CSM and the GBS into SE.
-
The CGSE is applied to a series of experiments on the IEEE CEC 2017, in which it is proved to have strong convergence, robustness, and competitiveness in global optimization.
-
The CGSE is used to address six real-world engineering design challenges and proves highly competitive with other approaches.
-
In conclusion, the proposed CGSE shows good potential and excellent optimization performance for addressing real-world optimization problems.
The rest of this paper is organized as follows: Sect. 2 introduces the basic principles of this paper to propose CGSE; Sect. 3 describes the details of the proposed CGSE; Sect. 4 designs a series of benchmark function experiments and engineering experiments for verifying the performance and competitiveness of CGSE; Sect. 5 discusses the research and experimental results of this paper; Sect. 6 concludes the research of this paper as well as gives an overview of future related work.
2 Basic Theories
To provide theoretical support for this work, this section introduces the core fundamental theories involved, mainly including the SE, the CSM, and the GBS, which together form the basis of the proposed SE variant.
2.1 Spherical Evolution (SE) Algorithm
In 2019, inspired by various meta-heuristic algorithms, Tang et al. [41] innovatively proposed the SE with a spherical search style and outstanding performance. Unlike the hypercube search style of traditional meta-heuristic algorithms, SE traverses the solution space with a spherical search style to explore and exploit the global optimal solution.
2.1.1 Search Styles
Figures 1 and 2 depict the search trajectories of meta-heuristic algorithms in two-dimensional space using the hypercube and spherical search styles, respectively. Comparing the two figures makes the difference between the two styles easy to notice. Figure 1 shows that the hypercube style's search range is a rectangle and that its search process is highly uncertain. How, then, do algorithms with the hypercube search style update? Specifically, the blue vector indicates a scale factor in only one of the two dimensions X and Y, while the red vector indicates differing scale factors in both dimensions. In particular, DC denotes the only update unit in this rectangular space when the scale factors on dimensions X and Y are one. Compared with Fig. 2, there are major differences between the two strategies, most notably in the search method and the search space. The spherical search process adjusts the rotation angle and search radius \(DC'\) to search for targets and has a larger search area, because its range is a circle centered at D with radius DC. This means that the spherical search has greater exploration and local exploitation advantages than the hypercube search.
2.1.2 Principle of SE
According to the description in Sect. 2.1.1, the essential difference between SE and meta-heuristic algorithms with the hypercube search style lies in the search rules. How, then, does SE search the solution space? This subsection describes the evolutionary principle of SE through its mathematical model.
According to the literature [41, 50], the mathematical model of the spherical search in different dimensional spaces is shown in Eqs. (1)–(3),
where \({\Vert {X}_{\alpha ,*}-{X}_{\beta ,*}\Vert }_{2}\) is the Euclidean distance between the vectors \({X}_{\alpha ,*}\) and \({X}_{\beta ,*}\), which serves as the radius of a high-dimensional sphere. \(\omega\) is a scale factor used to adjust the search radius, \(dim\) is the dimensionality, and \(\theta\) is the angle between \({X}_{\alpha ,*}\) and \({X}_{\beta ,*}\).
Based on Eqs. (1)–(3), SE carries out the search task by combining the random selection method (RSM) and the designated individual selection method (DISM), yielding the seven search processes shown in Eqs. (4)–(10).
SE01/current-to-best/1:
SE02/best/1:
SE03/best/2:
SE04/rand/1:
SE05/rand/2:
SE06/current/1:
SE07/current/2:
where \({X}_{{r}_{1}}, {X}_{{r}_{2}}, {X}_{{r}_{3}}, {X}_{{r}_{4}},\) and \({X}_{{r}_{5}}\) are randomly selected individuals from the population, \({X}_{g}\) is the current optimal individual, and \(i\) and \(j\) are the individual index and the dimension index, respectively. Here, 'current' refers to the current individual, 'best' to the current global best individual, and 'rand' to a random individual; 1 or 2 denotes the number of spherical units made up of random individuals.
During the SE search, to produce a new search agent, three agents \({X}_{{r}_{1}}\), \({X}_{{r}_{2}}\), and \({X}_{{r}_{3}}\) are first randomly selected from the population. \({X}_{{r}_{1}}\) serves as the initial vector, and the search radius is generated from the modulus (Euclidean distance) between \({X}_{{r}_{2}}\) and \({X}_{{r}_{3}}\). SE then selects the dimensions to update in two steps: (1) update the parameter \(DSF\in [1, dim]\); (2) select \(p\) dimensions in \([1, DSF]\) for the search task. Finally, the search radius and rotation angle are continually adjusted according to the spherical search principle to efficiently search the space near the initial vector and generate a new solution vector \({X}^{new}\). To illustrate this process in detail, the search principle of SE is presented via SE04, as shown in Algorithm 1.
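Since Eqs. (1)–(10) are not reproduced in this excerpt, the following Python sketch illustrates the SE04/rand/1-style step described above under stated assumptions: the function name `se04_rand1`, the uniform random angle, and the cosine-based update are illustrative stand-ins, not the paper's exact operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def se04_rand1(pop, i, omega=0.5):
    """Hedged sketch of an SE04/rand/1-style update (illustrative only).

    Three agents are drawn at random; X_r1 serves as the base vector, and the
    Euclidean distance between X_r2 and X_r3 sets the search radius, which is
    then rotated by a random angle on the selected dimensions.
    """
    n, dim = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    radius = np.linalg.norm(pop[r2] - pop[r3])      # radius of the sphere
    new = pop[i].copy()
    dsf = rng.integers(1, dim + 1)                  # DSF drawn from [1, dim]
    p = rng.integers(1, dsf + 1)                    # p dimensions in [1, DSF]
    dims = rng.choice(dsf, size=p, replace=False)
    for j in dims:
        theta = rng.uniform(0, 2 * np.pi)           # random rotation angle
        new[j] = pop[r1, j] + omega * radius * np.cos(theta)
    return new
```

Each call thus perturbs only a random subset of dimensions around the base vector, which matches the dimension-selection behavior described in the text.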
2.2 Cross-search Mutation (CSM)
CSM is an optimization operator that has been popular in recent years and is often used to enhance the convergence ability of meta-heuristic algorithms. It originates from a swarm intelligence optimizer (CSO) proposed by Meng et al. [52] and is built on two search operators. These two operators greatly enhance the information interaction of CSO at both the search-agent and dimensional levels, ensuring not only population diversity but also good convergence ability. Inspired by this, this study introduces CSM into SE in an attempt to overcome the convergence defects of SE by exploiting CSM's search properties, including prompting SE to leave local optima and improving its convergence speed and accuracy. The CSM is detailed in two parts below.
Horizontal crossover search (HCS): This operation performs a kind of arithmetic crossover between the same dimensions of two different agents in the population. It thus lets different agents learn from each other in the same dimension, effectively improving the population's exploration ability. Assuming the \(j\)th dimensions of the parent agents \({X}_{a}\) and \({X}_{b}\) are crossed horizontally, the offspring are generated by Eqs. (11) and (12)
where \({X}_{a,j}^{new}\) and \({X}_{b,j}^{new}\) are the \(j\)th-dimensional offspring of \({X}_{a}\) and \({X}_{b}\) generated by the HCS, respectively; \({X}_{a,j}\) and \({X}_{b,j}\) are the \(j\)th dimensions of \({X}_{a}\) and \({X}_{b}\); and \({\varepsilon }_{1}\), \({\varepsilon }_{2}\), \({c}_{1}\), and \({c}_{2}\) are the contraction factors, with \({\varepsilon }_{1},{\varepsilon }_{2}\in [0,1]\) and \({c}_{1},{c}_{2}\in [-1,1]\).
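Eqs. (11) and (12) are not shown in this excerpt; the sketch below follows the standard horizontal crossover form of the crisscross optimizer, so the exact coefficients should be checked against the paper. The helper name `horizontal_crossover` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def horizontal_crossover(x_a, x_b, j):
    """Hedged HCS sketch on the j-th dimension of two parent agents."""
    e1, e2 = rng.uniform(0, 1, 2)        # contraction factors in [0, 1]
    c1, c2 = rng.uniform(-1, 1, 2)       # expansion factors in [-1, 1]
    child_a, child_b = x_a.copy(), x_b.copy()
    # Convex combination of the two parents plus a signed difference term
    child_a[j] = e1 * x_a[j] + (1 - e1) * x_b[j] + c1 * (x_a[j] - x_b[j])
    child_b[j] = e2 * x_b[j] + (1 - e2) * x_a[j] + c2 * (x_b[j] - x_a[j])
    return child_a, child_b
```

Only dimension \(j\) changes, so both offspring stay close to the segment between the parents in that dimension while all other dimensions are inherited unchanged.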
Vertical crossover search (VCS): The agent carries out this operation between two of its own dimensions. During optimization, it helps a dimension that is trapped and stalled in a local optimum to escape without affecting the information in the other dimensions. Assuming a vertical crossover between the \(m\)th and \(n\)th dimensions of the parent agent \({X}_{i}\), it can be achieved by Eq. (13)
where \(\varepsilon \in [0,1]\) is a contraction factor and \({X}_{i,m}^{new}\) is the offspring generated by the VCS.
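Eq. (13) is likewise not shown here; a minimal sketch of the VCS, assuming the standard crisscross form \(X_{i,m}^{new}=\varepsilon X_{i,m}+(1-\varepsilon )X_{i,n}\) (the helper name is hypothetical):

```python
import numpy as np

def vertical_crossover(x, m, n, eps):
    """Hedged VCS sketch: blend dimension m of one agent with its own
    dimension n; only dimension m changes, so other dimensions keep
    their information intact."""
    child = x.copy()
    child[m] = eps * x[m] + (1 - eps) * x[n]
    return child
```

Because the blend uses another dimension of the same agent, a stalled dimension can be pulled toward a live one without any cross-agent information loss.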
2.3 Gaussian Barebone Strategy (GBS)
The Gaussian function is a very popular operator in meta-heuristic algorithms [53, 54]. In recent years, many researchers have used it to design optimization strategies for improving algorithm performance. For example, Wei et al. [55] introduced the Gaussian barebone strategy into HHO to improve the search capability of the original HHO, and further combined the improved algorithm (GBHHO) with KELM to propose an effective intelligent model for entrepreneurial intention prediction. Wu et al. [56] proposed the Gaussian barebone mutation-enhanced SMA (GBSMA) to alleviate the convergence defects of the original SMA, using the Gaussian barebone strategy together with a greedy selection mechanism. Xu et al. [57] effectively modified the exploration and exploitation processes of the GOA via elite opposition-based learning and the Gaussian barebone strategy, improving the GOA's merit-seeking ability.
According to the preceding studies, the distributional characteristics of the Gaussian function help search agents make fuller use of the information around the approximate optimal solution, which can speed up the exploitation process and thus strengthen the algorithm's convergence ability. Inspired by this, this paper improves the second stage of the SE search process by combining the GBS with the search characteristics of SE. The details of the GBS are shown in Eq. (14)
where \({X}_{i,j}^{new}\) represents the new offspring generated by the GBS, and \(i\) and \(j\) denote the position index and dimension index of the search agent, respectively. \({r}_{1},{r}_{2},{r}_{3}\) are the position indexes of three randomly selected agents. \(n(\mu ,\sigma )\) is the Gaussian distribution function, where \(\mu\) denotes the mean of two positions and \(\sigma\) is used as the standard deviation.
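Eq. (14) is not reproduced in this excerpt. The sketch below assumes a common Gaussian barebone form in which each dimension is sampled from a normal distribution whose mean is the midpoint of the current and best agents and whose standard deviation is their per-dimension distance; the paper's actual use of \({r}_{1},{r}_{2},{r}_{3}\) may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_barebone(x_i, x_best):
    """Hedged Gaussian barebone sketch: sample each dimension from
    N(mu, sigma) with mu the midpoint of the two positions and sigma
    their per-dimension distance."""
    mu = (x_i + x_best) / 2.0            # mean of two positions
    sigma = np.abs(x_i - x_best)         # distance used as std. deviation
    return rng.normal(mu, sigma)
```

Note that when an agent coincides with the best position, the standard deviation collapses to zero and the sample reduces to the midpoint itself, which is why such schemes concentrate search near the current near-optimal solution.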
3 Proposed CGSE
The SE is a recently proposed meta-heuristic algorithm whose special search method has proven significantly effective in traversing solution spaces and has shown very powerful search capability in global optimization. However, it also has some defects in practical applications. For example, the randomness of its search can be harmful in some cases: it may over-explore and waste promising solutions, which hinders escape from local optima and weakens its exploitation capability. Therefore, SE still has great room for optimization. To make SE better suited to global optimization tasks and engineering problems, this section integrates the CSM and GBS introduced in Sect. 2 with SE and innovatively proposes a new, stronger variant of SE, called CGSE.
First, CSM is introduced in the first half of each SE iteration to incrementally enhance the learning ability among search agents and between dimensions, providing a high-quality population and an excellent approximate global optimum for the second half of the optimization task. Then, GBS is used in the second half to explore and exploit a better global solution in collaboration with the original strategy of SE. More specifically, the implementation details of CGSE are described in Algorithm 2, and Fig. 3 shows the improvement process of CGSE as a flowchart.
According to the above description, the complexity of CGSE is mainly determined by the population initialization, the original SE mechanism, the CSM, the GBS, and the fitness evaluation. Assume N denotes the population size, D the dimension, and T the maximum number of iterations. Considering a single iteration, the complexity of initializing the population is \(O(N\times D)\), the complexity of the original SE mechanism is \(O(N\times logN+N\times D)\), the complexity of the CSM is \(O(N\times D+{N}^{2})\), the complexity of the GBS is \(O(N\times D)\), and the complexity of the fitness evaluation is \(O(N\times logN)\). Therefore, when the program terminates after T iterations, the complexity of CGSE is \(O(T\times (3N\times D+N^2+ 4N\times logN)/2)\).
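As a rough illustration of the two-phase structure described above (not Algorithm 2 itself), the following hedged Python skeleton uses simple stand-ins: an arithmetic crossover for the CSM phase and a Gaussian barebone sample toward the best agent for the GBS phase, with greedy selection. All names and operator details here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def cgse(fitness, lb, ub, dim, n=30, max_iter=100):
    """Hedged skeleton of a CGSE-style loop: a crossover-based first phase
    followed by a Gaussian-barebone second phase, with greedy replacement.
    The true SE/CSM/GBS operators are replaced by plausible stand-ins."""
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(max_iter):
        best = pop[np.argmin(fit)]
        for i in range(n):
            # Phase 1 (CSM stand-in): arithmetic crossover with a random peer
            j = rng.integers(n)
            e = rng.uniform(0, 1, dim)
            trial = e * pop[i] + (1 - e) * pop[j]
            # Phase 2 (GBS stand-in): Gaussian sample toward the best agent
            trial = rng.normal((trial + best) / 2, np.abs(trial - best) + 1e-12)
            trial = np.clip(trial, lb, ub)
            f = fitness(trial)
            if f < fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f
    k = np.argmin(fit)
    return pop[k], fit[k]
```

Greedy replacement makes the best fitness monotonically non-increasing over iterations, mirroring the role of selection in the described pipeline.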
4 Experiments and Results
To provide objective evidence for the proposed CGSE, this section sets up a series of global optimization experiments to comprehensively demonstrate its advantages. Since CGSE is a new SE variant built by fusing CSM, GBS, and SE, this section first examines the effects of CSM and GBS on SE and proves through an ablation experiment that CGSE is the best improved version. Next, the search characteristics of CGSE are explored through qualitative analysis experiments, and the advantages of CGSE over SE are elaborated based on the results. Then, the robustness of CGSE is proved by the scalability test. Importantly, this section further shows that CGSE retains significant advantages among high-performance algorithms by comparing it with 18 excellent algorithms, including nine well-known original algorithms and nine recently proposed peer variants. Finally, CGSE is applied to six engineering problems, demonstrating its effectiveness in practical applications.
4.1 Global Optimization Experiments of CGSE
4.1.1 Experiment Setup
This subsection describes the basic setup of the global optimization experiments on the IEEE CEC2017 benchmark function set [58]. To ensure fairness, the population size and the problem dimension are both fixed at 30, and each algorithm is limited to 300,000 function evaluations. Since the algorithms are stochastic, each algorithm is run 30 times independently to obtain its average optimization results. The qualitative analysis and scalability test use their own experimental parameters, which are described in the corresponding experiments. In addition, to avoid interference from external factors, all benchmark function experiments are carried out in the same computing environment with MATLAB 2021. The core specifications of the computer are an 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50 GHz and 32 GB RAM.
To better analyze and present the results, this paper uses the average value (AVG) to reflect the relative performance of the algorithms and measures stability by the standard deviation (STD). Moreover, the comparison results are assessed with two nonparametric tests, the Wilcoxon signed-rank test (WSRT) [59] and the Friedman test (FT) [60].
Notably, the optimal results in the experiments are indicated in boldface. A smaller "AVG" means that the algorithm obtains better fitness, i.e., better convergence accuracy. A smaller "STD" indicates that the algorithm's stability is more robust. In the WSRT, "+/−/=" is a vital evaluation metric, indicating the number of functions on which CGSE is "better/worse/equal" compared with the other algorithms; "Mean" reflects the comprehensive result of the WSRT, and "Rank" is the overall ranking of the compared algorithms. In addition, the P value of the WSRT is also an important indicator. P values less than 0.05 are marked in bold, which means that the optimization effects of CGSE and the compared algorithm are significantly different; in other words, the experimental results are reliable and statistically significant.
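As an illustration of how such a WSRT comparison can be computed, the snippet below applies SciPy's `scipy.stats.wilcoxon` to hypothetical per-function AVG values (the numbers are made up, not taken from the paper's tables).

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-function AVG values for two algorithms (illustrative only)
avg_cgse = np.array([1.2, 0.8, 3.1, 2.0, 0.5, 1.7])
avg_se   = np.array([1.8, 1.1, 3.3, 2.9, 0.9, 2.4])

# Paired two-sided Wilcoxon signed-rank test over the same benchmark functions
stat, p = wilcoxon(avg_cgse, avg_se)
significant = p < 0.05   # significance at the 5% level, as used in the paper
```

For small samples with no ties, SciPy computes the exact P value; here every paired difference favors the first algorithm, so the statistic is the minimal rank sum.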
4.1.2 Ablation Experiment
As described in Sect. 3, CGSE is an enhanced SE variant produced by the synergy of CSM, GBS, and the original SE. According to the literature, the interaction between the core of SE and the optimization components reflects the adaptability of the proposed CGSE to different types of tasks. Therefore, this subsection visualizes the effects of CSM and GBS on CGSE through an ablation experiment.
Table 1 shows the combinations of CSM and GBS with SE, where 1 indicates that the corresponding strategy is used and 0 indicates that it is not. The results of the ablation experiment on the 30 benchmark functions of IEEE CEC 2017 are shown in Table 2. The distribution of AVG over the four algorithms shows that CGSE achieves the best performance on the most functions. The distribution of STD further shows that the performance of CGSE is more stable than that of CSE, GSE, and SE.
Table 3 shows the WSRT analysis of this experiment. CGSE ranks first overall, which means it has the best comprehensive performance. Notably, the "Mean" of CGSE is 2.07, a clear advantage over the second-ranked GSE (2.23). In addition, according to "+/−/=", CGSE outperforms GSE on 12 functions, ties with GSE on 11 functions, and is worse on only seven functions.
More importantly, CGSE performs better than SE on most functions. Table A1 in the Appendix lists the P values obtained from this experiment via the WSRT. Compared with CGSE, the P values of the other three SE methods are less than 0.05 on most functions, which indicates that CGSE differs significantly from them in most cases and confirms the reliability of the comparison. Figure 4 shows the FT results of this ablation experiment: CGSE achieves the best score, followed by GSE, CSE, and SE in that order. Combining the above results demonstrates that CSM and GBS each contribute positively to SE on their own, and that the proposed CGSE is the best improved variant in this study.
To further analyze the effects of CSM and GBS on SE, this subsection presents the convergence curves of CGSE and the other three methods on nine benchmark functions, as shown in Fig. 5. These curves show that CGSE has the best convergence performance overall and that CSM and GBS help CGSE exploit the global optimum in most cases. On functions F1, F3, F12, F18, F21, and F30, the convergence curves of GSE have obvious inflection points, indicating that GBS helps SE escape local optima, significantly accelerating its convergence speed and markedly improving its convergence accuracy. Most notably, after CSM is introduced into GSE, the inflection point appears significantly earlier and the convergence accuracy improves further. This indicates that CSM enhances the convergence ability of GSE and further proves that CGSE, which combines both strategies, is the best improved version in this study.
4.1.3 Qualitative Analysis for CGSE
To further explore the search characteristics of CGSE, this subsection designs qualitative analysis experiments on CGSE from four aspects based on the IEEE CEC2017: the historical distribution of search agents, the trajectory of dimensional particles, the variation of the average fitness value, and the convergence curve. For convenience of analysis, this experiment fixes the number of iterations at 2000; the remaining settings follow the experimental configuration in Sect. 4.1.1. The results are given in Fig. 6. To aid understanding of the CGSE search process, Fig. 6(a) shows the three-dimensional landscapes of the five functions, which represent the search spaces of CGSE.
Figure 6(b) depicts the distribution of the best CGSE search agents over the whole optimization task, with the red marker indicating the global best position and the black dots indicating the best historical positions. In most situations, the historical optimal solutions cluster around the global optimum, and only a very small fraction of the historical solutions are scattered. This suggests that CGSE searches the solution space as thoroughly as feasible and locates the approximate optimum rapidly, reflecting its exploration power and excellent exploitation capability. These results also help explain the faster convergence and better convergence accuracy obtained by CGSE.
Figure 6(c) records the fluctuation curves of the first dimension of the CGSE search agents during the iterations. The variation of the early- and mid-period curves on each function shows that the search vectors have significant variability. This implies that CSM allows search agents to learn socially, which is particularly useful for traversing the solution space, helping agents leave local regions and hunt for the global optimum. In the mid and later iterations of CGSE, the GBS operates in conjunction with the original SE search rules, which plays a critical role in increasing CGSE's exploitation and helps accelerate the convergence rate.
Figure 6(d) depicts the average fitness of all CGSE search agents during the iterations. It can be observed that the curves all converge immediately after the start of the iteration, indicating that CGSE can search the solution space fast and correctly identify the approximate optimal solution. Figure 6(e) also shows the convergence curves of CGSE and SE to demonstrate the convergence effect of CGSE. It is clear that the convergence speed and accuracy of CGSE are substantially greater than those of SE, and the convergence curves also show that CGSE can escape local regions better.
4.1.4 Scalability Test for CGSE
As is well known, the stability of an algorithm is crucial to its performance. Therefore, to further verify that CGSE has strong comprehensive performance, two higher dimensions (50 and 100) are used to test the stability of CGSE on the IEEE CEC2017 in this subsection. The comparison results of CGSE and SE in Table 4 show that CGSE has the relatively best AVG and STD on most functions in both 50 and 100 dimensions; combined with the 30-dimensional experimental results in Sect. 4.1.2, it can be concluded that the CGSE proposed in this paper is stronger than SE in terms of stability and convergence ability, and thus more robust.
Table 5 analyzes the results of this stability experiment by the WSRT. As shown in the table, the WSRT ranking of CGSE is the best in both dimensions. Specifically, CGSE is superior or similar to SE on most functions and slightly inferior to SE on only two or three functions. This indicates that CGSE is superior to SE and has robust stability. The p values derived from the WSRT are shown in Appendix A; they are almost all less than 0.05 in both dimensions, indicating that the differences between CGSE and SE are statistically significant and the results are reliable.
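For completeness, the per-function p values reported here come from the standard two-sided Wilcoxon signed-rank test over paired independent-run results. A minimal self-contained sketch of that test using the normal approximation (the run data below are random placeholders, not the paper's results):

```python
import math
import random

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test using the normal
    approximation (adequate for 30+ paired runs).
    Returns (T, p_value) for paired samples a and b."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    t_stat = min(w_plus, n * (n + 1) / 2 - w_plus)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    p_value = math.erfc(abs(t_stat - mu) / (sigma * math.sqrt(2)))
    return t_stat, p_value

random.seed(0)
# Hypothetical final errors from 30 independent runs of two optimizers
# on one benchmark function (placeholder data only).
errors_a = [random.gauss(1.0, 0.2) for _ in range(30)]
errors_b = [random.gauss(1.5, 0.2) for _ in range(30)]
t_stat, p_value = wilcoxon_signed_rank(errors_a, errors_b)
```

In practice a library routine such as `scipy.stats.wilcoxon` would be used; the hand-rolled version above only illustrates the ranking and normal-approximation mechanics behind the reported p values.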
These results further confirm that CGSE has undeniable advantages over SE. Next, the results in Fig. 7 show that CGSE achieves the minimum FT score in both dimensions, proving that the stability of CGSE is robust and reliable. Meanwhile, Figs. 8 and 9 show partial convergence curves of CGSE and SE at 50 and 100 dimensions, respectively. It is clear that CGSE is much better than SE in convergence speed and accuracy. As a result, it can be concluded that the proposed CGSE is an enhanced algorithm with better overall performance than SE.
4.1.5 Comparison with Basic Algorithms
To certify that CGSE has significant competitiveness, this subsection compares CGSE with nine well-known basic algorithms on the IEEE CEC2017, including DE [11], HHO [12], ACOR [18], GWO [61], SMA [15], WOA [62], RUN [16], BA [17], and MVO [63]. Table 6 reports the results of this experiment, showing that CGSE has the best AVG and the smallest STD on most functions. This demonstrates that CGSE achieves the best optimization results and has superior flexibility and strong stability on problems of different complexities. In particular, CGSE shows stronger performance on the unimodal, hybrid, and composition functions than the other nine basic algorithms.
The experimental results of CGSE and the nine basic algorithms are analyzed by the WSRT. As shown in Table 7, CGSE ranks first overall. Notably, CGSE outperforms DE (second in the overall ranking) on 25 functions and is slightly worse or similar on only five functions. It also outperforms MVO (third in the overall ranking) on 28 functions and performs similarly on only two functions. Table 16 in the Appendix shows the p values obtained for this WSRT, almost all of which are less than 0.05. This indicates a significant performance difference between CGSE and the other nine algorithms and further indicates that the results of this experiment are reliable. As shown in Fig. 10, the FT evaluation of CGSE and the nine basic algorithms shows that CGSE has the best FT score and a significant advantage over DE, which has the second-best score. To visualize the comparison results, some of the convergence curves are shown in Fig. 11. On the curves listed, it can be seen that CGSE has the best convergence accuracy. Furthermore, CGSE converges faster on F1, F12, F15, F22, and F30. The above results reveal that CGSE has considerable advantages and is highly competitive compared with the basic algorithms.
4.1.6 Comparison with Peer Variants
In Sect. 4.1.5, the proposed CGSE was proven to have the best comprehensive performance in comparison with nine basic algorithms. To verify the advantages of CGSE further, this subsection compares CGSE with nine recently proposed peer variants on the IEEE CEC2017, including LCNMSE [44], CLACO [64], LXMWOA [38], GCHHO [65], SWEGWO [66], LGCMFO [67], RLGBO [28], ASMA [68], and ASCA_PSO [69]. Table 8 gives the comparison results between CGSE and the nine peer variants. The distribution of AVG and STD in the table demonstrates that CGSE has better performance and more robust stability than the other nine variants on most functions.
As shown in Table 9, the results of this experiment are analyzed by the WSRT, which shows that CGSE has the best overall performance. CGSE outperforms ASMA (ranked second in the WSRT) on 22 functions, performs similarly on five functions, and is only slightly worse on three functions. Notably, CGSE outperforms LCNMSE (a new SE variant, ranked 8th in the WSRT) on 27 functions and is only slightly worse or similar on three functions. Table A4 in Appendix A shows the p values obtained by the WSRT; almost all are less than 0.05. This confirms the reliability of the conclusion that CGSE is superior to the other nine peer variants.
Figure 12 shows that CGSE obtains the best FT score, indicating that it is also relatively the best under the FT. To better show the performance of CGSE, Fig. 13 gives some convergence curves from this experiment, from which it can be seen that CGSE has the relatively best convergence accuracy. Combining the above conclusions, we conclude that CGSE has better comprehensive performance than the nine peer variants. More importantly, the results of this comparison reaffirm the core advantage of CGSE and confirm that the CGSE proposed in this paper is an SE variant with outstanding performance.
4.2 Engineering Applications of CGSE
Section 4.1 has proved that CGSE performs excellently in the field of global optimization. However, this alone cannot corroborate that CGSE has a strong ability to tackle real-world challenges. Therefore, this subsection applies CGSE to six real engineering problems to prove its application advantages. Notably, the objective solutions obtained must satisfy the problems' constraints, and the optimal results of the experiments are indicated in boldface.
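Since CGSE itself is an unconstrained optimizer, the constrained problems below are typically handled by penalizing infeasible candidate solutions. The following sketch shows one common static-penalty wrapper; the penalty weight and the toy constraint are illustrative assumptions, not necessarily the exact scheme used in this paper:

```python
def penalized(objective, constraints, weight=1e6):
    """Wrap a constrained minimization problem (g_i(x) <= 0) as an
    unconstrained one by adding a large penalty proportional to the
    total constraint violation. The static weight is illustrative."""
    def f(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + weight * violation
    return f

# Toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
feasible_cost = f([2.0])    # no violation: plain objective value
infeasible_cost = f([0.0])  # violated by 1.0: objective plus penalty
```

Any of the minimizers compared below can then be run directly on the wrapped function `f`.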
4.2.1 Tension/Compression Spring Design (TCSD)
The TCSD's goal is to minimize the spring mass f(x) while meeting certain limitations. The problem has four inequality constraints: minimum deflection, shear stress, oscillation frequency, and outer diameter limit, as well as three design variables: average spring coil diameter \((D)\), spring wire diameter \((d)\), and effective number of spring coils \((N)\). The design model is expressed mathematically in Eqs. (17)–(19),
where \(variable range: 0.05\le {x}_{1}\le 2, 0.25\le {x}_{2}\le 1.3, 2\le {x}_{3}\le 150.\)
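The TCSD model behind Eqs. (17)–(19) is the standard one from the literature; a sketch of the objective and the four constraints in \(g_i(x)\le 0\) form, assuming that common formulation (the sample design point is an illustrative feasible design, not the reported optimum):

```python
def tcsd_objective(x):
    """Spring mass f(x) = (N + 2) * D * d^2, with x = [d, D, N]."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def tcsd_constraints(x):
    """The four standard inequality constraints g_i(x) <= 0."""
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),                # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                          # shear stress
        1 - 140.45 * d / (D ** 2 * N),                      # oscillation frequency
        (d + D) / 1.5 - 1,                                  # outer diameter limit
    ]

# An example feasible design (not the reported optimum).
x = [0.055, 0.4, 12.0]
mass = tcsd_objective(x)
feasible = all(g <= 0 for g in tcsd_constraints(x))
```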
To explore better design rules for the TCSD, CGSE is applied to it along with 12 other methods. Table 10 gives the results of CGSE and the other 12 methods on this problem. It can be observed that CGSE achieves the best result, which means that CGSE obtains the relatively smallest spring mass among the 13 methods without violating the constraints. Therefore, the proposed CGSE can better solve the TCSD.
4.2.2 Welded Beam Design (WBD)
The WBD is a minimization problem that seeks to lower the cost of fabricating welded beams. The weld thickness \((h)\) and the length \((l)\), thickness \((b),\) and height \((t)\) of the beam are the key factors affecting its quality; therefore, it is necessary to optimize them. While solving the problem, these variables must satisfy the constraints on bending stress \((\theta )\), deflection \((\delta )\), buckling load \(({P}_{c})\), and shear stress \((\tau )\). The WBD model is expressed mathematically in Eqs. (20)–(22),
where \(P=6000\,{\text{lb}}\), \(L=14\,{\text{in}}\), \({\updelta }_{{\text{max}}}=0.25\,{\text{in}}\), \({\text{E}}=30\times 1{0}^{6}\,{\text{psi}}\), \({\text{G}}=12\times 1{0}^{6}\,{\text{psi}}\), \({\uptau }_{{\text{max}}}=13600\,{\text{psi}}\), \({\upsigma }_{{\text{max}}}=30000\,{\text{psi}}\); variable ranges: \(0.1\le {x}_{1}\le 2\), \(0.1\le {x}_{2}\le 10\), \(0.1\le {x}_{3}\le 10\), \(0.1\le {x}_{4}\le 2\).
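For reference, Eqs. (20)–(22) correspond to the WBD formulation widely used in the literature; a sketch of the cost function and the seven constraints, assuming that common form (the sample design is an illustrative feasible point, not the reported optimum):

```python
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_cost(x):
    """Fabrication cost with x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_constraints(x):
    """The seven standard inequality constraints g_i(x) <= 0."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2) * h * l)                      # primary shear
    M = P * (L + l / 2)
    R = math.sqrt(l ** 2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * math.sqrt(2) * h * l * (l ** 2 / 12 + ((h + t) / 2) ** 2)
    tau_pp = M * R / J                                      # torsional shear
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (b * t ** 2)                        # bending stress
    delta = 4 * P * L ** 3 / (E * t ** 3 * b)               # end deflection
    p_c = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36) / L ** 2) \
        * (1 - t / (2 * L) * math.sqrt(E / (4 * G)))        # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, h - b,
            0.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0,
            0.125 - h, delta - DELTA_MAX, P - p_c]

# An example feasible design (not the reported optimum).
x = [0.3, 4.0, 9.5, 0.35]
cost = wbd_cost(x)
feasible = all(g <= 0 for g in wbd_constraints(x))
```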
As shown in Table 11, CGSE and eight other peers are applied to optimize the WBD. Considering the optimal cost, the result obtained by CGSE is the best among all the methods in this experiment, proving that CGSE is considerably successful in solving the WBD.
4.2.3 Pressure Vessel Design (PVD)
The PVD's goal is to reduce the production cost \(f(x)\) as much as possible while satisfying the demand and the constraints. In the PVD, the shell thickness \(({T}_{s})\), the head thickness \(({T}_{h})\), the inner radius \((r)\), and the container length (\(l\), without the head) are the critical variables affecting \(f(x)\). Therefore, this subsection attempts to optimize the problem using the CGSE proposed in this paper. The PVD model is expressed mathematically in Eqs. (23)–(25),
where \(variable ranges: 0\le {x}_{1}\le 99, 0\le {x}_{2}\le 99, 10\le {x}_{3}\le 200, 10\le {x}_{4}\le 200.\)
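Assuming Eqs. (23)–(25) match the standard PVD formulation from the literature, the cost and the four constraints can be sketched as follows (the sample design is an illustrative feasible point, not the reported optimum):

```python
import math

def pvd_cost(x):
    """Production cost with x = [Ts, Th, R, L]."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pvd_constraints(x):
    """The four standard inequality constraints g_i(x) <= 0."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,                                         # shell thickness
        -th + 0.00954 * r,                                        # head thickness
        -math.pi * r ** 2 * l - 4 / 3 * math.pi * r ** 3 + 1296000,  # volume
        l - 240,                                                  # length limit
    ]

# An example feasible design (not the reported optimum).
x = [1.0, 0.6, 50.0, 120.0]
cost = pvd_cost(x)
feasible = all(g <= 0 for g in pvd_constraints(x))
```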
As shown in Table 12, CGSE is applied to the PVD, and its optimization results are quite satisfactory: the result of CGSE on the PVD is the best compared with the other methods, for example, CPSO [75], WOA [62], and CDE [72]. Therefore, it can be concluded that CGSE has good potential for solving the PVD.
4.2.4 Hydrostatic Thrust Bearing Design (HTBD)
When using hydrostatic thrust bearings, the power loss should be as low as possible. Therefore, the HTBD problem is posed, involving four main design variables: bearing step radius \((R)\), flow rate \((Q)\), oil viscosity \((\mu ),\) and recess radius \(({R}_{0})\). It is necessary to note that two conditions must be met when optimizing the bearing parameters: carrying a specific load and providing axial support. The HTBD model is expressed mathematically in Eqs. (26)–(27),
where \({C}_{1}=10.04\), \(n=-3.55\), \({P}_{max}=1000\), \({W}_{s}=101{,}000\), \(\Delta {T}_{max}=50\), \({h}_{min}=0.001\), \(g=386.4\), \(N=750\); variable ranges: \(5\le {D}_{e},{D}_{i}\le 15\), \(0.01\le t\le 6\), \(0.05\le h\le 0.5\).
Table 13 gives the results of CGSE and other methods for solving the HTBD. It can be seen that the optimal solution obtained by CGSE is 19,504.2206, which is smaller than those of the other methods, for example, ADNOLACO [39], GASO [87], and PSO [88]. This indicates that CGSE can further improve the HTBD solution.
4.2.5 Speed Reducer Design (SRD)
It is well known that the speed reducer is one of the key components of the gearbox, and its weight is an important factor affecting the performance and aesthetics of the gearbox. Therefore, to reduce the weight of gearboxes as much as possible, the SRD has been proposed and has become a relatively popular topic in current mechanical research. It involves seven key variables: the tooth breadth \((b)\), the gear module \((m)\), the number of teeth in the pinion \((p)\), the length of the first shaft between the bearings \(({l}_{1})\), the length of the second shaft between the bearings \(({l}_{2})\), the diameter of the first shaft \(({d}_{1})\), and the diameter of the second shaft \(({d}_{2})\). At the same time, it must also satisfy 11 constraints, such as the surface stress of the shaft and the bending stress of the gear teeth. The SRD model is expressed mathematically in Eqs. (28)–(30),
where \(variable ranges: 2.6\le {z}_{1}\le 3.6, 0.7\le {z}_{2}\le 0.8, 17\le {z}_{3}\le 28, 7.3\le {z}_{4}\le 28, 7.3\le {z}_{5}\le 8.3, 2.9\le {z}_{6}\le 3.9, 5.0\le {z}_{7}\le 5.5.\)
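Assuming Eqs. (28)–(30) follow the common literature formulation, the weight objective can be sketched as below; for brevity only two of the eleven constraints are shown, and the sample design is an illustrative point near values commonly reported, not the reported optimum:

```python
def srd_weight(z):
    """Reducer weight with z = [b, m, p, l1, l2, d1, d2]."""
    b, m, p, l1, l2, d1, d2 = z
    return (0.7854 * b * m ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

def srd_g1_g2(z):
    """Two of the eleven constraints (bending stress and surface
    stress of the gear teeth), in g_i(z) <= 0 form."""
    b, m, p, _, _, _, _ = z
    return [27 / (b * m ** 2 * p) - 1,
            397.5 / (b * m ** 2 * p ** 2) - 1]

# An illustrative design point near commonly reported values.
z = [3.5, 0.7, 17.0, 7.3, 7.8, 3.35, 5.29]
weight = srd_weight(z)
feasible_g1_g2 = all(g <= 0 for g in srd_g1_g2(z))
```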
Table 14 shows the optimization results of CGSE and six other methods, such as RIME [91], AGOA [71], and GWO [19], for the SRD. Considering the minimum optimal cost, CGSE's result is the most desirable in this comparison, which proves that CGSE can further optimize the weight of the reducer while maintaining the quality.
4.2.6 I-beam Design (IBD)
In the IBD, the design goal is to find the optimal variables that minimize the vertical deflection of the I-beam. Among them, the length, thickness, and two heights are the key structural parameters of this problem, and the detailed mathematical model is shown in Eqs. (31)–(33),
where \(variable range: 10\le {x}_{1}\le 50, 10\le {x}_{2}\le 80, 0.9\le {x}_{3}\le 5, 0.9\le {x}_{4}\le 5\).
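A sketch of the IBD deflection objective and cross-section area constraint, assuming the formulation commonly used in the literature with the variable ordering \(x=[b,h,{t}_{w},{t}_{f}]\) (flange width, section height, web and flange thickness); this naming is an assumption for illustration, and the paper's exact Eqs. (31)–(33) may differ:

```python
def ibd_deflection(x):
    """Vertical deflection under a 5000-unit load; the denominator is
    proportional to the cross-section's moment of inertia."""
    b, h, tw, tf = x
    inertia = (tw * (h - 2 * tf) ** 3 / 12
               + b * tf ** 3 / 6
               + 2 * b * tf * ((h - tf) / 2) ** 2)
    return 5000 / inertia

def ibd_area(x):
    """Cross-section area constraint, area <= 300, in g(x) <= 0 form."""
    b, h, tw, tf = x
    return 2 * b * tf + tw * (h - 2 * tf) - 300

# An example feasible design close to values reported in the literature.
x = [50.0, 80.0, 0.9, 2.3]
deflection = ibd_deflection(x)
feasible = ibd_area(x) <= 0
```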
Based on this problem, this subsection compares CGSE with RIME [91], AGOA [71], IARSM [92], ARSM [92], SOS [93], and CS [94]. The comparison results in Table 15 show that CGSE achieves the relatively best optimization, indicating that CGSE outperforms the other solvers in this experiment. Therefore, it can be concluded that the CGSE proposed in this paper is potentially an effective tool for solving the IBD.
5 Discussion
In this study, the goal of the proposed CGSE is to overcome the shortcomings of the original SE in terms of search and convergence, which are presented in Sect. 4.1. In this algorithm, the Cross-search Mutation (CSM) is applied in the early and middle stages of the CGSE iterative process to enhance social learning ability. This operation improves the diversity of the population to some extent and enhances the convergence effect of CGSE. Meanwhile, the Gaussian Backbone Strategy (GBS) acts in the middle and late stages of the CGSE iterative process to enhance diversity within the local range of the near-optimal solution, which not only significantly enhances the exploitation ability but also helps CGSE move away from local optimal solutions. Under the synergy of these two components, CGSE balances exploration and exploitation more effectively than the original SE.
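The paper's exact CSM and GBS update equations are defined earlier in the paper; purely as an illustration of the two operator families they draw on (crisscross-style crossover for social learning, Gaussian bare-bones sampling for local refinement), a rough sketch:

```python
import random

def horizontal_crossover(x_i, x_j):
    """Crisscross-style horizontal crossover between two agents: each
    child dimension blends the parents plus a signed expansion term.
    Illustrative of the CSM family; the paper's exact rule may differ."""
    child = []
    for a, b in zip(x_i, x_j):
        r = random.random()          # blend weight in [0, 1]
        c = random.uniform(-1, 1)    # expansion coefficient in [-1, 1]
        child.append(r * a + (1 - r) * b + c * (a - b))
    return child

def gaussian_barebone_move(x_i, x_best):
    """Gaussian bare-bones update: sample each dimension from a normal
    distribution centred midway between the agent and the best-so-far
    solution, with spread equal to their distance. Illustrative of the
    GBS family; the paper's exact rule may differ."""
    return [random.gauss((a + g) / 2, abs(a - g))
            for a, g in zip(x_i, x_best)]

random.seed(1)
child = horizontal_crossover([1.0, 2.0], [3.0, 4.0])
moved = gaussian_barebone_move([1.0, 2.0], [0.0, 0.0])
```

The crossover term spreads agents across the region spanned by the population (exploration), while the Gaussian move contracts agents toward the best solution as the distance shrinks (exploitation), mirroring the early/late division of labour described above.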
To demonstrate the above, this paper carries out a series of global optimization experiments and engineering application experiments for the proposed CGSE. First, this paper designs ablation experiments based on the IEEE CEC2017 benchmark functions, which show that the contributions of the CSM and GBS to improving the performance of SE are very significant and effectively mitigate SE's tendency to fall into local optima and converge prematurely. At the same time, they also prove that CGSE is the best-performing improved version proposed in this study.
Next, we carry out a history search experiment for CGSE by combining the characteristics of different benchmark functions in the paper. The experiment analyzes the search characteristics of CGSE from four perspectives: the distribution of historical solutions in the two-dimensional space, the trajectory of the first agent in the first dimension, the average fitness value of the population, and the convergence curve, respectively. Then, CGSE and SE are compared in 50 and 100 dimensions. In combination with the results of ablation experiments based on 30 dimensions, we conclude that CGSE can obtain excellent results and good robustness when dealing with different dimensional problems.
Moreover, we compare CGSE with nine well-known basic algorithms and nine new peer variants in 30 dimensions, demonstrating that CGSE has more outstanding comprehensive performance and competitiveness than the existing well-known methods. Finally, the optimization effect of the proposed CGSE is further explored through six real engineering design experiments, including TCSD, WBD, PVD, HTBD, SRD, and IBD. By analyzing the comparison results with well-known methods under these six problems, it is easy to realize that the CGSE is relatively capable of obtaining the least-cost solution, which is of high practical significance for solving engineering problems.
In summary, this paper validates the effectiveness of the proposed CGSE as comprehensively as possible, and the multiple categories of results together confirm the research significance and reference value of CGSE in the field of searching for optimal solutions.
6 Conclusions and Future Directions
This paper proposes a constructive variant of SE called CGSE. After an in-depth analysis and study of CGSE's search behavior, its mathematical model is given. To prove the success and effectiveness of the proposed CGSE, we compare the convergence effects of CGSE, the intermediate variants, and the original SE on the IEEE CEC2017 benchmark functions and carry out qualitative and stability analyses of CGSE. To illustrate the competitiveness and robustness of CGSE, it is compared in detail with nine well-known basic algorithms and nine other peer variants. Furthermore, six classical engineering design problems are used to demonstrate that CGSE also has good practical value and superior competitiveness in solving real-world problems.
In conclusion, it is shown that CGSE is significantly more successful in producing optimal solutions. However, it still has some shortcomings: CGSE may suffer from premature convergence and poor robustness on certain global tasks. In addition, since CGSE is built by incorporating the CSM and GBS into SE, it inevitably has relatively higher complexity. Therefore, we hope to further optimize the parameters and design structure of CGSE in future work. In addition, CGSE can be extended to other application areas, such as feature selection and machine learning.
Data Availability Statement
The data involved in this study are all public data, which can be downloaded through public channels.
References
Mohamed, A. W., Abutarboush, H. F., Hadi, A. A., & Mohamed, A. K. (2021). Gaining-sharing knowledge based algorithm with adaptive parameters for engineering optimization. IEEE Access, 9, 65934–65946.
Zhu, M., Guan, X., Li, Z., He, L., Wang, Z., & Cai, K. (2023). Semg-based lower limb motion prediction using cnn-lstm with improved pca optimization algorithm. Journal of Bionic Engineering, 20(2), 612–627.
Zhang, K., Wang, Z., Chen, G., Zhang, L., Yang, Y., Yao, C., Wang, J., & Yao, J. (2022). Training effective deep reinforcement learning agents for real-time life-cycle production optimization. Journal of Petroleum Science and Engineering, 208, 109766.
Cao, B., Zhao, J., Gu, Y., Fan, S., & Yang, P. (2019). Security-aware industrial wireless sensor network deployment optimization. IEEE Transactions on Industrial Informatics, 16(8), 5309–5316.
Duan, Y., Zhao, Y., & Hu, J. (2023). An initialization-free distributed algorithm for dynamic economic dispatch problems in microgrid: Modeling, optimization and analysis. Sustainable Energy, Grids and Networks, 2023, 101004.
Cao, B., Zhao, J., Yang, P., Gu, Y., Muhammad, K., Rodrigues, J. J., & de Albuquerque, V. H. C. (2019). Multiobjective 3-d topology optimization of next-generation wireless data center network. IEEE Transactions on Industrial Informatics, 16(5), 3597–3605.
Cao, B., Zhao, J., Gu, Y., Ling, Y., & Ma, X. (2020). Applying graph-based differential grouping for multiobjective large-scale optimization. Swarm and Evolutionary Computation, 53, 100626.
Cao, B., Fan, S., Zhao, J., Tian, S., Zheng, Z., Yan, Y., & Yang, P. (2021). Large-scale many-objective deployment optimization of edge servers. IEEE Transactions on Intelligent Transportation Systems, 22(6), 3841–3849.
Zhang, L., Sun, C., Cai, G., & Koh, L. H. (2023). Charging and discharging optimization strategy for electric vehicles considering elasticity demand response. eTransportation, 18, 100262.
Yang, M., Wang, Y., Liang, Y., & Wang, C. (2022). A new approach to system design optimization of underwater gliders. IEEE/ASME Transactions on Mechatronics, 27(5), 3494–3505.
Storn, R., & Price, K. (1997). Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., & Chen, H. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849–872.
Cao, B., Gu, Y., Lv, Z., Yang, S., Zhao, J., & Li, Y. (2020). Rfid reader anticollision based on distributed parallel particle swarm optimization. IEEE Internet of Things Journal, 8(5), 3099–3107.
Mirjalili, S., Dong, J. S., & Lewis, A. (2019). Nature-inspired optimizers: Theories, literature reviews and applications. Springer.
Li, S., Chen, H., Wang, M., Heidari, A. A., & Mirjalili, S. (2020). Slime mould algorithm: A new method for stochastic optimization. Future Generation Computer Systems, 111, 300–323.
Ahmadianfar, I., Heidari, A. A., Gandomi, A. H., Chu, X., & Chen, H. (2021). Run beyond the metaphor: An efficient optimization algorithm based on runge kutta method. Expert Systems with Applications, 2021, 115079.
Yang, X.-S. (2010). A new metaheuristic bat-inspired algorithm (pp. 65–74). Springer.
Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155–1173.
Yang, Y., Chen, H., Heidari, A. A., & Gandomi, A. H. (2021). Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Systems with Applications, 177, 114864.
Xu, Y., Chen, H., Heidari, A. A., Luo, J., Zhang, Q., Zhao, X., & Li, C. (2019). An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Systems with Applications, 129, 135–155.
She, A., Wang, L., Peng, Y., & Li, J. (2023). Structural reliability analysis based on improved wolf pack algorithm ak-ss. Structures, 57, 105289.
Qian, L., Zheng, Y., Li, L., Ma, Y., Zhou, C., & Zhang, D. (2022). A new method of inland water ship trajectory prediction based on long short-term memory network optimized by genetic algorithm. Applied Sciences, 12(8), 4073.
Adarsh, B. R., Raghunathan, T., Jayabarathi, T., & Yang, X.-S. (2016). Economic dispatch using chaotic bat algorithm. Energy, 96, 666–675.
Zhao, D., Liu, L., Yu, F., Heidari, A. A., & Chen, H. (2020). Chaotic random spare ant colony optimization for multi-threshold image segmentation of 2d kapur entropy. Knowledge-Based Systems, 216, 106510.
Ji, Y., Tu, J., Zhou, H., Gui, W., Liang, G., Chen, H., & Wang, M. (2020). An adaptive chaotic sine cosine algorithm for constrained and unconstrained optimization. Complexity, 2020, 1–36.
Tu, J., Chen, H., Liu, J., Heidari, A. A., Zhang, X., Wang, M., Ruby, R., & Pham, Q.-V. (2021). Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowledge-Based Systems, 212, 106642.
Hu, J., Chen, H., Heidari, A. A., Wang, M., Zhang, X., Chen, Y., & Pan, Z. (2021). Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowledge-Based Systems, 213, 106684.
Zhou, W., Wang, P., Heidari, A. A., Zhao, X., & Chen, H. (2021). Random learning gradient based optimization for efficient design of photovoltaic models. Energy Conversion and Management, 230, 113751.
Chen, C., Wang, X., Yu, H., Wang, M., & Chen, H. (2021). Dealing with multi-modality using synthesis of moth-flame optimizer with sine cosine mechanisms. Mathematics and Computers in Simulation, 188, 291–318.
Elhosseini, M. A., Haikal, A. Y., Badawy, M., & Khashan, N. (2019). Biped robot stability based on an a–c parametric whale optimization algorithm. Journal of Computational Science, 31, 17–32.
Mohamed, A. W. (2018). A novel differential evolution algorithm for solving constrained engineering optimization problems. Journal of Intelligent Manufacturing, 29, 659–692.
Khalilpourazari, S., & Khalilpourazary, S. (2019). An efficient hybrid algorithm based on water cycle and moth-flame optimization algorithms for solving numerical and constrained engineering optimization problems. Soft Computing, 23, 1699–1722.
Wang, G., Yuan, Y., & Guo, W. (2019). An improved rider optimization algorithm for solving engineering optimization problems. IEEE Access, 7, 80570–80576.
Han, X., Xu, Q., Yue, L., Dong, Y., Xie, G., & Xu, X. (2020). An improved crow search algorithm based on spiral search mechanism for solving numerical and engineering optimization problems. IEEE Access, 8, 92363–92382.
Han, X., Yue, L., Dong, Y., Xu, Q., Xie, G., & Xu, X. (2020). Efficient hybrid algorithm based on moth search and fireworks algorithm for solving numerical and constrained engineering optimization problems. The Journal of Supercomputing, 76, 9404–9429.
Kamboj, V. K., Nandi, A., Bhadoria, A., & Sehgal, S. (2020). An intensify harris hawks optimizer for numerical and engineering optimization problems. Applied Soft Computing, 89, 106018.
Abualigah, L. M., Ewees, A. A., Al-qaness, M. A. A., Elaziz, M. E. A., Yousri, D., Ibrahim, R. A., & Altalhi, M. (2022). Boosting arithmetic optimization algorithm by sine cosine algorithm and levy flight distribution for solving engineering optimization problems. Neural Computing and Applications, 34, 8823–8852.
Qi, A., Zhao, D., Yu, F., Heidari, A. A., Chen, H., & Xiao, L. (2022). Directional mutation and crossover for immature performance of whale algorithm with application to engineering optimization. Journal of Computational Design and Engineering, 9(2), 519–563.
Zhao, D., Liu, L., Yu, F., Heidari, A. A., Wang, M., Chen, H., & Muhammad, K. (2022). Opposition-based ant colony optimization with all-dimension neighborhood search for engineering design. Journal of Computational Design and Engineering, 9(3), 1007–1044.
Su, H., Zhao, D., Yu, F., Heidari, A. A., Xu, Z., Alotaibi, F. S., Mafarja, M., & Chen, H. (2023). A horizontal and vertical crossover cuckoo search: Optimizing performance for the engineering problems. Journal of Computational Design and Engineering, 10(1), 36–64.
Tang, D. (2019). Spherical evolution for solving continuous optimization problems. Applied Soft Computing, 81, 105499.
Yang, J., Zhang, Y., Wang, Z., Todo, Y., Lu, B., & Gao, S. (2021). A cooperative coevolution wingsuit flying search algorithm with spherical evolution. International Journal of Computational Intelligence Systems, 14(1), 178.
Cai, P., Yang, H., Zhang, Y., Todo, Y., Tang, Z., & Gao, S. (2020). A sine cosine algorithm enhanced spherical evolution for continuous optimization problems. In 2020 13th international symposium on computational intelligence and design (ISCID) (pp. 1–6).
Weng, X., Heidari, A. A., Liang, G., Chen, H., Ma, X., Mafarja, M., & Turabieh, H. (2021). Laplacian nelder-mead spherical evolution for parameter estimation of photovoltaic models. Energy Conversion and Management, 243, 114223.
Li, Z., Yang, H., Zhang, Z., Todo, Y., & Gao, S. (2020). Spherical evolution enhanced with salp swarm algorithm. In 2020 13th international symposium on computational intelligence and design (ISCID) (pp. 62–66).
Zhang, Z., Lei, Z., Zhang, Y., Todo, Y., Tang, Z., & Gao, S. (2020). A hybrid spherical evolution and particle swarm optimization algorithm. In 2020 IEEE international conference on artificial intelligence and information systems (ICAIIS), Dalian, China (pp. 167–172).
Yang, H., Gao, S., Wang, R. L., & Todo, Y. (2021). A ladder spherical evolution search algorithm. IEICE Transactions on Information and Systems, 104, 461–464.
Yang, L., Gao, S., Yang, H., Cai, Z., Lei, Z., & Todo, Y. (2021). Adaptive chaotic spherical evolution algorithm. Memetic Computing, 13(3), 383–411.
Zhao, J., Zhang, B., Guo, X., Qi, L., & Li, Z. (2022). Self-adapting spherical search algorithm with differential evolution for global optimization. Mathematics, 10(23), 4519.
Zhou, W., Wang, P., Heidari, A. A., Zhao, X., Turabieh, H., Mafarja, M., & Chen, H. (2021). Metaphor-free dynamic spherical evolution for parameter estimation of photovoltaic modules. Energy Reports, 7, 5175–5202.
Li, J., Zhang, Z., Lei, Z., Yi, J., & Gao, S. (2022). A lottery-based spherical evolution algorithm with elite retention strategy. In 2022 14th international conference on intelligent human-machine systems and cybernetics (IHMSC), Hangzhou, China, (pp. 109–113).
Meng, A.-B., Chen, Y.-C., Yin, H., & Chen, S.-Z. (2014). Crisscross optimization algorithm and its application. Knowledge-Based Systems, 67, 218–229.
Gao, W., Chan, F. T. S., Huang, L., & Liu, S. (2015). Bare bones artificial bee colony algorithm with parameter adaptation and fitness-based neighborhood. Information Sciences, 316, 180–200.
Heidari, A. A., Abbaspour, R. A., & Jordehi, A. R. (2017). Gaussian bare-bones water cycle algorithm for optimal reactive power dispatch in electrical power systems. Applied Soft Computing, 57, 657–671.
Wei, Y., Lv, H., Chen, M., Wang, M., Heidari, A. A., Chen, H., & Li, C. (2020). Predicting entrepreneurial intention of students: An extreme learning machine with gaussian barebone harris hawks optimizer. IEEE Access, 8, 76841–76855.
Wu, S., Heidari, A. A., Zhang, S., Kuang, F., & Chen, H. (2023). Gaussian bare-bone slime mould algorithm: Performance optimization and case studies on truss structures. Artificial Intelligence Review, 2023, 1–37.
Xu, Z., Heidari, A. A., Kuang, F., Khalil, A., Mafarja, M. M., Zhang, S., Chen, H., & Pan, Z. (2022). Enhanced gaussian bare-bones grasshopper optimization: Mitigating the performance concerns for feature selection. Expert Systems with Applications, 212, 118642.
Wu, G., Mallipeddi, R., & Suganthan, P. N. (2016). Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization.
García, S., Fernández, A., Luengo, J., & Herrera, F. (2010). Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences, 180(10), 2044–2064.
Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67.
Mirjalili, S., Mirjalili, S. M., & Hatamlou, A. (2016). Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Computing and Applications, 27(2), 495–513.
Liu, L., Zhao, D., Yu, F., Heidari, A. A., Li, C., Ouyang, J., Chen, H., Mafarja, M., Turabieh, H., & Pan, J. (2021). Ant colony optimization with Cauchy and greedy Levy mutations for multilevel COVID-19 X-ray image segmentation. Computers in Biology and Medicine, 136, 104609.
Song, S., Wang, P., Heidari, A. A., Wang, M., Zhao, X., Chen, H., He, W., & Xu, S. (2021). Dimension decided harris hawks optimization with gaussian mutation: Balance analysis and diversity patterns. Knowledge-Based Systems, 215, 106425.
Yang, X., Zhao, D., Yu, F., Heidari, A. A., Bano, Y., Ibrohimov, A., Liu, Y., Cai, Z., Chen, H., & Chen, X. (2022). An optimized machine learning framework for predicting intradialytic hypotension using indexes of chronic kidney disease-mineral and bone disorders. Computers in Biology and Medicine, 145, 105510.
Xu, Y., Chen, H., Luo, J., Zhang, Q., Jiao, S., & Zhang, X. (2019). Enhanced moth-flame optimizer with mutation strategy for global optimization. Information Sciences, 492, 181–203.
Chen, X., Huang, H., Heidari, A. A., Sun, C., Lv, Y., Gui, W., Liang, G., Gu, Z., Chen, H., Li, C., & Chen, P. (2022). An efficient multilevel thresholding image segmentation method based on the slime mould algorithm with bee foraging mechanism: A real case with lupus nephritis images. Computers in Biology and Medicine, 142, 105179.
Issa, M., Hassanien, A. E., Oliva, D., Helmi, A., Ziedan, I., & Alzohairy, A. (2018). ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment. Expert Systems with Applications, 99, 56–70.
Tu, J. Z., Chen, H. L., Liu, J. C., Heidari, A. A., Zhang, X. Q., Wang, M. J., Ruby, R., & Pham, Q. V. (2021). Evolutionary biogeography-based whale optimization methods with communication structure: Towards measuring the balance. Knowledge-Based Systems, 212, 31.
Wang, G. C., Heidari, A. A., Wang, M. J., Kuang, F. J., Zhu, W., & Chen, H. L. (2021). Chaotic arc adaptive grasshopper optimization. IEEE Access, 9, 17672–17706.
Huang, F.-Z., Wang, L., & He, Q. (2007). An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation, 186(1), 340–356.
Mahdavi, M., Fesanghary, M., & Damangir, E. (2007). An improved harmony search algorithm for solving optimization problems. Applied Mathematics and Computation, 188(2), 1567–1579.
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41(2), 113–127.
He, Q., & Wang, L. (2007). An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Engineering Applications of Artificial Intelligence, 20(1), 89–99.
Mezura-Montes, E., & Coello, C. A. C. (2008). An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. International Journal of General Systems, 37(4), 443–473.
Rashedi, E., Nezamabadi-Pour, H., & Saryazdi, S. (2009). GSA: A gravitational search algorithm. Information Sciences, 179(13), 2232–2248.
Arora, J. S. (2004). Introduction to optimum design. Elsevier.
Belegundu, A. D., & Arora, J. S. (1985). A study of mathematical programming methods for structural optimization. Part i: Theory. International Journal for Numerical Methods in Engineering, 21(9), 1583–1599.
Kaveh, A., & Khayatazad, M. (2012). A new meta-heuristic method: Ray optimization. Computers and Structures, 112–113, 283–294.
Coello Coello, C. A., & Mezura, M. E. (2002). Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16(3), 193–203.
Krohling, R. A., & Coelho, L. d. S. (2006). Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 36(6), 1407–1416.
Coello Coello, C. A. (2002). Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191(11), 1245–1287.
Coelho, L. d. S. (2010). Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Systems with Applications, 37(2), 1676–1683.
Sandgren, E. (1988). Nonlinear integer and discrete programming in mechanical design. In Proceedings of the ASME 1988 design technology conferences. 14th design automation conference, Kissimmee, Florida, USA, 25–28 September 1988 (pp. 95–105).
Kannan, B., & Kramer, S. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116, 405–411.
He, S., Prempain, E., & Wu, Q. (2004). An improved particle swarm optimizer for mechanical design optimization problems. Engineering Optimization, 36, 585–605.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, Perth, WA, Australia (pp. 1942–1948).
Rao, R. V., Savsani, V. J., & Vakharia, D. P. (2011). Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer-Aided Design, 43(3), 303–315.
Kentli, A., & Sahbaz, M. (2014). Optimisation of hydrostatic thrust bearing using sequential quadratic programming. Oxidation Communications, 37(4), 1144–1152.
Su, H., Zhao, D., Heidari, A. A., Liu, L., Zhang, X., Mafarja, M., & Chen, H. (2023). RIME: A physics-based optimization. Neurocomputing, 532, 183–214.
Wang, G. G. (2003). Adaptive response surface method using inherited Latin hypercube design points. Journal of Mechanical Design, 125(2), 210–220.
Cheng, M.-Y., & Prayogo, D. (2014). Symbiotic organisms search: A new metaheuristic optimization algorithm. Computers & Structures, 139, 98–112.
Gandomi, A. H., Yang, X.-S., & Alavi, A. H. (2013). Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Engineering with Computers, 29, 17–35.
Funding
This paper is partially supported by MRC (MC_PC_17171); Royal Society (RP202G0230); BHF (AA/18/3/34220); Hope Foundation for Cancer Research (RM60G0680); GCRF (P202PF11); Sino-UK Industrial Fund (RP202G0289); LIAS (P202ED10, P202RE969); Data Science Enhancement Fund (P202RE237); Fight for Sight (24NN201); Sino-UK Education Fund (OP202006); BBSRC (RM32G0178B8); Natural Science Foundation of Zhejiang Province (LZ22F020005); National Natural Science Foundation of China (62076185); The 18th batch of innovative and entrepreneurial talent funding projects in Jilin Province (No. 49); Natural Science Foundation of Jilin Province (YDZJ202201ZYTS567).
Ethics declarations
Conflict of Interest
The authors declare that there is no conflict of interest regarding the publication of the article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Li, Y., Zhao, D., Heidari, A.A. et al. Gaussian Backbone-Based Spherical Evolutionary Algorithm with Cross-search for Engineering Problems. J Bionic Eng 21, 1055–1091 (2024). https://doi.org/10.1007/s42235-023-00476-1