-
Empirical Analysis of the Dynamic Binary Value Problem with IOHprofiler arXiv.cs.NE Pub Date : 2024-04-24 Diederick Vermetten, Johannes Lengler, Dimitri Rusin, Thomas Bäck, Carola Doerr
Optimization problems in dynamic environments have recently been the source of several theoretical studies. One of these problems is the monotonic Dynamic Binary Value problem, which theoretically has high discriminatory power between different Genetic Algorithms. Given this theoretical foundation, we integrate several versions of this problem into the IOHprofiler benchmarking framework. Using this
-
Large Language Models as In-context AI Generators for Quality-Diversity arXiv.cs.NE Pub Date : 2024-04-24 Bryan Lim, Manon Flageat, Antoine Cully
Quality-Diversity (QD) approaches are a promising direction to develop open-ended processes as they can discover archives of high-quality solutions across diverse niches. While already successful in many applications, QD approaches usually rely on combining only one or two solutions to generate new candidate solutions. As observed in open-ended processes such as technological evolution, wisely combining
-
Biologically-Informed Excitatory and Inhibitory Balance for Robust Spiking Neural Network Training arXiv.cs.NE Pub Date : 2024-04-24 Joseph A. Kilgore, Jeffrey D. Kopsick, Giorgio A. Ascoli, Gina C. Adam
Spiking neural networks drawing inspiration from biological constraints of the brain promise an energy-efficient paradigm for artificial intelligence. However, challenges exist in identifying guiding principles to train these networks in a robust fashion. In addition, training becomes an even more difficult problem when incorporating biological constraints of excitatory and inhibitory connections.
-
GRSN: Gated Recurrent Spiking Neurons for POMDPs and MARL arXiv.cs.NE Pub Date : 2024-04-24 Lang Qin, Ziming Wang, Runhao Jiang, Rui Yan, Huajin Tang
Spiking neural networks (SNNs) are widely applied in various fields due to their energy-efficient and fast-inference capabilities. Applying SNNs to reinforcement learning (RL) can significantly reduce the computational resource requirements for agents and improve the algorithm's performance under resource-constrained conditions. However, in current spiking reinforcement learning (SRL) algorithms, the
-
NeuraChip: Accelerating GNN Computations with a Hash-based Decoupled Spatial Accelerator arXiv.cs.NE Pub Date : 2024-04-23 Kaustubh Shivdikar, Nicolas Bohm Agostini, Malith Jayaweera, Gilbert Jonatan, Jose L. Abellan, Ajay Joshi, John Kim, David Kaeli
Graph Neural Networks (GNNs) are emerging as a formidable tool for processing non-Euclidean data across various domains, ranging from social network analysis to bioinformatics. Despite their effectiveness, their adoption has not been pervasive because of scalability challenges associated with large-scale graph datasets, particularly when leveraging message passing. To tackle these challenges, we introduce
-
Deep multi-prototype capsule networks arXiv.cs.NE Pub Date : 2024-04-23 Saeid Abbassi, Kamaledin Ghiasi-Shirazi, Ahad Harati
Capsule networks are a type of neural network that identify image parts and form the instantiation parameters of a whole hierarchically. The goal behind the network is to perform an inverse computer graphics task, and the network parameters are the mapping weights that transform parts into a whole. The trainability of capsule networks in complex data with high intra-class or intra-part variation is
-
Evolutionary Reinforcement Learning via Cooperative Coevolution arXiv.cs.NE Pub Date : 2024-04-23 Chengpeng Hu, Jialin Liu, Xin Yao
Recently, evolutionary reinforcement learning has obtained much attention in various domains. Maintaining a population of actors, evolutionary reinforcement learning utilises the collected experiences to improve the behaviour policy through efficient exploration. However, the poor scalability of genetic operators limits the efficiency of optimising high-dimensional neural networks. To address this
-
A Survey of Decomposition-Based Evolutionary Multi-Objective Optimization: Part I-Past and Future arXiv.cs.NE Pub Date : 2024-04-22 Ke Li
Decomposition has been the mainstream approach in classic mathematical programming for multi-objective optimization and multi-criterion decision-making. However, it was not properly studied in the context of evolutionary multi-objective optimization (EMO) until the development of multi-objective evolutionary algorithm based on decomposition (MOEA/D). In this two-part survey series, we use MOEA/D as
-
A Survey of Decomposition-Based Evolutionary Multi-Objective Optimization: Part II -- A Data Science Perspective arXiv.cs.NE Pub Date : 2024-04-22 Mingyu Huang, Ke Li
This paper presents the second part of the two-part survey series on decomposition-based evolutionary multi-objective optimization where we mainly focus on discussing the literature related to multi-objective evolutionary algorithms based on decomposition (MOEA/D). Complementary to the first part, here we employ a series of advanced data mining approaches to provide a comprehensive anatomy of the enormous
-
Bridging the Gap Between Theory and Practice: Benchmarking Transfer Evolutionary Optimization arXiv.cs.NE Pub Date : 2024-04-20 Yaqing Hou, Wenqiang Ma, Abhishek Gupta, Kavitesh Kumar Bali, Hongwei Ge, Qiang Zhang, Carlos A. Coello Coello, Yew-Soon Ong
In recent years, the field of Transfer Evolutionary Optimization (TrEO) has witnessed substantial growth, fueled by the realization of its profound impact on solving complex problems. Numerous algorithms have emerged to address the challenges posed by transferring knowledge between tasks. However, the recently highlighted "no free lunch theorem" in transfer optimization clarifies that no single algorithm
-
Prove Symbolic Regression is NP-hard by Symbol Graph arXiv.cs.NE Pub Date : 2024-04-22 Jinglu Song, Qiang Lu, Bozhou Tian, Jingwen Zhang, Jake Luo, Zhiguang Wang
Symbolic regression (SR) is the task of discovering a symbolic expression that fits a given data set from the space of mathematical expressions. Despite the abundance of research surrounding the SR problem, there's a scarcity of works that confirm its NP-hard nature. Therefore, this paper introduces the concept of a symbol graph as a comprehensive representation of the entire mathematical expression
-
On the Temperature of Machine Learning Systems arXiv.cs.NE Pub Date : 2024-04-19 Dong Zhang
We develop a thermodynamic theory for machine learning (ML) systems. Similar to physical thermodynamic systems, which are characterized by energy and entropy, ML systems possess these characteristics as well. This comparison inspires us to integrate the concept of temperature into ML systems grounded in the fundamental principles of thermodynamics, and to establish a basic thermodynamic framework for machine
-
Leveraging Symbolic Regression for Heuristic Design in the Traveling Thief Problem arXiv.cs.NE Pub Date : 2024-04-19 Andrew Ni, Lee Spector
The Traveling Thief Problem is an NP-hard combination of the well-known traveling salesman and knapsack packing problems. In this paper, we use symbolic regression to learn useful features of near-optimal packing plans, which we then use to design efficient metaheuristic genetic algorithms for the traveling thief problem. By using symbolic regression again to initialize the metaheuristic GA with
-
Near-Tight Runtime Guarantees for Many-Objective Evolutionary Algorithms arXiv.cs.NE Pub Date : 2024-04-19 Simon Wietheger, Benjamin Doerr
Despite significant progress in the field of mathematical runtime analysis of multi-objective evolutionary algorithms (MOEAs), the performance of MOEAs on discrete many-objective problems is little understood. In particular, the few existing bounds for the SEMO, global SEMO, and SMS-EMOA algorithms on classic benchmarks are all roughly quadratic in the size of the Pareto front. In this work, we prove
-
Breaching the Bottleneck: Evolutionary Transition from Reward-Driven Learning to Reward-Agnostic Domain-Adapted Learning in Neuromodulated Neural Nets arXiv.cs.NE Pub Date : 2024-04-19 Solvi Arnold, Reiji Suzuki, Takaya Arita, Kimitoshi Yamazaki
Advanced biological intelligence learns efficiently from an information-rich stream of stimulus information, even when feedback on behaviour quality is sparse or absent. Such learning exploits implicit assumptions about task domains. We refer to such learning as Domain-Adapted Learning (DAL). In contrast, AI learning algorithms rely on explicit externally provided measures of behaviour quality to acquire
-
Next Generation Loss Function for Image Classification arXiv.cs.NE Pub Date : 2024-04-19 Shakhnaz Akhmedova (Center for Artificial Intelligence in Public Health Research, Robert Koch Institute, Berlin, Germany), Nils Körber (Center for Artificial Intelligence in Public Health Research, Robert Koch Institute, Berlin, Germany)
Neural networks are trained by minimizing a loss function that defines the discrepancy between the predicted model output and the target value. The selection of the loss function is crucial to achieve task-specific behaviour and highly influences the capability of the model. A variety of loss functions have been proposed for a wide range of tasks affecting training and model performance. For classification
-
ESC: Evolutionary Stitched Camera Calibration in the Wild arXiv.cs.NE Pub Date : 2024-04-19 Grzegorz Rypeść, Grzegorz Kurzejamski
This work introduces a novel end-to-end approach for estimating extrinsic parameters of cameras in multi-camera setups on real-life sports fields. We identify the source of significant calibration errors in multi-camera environments and address the limitations of existing calibration methods, particularly the disparity between theoretical models and actual sports field characteristics. We propose the
-
How Population Diversity Influences the Efficiency of Crossover arXiv.cs.NE Pub Date : 2024-04-18 Sacha Cerf, Johannes Lengler
Our theoretical understanding of crossover is limited by our ability to analyze how population diversity evolves. In this study, we provide one of the first rigorous analyses of population diversity and optimization time in a setting where large diversity and large population sizes are required to speed up progress. We give a formal and general criterion for the amount of diversity that is necessary and sufficient
-
Self-Adjusting Evolutionary Algorithms Are Slow on Multimodal Landscapes arXiv.cs.NE Pub Date : 2024-04-18 Johannes Lengler, Konstantin Sturm
The one-fifth rule and its generalizations are a classical parameter control mechanism in discrete domains. They have also been transferred to control the offspring population size of the $(1, \lambda)$-EA. This has been shown to work very well for hill-climbing, and combined with a restart mechanism it was recently shown by Hevia Fajardo and Sudholt to improve performance on the multi-modal problem
-
Analysis of Evolutionary Diversity Optimisation for the Maximum Matching Problem arXiv.cs.NE Pub Date : 2024-04-17 Jonathan Gadea Harder, Aneta Neumann, Frank Neumann
This paper explores the enhancement of solution diversity in evolutionary algorithms (EAs) for the maximum matching problem, concentrating on complete bipartite graphs and paths. We adopt binary string encoding for matchings and use Hamming distance to measure diversity, aiming for its maximization. Our study centers on the $(\mu+1)$-EA and $2P-EA_D$, which are applied to optimize diversity. We provide
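The Hamming-distance diversity measure named above is simple to state. A minimal sketch follows; the pairwise-sum formulation is one common choice, assumed here purely for illustration:

```python
def hamming(x, y):
    """Hamming distance between two equal-length bit strings."""
    return sum(a != b for a, b in zip(x, y))

def population_diversity(pop):
    """Sum of pairwise Hamming distances over a population of
    binary-encoded solutions: the quantity a diversity-optimizing
    EA tries to maximize (illustrative definition)."""
    return sum(hamming(pop[i], pop[j])
               for i in range(len(pop))
               for j in range(i + 1, len(pop)))
```

For example, the population `[[0,0], [1,1], [0,1]]` has pairwise distances 2, 1, and 1, so its diversity is 4.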
-
Evolutionary Multi-Objective Optimisation for Fairness-Aware Self Adjusting Memory Classifiers in Data Streams arXiv.cs.NE Pub Date : 2024-04-18 Pivithuru Thejan Amarasinghe, Diem Pham, Binh Tran, Su Nguyen, Yuan Sun, Damminda Alahakoon
This paper introduces a novel approach, evolutionary multi-objective optimisation for fairness-aware self-adjusting memory classifiers, designed to enhance fairness in machine learning algorithms applied to data stream classification. With the growing concern over discrimination in algorithmic decision-making, particularly in dynamic data stream environments, there is a need for methods that ensure
-
Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem arXiv.cs.NE Pub Date : 2024-04-17 Denis Antipov, Aneta Neumann, Frank Neumann, Andrew M. Sutton
Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One of the frequently used approaches to solving such problems is to use evolutionary algorithms which evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyse EDO on a 3-objective function LOTZ$_k$, which
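The classic bi-objective (LeadingOnes, TrailingZeros) function underlying LOTZ$_k$ is easy to state; a minimal sketch is below. The paper's 3-objective LOTZ$_k$ generalizes it, but only the standard bi-objective version is shown here:

```python
def lotz(x):
    """Bi-objective (LeadingOnes, TrailingZeros): maximize both the
    number of leading one-bits and the number of trailing zero-bits."""
    lo = 0
    for b in x:
        if b != 1:
            break
        lo += 1
    tz = 0
    for b in reversed(x):
        if b != 0:
            break
        tz += 1
    return lo, tz
```

For example, `lotz([1,1,0,1,0,0])` returns `(2, 2)`: two leading ones and two trailing zeros, with the Pareto front consisting of strings of the form 1^i 0^(n-i).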
-
Runtime Analyses of NSGA-III on Many-Objective Problems arXiv.cs.NE Pub Date : 2024-04-17 Andre Opris, Duc-Cuong Dang, Dirk Sudholt
NSGA-II and NSGA-III are two of the most popular evolutionary multi-objective algorithms used in practice. While NSGA-II is used for few objectives such as 2 and 3, NSGA-III is designed to deal with a larger number of objectives. In a recent breakthrough, Wietheger and Doerr (IJCAI 2023) gave the first runtime analysis for NSGA-III on the 3-objective OneMinMax problem, showing that this state-of-the-art
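For reference, the OneMinMax benchmark named above is compact enough to state directly; a minimal sketch of the standard bi-objective version:

```python
def one_min_max(x):
    """Bi-objective OneMinMax: maximize both the number of zeros and
    the number of ones. Every bit string is Pareto-optimal, so the
    algorithmic task is covering the entire Pareto front."""
    ones = sum(x)
    return len(x) - ones, ones
```

Because every solution lies on the front, runtime analyses on OneMinMax measure how fast an algorithm's population spreads to cover all n+1 objective vectors.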
-
Runtime Analysis of a Multi-Valued Compact Genetic Algorithm on Generalized OneMax arXiv.cs.NE Pub Date : 2024-04-17 Sumit Adak, Carsten Witt
A class of metaheuristic techniques called estimation-of-distribution algorithms (EDAs) are employed in optimization as more sophisticated substitutes for traditional strategies like evolutionary algorithms. EDAs generally drive the search for the optimum by creating explicit probabilistic models of potential candidate solutions through repeated sampling and selection from the underlying search space
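The compact GA idea is easy to see in the binary case; the sketch below is the standard binary cGA on OneMax (the paper studies a multi-valued generalization, which is not reproduced here). The hypothetical parameter `pop_size` plays the role of the cGA's virtual population size K:

```python
import random

def cga_onemax(n, pop_size, max_iters, seed=0):
    """Binary compact GA on OneMax (illustrative sketch).

    A frequency vector p models the distribution of one-bits. Each
    step samples two solutions, and p is shifted towards the winner
    by 1/pop_size per differing bit, with borders at 1/n and 1-1/n.
    """
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(max_iters):
        x = [int(rng.random() < pi) for pi in p]
        y = [int(rng.random() < pi) for pi in p]
        if sum(y) > sum(x):
            x, y = y, x                  # x is the winner
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / pop_size
                p[i] += step if x[i] == 1 else -step
                p[i] = min(1.0 - 1.0 / n, max(1.0 / n, p[i]))
        if all(pi >= 1.0 - 1.0 / n for pi in p):
            break                        # model has converged
    return p
```

The explicit probabilistic model (the vector `p`) replaces the population of a conventional evolutionary algorithm, which is the defining feature of estimation-of-distribution algorithms.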
-
Trackable Agent-based Evolution Models at Wafer Scale arXiv.cs.NE Pub Date : 2024-04-16 Matthew Andres Moreno, Connor Yang, Emily Dolson, Luis Zaman
Continuing improvements in computing hardware are poised to transform capabilities for in silico modeling of cross-scale phenomena underlying major open questions in evolutionary biology and artificial life, such as transitions in individuality, eco-evolutionary dynamics, and rare evolutionary events. Emerging ML/AI-oriented hardware accelerators, like the 850,000 processor Cerebras Wafer Scale Engine
-
Towards free-response paradigm: a theory on decision-making in spiking neural networks arXiv.cs.NE Pub Date : 2024-04-16 Zhichao Zhu, Yang Qi, Wenlian Lu, Zhigang Wang, Lu Cao, Jianfeng Feng
The energy-efficient and brain-like information processing abilities of Spiking Neural Networks (SNNs) have attracted considerable attention, establishing them as a crucial element of brain-inspired computing. One prevalent challenge encountered by SNNs is the trade-off between inference speed and accuracy, which requires sufficient time to achieve the desired level of performance. Drawing inspiration
-
Hardware-aware training of models with synaptic delays for digital event-driven neuromorphic processors arXiv.cs.NE Pub Date : 2024-04-16 Alberto Patino-Saucedo, Roy Meijer, Amirreza Yousefzadeh, Manil-Dev Gomony, Federico Corradi, Paul Detteter, Laura Garrido-Regife, Bernabe Linares-Barranco, Manolis Sifalakis
Configurable synaptic delays are a basic feature in many neuromorphic neural network hardware accelerators. However, they have been rarely used in model implementations, despite their promising impact on performance and efficiency in tasks that exhibit complex (temporal) dynamics, as it has been unclear how to optimize them. In this work, we propose a framework to train and deploy, in digital neuromorphic
-
An Enhanced Differential Grouping Method for Large-Scale Overlapping Problems arXiv.cs.NE Pub Date : 2024-04-16 Maojiang Tian, Mingke Chen, Wei Du, Yang Tang, Yaochu Jin
Large-scale overlapping problems are prevalent in practical engineering applications, and the optimization challenge is significantly amplified due to the existence of shared variables. Decomposition-based cooperative coevolution (CC) algorithms have demonstrated promising performance in addressing large-scale overlapping problems. However, current CC frameworks designed for overlapping problems rely
-
Learning from Offline and Online Experiences: A Hybrid Adaptive Operator Selection Framework arXiv.cs.NE Pub Date : 2024-04-16 Jiyuan Pei, Jialin Liu, Yi Mei
In many practical applications, usually, similar optimisation problems or scenarios repeatedly appear. Learning from previous problem-solving experiences can help adjust algorithm components of meta-heuristics, e.g., adaptively selecting promising search operators, to achieve better optimisation performance. However, those experiences obtained from previously solved problems, namely offline experiences
-
Engineering software 2.0 by interpolating neural networks: unifying training, solving, and calibration arXiv.cs.NE Pub Date : 2024-04-16 Chanwook Park, Sourav Saha, Jiachen Guo, Xiaoyu Xie, Satyajit Mojumder, Miguel A. Bessa, Dong Qian, Wei Chen, Gregory J. Wagner, Jian Cao, Wing Kam Liu
The evolution of artificial intelligence (AI) and neural network theories has revolutionized the way software is programmed, shifting from a hard-coded series of codes to a vast neural network. However, this transition in engineering software has faced challenges such as data scarcity, multi-modality of data, low model accuracy, and slow inference. Here, we propose a new network based on interpolation
-
Multi-objective evolutionary GAN for tabular data synthesis arXiv.cs.NE Pub Date : 2024-04-15 Nian Ran, Bahrul Ilmi Nasution, Claire Little, Richard Allmendinger, Mark Elliot
Synthetic data has a key role to play in data sharing by statistical agencies and other generators of statistical data products. Generative Adversarial Networks (GANs), typically applied to image synthesis, are also a promising method for tabular data synthesis. However, there are unique challenges in tabular data compared to images, e.g., tabular data may contain both continuous and discrete variables
-
NeuroLGP-SM: Scalable Surrogate-Assisted Neuroevolution for Deep Neural Networks arXiv.cs.NE Pub Date : 2024-04-12 Fergal Stapleton, Edgar Galván
Evolutionary Algorithms (EAs) play a crucial role in the architectural configuration and training of Artificial Deep Neural Networks (DNNs), a process known as neuroevolution. However, neuroevolution is hindered by its inherent computational expense, requiring multiple generations, a large population, and numerous epochs. The most computationally intensive aspect lies in evaluating the fitness function
-
An Integrated Toolbox for Creating Neuromorphic Edge Applications arXiv.cs.NE Pub Date : 2024-04-12 Lars Niedermeier (Niedermeier Consulting, Zurich, ZH, Switzerland), Jeffrey L. Krichmar (Department of Cognitive Sciences, Department of Computer Science, University of California, Irvine, CA, USA)
Spiking Neural Networks (SNNs) and neuromorphic models are more efficient and have more biological realism than the activation functions typically used in deep neural networks, transformer models and generative AI. SNNs have local learning rules, are able to learn on small data sets, and can adapt through neuromodulation. Although research has shown their advantages, there are still few compelling
-
Multi-scale Topology Optimization using Neural Networks arXiv.cs.NE Pub Date : 2024-04-11 Hongrui Chen, Xingchen Liu, Levent Burak Kara
A long-standing challenge is designing multi-scale structures with good connectivity between cells while optimizing each cell to reach close to the theoretical performance limit. We propose a new method for direct multi-scale topology optimization using neural networks. Our approach focuses on inverse homogenization that seamlessly maintains compatibility across neighboring microstructure cells. Our
-
LSROM: Learning Self-Refined Organizing Map for Fast Imbalanced Streaming Data Clustering arXiv.cs.NE Pub Date : 2024-04-14 Yongqi Xu, Yujian Lee, Rong Zou, Yiqun Zhang, Yiu-Ming Cheung
Streaming data clustering is a popular research topic in the fields of data mining and machine learning. Compared to static data, streaming data, which is usually analyzed in data chunks, is more susceptible to encountering the dynamic cluster imbalanced issue. That is, the imbalanced degree of clusters varies in different streaming data chunks, leading to corruption in either the accuracy or the efficiency
-
Uncertainty Quantification in Detecting Choroidal Metastases on MRI via Evolutionary Strategies arXiv.cs.NE Pub Date : 2024-04-12 Bala McRae-Posani, Andrei Holodny, Hrithwik Shalu, Joseph N Stember
Uncertainty quantification plays a vital role in facilitating the practical implementation of AI in radiology by addressing growing concerns around trustworthiness. Given the challenges associated with acquiring large, annotated datasets in this field, there is a need for methods that enable uncertainty quantification in small data AI approaches tailored to radiology images. In this study, we focused
-
Analyzing and Overcoming Local Optima in Complex Multi-Objective Optimization by Decomposition-Based Evolutionary Algorithms arXiv.cs.NE Pub Date : 2024-04-12 Ting Dong, Haoxin Wang, Hengxi Zhang, Wenbo Ding
When addressing the challenge of complex multi-objective optimization problems, particularly those with non-convex and non-uniform Pareto fronts, Decomposition-based Multi-Objective Evolutionary Algorithms (MOEADs) often converge to local optima, thereby limiting solution diversity. Despite its significance, this issue has received limited theoretical exploration. Through a comprehensive geometric
-
Evolutionary Preference Sampling for Pareto Set Learning arXiv.cs.NE Pub Date : 2024-04-12 Rongguang Ye, Longcan Chen, Jinyuan Zhang, Hisao Ishibuchi
Recently, Pareto Set Learning (PSL) has been proposed for learning the entire Pareto set using a neural network. PSL employs preference vectors to scalarize multiple objectives, facilitating the learning of mappings from preference vectors to specific Pareto optimal solutions. Previous PSL methods have shown their effectiveness in solving artificial multi-objective optimization problems (MOPs) with
-
RLEMMO: Evolutionary Multimodal Optimization Assisted By Deep Reinforcement Learning arXiv.cs.NE Pub Date : 2024-04-12 Hongqiao Lian, Zeyuan Ma, Hongshu Guo, Ting Huang, Yue-Jiao Gong
Solving multimodal optimization problems (MMOP) requires finding all optimal solutions, which is challenging in limited function evaluations. Although existing works strike the balance of exploration and exploitation through hand-crafted adaptive strategies, they require certain expert knowledge, hence inflexible to deal with MMOP with different properties. In this paper, we propose RLEMMO, a Meta-Black-Box
-
Auto-configuring Exploration-Exploitation Tradeoff in Evolutionary Computation via Deep Reinforcement Learning arXiv.cs.NE Pub Date : 2024-04-12 Zeyuan Ma, Jiacheng Chen, Hongshu Guo, Yining Ma, Yue-Jiao Gong
Evolutionary computation (EC) algorithms, renowned as powerful black-box optimizers, leverage a group of individuals to cooperatively search for the optimum. The exploration-exploitation tradeoff (EET) plays a crucial role in EC, which, however, has traditionally been governed by manually designed rules. In this paper, we propose a deep reinforcement learning-based framework that autonomously configures
-
Multi-Objective Evolutionary Algorithms with Sliding Window Selection for the Dynamic Chance-Constrained Knapsack Problem arXiv.cs.NE Pub Date : 2024-04-12 Kokila Kasuni Perera, Aneta Neumann
Evolutionary algorithms are particularly effective for optimisation problems with dynamic and stochastic components. We propose multi-objective evolutionary approaches for the knapsack problem with stochastic profits under static and dynamic weight constraints. The chance-constrained problem model allows us to effectively capture the stochastic profits and associate a confidence level to the solutions'
-
R2 Indicator and Deep Reinforcement Learning Enhanced Adaptive Multi-Objective Evolutionary Algorithm arXiv.cs.NE Pub Date : 2024-04-11 Farajollah Tahernezhad-Javazm, Debbie Rankin, Naomi Du Bois, Alice E. Smith, Damien Coyle
Choosing an appropriate optimization algorithm is essential to achieving success in optimization challenges. Here we present a new evolutionary algorithm structure that utilizes a reinforcement learning-based agent aimed at addressing these issues. The agent employs a double deep q-network to choose a specific evolutionary operator based on feedback it receives from the environment during optimization
-
Generalized Population-Based Training for Hyperparameter Optimization in Reinforcement Learning arXiv.cs.NE Pub Date : 2024-04-12 Hui Bai, Ran Cheng
Hyperparameter optimization plays a key role in the machine learning domain. Its significance is especially pronounced in reinforcement learning (RL), where agents continuously interact with and adapt to their environments, requiring dynamic adjustments in their learning trajectories. To cater to this dynamicity, the Population-Based Training (PBT) was introduced, leveraging the collective intelligence
-
Sup3r: A Semi-Supervised Algorithm for increasing Sparsity, Stability, and Separability in Hierarchy Of Time-Surfaces architectures arXiv.cs.NE Pub Date : 2024-04-15 Marco Rasetto, Himanshu Akolkar
The Hierarchy Of Time-Surfaces (HOTS) algorithm, a neuromorphic approach for feature extraction from event data, presents promising capabilities but faces challenges in accuracy and compatibility with neuromorphic hardware. In this paper, we introduce Sup3r, a Semi-Supervised algorithm aimed at addressing these challenges. Sup3r enhances sparsity, stability, and separability in the HOTS networks. It
-
Items or Relations -- what do Artificial Neural Networks learn? arXiv.cs.NE Pub Date : 2024-04-15 Renate Krause, Stefan Reimann
What has an Artificial Neural Network (ANN) learned after being successfully trained to solve a task - the set of training items or the relations between them? This question is difficult to answer for modern applied ANNs because of their enormous size and complexity. Therefore, here we consider a low-dimensional network and a simple task, i.e., the network has to reproduce a set of training items identically
-
Impact of Training Instance Selection on Automated Algorithm Selection Models for Numerical Black-box Optimization arXiv.cs.NE Pub Date : 2024-04-11 Konstantin Dietrich, Diederick Vermetten, Carola Doerr, Pascal Kerschke
The recently proposed MA-BBOB function generator provides a way to create numerical black-box benchmark problems based on the well-established BBOB suite. Initial studies on this generator highlighted its ability to smoothly transition between the component functions, both from a low-level landscape feature perspective, as well as with regard to algorithm performance. This suggests that MA-BBOB-generated
-
UAV-enabled Collaborative Beamforming via Multi-Agent Deep Reinforcement Learning arXiv.cs.NE Pub Date : 2024-04-11 Saichao Liu, Geng Sun, Jiahui Li, Shuang Liang, Qingqing Wu, Pengfei Wang, Dusit Niyato
In this paper, we investigate an unmanned aerial vehicle (UAV)-assisted air-to-ground communication system, where multiple UAVs form a UAV-enabled virtual antenna array (UVAA) to communicate with remote base stations by utilizing collaborative beamforming. To improve the work efficiency of the UVAA, we formulate a UAV-enabled collaborative beamforming multi-objective optimization problem (UCBMOP)
-
Collaborative Ground-Space Communications via Evolutionary Multi-objective Deep Reinforcement Learning arXiv.cs.NE Pub Date : 2024-04-11 Jiahui Li, Geng Sun, Qingqing Wu, Dusit Niyato, Jiawen Kang, Abbas Jamalipour, Victor C. M. Leung
In this paper, we propose a distributed collaborative beamforming (DCB)-based uplink communication paradigm for enabling ground-space direct communications. Specifically, DCB treats the terminals that are unable to establish efficient direct connections with the low Earth orbit (LEO) satellites as distributed antennas, forming a virtual antenna array to enhance the terminal-to-satellite uplink achievable
-
EasyACIM: An End-to-End Automated Analog CIM with Synthesizable Architecture and Agile Design Space Exploration arXiv.cs.NE Pub Date : 2024-04-12 Haoyi Zhang, Jiahao Song, Xiaohan Gao, Xiyuan Tang, Yibo Lin, Runsheng Wang, Ru Huang
Analog Computing-in-Memory (ACIM) is an emerging architecture to perform efficient AI edge computing. However, current ACIM designs usually have unscalable topology and still heavily rely on manual efforts. These drawbacks limit the ACIM application scenarios and lead to an undesired time-to-market. This work proposes an end-to-end automated ACIM based on a synthesizable architecture (EasyACIM). With
-
A Tight $O(4^k/p_c)$ Runtime Bound for a $(\mu+1)$ GA on Jump$_k$ for Realistic Crossover Probabilities arXiv.cs.NE Pub Date : 2024-04-10 Andre Opris, Johannes Lengler, Dirk Sudholt
The Jump$_k$ benchmark was the first problem for which crossover was proven to give a speedup over mutation-only evolutionary algorithms. Jansen and Wegener (2002) proved an upper bound of $O(\mathrm{poly}(n) + 4^k/p_c)$ for the $(\mu+1)$ Genetic Algorithm ($(\mu+1)$ GA), but only for unrealistically small crossover probabilities $p_c$. To date, it remains an open problem to prove similar upper bounds
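The Jump$_k$ fitness function itself is compact; a sketch of the standard definition (following Jansen and Wegener) in Python:

```python
def jump_k(x, k):
    """Jump_k fitness: OneMax shifted up by k outside the gap region.
    Strings with n-k < |x|_1 < n form a fitness valley that mutation
    alone crosses slowly, while crossover can jump it."""
    n, ones = len(x), sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```

For n = 6 and k = 2, the all-ones string scores 8, a string with five ones falls into the valley and scores only 1, and a string with four ones scores 6.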
-
Solving the Food-Energy-Water Nexus Problem via Intelligent Optimization Algorithms arXiv.cs.NE Pub Date : 2024-04-10 Qi Deng, Zheng Fan, Zhi Li, Xinna Pan, Qi Kang, MengChu Zhou
The application of evolutionary algorithms (EAs) to multi-objective optimization problems has been widespread. However, the EA research community has not paid much attention to large-scale multi-objective optimization problems arising from real-world applications. Food-Energy-Water systems, in particular, intricately link food, energy, and water, which impact each other. They usually involve a
-
Neural Optimizer Equation, Decay Function, and Learning Rate Schedule Joint Evolution arXiv.cs.NE Pub Date : 2024-04-10 Brandon Morgan, Dean Hougen
A major contributor to the quality of a deep learning model is the selection of the optimizer. We propose a new dual-joint search space in the realm of neural optimizer search (NOS), along with an integrity check, to automate the process of finding deep learning optimizers. Our dual-joint search space simultaneously allows for the optimization of not only the update equation, but also internal decay
-
Evolving Loss Functions for Specific Image Augmentation Techniques arXiv.cs.NE Pub Date : 2024-04-09 Brandon Morgan, Dean Hougen
Previous work in Neural Loss Function Search (NLFS) has shown a lack of correlation between smaller surrogate functions and large convolutional neural networks with massive regularization. We expand upon this research by revealing another disparity that exists, correlation between different types of image augmentation techniques. We show that different loss functions can perform well on certain image
-
Phylogeny-Informed Interaction Estimation Accelerates Co-Evolutionary Learning arXiv.cs.NE Pub Date : 2024-04-09 Jack Garbus, Thomas Willkens, Alexander Lalejini, Jordan Pollack
Co-evolution is a powerful problem-solving approach. However, fitness evaluation in co-evolutionary algorithms can be computationally expensive, as the quality of an individual in one population is defined by its interactions with many (or all) members of one or more other populations. To accelerate co-evolutionary systems, we introduce phylogeny-informed interaction estimation, which uses runtime
-
Temporal True and Surrogate Fitness Landscape Analysis for Expensive Bi-Objective Optimisation arXiv.cs.NE Pub Date : 2024-04-09 C. J. Rodriguez, S. L. Thomson, T. Alderliesten, P. A. N. Bosman
Many real-world problems have expensive-to-compute fitness functions and are multi-objective in nature. Surrogate-assisted evolutionary algorithms are often used to tackle such problems. Despite this, literature about analysing the fitness landscapes induced by surrogate models is limited, and even non-existent for multi-objective problems. This study addresses this critical gap by comparing landscapes
-
Emergent Braitenberg-style Behaviours for Navigating the ViZDoom `My Way Home' Labyrinth arXiv.cs.NE Pub Date : 2024-04-09 Caleidgh Bayer, Robert J. Smith, Malcolm I. Heywood
The navigation of complex labyrinths with tens of rooms under visual partially observable state is typically addressed using recurrent deep reinforcement learning architectures. In this work, we show that navigation can be achieved through the emergent evolution of a simple Braitenberg-style heuristic that structures the interaction between agent and labyrinth, i.e. complex behaviour from simple heuristics
-
An Enhanced Grey Wolf Optimizer with Elite Inheritance and Balance Search Mechanisms arXiv.cs.NE Pub Date : 2024-04-09 Jianhua Jiang, Ziying Zhao, Weihua Li, Keqin Li
The Grey Wolf Optimizer (GWO) is recognized as a novel meta-heuristic algorithm inspired by the social leadership hierarchy and hunting mechanism of grey wolves. It is well-known for its simple parameter setting, fast convergence speed, and strong optimization capability. In the original GWO, there are two significant design flaws in its fundamental optimization mechanisms. Problem (1): the algorithm
-
Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention arXiv.cs.NE Pub Date : 2024-04-10 Tsendsuren Munkhdalai, Manaal Faruqui, Siddharth Gopal
This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation. A key component in our proposed approach is a new attention technique dubbed Infini-attention. The Infini-attention incorporates a compressive memory into the vanilla attention mechanism and builds in both masked local attention and long-term
-
Exploring the True Potential: Evaluating the Black-box Optimization Capability of Large Language Models arXiv.cs.NE Pub Date : 2024-04-09 Beichen Huang, Xingyu Wu, Yu Zhou, Jibin Wu, Liang Feng, Ran Cheng, Kay Chen Tan
Large language models (LLMs) have gained widespread popularity and demonstrated exceptional performance not only in natural language processing (NLP) tasks but also in non-linguistic domains. Their potential as artificial general intelligence extends beyond NLP, showcasing promising capabilities in diverse optimization scenarios. Despite this rising trend, whether the integration of LLMs into these
-
Using 3-Objective Evolutionary Algorithms for the Dynamic Chance Constrained Knapsack Problem arXiv.cs.NE Pub Date : 2024-04-09 Ishara Hewa Pathiranage, Frank Neumann, Denis Antipov, Aneta Neumann
Real-world optimization problems often involve stochastic and dynamic components. Evolutionary algorithms are particularly effective in these scenarios, as they can easily adapt to uncertain and changing environments but often uncertainty and dynamic changes are studied in isolation. In this paper, we explore the use of 3-objective evolutionary algorithms for the chance constrained knapsack problem