-
Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks. Neural Comput. (IF 2.9), Pub Date: 2024-03-21. Xu Pan, Ruben Coen-Cagli, Odelia Schwartz.
Computational neuroscience studies have shown that the structure of neural variability to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been…
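The setup the abstract describes is easy to reproduce in miniature. A hedged sketch (a toy two-layer NumPy network stands in for a CNN; all sizes, weights, and the dropout rate are illustrative, not the paper's): with Bernoulli dropout kept active at test time, repeated forward passes on one fixed input yield trial-by-trial variability whose covariance can be measured directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a network with a Monte Carlo dropout layer.
W1 = rng.normal(size=(64, 10)) / np.sqrt(10)   # input -> hidden
W2 = rng.normal(size=(5, 64)) / np.sqrt(64)    # hidden -> readout
x = rng.normal(size=10)                        # one fixed stimulus
p_drop = 0.5

def forward(x):
    h = np.maximum(W1 @ x, 0.0)                # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop        # dropout stays on at test time
    h = h * mask / (1.0 - p_drop)              # inverted-dropout scaling
    return W2 @ h

# "Trials": repeated passes on the identical input.
responses = np.stack([forward(x) for _ in range(5000)])   # (trials, units)

# Trial-by-trial covariance of the readout units to an unchanged stimulus.
cov = np.cov(responses, rowvar=False)
print("mean response:", responses.mean(axis=0).round(3))
print("covariance:\n", cov.round(3))
```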
-
Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput. (IF 2.9), Pub Date: 2024-03-21. Madison Cotteret, Hugh Greatorex, Martin Ziegler, Elisabetta Chicca.
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions…
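The vector-symbolic part of this idea can be sketched in a few lines (an illustration only, omitting the paper's attractor-network construction; the state names, stimuli, and dimensionality are invented for the example): states and stimuli are random bipolar hypervectors, the transition table is a superposition of elementwise-bound triples, and readout is nearest-neighbor cleanup against the state codebook.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10_000                                   # hypervector dimensionality

def hv():                                    # random bipolar hypervector
    return rng.choice([-1, 1], size=d)

states = {s: hv() for s in ["A", "B", "C"]}
stimuli = {u: hv() for u in ["x", "y"]}

# FSM transition table, e.g. (A, x) -> B, encoded as a superposition of
# bound triples: sum over  state * stimulus * next_state  (elementwise binding).
table = [("A", "x", "B"), ("B", "x", "C"), ("C", "y", "A")]
T = np.sum([states[s] * stimuli[u] * states[sn] for s, u, sn in table], axis=0)

def step(state, stim):
    query = T * states[state] * stimuli[stim]           # unbind: next state + noise
    scores = {s: query @ v for s, v in states.items()}  # cleanup against codebook
    return max(scores, key=scores.get)

print(step("A", "x"))   # expected: B
print(step("B", "x"))   # expected: C
```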
-
Toward Improving the Generation Quality of Autoregressive Slot VAEs. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Patrick Emami, Pan He, Sanjay Ranka, Anand Rangarajan.
Unconditional scene inference and generation are challenging to learn jointly with a single compositional model. Despite encouraging progress on models that extract object-centric representations (“slots”) from images, unconditional generation of scenes from slots has received less attention. This is primarily because learning the multiobject relations necessary to imagine coherent scenes is difficult…
-
CA3 Circuit Model Compressing Sequential Information in Theta Oscillation and Replay. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Satoshi Kuroki, Kenji Mizuseki.
The hippocampus plays a critical role in the compression and retrieval of sequential information. During wakefulness, it achieves this through theta phase precession and theta sequences. Subsequently, during periods of sleep or rest, the compressed information reactivates through sharp-wave ripple events, manifesting as memory replay. However, how these sequential neuronal activities are generated…
-
Instance-Specific Model Perturbation Improves Generalized Zero-Shot Learning. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Guanyu Yang, Kaizhu Huang, Rui Zhang, Xi Yang.
Zero-shot learning (ZSL) refers to the design of predictive functions on new classes (unseen classes) of data that have never been seen during training. In a more practical scenario, generalized zero-shot learning (GZSL) requires predicting both seen and unseen classes accurately. In the absence of target samples, many GZSL models may overfit training data and are inclined to predict individuals as…
-
Learning Korobov Functions by Correntropy and Convolutional Neural Networks. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Zhiying Fang, Tong Mao, Jun Fan.
Combining information-theoretic learning with deep learning has gained significant attention in recent years, as it offers a promising approach to tackle the challenges posed by big data. However, the theoretical understanding of convolutional structures, which are vital to many structured deep learning models, remains incomplete. To partially bridge this gap, this letter aims to develop generalization…
-
Frequency Propagation: Multimechanism Learning in Nonlinear Physical Networks. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Vidyesh Rao Anisetti, Ananth Kandala, Benjamin Scellier, J. M. Schwarz.
We introduce frequency propagation, a learning algorithm for nonlinear physical networks. In a resistive electrical circuit with variable resistors, an activation current is applied at a set of input nodes at one frequency and an error current is applied at a set of output nodes at another frequency. The voltage response of the circuit to these boundary currents is the superposition of an activation…
-
Column Row Convolutional Neural Network: Reducing Parameters for Efficient Image Processing. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Seongil Im, Jae-Seung Jeong, Junseo Lee, Changhwan Shin, Jeong Ho Cho, Hyunsu Ju.
Recent advancements in deep learning have achieved significant progress by increasing the number of parameters in a given model. However, this comes at the cost of computing resources, prompting researchers to explore model compression techniques that reduce the number of parameters while maintaining or even improving performance. Convolutional neural networks (CNNs) have been recognized as more efficient…
-
An Overview of the Free Energy Principle and Related Research. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Zhengquan Zhang, Feng Xu.
The free energy principle and its corollary, the active inference framework, serve as theoretical foundations in the domain of neuroscience, explaining the genesis of intelligent behavior. This principle states that the processes of perception, learning, and decision making—within an agent—are all driven by the objective of “minimizing free energy,” evincing the following behaviors: learning and employing…
-
Mathematical Modeling of PI3K/Akt Pathway in Microglia. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Alireza Poshtkohi, John Wade, Liam McDaid, Junxiu Liu, Mark L. Dallas, Angela Bithell.
The motility of microglia involves intracellular signaling pathways that are predominantly controlled by changes in cytosolic Ca²⁺ and activation of PI3K/Akt (phosphoinositide-3-kinase/protein kinase B). In this letter, we develop a novel biophysical model for cytosolic Ca²⁺ activation of the PI3K/Akt pathway in microglia where Ca²⁺ influx is mediated by both P2Y purinergic receptors (P2YR) and P2X…
-
Object-Centric Scene Representations Using Active Inference. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Toon Van de Maele, Tim Verbelen, Pietro Mazzaglia, Stefano Ferraro, Bart Dhoedt.
Representing a scene and its constituent objects from raw sensory data is a core ability for enabling robots to interact with their environment. In this letter, we propose a novel approach for scene understanding, leveraging an object-centric generative model that enables an agent to infer object category and pose in an allocentric reference frame using active inference, a neuro-inspired framework…
-
Obtaining Lower Query Complexities Through Lightweight Zeroth-Order Proximal Gradient Algorithms. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Bin Gu, Xiyuan Wei, Hualin Zhang, Yi Chang, Heng Huang.
Zeroth-order (ZO) optimization is one key technique for machine learning problems where gradient calculation is expensive or impossible. Several variance-reduced ZO proximal algorithms have been proposed to speed up ZO optimization for nonsmooth problems, and all of them opted for the coordinated ZO estimator over the random ZO estimator when approximating the true gradient, since the former is…
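The two estimator families contrasted here can be sketched directly (a toy objective and smoothing radius `mu`, both invented for illustration): the coordinated estimator spends one finite difference per coordinate, while the random estimator uses a single random direction at far lower query cost but higher variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(w):                                   # toy nonsmooth objective
    return np.sum(np.abs(w)) + 0.5 * np.sum(w**2)

def zo_coordinate(f, w, mu=1e-4):
    """Coordinated ZO estimator: one finite difference per coordinate (2d queries)."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = 1.0
        g[i] = (f(w + mu * e) - f(w - mu * e)) / (2 * mu)
    return g

def zo_random(f, w, mu=1e-4):
    """Random ZO estimator: a single random direction (2 queries, higher variance)."""
    u = rng.normal(size=w.shape)
    return (f(w + mu * u) - f(w - mu * u)) / (2 * mu) * u

w = rng.normal(size=5)
print("coordinate ZO:", zo_coordinate(f, w).round(3))
print("random ZO    :", zo_random(f, w).round(3))
```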
-
Lateral Connections Improve Generalizability of Learning in a Simple Neural Network. Neural Comput. (IF 2.9), Pub Date: 2024-03-08. Garrett Crutcher.
To navigate the world around us, neural circuits rapidly adapt to their environment, learning generalizable strategies to decode information. When modeling these learning strategies, network models find the optimal solution to satisfy one task condition but fail when introduced to a novel task or even a different stimulus in the same space. In the experiments described in this letter, I investigate…
-
Active Learning for Discrete Latent Variable Models. Neural Comput. (IF 2.9), Pub Date: 2024-02-16. Aditi Jha, Zoe C. Ashwood, Jonathan W. Pillow.
Active learning seeks to reduce the amount of data required to fit the parameters of a model, thus forming an important class of techniques in modern machine learning. However, past work on active learning has largely overlooked latent variable models, which play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines. Here we address this gap by proposing…
-
Evidence for Multiscale Multiplexed Representation of Visual Features in EEG. Neural Comput. (IF 2.9), Pub Date: 2024-02-16. Hamid Karimi-Rouzbahani.
Distinct neural processes such as sensory and memory processes are often encoded over distinct timescales of neural activations. Animal studies have shown that this multiscale coding strategy is also implemented for individual components of a single process, such as individual features of a multifeature stimulus in sensory coding. However, the generalizability of this encoding strategy to the human…
-
Quantifying and Maximizing the Information Flux in Recurrent Neural Networks. Neural Comput. (IF 2.9), Pub Date: 2024-02-16. Claus Metzner, Marius E. Yamakou, Dennis Voelkl, Achim Schilling, Patrick Krauss.
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information $I[\vec{x}(t), \vec{x}(t+1)]$ between subsequent system states $\vec{x}$. Although previous studies have shown that $I$ depends on the statistics of the network’s connection weights, it is unclear how to maximize $I$ systematically and how to quantify the flux…
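This quantity can be estimated empirically for a tiny free-running stochastic binary RNN (a plug-in histogram estimate over joint states; the network size, weights, and run length are illustrative, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3                                       # neurons -> 2**n joint states
W = rng.normal(scale=1.5, size=(n, n))      # recurrent weights
b = rng.normal(scale=0.5, size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def idx(state):                             # binary state vector -> integer index
    return int("".join(map(str, state)), 2)

# Simulate the free-running network and histogram consecutive state pairs.
T = 100_000
s = rng.integers(0, 2, size=n)
counts = np.zeros((2**n, 2**n))
for _ in range(T):
    p = sigmoid(W @ s + b)
    s_next = (rng.random(n) < p).astype(int)
    counts[idx(s), idx(s_next)] += 1
    s = s_next

# Plug-in estimate of I[x(t), x(t+1)].
p_joint = counts / counts.sum()
p_t = p_joint.sum(axis=1, keepdims=True)
p_t1 = p_joint.sum(axis=0, keepdims=True)
nz = p_joint > 0
I = np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_t @ p_t1)[nz]))
print(f"information flux I ≈ {I:.3f} bits")
```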
-
Learning Only on Boundaries: A Physics-Informed Neural Operator for Solving Parametric Partial Differential Equations in Complex Geometries. Neural Comput. (IF 2.9), Pub Date: 2024-02-16. Zhiwei Fang, Sifan Wang, Paris Perdikaris.
Recently, deep learning surrogates and neural operators have shown promise in solving partial differential equations (PDEs). However, they often require a large amount of training data and are limited to bounded domains. In this work, we present a novel physics-informed neural operator method to solve parameterized boundary value problems without labeled data. By reformulating the PDEs into boundary…
-
Advantages of Persistent Cohomology in Estimating Animal Location From Grid Cell Population Activity. Neural Comput. (IF 2.9), Pub Date: 2024-02-16. Daisuke Kawahara, Shigeyoshi Fujisawa.
Many cognitive functions are represented as cell assemblies. In the case of spatial navigation, the population activity of place cells in the hippocampus and grid cells in the entorhinal cortex represents self-location in the environment. The brain cannot directly observe self-location information in the environment. Instead, it relies on sensory information and memory to estimate self-location. Therefore…
-
Cooperativity, Information Gain, and Energy Cost During Early LTP in Dendritic Spines. Neural Comput. (IF 2.9), Pub Date: 2024-01-18. Jan Karbowski, Paulina Urban.
We investigate a mutual relationship between information and energy during the early phase of LTP induction and maintenance in a large-scale system of mutually coupled dendritic spines, with discrete internal states and probabilistic dynamics, within the framework of nonequilibrium stochastic thermodynamics. In order to analyze this computationally intractable stochastic multidimensional system, we…
-
Q&A Label Learning. Neural Comput. (IF 2.9), Pub Date: 2023-12-15. Kota Kawamoto, Masato Uchida.
Assigning labels to instances is crucial for supervised machine learning. In this letter, we propose a novel annotation method, Q&A labeling, which involves a question generator that asks questions about the labels of the instances to be assigned and an annotator that answers the questions and assigns the corresponding labels to the instances. We derived a generative model of labels assigned according…
-
Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput. (IF 2.9), Pub Date: 2023-12-15. Joan Gort.
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology,…
-
Efficient Decoding of Large-Scale Neural Population Responses With Gaussian-Process Multiclass Regression. Neural Comput. (IF 2.9), Pub Date: 2023-12-15. C. Daniel Greenidge, B. Scholl, Jacob L. Yates, Jonathan W. Pillow.
Neural decoding methods provide a powerful tool for quantifying the information content of neural population codes and the limits imposed by correlations in neural activity. However, standard decoding methods are prone to overfitting and scale poorly to high-dimensional settings. Here, we introduce a novel decoding method to overcome these limitations. Our approach, the gaussian process multiclass…
-
Cocaine Use Prediction With Tensor-Based Machine Learning on Multimodal MRI Connectome Data. Neural Comput. (IF 2.9), Pub Date: 2024-01-01. Anru R. Zhang, Ryan P. Bell, Chen An, Runshi Tang, Shana A. Hall, Cliburn Chan, Kareem Al-Khalil, Christina S. Meade.
This letter considers the use of machine learning algorithms for predicting cocaine use based on magnetic resonance imaging (MRI) connectomic data. The study used functional MRI (fMRI) and diffusion MRI (dMRI) data collected from 275 individuals, which was then parcellated into 246 regions of interest (ROIs) using the Brainnetome atlas. After data preprocessing, the data sets were transformed into…
-
Active Predictive Coding: A Unifying Neural Model for Active Perception, Compositional Learning, and Hierarchical Planning. Neural Comput. (IF 2.9), Pub Date: 2024-01-01. Rajesh P. N. Rao, Dimitrios C. Gklezakos, Vishwas Sathish.
There is growing interest in predictive coding as a model of how the brain learns through predictions and prediction errors. Predictive coding models have traditionally focused on sensory coding and perception. Here we introduce active predictive coding (APC) as a unifying model for perception, action, and cognition. The APC model addresses important open problems in cognitive science and AI, including…
-
Modeling the Role of Contour Integration in Visual Inference. Neural Comput. (IF 2.9), Pub Date: 2024-01-01. Salman Khan, Alexander Wong, Bryan Tripp.
Under difficult viewing conditions, the brain’s visual system uses a variety of recurrent modulatory mechanisms to augment feedforward processing. One resulting phenomenon is contour integration, which occurs in the primary visual (V1) cortex and strengthens neural responses to edges if they belong to a larger smooth contour. Computational models have contributed to an understanding of the circuit…
-
The Limiting Dynamics of SGD: Modified Loss, Phase-Space Oscillations, and Anomalous Diffusion. Neural Comput. (IF 2.9), Pub Date: 2024-01-01. Daniel Kunin, Javier Sagastuy-Brena, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins.
In this work, we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). As observed previously, long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance traveled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction…
-
Performance Evaluation of Matrix Factorization for fMRI Data. Neural Comput. (IF 2.9), Pub Date: 2024-01-01. Yusuke Endo, Koujin Takeda.
A hypothesis in the study of the brain is that sparse coding is realized in the information representation of external stimuli, which has recently been confirmed experimentally for visual stimuli. However, unlike specific functional regions in the brain, sparse coding in information processing across the whole brain has not been sufficiently clarified. In this study, we investigate the validity of sparse…
-
Synchronization and Clustering in Complex Quadratic Networks. Neural Comput. (IF 2.9), Pub Date: 2023-12-05. Anca Rǎdulescu, Danae Evans, Amani-Dasia Augustin, Anthony Cooper, Johan Nakuci, Sarah Muldoon.
Synchronization and clustering are well studied in the context of networks of oscillators, such as neuronal networks. However, this relationship is notoriously difficult to approach mathematically in natural, complex networks. Here, we aim to understand it in a canonical framework, using complex quadratic node dynamics, coupled in networks that we call complex quadratic networks (CQNs). We review previously…
-
Adaptive Filter Model of Cerebellum for Biological Muscle Control With Spike Train Inputs. Neural Comput. (IF 2.9), Pub Date: 2023-11-07. Emma Wilson.
Prior applications of the cerebellar adaptive filter model have included a range of tasks within simulated and robotic systems. However, this has been limited to systems driven by continuous signals. Here, the adaptive filter model of the cerebellum is applied to the control of a system driven by spiking inputs by considering the problem of controlling muscle force. The performance of the standard…
-
Predictive Coding as a Neuromorphic Alternative to Backpropagation: A Critical Evaluation. Neural Comput. (IF 2.9), Pub Date: 2023-11-07. Umais Zahid, Qinghai Guo, Zafeirios Fountas.
Backpropagation has rapidly become the workhorse credit assignment algorithm for modern deep learning methods. Recently, modified forms of predictive coding (PC), an algorithm with origins in computational neuroscience, have been shown to yield parameter updates approximately or exactly equal to those of backpropagation. Due to this connection, it has been suggested that PC can act as an alternative…
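The mechanics underlying this comparison can be sketched for a single generative layer (a hedged toy version, not the paper's evaluated variants: activities relax by gradient descent on the prediction-error energy, then weights take a local, Hebbian-like step).

```python
import numpy as np

rng = np.random.default_rng(4)

# One hidden layer predicting the data layer: x0 ≈ W f(x1).
W = rng.normal(scale=0.1, size=(8, 4))
f = np.tanh
df = lambda z: 1 - np.tanh(z) ** 2

def pc_step(x0, W, n_infer=50, lr_x=0.1, lr_w=0.01):
    x1 = np.zeros(W.shape[1])               # latent activities, inferred per input
    for _ in range(n_infer):
        e0 = x0 - W @ f(x1)                 # prediction error on the data layer
        x1 += lr_x * (df(x1) * (W.T @ e0))  # descend the energy E = ||e0||^2 / 2
    e0 = x0 - W @ f(x1)
    W += lr_w * np.outer(e0, f(x1))         # local, Hebbian-like weight update
    return W, e0

x0 = rng.normal(size=8)
for epoch in range(200):
    W, e0 = pc_step(x0, W)
print("residual prediction error:", np.linalg.norm(e0).round(4))
```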
-
Robustness to Transformations Across Categories: Is Robustness Driven by Invariant Neural Representations? Neural Comput. (IF 2.9), Pub Date: 2023-11-07. Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad, Sharon Gilad-Gutnick, Shlomit Ben-Ami, Pawan Sinha.
Deep convolutional neural networks (DCNNs) have demonstrated impressive robustness to recognize objects under transformations (e.g., blur or noise) when these transformations are included in the training set. A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. However, to what extent this hypothesis holds…
-
Training a Hyperdimensional Computing Classifier Using a Threshold on Its Confidence. Neural Comput. (IF 2.9), Pub Date: 2023-11-07. Laura Smets, Werner Van Leekwijck, Ing Jyh Tsang, Steven Latré.
Hyperdimensional computing (HDC) has become popular for lightweight and energy-efficient machine learning, suitable for wearable Internet-of-Things devices and near-sensor or on-device processing. HDC is computationally less complex than traditional deep learning algorithms and achieves moderate to good classification performance. This letter proposes to extend the training procedure in HDC by taking…
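A caricature of the proposed extension, under heavy assumptions (a random-projection encoder, cosine similarity, and a margin threshold `threshold`, all invented here rather than taken from the letter): class hypervectors are updated not only on misclassification but also when the confidence margin falls below the threshold.

```python
import numpy as np

rng = np.random.default_rng(9)
d, n_classes, n_feat = 10_000, 3, 16
proj = rng.normal(size=(d, n_feat))

def encode(x):                                 # toy encoder: sign of a random projection
    return np.sign(proj @ x)

# Synthetic data: three gaussian clusters.
means = rng.normal(scale=2.0, size=(n_classes, n_feat))
y = rng.integers(0, n_classes, size=300)
X = means[y] + rng.normal(size=(300, n_feat))
H = np.array([encode(x) for x in X])

# Initial class hypervectors: bundle (sum) the encodings of each class.
C = np.zeros((n_classes, d))
for c in range(n_classes):
    C[c] = H[y == c].sum(axis=0)

# Retraining pass: update classes when the prediction is wrong OR the
# confidence margin falls below the threshold.
threshold = 0.05
for h, label in zip(H, y):
    sims = (C @ h) / (np.linalg.norm(C, axis=1) * np.linalg.norm(h) + 1e-12)
    pred = int(np.argmax(sims))
    margin = sims[label] - max(s for i, s in enumerate(sims) if i != label)
    if pred != label or margin < threshold:
        C[label] += h                          # reinforce the true class
        if pred != label:
            C[pred] -= h                       # and penalize the confused class

acc = np.mean([np.argmax(C @ h) == label for h, label in zip(H, y)])
print(f"training accuracy after one threshold pass: {acc:.2%}")
```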
-
Generalized Low-Rank Update: Model Parameter Bounds for Low-Rank Training Data Modifications. Neural Comput. (IF 2.9), Pub Date: 2023-10-16. Hiroyuki Hanada, Noriaki Hashimoto, Kouichi Taji, Ichiro Takeuchi.
In this study, we have developed an incremental machine learning (ML) method that efficiently obtains the optimal model when a small number of instances or features are added or removed. This problem holds practical importance in model selection, such as cross-validation (CV) and feature selection. Among the class of ML methods known as linear estimators, there exists an efficient model update framework…
-
Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs. Neural Comput. (IF 2.9), Pub Date: 2023-10-10. Mikail Khona, Sarthak Chandra, Joy J. Ma, Ila R. Fiete.
Recurrent neural networks (RNNs) are often used to model circuits in the brain and can solve a variety of difficult computational problems requiring memory, error correction, or selection (Hopfield, 1982; Maass et al., 2002; Maass, 2011). However, fully connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (about 0.1%). Motivated by the neocortex, where…
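The kind of spatial constraint described here can be sketched by masking a dense weight matrix with a distance rule (units placed on a ring with short-range connections only; the radius and sizes are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
# Place units on a 1D ring and keep only short-range connections.
pos = np.arange(n)
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, n - dist)                 # ring (wrap-around) distance
radius = 5
mask = (dist <= radius) & (dist > 0)

# Spatially constrained sparse recurrent weights.
W = rng.normal(scale=1.0 / np.sqrt(mask.sum(1).mean()), size=(n, n)) * mask
print(f"connection density: {mask.mean():.3%}")   # vs 100% for a fully connected RNN
```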
-
A Tutorial on the Spectral Theory of Markov Chains. Neural Comput. (IF 2.9), Pub Date: 2023-10-10. Eddie Seabrook, Laurenz Wiskott.
Markov chains are a class of probabilistic models that have achieved widespread application in the quantitative sciences. This is in part due to their versatility, but is compounded by the ease with which they can be probed analytically. This tutorial provides an in-depth introduction to Markov chains and explores their connection to graphs and random walks. We use tools from linear algebra and graph…
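Two of the central spectral objects in this area can be computed directly (the transition matrix below is an arbitrary small example): the stationary distribution is the left eigenvector of the transition matrix for eigenvalue 1, and the second-largest eigenvalue modulus governs the mixing rate.

```python
import numpy as np

# Row-stochastic transition matrix of a small ergodic chain.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])

# Stationary distribution = left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(evals - 1.0))
pi = np.real(evecs[:, i]); pi /= pi.sum()
print("stationary distribution:", pi.round(4))

# The second-largest eigenvalue modulus controls how fast the chain mixes.
slem = sorted(np.abs(evals))[-2]
print("SLEM (mixing rate):", round(slem, 4))
```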
-
Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies. Neural Comput. (IF 2.9), Pub Date: 2023-10-10. Yang Shen, Sanjoy Dasgupta, Saket Navlakha.
Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) serves an important function in the brain; however,…
-
Self-Organization of Nonlinearly Coupled Neural Fluctuations Into Synergistic Population Codes. Neural Comput. (IF 2.9), Pub Date: 2023-10-10. Hengyuan Ma, Yang Qi, Pulin Gong, Jie Zhang, Wen-lian Lu, Jianfeng Feng.
Neural activity in the brain exhibits correlated fluctuations that may strongly influence the properties of neural population coding. However, how such correlated neural fluctuations may arise from the intrinsic neural circuit dynamics and subsequently affect the computational properties of neural population activity remains poorly understood. The main difficulty lies in resolving the nonlinear coupling…
-
Optimal Feedback Control for the Proportion of Energy Cost in the Upper-Arm Reaching Movement. Neural Comput. (IF 2.9), Pub Date: 2023-09-19. Yoshiaki Taniai.
The minimum expected energy cost model, which has been proposed as one of the optimization principles for movement planning, can reproduce many characteristics of the human upper-arm reaching movement when signal-dependent noise and the co-contraction of antagonist muscles are considered. Regarding the optimization principles, discussion has been mainly based on feedforward control; however,…
-
Learning Intention-Aware Policies in Deep Reinforcement Learning. Neural Comput. (IF 2.9), Pub Date: 2023-09-08. T. Zhao, S. Wu, G. Li, Y. Chen, G. Niu, Masashi Sugiyama.
Deep reinforcement learning (DRL) provides an agent with an optimal policy so as to maximize cumulative rewards. The policy defined in DRL mainly depends on the state, historical memory, and policy model parameters. However, we humans usually take actions according to our own intentions, such as moving fast or slow, besides the elements included in the traditional policy models. In order to make…
-
Grid Cell Percolation. Neural Comput. (IF 2.9), Pub Date: 2023-09-08. Yuri Dabaghian.
Grid cells play a principal role in enabling cognitive representations of ambient environments. The key property of these cells—the regular arrangement of their firing fields—is commonly viewed as a means for establishing spatial scales or encoding specific locations. However, using grid cells’ spiking outputs for deducing geometric orderliness proves to be a strenuous task due to fairly irregular…
-
Transfer Learning With Singular Value Decomposition of Multichannel Convolution Matrices. Neural Comput. (IF 2.9), Pub Date: 2023-09-08. Tak Shing Au Yeung, Ka Chun Cheung, Michael K. Ng, Simon See, Andy Yip.
The task of transfer learning using pretrained convolutional neural networks is considered. We propose a convolution-SVD layer to analyze the convolution operators with a singular value decomposition computed in the Fourier domain. Singular vectors extracted from the source domain are transferred to the target domain, whereas the singular values are fine-tuned with a target data set. In this way, dimension…
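For circular convolutions, the Fourier-domain SVD has a compact form: a 2D FFT of the kernel gives, at each frequency, a small per-channel transfer matrix, and pooling the per-frequency singular values yields the operator's singular values. A sketch under those assumptions (circular boundary conditions; the paper's layer may handle padding and fine-tuning differently):

```python
import numpy as np

rng = np.random.default_rng(6)
c_out, c_in, n = 4, 3, 8                     # channels and (square) spatial size
K = rng.normal(size=(c_out, c_in, n, n))     # conv kernel, zero-padded to n x n

# 2D FFT over the spatial axes gives a c_out x c_in transfer matrix per
# frequency; the circular convolution's singular values are the union of
# the per-frequency singular values.
Kf = np.fft.fft2(K, axes=(-2, -1))           # (c_out, c_in, n, n)
svals = []
for u in range(n):
    for v in range(n):
        svals.append(np.linalg.svd(Kf[:, :, u, v], compute_uv=False))
svals = np.concatenate(svals)
print("largest singular value of the conv operator:", svals.max().round(4))
```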
-
Composite Optimization Algorithms for Sigmoid Networks. Neural Comput. (IF 2.9), Pub Date: 2023-08-07. Huixiong Chen, Qi Ye.
In this letter, we use composite optimization algorithms to solve sigmoid networks. We equivalently transform the sigmoid networks into a convex composite optimization problem and propose composite optimization algorithms based on the linearized proximal algorithms and the alternating direction method of multipliers. Under the assumptions of the weak sharp minima and the regularity condition, the algorithm…
-
On an Interpretation of ResNets via Gate-Network Control. Neural Comput. (IF 2.9), Pub Date: 2023-08-07. Changcun Huang.
This letter first constructs a typical solution of ResNets for multicategory classification based on the idea of the gate control of LSTMs, from which a general interpretation of the ResNet architecture is given and the performance mechanism is explained. We also use more solutions to further demonstrate the generality of that interpretation. The classification result is then extended to the universal-approximation…
-
Mirror Descent of Hopfield Model. Neural Comput. (IF 2.9), Pub Date: 2023-08-07. Hyungjoon Soh, Dongyeob Kim, Juno Hwang, Junghyo Jo.
Mirror descent is an elegant optimization technique that leverages a dual space of parametric models to perform gradient descent. While originally developed for convex optimization, it has increasingly been applied in the field of machine learning. In this study, we propose a novel approach for using mirror descent to initialize the parameters of neural networks. Specifically, we demonstrate that by…
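The dual-space step that defines mirror descent can be illustrated with the classic negative-entropy mirror map on the probability simplex, which turns gradient descent into a multiplicative (exponentiated-gradient) update; this shows the mechanism only, not the paper's initialization scheme for Hopfield models.

```python
import numpy as np

# Mirror descent on the probability simplex with the negative-entropy
# mirror map, which yields the exponentiated-gradient update.
def mirror_descent(grad, x0, lr=0.1, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-lr * grad(x))        # gradient step in the dual space
        x /= x.sum()                         # map back onto the simplex
    return x

# Example: minimize <c, x> over the simplex (optimum: all mass on argmin c).
c = np.array([0.8, 0.3, 0.5, 0.9])
x = mirror_descent(lambda x: c, np.full(4, 0.25))
print(x.round(3))
```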
-
A Noise-Based Novel Strategy for Faster SNN Training. Neural Comput. (IF 2.9), Pub Date: 2023-08-07. Chunming Jiang, Yilei Zhang.
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong biological plausibility. Optimization of SNNs is a challenging task. Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have advantages and limitations. ANN-to-SNN conversion requires a long inference time to approximate the accuracy of…
-
Mean-Field Approximations With Adaptive Coupling for Networks With Spike-Timing-Dependent Plasticity. Neural Comput. (IF 2.9), Pub Date: 2023-08-07. Benoit Duchet, Christian Bick, Áine Byrne.
Understanding the effect of spike-timing-dependent plasticity (STDP) is key to elucidating how neural networks change over long timescales and to design interventions aimed at modulating such networks in neurological disorders. However, progress is restricted by the significant computational cost associated with simulating neural network models with STDP and by the lack of low-dimensional description…
-
Exploring Trade-Offs in Spiking Neural Networks. Neural Comput. (IF 2.9), Pub Date: 2023-07-31. Florian Bacho, Dominique Chu.
Spiking neural networks (SNNs) have emerged as a promising alternative to traditional deep neural networks for low-power computing. However, the effectiveness of SNNs is not solely determined by their performance but also by their energy consumption, prediction speed, and robustness to noise. The recent method Fast & Deep, along with others, achieves fast and energy-efficient computation by constraining…
-
Maximal Memory Capacity Near the Edge of Chaos in Balanced Cortical E-I Networks. Neural Comput. (IF 2.9), Pub Date: 2023-07-11. Takashi Kanamaru, Takao K. Hensch, Kazuyuki Aihara.
We examine the efficiency of information processing in a balanced excitatory and inhibitory (E-I) network during the developmental critical period, when network plasticity is heightened. A multimodule network composed of E-I neurons was defined, and its dynamics were examined by regulating the balance between their activities. When adjusting E-I activity, both transitive chaotic synchronization with…
-
Graph-Regularized Tensor Regression: A Domain-Aware Framework for Interpretable Modeling of Multiway Data on Graphs. Neural Comput. (IF 2.9), Pub Date: 2023-07-11. Yao Lei Xu, Kriton Konstantinidis, Danilo P. Mandic.
Modern data analytics applications are increasingly characterized by exceedingly large and multidimensional data sources. This represents a challenge for traditional machine learning models, as the number of model parameters needed to process such data grows exponentially with the data dimensions, an effect known as the curse of dimensionality. Recently, tensor decomposition (TD) techniques have shown…
-
Optimal Burstiness in Populations of Spiking Neurons Facilitates Decoding of Decreases in Tonic Firing. Neural Comput. (IF 2.9), Pub Date: 2023-07-11. Sylvia C. L. Durian, Mark Agrios, Gregory W. Schwartz.
A stimulus can be encoded in a population of spiking neurons through any change in the statistics of the joint spike pattern, yet we commonly summarize single-trial population activity by the summed spike rate across cells: the population peristimulus time histogram (pPSTH). For neurons with a low baseline spike rate that encode a stimulus with a rate increase, this simplified representation works…
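The pPSTH summary is easy to reproduce on synthetic data (illustrative rates, population size, and bin width; a stimulus encoded purely as a decrease in tonic firing, the case the paper targets):

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells, T, dt = 50, 2.0, 0.010              # population, trial length (s), bin (s)
bins = np.arange(0.0, T + dt, dt)
t = bins[:-1]

# Tonic firing at 20 Hz that decreases to 5 Hz during a stimulus (0.5-1.0 s).
rate = np.where((t >= 0.5) & (t < 1.0), 5.0, 20.0)

# Poisson spike counts per cell and bin; the pPSTH sums across the population.
counts = rng.poisson(rate * dt, size=(n_cells, len(t)))
ppsth = counts.sum(axis=0) / (n_cells * dt)  # population rate in Hz
print("baseline ≈", ppsth[t < 0.5].mean().round(1), "Hz;",
      "stimulus ≈", ppsth[(t >= 0.5) & (t < 1.0)].mean().round(1), "Hz")
```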
-
Attention in a Family of Boltzmann Machines Emerging From Modern Hopfield Networks. Neural Comput. (IF 2.9), Pub Date: 2023-07-11. Toshihiro Ota, Ryo Karakida.
Hopfield networks and Boltzmann machines (BMs) are fundamental energy-based neural network models. Recent studies on modern Hopfield networks have broadened the class of energy functions and led to a unified perspective on general Hopfield networks, including an attention module. In this letter, we consider the BM counterparts of modern Hopfield networks using the associated energy functions and study…
-
Posterior Covariance Information Criterion for Weighted Inference. Neural Comput. (IF 2.9), Pub Date: 2023-06-12. Yukito Iba, Keisuke Yano.
For predictive evaluation based on quasi-posterior distributions, we develop a new information criterion, the posterior covariance information criterion (PCIC). PCIC generalizes the widely applicable information criterion (WAIC) so as to effectively handle predictive scenarios where likelihoods for the estimation and the evaluation of the model may be different. A typical example of such scenarios…
-
Dynamic Modeling of Spike Count Data With Conway-Maxwell Poisson Variability. Neural Comput. (IF 2.9), Pub Date: 2023-06-12. Ganchao Wei, Ian H. Stevenson.
In many areas of the brain, neural spiking activity covaries with features of the external world, such as sensory stimuli or an animal's movement. Experimental findings suggest that the variability of neural activity changes over time and may provide information about the external world beyond the information provided by the average neural activity. To flexibly track time-varying neural response properties…
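The Conway-Maxwell-Poisson family referenced in the title adds one shape parameter ν to the Poisson, P(y) ∝ λ^y / (y!)^ν, so ν < 1 gives overdispersed and ν > 1 underdispersed spike counts. A truncated-sum sketch of the pmf and the resulting Fano factors (λ, ν, and the truncation point are arbitrary):

```python
import numpy as np

def cmp_pmf(y_max, lam, nu):
    """Conway-Maxwell-Poisson pmf P(y) ∝ lam**y / (y!)**nu, truncated at y_max."""
    y = np.arange(y_max + 1)
    logp = y * np.log(lam) - nu * np.cumsum(np.log(np.maximum(y, 1)))
    p = np.exp(logp - logp.max())
    return p / p.sum()

# nu < 1: overdispersed; nu = 1: Poisson; nu > 1: underdispersed spike counts.
for nu in (0.5, 1.0, 2.0):
    p = cmp_pmf(60, lam=5.0, nu=nu)
    y = np.arange(len(p))
    m = (y * p).sum(); v = ((y - m) ** 2 * p).sum()
    print(f"nu={nu}: Fano factor = {v / m:.2f}")
```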
-
Deep Clustering With a Constraint for Topological Invariance Based on Symmetric InfoNCE. Neural Comput. (IF 2.9), Pub Date: 2023-06-12. Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori.
We consider the scenario of deep clustering, in which the available prior knowledge is limited. In this scenario, few existing state-of-the-art deep clustering methods can perform well for both noncomplex topology and complex topology data sets. To address the problem, we propose a constraint utilizing symmetric InfoNCE, which helps an objective of the deep clustering method in the scenario of training…
-
Optimization and Learning With Randomly Compressed Gradient Updates. Neural Comput. (IF 2.9), Pub Date: 2023-06-12. Zhanliang Huang, Yunwen Lei, Ata Kabán.
Gradient descent methods are simple and efficient optimization algorithms with widespread applications. To handle high-dimensional problems, we study compressed stochastic gradient descent (SGD) with low-dimensional gradient updates. We provide a detailed analysis in terms of both optimization rates and generalization rates. To this end, we develop uniform stability bounds for CompSGD for both smooth…
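The idea of low-dimensional gradient updates can be sketched with a fresh Gaussian random projection per step (a toy quadratic objective; the lift-back step is unbiased because E[AᵀA] = I, though the paper's exact CompSGD scheme may differ):

```python
import numpy as np

rng = np.random.default_rng(8)
d, k = 200, 20                                # parameter and compressed dimensions
w_star = rng.normal(size=d)

def grad(w):                                  # gradient of ||w - w*||^2
    return 2.0 * (w - w_star)

w = np.zeros(d)
lr = 0.05
for _ in range(1500):
    A = rng.normal(size=(k, d)) / np.sqrt(k)  # fresh random projection each step
    g_low = A @ grad(w)                       # k-dimensional compressed gradient
    w -= lr * (A.T @ g_low)                   # lift back to d dimensions
print("distance to optimum:", np.linalg.norm(w - w_star).round(3))
```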
-
Conductance-Based Phenomenological Nonspiking Model: A Dimensionless and Simple Model That Reliably Predicts the Effects of Conductance Variations on Nonspiking Neuronal Dynamics. Neural Comput. (IF 2.9), Pub Date: 2023-06-12. Loïs Naudin, Laetitia Raison-Aubry, Laure Buhry.
The modeling of single neurons has proven to be an indispensable tool in deciphering the mechanisms underlying neural dynamics and signal processing. In that sense, two types of single-neuron models are extensively used: the conductance-based models (CBMs) and the so-called phenomenological models, which are often opposed in their objectives and their use. Indeed, the first type aims to describe the…
-
Efficient Decoding of Compositional Structure in Holistic Representations. Neural Comput. (IF 2.9), Pub Date: 2023-05-15. Denis Kleyko, Connor Bybee, Ping-Chen Huang, Christopher J. Kymn, Bruno A. Olshausen, E. Paxon Frady, Friedrich T. Sommer.
We investigate the task of retrieving information from compositional distributed representations formed by hyperdimensional computing/vector symbolic architectures and present novel techniques that achieve new information rate bounds. First, we provide an overview of the decoding techniques that can be used to approach the retrieval task. The techniques are categorized into four groups. We then evaluate…
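A baseline instance of this retrieval task can be sketched in a few lines (random bipolar vectors, elementwise binding, and nearest-neighbor cleanup against an item memory; the paper's novel techniques go well beyond this simple decoder):

```python
import numpy as np

rng = np.random.default_rng(10)
d, n_pairs = 10_000, 5
codebook = rng.choice([-1, 1], size=(26, d))   # item memory: one hypervector per symbol
keys = rng.choice([-1, 1], size=(n_pairs, d))  # role (key) vectors

# Compositional representation: superposition of role-filler bindings.
fillers = rng.integers(0, 26, size=n_pairs)
s = np.sum(keys * codebook[fillers], axis=0)

# Retrieval: unbind each role, then clean up against the item memory.
decoded = [int(np.argmax(codebook @ (s * keys[i]))) for i in range(n_pairs)]
print("recovered:", decoded == list(fillers))
```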
-
Automatic Hyperparameter Tuning in Sparse Matrix Factorization. Neural Comput. (IF 2.9), Pub Date: 2023-05-12. Ryota Kawasumi, Koujin Takeda.
We study the problem of hyperparameter tuning in sparse matrix factorization under a Bayesian framework. In prior work, an analytical solution of sparse matrix factorization with Laplace prior was obtained by a variational Bayes method under several approximations. Based on this solution, we propose a novel numerical method of hyperparameter tuning by evaluating the zero point of the normalization…
-
Scalable Variational Inference for Low-Rank Spatiotemporal Receptive Fields. Neural Comput. (IF 2.9), Pub Date: 2023-05-12. Lea Duncker, Kiersten M. Ruda, Greg D. Field, Jonathan W. Pillow.
An important problem in systems neuroscience is to characterize how a neuron integrates sensory inputs across space and time. The linear receptive field provides a mathematical characterization of this weighting function and is commonly used to quantify neural response properties and classify cell types. However, estimating receptive fields is difficult in settings with limited data and correlated…
-
Deep Learning Solution of the Eigenvalue Problem for Differential Operators. Neural Comput. (IF 2.9), Pub Date: 2023-05-12. Ido Ben-Shaul, Leah Bar, Dalia Fishelov, Nir Sochen.
Solving the eigenvalue problem for differential operators is a common problem in many scientific fields. Classical numerical methods rely on intricate domain discretization and yield nonanalytic or nonsmooth approximations. We introduce a novel neural network–based solver for the eigenvalue problem of differential self-adjoint operators, where the eigenpairs are learned in an unsupervised end-to-end…