Reducing Catastrophic Forgetting With Associative Learning: A Lesson From Fruit Flies
Neural Computation (IF 2.9) Pub Date: 2023-10-10, DOI: 10.1162/neco_a_01615
Yang Shen, Sanjoy Dasgupta, Saket Navlakha

Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important role in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer, inputs (odors) are encoded using sparse, high-dimensional representations, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor's associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We prove theoretically that these two perceptron-like layers help reduce catastrophic forgetting compared to the original perceptron algorithm, under continual learning. We then show empirically on benchmark data sets that this simple and lightweight architecture outperforms other popular neural-inspired algorithms when also using a two-layer feedforward architecture. Overall, fruit flies evolved an efficient continual associative learning algorithm, and circuit mechanisms from neuroscience can be translated to improve machine computation.
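The two-layer circuit described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the authors' published code: the dimensions, sparsity level, learning rate, and the exact error-driven update rule are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer 1: a sparse random projection into a high-dimensional space,
# followed by a top-K winner-take-all step, yielding sparse codes that
# overlap little for different inputs. All sizes here are illustrative.
D_IN, D_HIGH, K = 20, 400, 20
proj = (rng.random((D_HIGH, D_IN)) < 0.1).astype(float)

def sparse_code(x):
    """Binary code with exactly K active units (highest projections win)."""
    h = proj @ x
    code = np.zeros(D_HIGH)
    code[np.argsort(h)[-K:]] = 1.0
    return code

# Layer 2: one weight row per output (valence). On a misclassification,
# only the synapses from currently active units to the *target* output
# are updated; every other weight stays frozen.
N_OUT = 2
W = np.zeros((N_OUT, D_HIGH))

def train_step(x, y, lr=0.1):
    code = sparse_code(x)
    if int(np.argmax(W @ code)) != y:
        W[y, code > 0] += lr  # modify only active->target synapses

# Toy continual task: two fixed "odors" with opposite valences.
odor_a = np.r_[np.ones(10), np.zeros(10)]
odor_b = np.r_[np.zeros(10), np.ones(10)]
for _ in range(5):
    train_step(odor_a, 0)
    train_step(odor_b, 1)

pred_a = int(np.argmax(W @ sparse_code(odor_a)))
pred_b = int(np.argmax(W @ sparse_code(odor_b)))
```

Because each odor activates a mostly distinct set of high-dimensional units, and learning touches only the active-to-target synapses, training on the second odor leaves the first odor's association largely intact.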




Updated: 2023-10-12