Attentive neural controlled differential equations for time-series classification and forecasting
Knowledge and Information Systems (IF 2.7) Pub Date: 2023-11-01, DOI: 10.1007/s10115-023-01977-5
Sheo Yon Jhin, Heejoo Shin, Sujie Kim, Seoyoung Hong, Minju Jo, Solhee Park, Noseong Park, Seungbeom Lee, Hwiyoung Maeng, Seungmin Jeon

Neural networks inspired by differential equations have proliferated over the past several years, of which neural ordinary differential equations (NODEs) and neural controlled differential equations (NCDEs) are two representative examples. In theory, NCDEs exhibit better representation learning capability for time-series data than NODEs. In particular, NCDEs are known to be well suited to processing irregular time-series data. Whereas NODEs have been successfully extended to adopt attention, methods for integrating attention into NCDEs have not yet been studied. To this end, we present attentive neural controlled differential equations (ANCDEs) for time-series classification and forecasting, in which dual NCDEs are used: one for generating attention values and the other for evolving hidden vectors for a downstream machine learning task. We conduct experiments on 5 real-world time-series datasets against 11 baselines. We also conduct experiments on irregular time series created by dropping some observed values. Our method consistently achieves the best accuracy in all cases by non-trivial margins. Our visualizations further show that the presented attention mechanism works as intended by focusing on crucial information.
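The dual-NCDE idea described above can be sketched with a simple Euler discretisation: an NCDE evolves a hidden state driven by the increments of the observed path, dz = f(z) dX, and here a second (attention) NCDE produces per-channel values in (0, 1) that gate the increments fed to the main NCDE. The snippet below is a minimal NumPy illustration under that reading, not the authors' implementation; all weight names, dimensions, and the read-out `W_out` are illustrative assumptions (a real system would use learned vector fields and an ODE/CDE solver with interpolated paths).

```python
import numpy as np

rng = np.random.default_rng(0)

d_x, d_h = 3, 8              # input-path and hidden dimensions (illustrative)
T = 50                       # number of Euler integration steps

# Toy observed path X(t): cumulative noise, shape (T+1, d_x)
X = np.cumsum(rng.normal(scale=0.1, size=(T + 1, d_x)), axis=0)
dX = np.diff(X, axis=0)      # path increments, shape (T, d_x)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Randomly initialised "vector fields": each maps a hidden state to a
# (d_h, d_x) matrix that multiplies a path increment (dz = f(z) dX).
W_att = rng.normal(scale=0.3, size=(d_h, d_h * d_x))
W_hid = rng.normal(scale=0.3, size=(d_h, d_h * d_x))
W_out = rng.normal(scale=0.3, size=(d_h, d_x))  # attention read-out

h_att = np.zeros(d_h)        # hidden state of the attention NCDE
h_hid = np.zeros(d_h)        # hidden state of the main NCDE

for k in range(T):
    # 1) Attention NCDE: driven by the raw path increments.
    F_att = np.tanh(h_att @ W_att).reshape(d_h, d_x)
    h_att = h_att + F_att @ dX[k]
    # 2) Attention values in (0, 1), one per input channel.
    a = sigmoid(h_att @ W_out)
    # 3) Main NCDE: driven by the attention-gated increments.
    F_hid = np.tanh(h_hid @ W_hid).reshape(d_h, d_x)
    h_hid = h_hid + F_hid @ (a * dX[k])

# h_hid would feed a downstream classifier or forecaster.
print(h_hid.shape, float(a.min()), float(a.max()))
```

Because the gate multiplies the path increments rather than the hidden state, channels with attention near zero contribute little to the main NCDE's evolution, which is how the mechanism can "focus on crucial information" in the abstract's terms.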




Updated: 2023-11-01