Multimodal joint prediction of traffic spatial-temporal data with graph sparse attention mechanism and bidirectional temporal convolutional network
Advanced Engineering Informatics (IF 8.8) Pub Date: 2024-04-18, DOI: 10.1016/j.aei.2024.102533
Dongran Zhang, Jiangnan Yan, Kemal Polat, Adi Alhudhaif, Jun Li

Traffic flow prediction plays a crucial role in the management and operation of urban transportation systems. While extensive research has been conducted on prediction for individual transportation modes, relatively little work addresses joint prediction across different transportation modes. Furthermore, existing multimodal traffic joint modeling methods often lack flexibility in spatial–temporal feature extraction. To address these issues, we propose a method called Graph Sparse Attention Mechanism with Bidirectional Temporal Convolutional Network (GSABT) for multimodal traffic spatial–temporal joint prediction. First, we use a multimodal graph multiplied by self-attention weights to capture spatial local features, and then employ the Top-U sparse attention mechanism to obtain spatial global features. Second, we utilize a bidirectional temporal convolutional network to enhance the temporal feature correlation between the output and input data, and extract inter-modal and intra-modal temporal features through the share-unique module. Finally, we design a multimodal joint prediction framework that can be flexibly extended along both the spatial and temporal dimensions. Extensive experiments conducted on three real-world datasets indicate that the proposed model consistently achieves state-of-the-art predictive performance.
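The Top-U sparse attention step described above can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the authors' implementation: it assumes "Top-U" means keeping only the U largest attention scores per query node and masking the rest before the softmax; the projection matrices here are random stand-ins for learned parameters, and all names (`top_u_sparse_attention`, `u`) are illustrative.

```python
import numpy as np

def top_u_sparse_attention(x, u=4, rng=None):
    """Toy sketch of Top-U sparse attention over node features.

    Assumption: "Top-U" keeps only the U largest attention scores
    per query row and masks the rest to -inf before the softmax.

    x: (n_nodes, d_model) node features for one time step.
    Returns: (n_nodes, d_model) sparsely attended features.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    # Random projections standing in for learned Wq, Wk, Wv (illustrative only).
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)            # (n, n) dense attention scores
    # Keep only the U largest scores in each row; mask the rest.
    u = min(u, n)
    thresh = np.sort(scores, axis=1)[:, -u][:, None]   # U-th largest per row
    masked = np.where(scores >= thresh, scores, -np.inf)
    # Numerically stable softmax over the surviving entries.
    weights = np.exp(masked - masked.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v
```

Masking before the softmax (rather than zeroing afterwards) keeps the surviving weights a proper probability distribution over each row's U retained neighbors, which is the usual way top-k attention sparsification is realized.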

Updated: 2024-04-18