An attention mechanism model based on positional encoding for the prediction of ship maneuvering motion in real sea state
Journal of Marine Science and Technology (IF 2.6) Pub Date: 2024-01-04, DOI: 10.1007/s00773-023-00978-x
Lei Dong , Hongdong Wang , Jiankun Lou

This paper proposes a positional-encoding-based attention mechanism model that quantifies the temporal correlation of ship maneuvering motion in order to predict future ship motion in a real sea state. To represent the temporal information of the sequential motion states, a positional encoding consisting of sine and cosine functions of different frequencies is chosen as the input of the model. First, the soundness of the model's improved architecture is validated on standard turning-test datasets of an unmanned surface vehicle. Then, the scaled dot-product attention mechanism model based on absolute positional encoding is compared with two other attention mechanism models that use different positional encodings and attention calculation methods, and its superiority is verified. Exhaustive experiments demonstrate that the model achieves its highest prediction accuracy when the input sequence length equals the output sequence length, and that the accuracy metric defined in this paper drops below 90% when the prediction length exceeds 45. Finally, the attention mechanism model is compared with an LSTM model over input sequences of different lengths to demonstrate that the attention mechanism model trains faster on long sequences.
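The two building blocks named in the abstract, sinusoidal positional encoding and scaled dot-product attention, follow the standard Transformer formulation; the sketch below is a minimal NumPy illustration of both, not the paper's implementation. The model dimension, the six motion channels, and the random input projection are illustrative assumptions.

import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    positions = np.arange(seq_len)[:, None]                 # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model // 2)
    angles = positions / np.power(10000.0, dims / d_model)  # (seq_len, d_model // 2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))  # numerically stable softmax
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Illustrative usage: a sequence of 45 motion states with 6 channels
# (e.g., surge, sway, heave, roll, pitch, yaw -- channel names assumed),
# projected to d_model = 16 with a random matrix for demonstration only.
rng = np.random.default_rng(0)
motion = rng.standard_normal((45, 6))
x = motion @ rng.standard_normal((6, 16)) + sinusoidal_positional_encoding(45, 16)
out = scaled_dot_product_attention(x, x, x)  # self-attention over the encoded sequence
print(out.shape)  # (45, 16)

Because each encoding dimension is a sinusoid of a fixed frequency, every time step receives a unique signature while relative offsets remain expressible as linear functions of the encoding, which is what allows the attention layers to quantify temporal correlation without recurrence.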



Updated: 2024-01-04