Sub-trajectory clustering with deep reinforcement learning
The VLDB Journal (IF 4.2) Pub Date: 2024-01-25, DOI: 10.1007/s00778-023-00833-w
Anqi Liang, Bin Yao, Bo Wang, Yinpei Liu, Zhida Chen, Jiong Xie, Feifei Li

Sub-trajectory clustering is a fundamental problem in many trajectory applications. Existing approaches usually divide the clustering procedure into two phases: segmenting trajectories into sub-trajectories and then clustering those sub-trajectories. However, this requires complex hand-crafted segmentation rules for each specific application, which makes the clustering results sensitive to the segmentation rules and limits their generality. To solve this problem, we propose a novel reinforcement learning (RL)-based algorithm that uses the clustering results to guide the segmentation. The novelty is that the segmentation and clustering components cooperate closely and continuously improve each other to yield better clustering results. To devise our RL-based algorithm, we model the procedure of trajectory segmentation as a Markov decision process (MDP) and apply Deep Q-Network (DQN) learning to train an RL model for the segmentation, achieving excellent clustering results. Experimental results on real datasets demonstrate the superior performance of the proposed RL-based approach over state-of-the-art methods.
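The abstract does not spell out the MDP design, so the following is only a minimal, hypothetical sketch of what such a segmentation MDP with DQN learning might look like. The state features (path length, chord length, and point count of the currently open sub-trajectory), the binary action space (extend vs. cut), and the reward (the change in a clustering-quality score after re-clustering) are all illustrative assumptions, not the paper's actual formulation.

```python
import random

import numpy as np
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 3, 2, 0.9  # action 0 = extend, 1 = cut (assumed)

class QNet(nn.Module):
    """Small MLP approximating Q(state, action) for the segmentation MDP."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_ACTIONS))

    def forward(self, x):
        return self.net(x)

def features(traj, start, t):
    """Hypothetical state: path length, chord length, and point count of
    the sub-trajectory that is currently open, i.e., traj[start..t]."""
    seg = traj[start:t + 1]
    path = float(np.linalg.norm(np.diff(seg, axis=0), axis=1).sum())
    chord = float(np.linalg.norm(seg[-1] - seg[0]))
    return torch.tensor([path, chord, float(t - start)], dtype=torch.float32)

def segment(traj, qnet, eps=0.1):
    """Walk along the trajectory; at each point choose extend/cut
    epsilon-greedily from the learned Q-values."""
    cuts, start = [], 0
    for t in range(1, len(traj)):
        state = features(traj, start, t)
        if random.random() < eps:
            action = random.randrange(N_ACTIONS)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())
        if action == 1:  # cut: close the current sub-trajectory here
            cuts.append(t)
            start = t
    return cuts

def dqn_update(qnet, opt, batch):
    """One DQN step on (s, a, r, s') transitions. The reward r is assumed
    to be the improvement in a clustering-quality score (e.g., silhouette)
    measured after re-clustering the sub-trajectories with the new cut."""
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.stack([b[3] for b in batch])
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * qnet(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

if __name__ == "__main__":
    qnet = QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    traj = np.cumsum(np.random.randn(60, 2), axis=0)  # toy 2-D random walk
    print("cut points:", segment(traj, qnet))
```

The point this sketch mirrors from the abstract is the feedback loop: each cut decision is rewarded by how much it improves the downstream clustering, so the segmentation policy and the clustering results improve each other over training rather than relying on fixed hand-crafted segmentation rules.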




Updated: 2024-01-25