Pessimistic value iteration for multi-task data sharing in Offline Reinforcement Learning
Artificial Intelligence (IF 14.4). Pub Date: 2023-11-20. DOI: 10.1016/j.artint.2023.104048
Chenjia Bai , Lingxiao Wang , Jianye Hao , Zhuoran Yang , Bin Zhao , Zhen Wang , Xuelong Li

Offline Reinforcement Learning (RL) has shown promising results in learning a task-specific policy from a fixed dataset. However, successful offline RL often relies heavily on the coverage and quality of the given dataset. In scenarios where the dataset for a specific task is limited, a natural approach is to improve offline RL with datasets from other tasks, namely, to conduct Multi-Task Data Sharing (MTDS). Nevertheless, directly sharing datasets from other tasks exacerbates the distribution shift in offline RL. In this paper, we propose an uncertainty-based MTDS approach that shares the entire dataset without data selection. Using ensemble-based uncertainty quantification, we perform pessimistic value iteration on the shared offline dataset, which provides a unified framework for single- and multi-task offline RL. We further provide a theoretical analysis showing that the optimality gap of our method depends only on the expected data coverage of the shared dataset, thus resolving the distribution-shift issue in data sharing. For evaluation, we release an MTDS benchmark and collect datasets from three challenging domains. The experimental results show that our algorithm outperforms previous state-of-the-art methods on challenging MTDS problems.
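
To make the core idea concrete, below is a minimal sketch of how ensemble-based uncertainty can drive a pessimistic Bellman target on a shared offline dataset. This is an illustration under assumed names and hyperparameters (QEnsemble, pessimistic_target, beta, etc.), not the authors' released implementation; the abstract does not specify these details.

```python
# Sketch: pessimistic value iteration with an ensemble-based uncertainty penalty.
# Transitions may come from any task's dataset (relabeled with the target task's
# reward), and the penalty keeps poorly covered transitions conservative.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """K independent Q-networks; their disagreement serves as an uncertainty proxy."""

    def __init__(self, obs_dim, act_dim, k=5, hidden=256):
        super().__init__()
        self.nets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            ) for _ in range(k)
        ])

    def forward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        # Shape (K, B): one Q-estimate per ensemble member per transition.
        return torch.stack([net(x).squeeze(-1) for net in self.nets], dim=0)


def pessimistic_target(q_ens, reward, done, next_obs, next_act,
                       gamma=0.99, beta=1.0):
    """Bellman target penalized by the ensemble standard deviation (a lower-confidence bound)."""
    with torch.no_grad():
        q_next = q_ens(next_obs, next_act)            # (K, B)
        mean, std = q_next.mean(0), q_next.std(0)     # std = epistemic uncertainty proxy
        v_pess = mean - beta * std                    # pessimistic value estimate
        return reward + gamma * (1.0 - done) * v_pess


def td_loss(q_ens, batch):
    """One update step over a batch drawn from the shared (multi-task) dataset."""
    target = pessimistic_target(q_ens, batch["reward"], batch["done"],
                                batch["next_obs"], batch["next_act"])
    q_pred = q_ens(batch["obs"], batch["act"])        # (K, B)
    # Each ensemble member regresses to the same pessimistic target.
    return ((q_pred - target.unsqueeze(0)) ** 2).mean()
```

The design intent of such a sketch is that transitions far from the shared data distribution produce large ensemble disagreement, so subtracting beta * std pushes their value estimates down. Shared data that is well covered contributes normally, while out-of-distribution data is discounted, which is one way to read the paper's claim that the optimality gap depends only on the expected data coverage of the shared dataset.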




Updated: 2023-11-22