3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments
International Journal of Intelligent Robotics and Applications, Pub Date: 2023-11-14, DOI: 10.1007/s41315-023-00302-1
Arindam Saha , Bibhas Chandra Dhara

Accurate perception with rapid response is fundamental for any autonomous vehicle to navigate safely. Light detection and ranging (LiDAR) sensors provide an accurate estimation of the surroundings in the form of 3D point clouds. Autonomous vehicles use LiDAR to perceive obstacles in the surroundings and feed this information to the control units responsible for collision avoidance and motion planning. In this work, we propose an obstacle estimation (i.e., detection and tracking) approach for autonomous vehicles or robots that carry a three-dimensional (3D) LiDAR and an inertial measurement unit and navigate in dynamic environments. The success of u-depth and restricted v-depth maps, computed from depth images, for obstacle estimation in the existing literature motivates us to explore the same techniques with LiDAR point clouds. The proposed system therefore computes u-depth and restricted v-depth representations from point clouds captured with the 3D LiDAR and estimates long-range obstacles using these multiple depth representations. Obstacle estimation from the proposed u-depth and restricted v-depth representations removes the need for some of the computationally expensive modules (e.g., ground plane segmentation and 3D clustering) used in existing obstacle detection approaches for 3D LiDAR point clouds. We track all static and dynamic obstacles while they remain in front of the autonomous vehicle and may obstruct its movement. We evaluate the performance of the proposed system on multiple open data sets of ground and aerial vehicles as well as on self-captured simulated data sets. We also evaluate the proposed system on data captured in real time with ground robots. The proposed method is faster than state-of-the-art (SoA) methods, while its performance in detecting dynamic obstacles and estimating their states is comparable with the SoA methods.
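To make the pipeline concrete, the following is a minimal, hypothetical Python/NumPy sketch of how u-depth and restricted v-depth maps might be derived from a LiDAR point cloud: the cloud is first projected into a spherical range image, the u-depth map counts points per depth bin in each image column, and the restricted v-depth map counts depths per image row over only those columns already flagged as containing an obstacle. The image resolution, vertical field of view, bin size, and obstacle threshold below are illustrative assumptions, not the authors' parameters, and this is not the paper's implementation.

# Hypothetical sketch of u-depth / restricted v-depth maps from a LiDAR point cloud.
# Resolution, field of view, bin size, and threshold are illustrative assumptions.
import numpy as np

def range_image(points, h=64, w=1024, max_range=80.0):
    """Project an (N, 3) point cloud into an h x w spherical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    fov_up, fov_down = np.radians(2.0), np.radians(-24.8)   # assumed vertical FoV
    v = ((fov_up - pitch) / (fov_up - fov_down) * (h - 1)).astype(int)
    img = np.full((h, w), np.inf)
    valid = (v >= 0) & (v < h) & (r > 0) & (r < max_range)
    np.minimum.at(img, (v[valid], u[valid]), r[valid])      # keep the nearest return per pixel
    return img

def u_depth(img, bin_size=0.5, max_range=80.0):
    """Column-wise depth histogram: rows are depth bins, columns are image columns."""
    n_bins = int(max_range / bin_size)
    h, w = img.shape
    hist = np.zeros((n_bins, w), dtype=int)
    for c in range(w):
        col = img[:, c]
        col = col[np.isfinite(col)]
        if col.size:
            idx = np.clip((col / bin_size).astype(int), 0, n_bins - 1)
            np.add.at(hist[:, c], idx, 1)
    return hist

def restricted_v_depth(img, obstacle_cols, bin_size=0.5, max_range=80.0):
    """Row-wise depth histogram restricted to columns already flagged as obstacles."""
    n_bins = int(max_range / bin_size)
    h, _ = img.shape
    hist = np.zeros((h, n_bins), dtype=int)
    for row_idx in range(h):
        row = img[row_idx, obstacle_cols]
        row = row[np.isfinite(row)]
        if row.size:
            idx = np.clip((row / bin_size).astype(int), 0, n_bins - 1)
            np.add.at(hist[row_idx, :], idx, 1)
    return hist

# Usage: columns whose u-depth bins accumulate many points likely contain an obstacle.
pts = np.random.rand(20000, 3) * [40, 40, 4] - [20, 20, 2]   # synthetic point cloud
img = range_image(pts)
ud = u_depth(img)
cols = np.where(ud.max(axis=0) >= 5)[0]                      # assumed count threshold
vd = restricted_v_depth(img, cols)

In this sketch the u-depth map localizes obstacles in azimuth and range, and the restricted v-depth map then recovers their vertical extent, which is the general division of labor the abstract describes; the actual thresholds and representations used by the authors are defined in the paper.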




Updated: 2023-11-16