
Attention-based efficient robot grasp detection network


  • Research Article
  • Published in Frontiers of Information Technology & Electronic Engineering

Abstract

To balance the inference speed and detection accuracy of a grasp detection algorithm, both of which are important for robot grasping tasks, we propose an encoder–decoder structured pixel-level grasp detection neural network named the attention-based efficient robot grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance detailed information, and three channel attention modules are introduced in the decoder stages to extract more semantic information. Several lightweight and efficient DenseBlocks are used to connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily indicate a high-quality grasp configuration, and might even lead to a collision, because traditional IoU loss calculation methods treat the center part of the predicted rectangle as being as important as the area around the grippers. We design a new IoU loss calculation method based on an hourglass box matching mechanism, which creates a good correspondence between high IoU values and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. The inference speed reaches 43.5 frames per second with only about 1.2 × 10^6 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system and performs grasping well. Codes are available at https://github.com/robvincen/robot_gradet.
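As context for the architecture described above, the following is a minimal, illustrative PyTorch sketch of an encoder–decoder grasp detector with spatial attention in the encoder stages, channel attention in the decoder stages, and DenseBlocks in between. It is not the authors' implementation: the channel sizes, number of dense layers, input modality (a single-channel depth image), and the four pixel-wise output maps (grasp quality, cos/sin of the angle, and gripper width) are assumptions made for illustration, and the hourglass-based IoU loss is not reproduced because its formula is not given in the abstract.

```python
# Illustrative sketch only; hyperparameters and heads are assumptions, not AE-GDN's exact design.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights each pixel using a map built from channel-wise mean/max pooling."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(pooled))


class ChannelAttention(nn.Module):
    """Re-weights channels with a squeeze-and-excitation style bottleneck."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)


class DenseBlock(nn.Module):
    """Lightweight dense block: every layer receives all previous feature maps."""
    def __init__(self, channels, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(num_layers)])
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))


class GraspNetSketch(nn.Module):
    """Three encoder stages with spatial attention, a DenseBlock bottleneck,
    three decoder stages with channel attention, and pixel-wise grasp maps."""
    def __init__(self, in_channels=1, base=16):
        super().__init__()
        enc_channels = [base, base * 2, base * 4]
        self.encoders = nn.ModuleList()
        prev = in_channels
        for c in enc_channels:
            self.encoders.append(nn.Sequential(
                nn.Conv2d(prev, c, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                SpatialAttention()))
            prev = c
        self.bottleneck = nn.Sequential(DenseBlock(enc_channels[-1]),
                                        DenseBlock(enc_channels[-1]))
        dec_in = enc_channels[::-1]                  # e.g. [64, 32, 16]
        dec_out = enc_channels[::-1][1:] + [base]    # e.g. [32, 16, 16]
        self.decoders = nn.ModuleList()
        for c_in, c_out in zip(dec_in, dec_out):
            self.decoders.append(nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                ChannelAttention(c_out)))
        # Four pixel-level heads: grasp quality, cos(2*angle), sin(2*angle), width.
        self.heads = nn.Conv2d(base, 4, 1)

    def forward(self, x):
        for enc in self.encoders:
            x = enc(x)
        x = self.bottleneck(x)
        for dec in self.decoders:
            x = dec(x)
        return self.heads(x)


if __name__ == "__main__":
    maps = GraspNetSketch()(torch.zeros(1, 1, 224, 224))
    print(maps.shape)  # torch.Size([1, 4, 224, 224])
```

The placement of the attention modules in this sketch follows the motivation stated in the abstract: spatial attention acts on the downsampling path where feature maps still carry fine spatial detail, while channel attention acts on the upsampling path where channel responses are more semantic.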



Data availability

The data that support the findings of this study are openly available in the repository at https://github.com/robvincen/robot_gradet.


Author information


Contributions

Xiaofei QIN and Wenkai HU proposed the design. Xiaofei QIN and Chen XIAO conducted the experiments. Xiaofei QIN and Changxiang HE analyzed the results. Wenkai HU drafted the paper. Xiaofei QIN, Songwen PEI, and Xuedian ZHANG helped organize the paper. All authors revised and finalized the paper.

Corresponding author

Correspondence to Xuedian Zhang (张学典).

Ethics declarations

Xiaofei QIN, Wenkai HU, Chen XIAO, Changxiang HE, Songwen PEI, and Xuedian ZHANG declare that they have no conflict of interest.

Additional information

Project supported by the National Natural Science Foundation of China (No. 92048205) and the China Scholarship Council (No. 202008310014)


About this article


Cite this article

Qin, X., Hu, W., Xiao, C. et al. Attention-based efficient robot grasp detection network. Front Inform Technol Electron Eng 24, 1430–1444 (2023). https://doi.org/10.1631/FITEE.2200502


  • DOI: https://doi.org/10.1631/FITEE.2200502
