Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industry and agriculture. Such collaboration can increase productivity by reducing physical strain on humans, which in turn can reduce injuries and improve morale. One crucial aspect of HRC is the robot's ability to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers can track a human carrying a transceiver, but they exhibit significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The angular error is reduced through sensor fusion, combining the outputs of both sensors with a histogram-based filter that projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a 66.67% reduction in angular error is achieved compared to UWB localization alone. The approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
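The fusion step described above — projecting a range-accurate but angle-noisy UWB measurement and an angle-accurate but range-free camera bearing onto a shared 2D grid, then intersecting them — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, noise parameters, and function names are assumptions, and the likelihood models are simple Gaussians chosen to show the idea.

```python
import numpy as np

GRID = 100       # cells per axis (assumed)
SIZE = 10.0      # grid spans [-SIZE/2, SIZE/2] metres around the robot (assumed)
RES = SIZE / GRID

def cell_centres():
    """Cartesian coordinates of every grid-cell centre."""
    axis = (np.arange(GRID) + 0.5) * RES - SIZE / 2
    return np.meshgrid(axis, axis, indexing="ij")

def uwb_likelihood(rng, bearing, sigma_r=0.1, sigma_th=np.deg2rad(30)):
    """UWB gives an accurate range but a noisy bearing -> a wide arc on the grid."""
    xs, ys = cell_centres()
    r = np.hypot(xs, ys)
    th = np.arctan2(ys, xs)
    dth = np.angle(np.exp(1j * (th - bearing)))  # wrapped angular difference
    return np.exp(-0.5 * ((r - rng) / sigma_r) ** 2 - 0.5 * (dth / sigma_th) ** 2)

def camera_likelihood(bearing, sigma_th=np.deg2rad(3)):
    """Monocular detection gives a precise bearing but no range -> a narrow ray."""
    xs, ys = cell_centres()
    th = np.arctan2(ys, xs)
    dth = np.angle(np.exp(1j * (th - bearing)))
    return np.exp(-0.5 * (dth / sigma_th) ** 2)

def fuse(rng, uwb_bearing, cam_bearing):
    """Intersect both projections; the grid peak is the person's estimated position."""
    grid = uwb_likelihood(rng, uwb_bearing) * camera_likelihood(cam_bearing)
    i, j = np.unravel_index(np.argmax(grid), grid.shape)
    xs, ys = cell_centres()
    return xs[i, j], ys[i, j]

# Example: UWB reports 3.0 m at a noisy 25 deg; the camera sees the person at 10 deg.
x, y = fuse(rng=3.0, uwb_bearing=np.deg2rad(25), cam_bearing=np.deg2rad(10))
```

Multiplying the two grids keeps only cells consistent with both sensors, so the fused estimate inherits the camera's tight bearing and the UWB's range — which is precisely why the angular error drops relative to UWB alone.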
Availability of data and materials
Not applicable
Code availability
Not applicable
Funding
Open access funding provided by FCT|FCCN (b-on). This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements No 101004085 and 101000554.
Disclaimer: The sole responsibility for the content on this [website/newsletter/publication] lies with the authors. It does not necessarily reflect the opinion of the European GNSS Agency (GSA) or the European Commission (EC). The GSA or the EC are not responsible for any use that may be made of the information contained therein.
Contributions
Not applicable
Ethics declarations
Ethics approval
Not applicable
Consent to participate
Not applicable
Consent for publication
Not applicable
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sarmento, J., Neves dos Santos, F., Silva Aguiar, A. et al. Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower. J Intell Robot Syst 110, 30 (2024). https://doi.org/10.1007/s10846-023-02037-4