
Predictive modeling of gaze patterns in drivers: a machine learning approach with Tobii Glass 2

  • Original Research
  • Published in: International Journal of Information Technology

Abstract

Understanding and predicting drivers' gaze patterns is essential for improving road safety and optimizing in-vehicle displays. This study examines drivers' visual attention across varied road segments using both statistical analysis and machine learning. Ten participants of diverse demographics completed a real-world driving experiment, navigating curves and straight stretches while their eye movements were tracked with Tobii Pro Glasses 2. Statistical analysis revealed significant variations in gaze behavior during curves, concentrated on specific Areas of Interest (AOIs): the instrument panel, left view, main view, and right view. Four machine learning models were trained to predict gaze patterns: XGBoost (XGB), AdaBoost, a Support Vector Machine (SVM), and an ensemble of the individual models. AdaBoost was the top-performing individual model, with a testing accuracy of 82.50%. The ensemble model, combining the strengths of the individual models, achieved a well-balanced performance: 99.36% training accuracy, 82.50% testing accuracy, and an F1-score of 83.72%. Despite the small participant pool, the study offers insight into the dynamics of driver gaze behavior and demonstrates the effectiveness of machine learning in predicting drivers' gaze patterns, with implications for applications that strengthen road safety and optimize in-vehicle displays. The balanced performance of the ensemble model supports combining diverse models as a promising direction for future research and practice in driver behavior analysis.
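The modeling pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic features, class labels, and hyperparameters are assumptions, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep the sketch dependency-free. The four AOI classes mirror the ones named in the abstract.

```python
# Illustrative sketch (not the published pipeline): AdaBoost, an SVM, and a
# gradient-boosting stand-in for XGBoost, combined in a soft-voting ensemble
# that predicts a gaze Area of Interest (AOI) label per observation.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-fixation features (e.g. fixation duration,
# road-segment type, gaze coordinates). Labels encode four AOIs:
# 0 = instrument panel, 1 = left view, 2 = main view, 3 = right view.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

models = {
    "adaboost": AdaBoostClassifier(n_estimators=100, random_state=42),
    "svm": SVC(kernel="rbf", probability=True, random_state=42),
    "gboost": GradientBoostingClassifier(random_state=42),  # XGBoost stand-in
}
# Soft voting averages the per-class probabilities of the base models.
ensemble = VotingClassifier(estimators=list(models.items()), voting="soft")

for name, model in {**models, "ensemble": ensemble}.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"F1={f1_score(y_test, pred, average='weighted'):.3f}")
```

Comparing training accuracy against testing accuracy for each model, as the study reports, is what reveals overfitting; the weighted F1-score accounts for any class imbalance across the four AOIs.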


Data availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

We extend our sincere gratitude to all the drivers who participated in the experiments, contributing invaluable data to this research. This work received support from the Fundamental Research Funds for the Central Universities at Chang'an University (Grant No. 300102249310).

Author information


Corresponding author

Correspondence to Daniela Daniel Ndunguru.

Ethics declarations

Conflict of interest

The corresponding author, representing the other authors, confirms that there are no conflicts of interest associated with this manuscript.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ndunguru, D.D., Zhanwen, L., Oroni, C.Z. et al. Predictive modeling of gaze patterns in drivers: a machine learning approach with Tobii Glass 2. Int J Inf Technol (2024). https://doi.org/10.1007/s41870-024-01814-0
