
On the impact of hardware-related events on the execution of real-time programs

Design Automation for Embedded Systems

Abstract

Estimating safe upper bounds on the execution times of programs is required in the design of predictable real-time systems. When multi-core processors, instruction pipelines, branch prediction, or cache memories are in place, traditional static timing analysis faces considerable complexity, and measurement-based timing analysis (MBTA) becomes a more tractable option. MBTA estimates upper bounds on execution times from data measured while representative execution scenarios are run. In this context, understanding how hardware-related events affect the program under analysis provides useful information for MBTA. This paper addresses this need by modeling the execution behavior of programs as a function of hardware-related events. More specifically, we show that the number of cycles per executed instruction of a program under analysis can be correlated with the occurrences of hardware-related events. We apply our modeling methodology to two architectures, the ARMv7 Cortex-M4 and the Cortex-A53. While all hardware events can be monitored at once on the former, the latter allows simultaneous monitoring of at most 6 of its 59 events. We then describe a method to select the hardware events that are most relevant to the execution of a program under analysis. These events are used to model the program's behavior under different execution scenarios via machine learning techniques. The effectiveness of this method is evaluated through extensive experiments. The obtained results reveal prediction errors below 20%, showing that the chosen events largely explain the execution behavior of programs.
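
To make the modeling step concrete, the following is a minimal sketch, not the authors' implementation, of how per-run event counts could be related to cycles per instruction (CPI) with a random-forest regressor, and how the regressor's feature importances could serve as one possible event-selection criterion. It assumes a hypothetical file measurements.csv with one row per measured run, one column per monitored event, plus cycle and instruction counts; the file name, column names, and scikit-learn-based pipeline are illustrative assumptions, not part of the paper.

    # Minimal sketch (illustrative only): relate hardware event counts to CPI
    # and rank events by importance. Assumes a hypothetical measurements.csv.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_percentage_error

    df = pd.read_csv("measurements.csv")                   # one row per run
    df["cpi"] = df["cycles"] / df["instructions"]          # target: cycles per instruction
    events = [c for c in df.columns if c not in ("cycles", "instructions", "cpi")]

    X_train, X_test, y_train, y_test = train_test_split(
        df[events], df["cpi"], test_size=0.3, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # One possible selection criterion: rank events by feature importance.
    ranking = sorted(zip(events, model.feature_importances_),
                     key=lambda p: p[1], reverse=True)
    print("most relevant events:", ranking[:6])

    # Relative prediction error on held-out runs.
    err = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"mean relative prediction error: {100 * err:.1f}%")

In this sketch the importance ranking plays the role of the event-selection step, and the held-out error corresponds to the kind of relative prediction error reported in the paper.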


Data availability

All data generated and analyzed during the study are available upon request.

Notes

  1. Although it is possible to monitor more than seven events on the Raspberry Pi, this can only be done by multiplexing the counter registers during the measurements, which leads to excessive measurement errors (a measurement sketch for the non-multiplexed case follows these notes).

  2. Employing this strategy in preliminary stages of our work led to unsatisfactory results.

  3. More details on the KDBench are available at https://team.inria.fr/kopernic/kdbench/.

  4. https://github.com/tadeunca/daem_journal/.
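
Complementing Note 1, the following is a minimal sketch, not taken from the paper, of one way event counts might be collected on Linux without counter multiplexing: no more than six events are requested from perf stat in a single run and its CSV output is parsed. The event names and the benchmark path are placeholders; which events are actually available depends on the PMU and the kernel.

    # Minimal sketch (illustrative only): count a small set of events with
    # perf stat in a single run, avoiding counter multiplexing.
    import subprocess

    EVENTS = ["instructions", "cache-misses", "branch-misses",
              "L1-dcache-load-misses", "bus-cycles", "cycles"]
    CMD = ["./benchmark_under_analysis"]        # hypothetical program under analysis

    res = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(EVENTS), "--"] + CMD,
        capture_output=True, text=True, check=True)

    counts = {}
    for line in res.stderr.splitlines():        # perf stat prints counts on stderr
        fields = line.split(",")
        if len(fields) >= 3 and fields[0].strip().isdigit():
            counts[fields[2]] = int(fields[0])  # value in field 0, event name in field 2
    print(counts)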


Funding

This work was partially funded by CAPES (Brazil), Grant No. 001, through the Inria-UFBA Associated Teams Program under the Kepler project, by the ANRT FR Joint CIFRE Inria and StatInf, CIFRE Grant No. 1072/2020, and by BPI France under the FR PSPC-regions - STARTREC 2021 project.

Author information

Contributions

I, TNCA, wrote the main text of the manuscript under the supervision of GL and VL. IH and SB-A were responsible for monitoring hardware events in the ARMv7 Cortex-M4 architecture, contributing to Section 4.1 under the supervision of LC-G. All authors reviewed and contributed to the manuscript.

Corresponding authors

Correspondence to Tadeu Nogueira C. Andrade or George Lima.

Ethics declarations

Conflict of interest

The authors declare that they have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Andrade, T.N.C., Lima, G., Lima, V.M.C. et al. On the impact of hardware-related events on the execution of real-time programs. Des Autom Embed Syst 27, 275–302 (2023). https://doi.org/10.1007/s10617-023-09281-9


