
Survey on ontology-based explainable AI in manufacturing

Published in: Journal of Intelligent Manufacturing

Abstract

Artificial intelligence (AI) has become an essential tool for manufacturers seeking to optimize their production processes, reduce costs, and improve product quality. However, the complexity of the underlying mechanisms of AI systems can make it difficult for humans to understand and trust AI-driven decisions. Explainable AI (XAI) is a rapidly evolving field that addresses this challenge by providing human-understandable explanations of AI decisions. Based on a systematic literature survey, we explore the latest techniques and approaches that are helping manufacturers gain transparency into the decision-making processes of their AI systems. In this survey, we focus on two of the most exciting areas of XAI: ontology-based and semantic-based XAI (O-XAI and S-XAI, respectively), which provide human-readable explanations of AI decisions by exploiting semantic information. Such explanations are presented in natural language and are designed to be easily understood by non-experts. By translating the decision paths taken by AI algorithms into meaningful explanations through semantics, O-XAI and S-XAI enable humans to identify the various cross-cutting concerns that influence the decisions made by an AI system. This information can be used to improve the performance of the AI system, identify potential biases in the system, and ensure that its decisions are aligned with the goals and values of the manufacturing organization. Additionally, we highlight the benefits and challenges of using O-XAI and S-XAI in manufacturing and discuss directions for future research, aiming to provide valuable guidance for researchers and practitioners looking to leverage the power of ontologies and general semantics for XAI.
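To make the core idea of translating decision paths into semantic explanations concrete, the minimal sketch below is an illustration of ours, not a method from the surveyed papers: it walks the decision path of a scikit-learn decision tree and verbalises each split through a concept mapping. The feature names, thresholds, and the `ONTOLOGY` dictionary are hypothetical stand-ins for a real OWL/RDF manufacturing ontology.

```python
# Minimal O-XAI-style sketch (illustrative assumptions throughout): map the
# decision path of a trained model onto ontology concept labels so that the
# explanation reads in domain language rather than raw feature indices.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: [spindle_vibration_mm_s, coolant_temp_C] -> defect (0/1)
X = np.array([[0.2, 25], [0.3, 27], [1.5, 40], [1.8, 45]])
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Hypothetical semantic layer: feature index -> ontology concept label.
# In a real system these labels would come from an OWL/RDF ontology.
ONTOLOGY = {
    0: "MachineCondition.SpindleVibration",
    1: "ProcessParameter.CoolantTemperature",
}

def explain(sample):
    """Walk the tree's decision path and verbalise each split via the ontology."""
    node_indicator = clf.decision_path([sample])
    leaf = clf.apply([sample])[0]
    tree = clf.tree_
    sentences = []
    for node in node_indicator.indices:
        if node == leaf:
            continue  # the leaf carries the prediction, not a test
        concept = ONTOLOGY[tree.feature[node]]
        thresh = tree.threshold[node]
        relation = "does not exceed" if sample[tree.feature[node]] <= thresh else "exceeds"
        sentences.append(f"{concept} {relation} {thresh:.2f}")
    verdict = "defective" if clf.predict([sample])[0] == 1 else "conforming"
    return f"Predicted {verdict} because " + " and ".join(sentences) + "."

print(explain([1.7, 42]))
# e.g. "Predicted defective because MachineCondition.SpindleVibration exceeds 0.90."
```

The same pattern generalises beyond decision trees: any extractable decision path or feature attribution can be grounded in ontology concepts before being rendered as a natural-language sentence for non-expert users.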




Data availability

The data related to the visualisations and graphs generated during the current study are available from the corresponding author upon reasonable request.



Funding

This work was performed within the CHAIKMAT project funded by the French National Research Agency (ANR) under grant agreement "ANR-21-CE10-0004-01".

Author information


Contributions

MRN conceptualized the research, curated the data, and wrote the primary manuscript. LE reviewed, edited, and provided feedback on multiple versions of the manuscript. AS played a significant role in the literature review and validation. BA assisted with software and visualization tools. MHK reviewed, edited, and provided feedback on multiple versions of the manuscript. All authors (MRN, LE, AS, BA, MHK) read and approved the final manuscript.

Corresponding author

Correspondence to Muhammad Raza Naqvi.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Naqvi, M.R., Elmhadhbi, L., Sarkar, A. et al. Survey on ontology-based explainable AI in manufacturing. J Intell Manuf (2024). https://doi.org/10.1007/s10845-023-02304-z

