Explainable Activity Recognition for Smart Home Systems
ACM Transactions on Interactive Intelligent Systems (IF 3.4). Pub Date: 2023-05-05. DOI: 10.1145/3561533 (https://dl.acm.org/doi/10.1145/3561533)
Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova

Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and inconsistencies in smart home operations can therefore lead users who rely on smart home predictions to wonder “Why did the smart home do that?” In this work, we build on insights from Explainable Artificial Intelligence (XAI) and introduce an explainable activity recognition framework that leverages leading XAI methods (Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Anchors) to generate natural language explanations of which aspects of an activity led to the given classification. We evaluate our framework in the context of a commonly targeted smart home scenario: autonomous remote caregiver monitoring for individuals who live alone or need assistance. Within this context, we perform a two-step evaluation: (a) machine learning experts assess how sensible the generated explanations are, and (b) non-experts, recruited across two remote caregiver monitoring user scenarios (synchronous and asynchronous), assess the effectiveness of the explanations. Our results show that SHAP has a 92% success rate in generating sensible explanations. Moreover, in 83% of sampled scenarios users preferred natural language explanations over a simple activity label, underscoring the need for explainable activity recognition systems. Finally, we show that explanations generated by some XAI methods can lead users to lose confidence in the accuracy of the underlying activity recognition model, while others lead users to gain confidence. Taking all studied factors into consideration, we recommend the existing XAI method that performs best in the domain of smart home automation and discuss a range of topics for future work to further improve explainable activity recognition.
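To make the explanation-generation step concrete, here is a minimal Python sketch of the SHAP-based variant, assuming a scikit-learn classifier over hand-crafted sensor features. The feature names, activity labels, toy data, and sentence template are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sensor-derived features for one time window in a smart home;
# names and activity labels are illustrative, not from the paper.
feature_names = ["kitchen_motion_count", "fridge_door_events",
                 "stove_power_watts", "hour_of_day"]
activities = ["cooking", "sleeping", "watching_tv"]

# Toy data standing in for a real activity-recognition training set.
rng = np.random.default_rng(0)
X_train = rng.random((200, len(feature_names)))
y_train = rng.integers(0, len(activities), size=200)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Explain one window the model has just classified.
x = X_train[:1]
pred = int(model.predict(x)[0])

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(x)
# Older shap versions return a list with one (n_samples, n_features) array
# per class; newer versions return one (n_samples, n_features, n_classes)
# array. Handle both.
contributions = sv[pred][0] if isinstance(sv, list) else sv[0, :, pred]

# Template the strongest positive attribution into a natural language
# explanation, in the spirit of the framework's explanation generation.
top = int(np.argmax(contributions))
print(f"Predicted activity: {activities[pred]}. "
      f"Main evidence: '{feature_names[top]}' "
      f"(SHAP contribution {contributions[top]:+.3f}).")
```

SHAP is used for the sketch because, of the three methods the paper compares, it produced sensible explanations most reliably (92% of sampled cases); LIME or Anchors could be substituted in the same template-filling role.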



Updated: 2023-05-05