Return of EM: Entity-driven Answer Set Expansion for QA Evaluation
arXiv - CS - Computation and Language. Pub Date: 2024-04-24. DOI: arxiv-2404.15650. Authors: Dongryeol Lee, Minwoo Lee, Kyungmin Min, Joonsuk Park, Kyomin Jung
Recently, directly using large language models (LLMs) has been shown to be
the most reliable method for evaluating QA models. However, this approach
suffers from limited interpretability, high cost, and environmental harm. To
address these issues, we propose soft EM with entity-driven answer set
expansion. Our approach expands the gold answer set to include diverse surface
forms, based on the observation that surface forms often follow particular
patterns depending on the entity type. Experimental results show that our
method outperforms traditional evaluation methods by a large margin. Moreover,
the reliability of our evaluation method is comparable to that of LLM-based
ones, while offering the benefits of high interpretability and reduced
environmental harm.
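The core idea can be illustrated with a minimal sketch. The expansion rules below (spelled-out numerals, surname-only person mentions) and the entity-type labels are hypothetical examples of the kind of pattern-based expansion the abstract describes, not the paper's actual rule set; the normalization follows the standard SQuAD-style convention.

```python
import re

def normalize(text):
    # Standard SQuAD-style normalization: lowercase, drop articles and punctuation.
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = re.sub(r"[^\w\s]", " ", text)
    return " ".join(text.split())

def expand_gold_answers(gold, entity_type):
    # Hypothetical pattern-based expansion: surface forms vary by entity type,
    # so each type gets its own expansion rules.
    variants = {gold}
    if entity_type == "NUMBER":
        words = {"1": "one", "2": "two", "3": "three", "4": "four", "5": "five"}
        if gold in words:
            variants.add(words[gold])  # digit -> spelled-out numeral
    elif entity_type == "PERSON":
        parts = gold.split()
        if len(parts) > 1:
            variants.add(parts[-1])  # full name -> surname only
    return variants

def soft_em(prediction, gold_set):
    # Soft EM: score 1 if any (expanded) gold answer appears in the prediction.
    pred = normalize(prediction)
    return int(any(normalize(g) in pred for g in gold_set))
```

For example, expanding the gold answer "Barack Obama" (PERSON) lets a prediction containing only "Obama" score 1 under soft EM, which plain EM would mark wrong.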
Updated: 2024-04-25