Emerging trends: When can users trust GPT, and when should they intervene?
Natural Language Engineering (IF 2.5), Pub Date: 2024-01-16, DOI: 10.1017/s1351324923000578
Kenneth Church

Usage of large language models and chatbots will almost surely continue to grow, since they are so easy to use and so (incredibly) credible. I would be more comfortable with this reality if we encouraged more evaluations with humans in the loop, to come up with a better characterization of when the machine can be trusted and when humans should intervene. This article describes a homework assignment in which I asked my students to use tools such as chatbots and web search to write a number of essays. Even after considerable discussion in class about hallucinations, many of the essays were full of misinformation that should have been fact-checked. Apparently, it is easier to believe ChatGPT than to be skeptical. Fact-checking and web search are too much trouble.


