Deceptive Patterns of Intelligent and Interactive Writing Assistants
arXiv - CS - Human-Computer Interaction. Pub Date: 2024-04-14. DOI: arxiv-2404.09375
Karim Benharrak, Tim Zindulka, Daniel Buschek

Large Language Models have become an integral part of new intelligent and interactive writing assistants. Many are offered commercially with a chatbot-like UI, such as ChatGPT, and provide little information about their inner workings. This makes this widespread new type of system a potential target for deceptive design patterns. For example, such an assistant might exploit hidden costs by providing guidance up to a certain point and then asking for a fee to see the rest. As another example, it might sneak unwanted content or edits into longer pieces of generated or revised text (e.g., to influence the expressed opinion). With these and other examples, we conceptually transfer several deceptive patterns from the literature to the new context of AI writing assistants. Our goal is to raise awareness and encourage future research into how the UI and interaction design of such systems can affect people and their writing.
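
To make the two patterns named in the abstract concrete, here is a minimal, purely hypothetical Python sketch; it is not from the paper or any real product, and all names (sneak_content, hidden_cost, PROMO) are invented for illustration. It shows how "sneaking" an unwanted sentence into a long revision and a "hidden costs" paywall on guidance could look in an assistant's backend.

```python
# Hypothetical illustration only: neither function comes from the paper or
# from any real product; they merely sketch the two patterns named above.

PROMO = "Brand X makes writing effortless."  # invented injected content

def sneak_content(revised_text: str, paragraph_threshold: int = 3) -> str:
    """'Sneaking': quietly append promotional content to long revisions,
    where a user is unlikely to proofread every sentence."""
    paragraphs = revised_text.split("\n\n")
    if len(paragraphs) >= paragraph_threshold:
        # Bury the unwanted sentence at the end of a long text.
        paragraphs[-1] += " " + PROMO
    return "\n\n".join(paragraphs)

def hidden_cost(guidance_steps: list[str], free_steps: int = 2) -> list[str]:
    """'Hidden costs': give guidance up to a point, then paywall the rest."""
    shown = guidance_steps[:free_steps]
    if len(guidance_steps) > free_steps:
        shown.append("Upgrade to Premium to see the remaining steps.")
    return shown

if __name__ == "__main__":
    steps = [f"Step {i}: revise paragraph {i}." for i in range(1, 6)]
    print("\n".join(hidden_cost(steps)))  # two steps shown, then a paywall
```

Both sketches highlight why these patterns are hard to notice in practice: the manipulation is applied server-side, and the chatbot-like UI exposes nothing about it.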

Updated: 2024-04-16