Quality and safety of artificial intelligence generated health information
The BMJ (IF 105.7). Pub Date: 2024-03-20. DOI: 10.1136/bmj.q596
Michael J Sorich , Bradley D Menz , Ashley M Hopkins

Generative artificial intelligence (AI) is advancing rapidly and has the potential to greatly improve many aspects of society, including health. The risks of potentially harmful consequences, however, necessitate effective oversight and mitigation measures. This article highlights distinct forms of health-related risks of generative AI, with corresponding options for mitigating risk.

Although artificial intelligence (AI) holds considerable promise for positive effects on society, it also has the potential for harmful consequences, which may occur either unintentionally or because of misuse. Applications such as ChatGPT, Gemini, Midjourney, and Sora showcase generative AI's capability to create high-quality text, audio, video, and image content. The rapid advancement of AI technologies requires an equally rapid escalation of efforts to identify and mitigate risks. New disciplines, such as AI Safety and Ethical AI, broadly aim to ensure that current and future AI operates in a manner that is safe and ethical. This article focuses on generative AI—a technology with substantial potential to transform how communities seek, access, and communicate information, including about health. Table 1 outlines a glossary of key terms used in the article. Given that more than 70% of people turn to the internet as their first source of health information,1 it is crucial to identify common types of risks associated with AI technologies and to introduce effective vigilance structures for mitigating these risks. Notably, as generative AI becomes increasingly sophisticated, it will become more challenging for the public to discern when outputs (text, audio, video) are incorrect. In this article, we aim to differentiate common types of potential risks and highlight emerging ideas for mitigating each type of risk. For simplicity, we often use large language models (LLMs) to illustrate emerging …

Updated: 2024-03-21