Artificial intelligence and the Journal of Research in Science Teaching
Journal of Research in Science Teaching (IF 3.918) Pub Date: 2024-02-23, DOI: 10.1002/tea.21933
Troy D. Sadler, Felicia Moore Mensah, Jonathan Tam

Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the Journal of Research in Science Teaching (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.

Establishing foundations for an AI revolution has been in progress since the mid-twentieth century (Adamopoulou & Moussiades, 2020), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. This tool, along with other large language models (LLMs) such as Google Bard and Microsoft's Copilot, provides an easy-to-use platform that can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these generative AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., 2023). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners; it will affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and in publication processes.

Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes, grammar- and spelling-check tools embedded in word processors or offered online (e.g., Grammarly), and reference managers to find and cite literature. Technologies such as these are ubiquitous across our field, and generative AI presents another set of tools that researchers might leverage for the sharing of their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in technological capacity for producing research publications. Users of software for data analysis, reference management, and grammar checking exert a level of control and supervision over these technologies that is not possible when using an LLM. There is a much greater degree of opacity and uncertainty in generating content with an LLM than in generating regression coefficients with data analysis software.
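To make that contrast concrete, consider a minimal sketch of the kind of analysis in which the researcher controls and can inspect every step. The sketch assumes Python with numpy and statsmodels, and the variables and data are hypothetical, invented for illustration only.

```python
# A minimal sketch of transparent, user-controlled analysis: an ordinary
# least-squares regression in which the data, the estimator, and the
# resulting coefficients are all open to inspection. Variable names and
# data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=42)
hours_of_instruction = rng.uniform(1, 10, size=100)                   # predictor
test_score = 50 + 4 * hours_of_instruction + rng.normal(0, 5, 100)    # outcome

X = sm.add_constant(hours_of_instruction)  # add an intercept term
model = sm.OLS(test_score, X).fit()

# Coefficients, standard errors, and diagnostics are fully reproducible
# from the data and the documented estimator; nothing here is opaque in
# the way LLM-generated content is.
print(model.summary())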

In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., 2022). The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., 2023).

Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As the authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai & Nehm, 2023, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field, including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist with brainstorming. (In the interest of full disclosure, as we thought about what to claim that AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses helped jump-start our thinking, but we created the final list shared above.)
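As one illustration of the assistive use we have in mind, the sketch below asks an LLM for a candidate manuscript outline that a research team would treat as raw material to critique and rewrite, not as finished text. It assumes the openai Python package (v1.x), an OPENAI_API_KEY environment variable, and an example model name; the prompt is hypothetical.

```python
# A hedged sketch of assistive (not generative-authorship) AI use: asking
# an LLM for a rough outline to react against. Assumes the `openai`
# package, an API key in the environment, and an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; substitute whatever is available
    messages=[
        {
            "role": "user",
            "content": (
                "Suggest a five-section outline for a science education "
                "manuscript reporting a qualitative study of teachers' "
                "use of AI chatbots in classrooms."
            ),
        }
    ],
)

# The output is a starting point for human judgment, not manuscript text:
# the team reviews, corrects, and rewrites it before anything is submitted.
print(response.choices[0].message.content)
```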

We also think it is critically important for users of AI to be aware of its limitations and problems, which include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI inherits the biases of the data corpus on which it is trained: models trained on biased data sets produce biased results, including the propagation of gender stereotypes and racial discrimination (Heaven, 2023). These platforms can also produce inaccurate results; the output can be outdated, factually wrong, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the content it creates, and when asked specifically to do so, may fabricate references (Stokel-Walker & Van Noorden, 2023). Over time the models will improve, and users of the technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI, as well as those consuming AI-generated content, to be aware of these issues.
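Fabricated references are among the easier failures to catch: each AI-supplied citation can be looked up in a bibliographic database before it is trusted. The sketch below queries Crossref's public REST API; the helper name and the example query string are ours, and a returned match is not proof the citation is used correctly, only that the work exists.

```python
# A hedged sketch for screening AI-generated references against Crossref's
# public works endpoint. A citation with no plausible match deserves
# manual scrutiny; a match still requires the author to verify relevance.
import requests

def crossref_matches(citation_text: str, rows: int = 3) -> list[dict]:
    """Return candidate works matching a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {"title": (it.get("title") or ["(no title)"])[0], "DOI": it.get("DOI")}
        for it in items
    ]

# Example: check a reference produced by a chatbot (query text is invented).
for hit in crossref_matches("Zhai Nehm 2023 AI formative assessment JRST"):
    print(hit["DOI"], "-", hit["title"])
```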

Given both the challenges and the potential associated with AI, we do not favor the use of generative AI to produce text for manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving, as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think it would be inappropriate for a research team to use AI to generate the full text of a JRST manuscript; at this moment, we do not think it would even be possible to do so in a way that yields a product meeting the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism about the role of AI in publishing scholarship generally, we see this hypothetical case as one of likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and to ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.


