Error-based Implicit Learning in Language: The Effect of Sentence Context and Constraint in a Repetition Paradigm
Journal of Cognitive Neuroscience (IF 3.2), Pub Date: 2024-03-26, DOI: 10.1162/jocn_a_02145
Alice Hodapp, Milena Rabovsky

Prediction errors drive implicit learning in language, but the specific mechanisms underlying these effects remain debated. This issue was addressed in an EEG study manipulating the context of a repeated unpredictable word (repetition of the complete sentence or repetition of the word in a new sentence context) and sentence constraint. For the manipulation of sentence constraint, unexpected words were presented either in high-constraint (eliciting a precise prediction) or low-constraint sentences (not eliciting any specific prediction). Repetition-induced reduction of N400 amplitudes and of power in the alpha/beta frequency band was larger for words repeated with their sentence context as compared with words repeated in a new low-constraint context, suggesting that implicit learning happens not only at the level of individual items but additionally improves sentence-based predictions. These processing benefits for repeated sentences did not differ between constraint conditions, suggesting that sentence-based prediction update might be proportional to the amount of unpredicted semantic information, rather than to the precision of the prediction that was violated. In addition, the consequences of high-constraint prediction violations, as reflected in a frontal positivity and increased theta band power, were reduced with repetition. Overall, our findings suggest a powerful and specific adaptation mechanism that allows the language system to quickly adapt its predictions when unexpected semantic information is processed, irrespective of sentence constraint, and to reduce potential costs of strong predictions that were violated.
