Multimodal encoding of motion events in speech, gesture and cognition
Language and Cognition (IF 2.660) Pub Date: 2023-12-15, DOI: 10.1017/langcog.2023.61
Ercenur Ünal , Ezgi Mamus , Aslı Özyürek

How people communicate about motion events, and how this is shaped by language typology, has mostly been studied with a focus on linguistic encoding in speech. Yet human communication typically involves an interactional exchange of multimodal signals, such as hand gestures, which have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, how these expressions vary with the sensory modality of the input, and how they interact with other aspects of cognition. The empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture, as well as visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as the type and modality of the stimuli, may override the influence of language typology, especially for the expression of manner.
