Speaking clearly improves speech segmentation by statistical learning under optimal listening conditions
Laboratory Phonology (IF 1.761) Pub Date: 2021-07-23, DOI: 10.5334/labphon.310
Zhe-chen Guo, Rajka Smiljanic

This study investigated the effect of speaking style on speech segmentation by statistical learning under optimal and adverse listening conditions. In line with the intelligibility and memory benefits found in previous studies, the enhanced acoustic-phonetic cues of listener-oriented clear speech could improve speech segmentation by statistical learning relative to conversational speech. Yet, it could not be precluded that hyper-articulated clear speech, reported to have less pervasive coarticulation, would instead result in worse segmentation than conversational speech. We tested these predictions using an artificial language learning paradigm. Listeners who acquired English before age six heard continuous repetitions of the ‘words’ of an artificial language, spoken either clearly or conversationally and presented either in quiet or in noise at a signal-to-noise ratio of +3 or 0 dB. Next, they identified the artificial words in a two-alternative forced-choice recognition test. Results supported the prediction that clear speech facilitates segmentation by statistical learning more than conversational speech, but only in the quiet listening condition. This suggests that listeners can use clear-speech acoustic-phonetic enhancements to guide speech processing that depends on domain-general, signal-independent statistical computations. However, there was no clear-speech benefit in noise at either signal-to-noise ratio. We discuss possible mechanisms that could explain these results.
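The abstract refers to segmentation “by statistical learning”, i.e., tracking how predictably syllables follow one another in a continuous stream and positing word boundaries where that predictability drops. The sketch below is a minimal, hypothetical illustration of that computation only; the trisyllabic words, stream length, and boundary threshold are invented for illustration and are not the study’s stimulus language or procedure.

```python
# Minimal sketch (hypothetical materials): segmenting a continuous syllable
# stream by transitional probabilities (TPs). Boundaries are placed where the
# TP between adjacent syllables drops, i.e., between "words".
import random
from collections import Counter

# Hypothetical trisyllabic artificial words (CV syllables, two characters each).
words = ["pabiku", "tibudo", "golatu", "daropi"]


def syllabify(word):
    """Split a word into its two-character CV syllables."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]


# Build a continuous stream: words in random order, no pauses between them.
stream = []
for w in random.choices(words, k=300):
    stream.extend(syllabify(w))

# Transitional probability: P(next | current) = count(current, next) / count(current).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

# Segment: insert a boundary wherever the TP falls below an (arbitrary) threshold.
threshold = 0.5
segmented, current = [], [stream[0]]
for prev, nxt in zip(stream, stream[1:]):
    if tp[(prev, nxt)] < threshold:
        segmented.append("".join(current))
        current = []
    current.append(nxt)
segmented.append("".join(current))

print(sorted(set(segmented)))  # largely recovers the four hypothetical words
```

With these materials, within-word TPs are 1.0 (each syllable fully predicts the next syllable of its word) while across-word TPs are about 0.25, so a mid-range threshold recovers the word boundaries; the study asks whether clear versus conversational speech helps or hurts listeners’ ability to exploit exactly this kind of statistical structure.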

Updated: 2021-07-23