Extending Whisper with prompt tuning to target-speaker ASR
arXiv - CS - Sound. Pub Date: 2023-12-13, DOI: arxiv-2312.08079
Hao Ma, Zhiyuan Peng, Mingjie Shao, Jing Li, Ju Liu

Target-speaker automatic speech recognition (ASR) aims to transcribe the speech of a desired target speaker from multi-talker overlapped utterances. Most existing target-speaker ASR (TS-ASR) methods either train from scratch or fully fine-tune a pre-trained model, which incurs significant training costs and does not scale to large foundation models. This work leverages prompt tuning, a parameter-efficient fine-tuning approach, to extend Whisper, a large-scale single-talker ASR model, to TS-ASR. Experimental results show that prompt tuning can achieve performance comparable to state-of-the-art full fine-tuning approaches while requiring only about 1% of the parameters as task-specific. Notably, the original Whisper's features, such as inverse text normalization and timestamp prediction, are retained in target-speaker ASR, keeping the generated transcriptions natural and informative.
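The core mechanism behind the ~1% figure is that the pre-trained backbone stays frozen and only a small matrix of trainable "soft prompt" embeddings, prepended to the input sequence, is updated. The following is a minimal, illustrative PyTorch sketch of generic soft-prompt tuning, assuming an already-embedded feature sequence and a stand-in Transformer encoder; the class name, shapes, and backbone are hypothetical and not the paper's actual Whisper integration.

    # Minimal sketch of prompt tuning (hypothetical module; not the paper's code).
    # The backbone is frozen; only the soft-prompt embeddings receive gradients,
    # keeping task-specific parameters a small fraction of the model size.
    import torch
    import torch.nn as nn

    class PromptTunedEncoder(nn.Module):
        def __init__(self, backbone: nn.Module, d_model: int, n_prompts: int = 16):
            super().__init__()
            self.backbone = backbone
            for p in self.backbone.parameters():
                p.requires_grad = False  # freeze the pre-trained model
            # trainable soft prompts, one row per prompt token
            self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model), already-embedded input features
            batch = x.size(0)
            p = self.prompts.unsqueeze(0).expand(batch, -1, -1)
            return self.backbone(torch.cat([p, x], dim=1))  # prepend prompts

    # Usage sketch with a generic Transformer encoder as a stand-in backbone.
    encoder = PromptTunedEncoder(
        nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True),
            num_layers=4),
        d_model=512)
    feats = torch.randn(2, 100, 512)  # dummy acoustic features
    out = encoder(feats)
    trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
    total = sum(p.numel() for p in encoder.parameters())
    print(f"trainable fraction: {trainable / total:.3%}")

Only the prompt matrix is optimized, so checkpoints per task are tiny and the frozen backbone, here standing in for Whisper, keeps its original behavior, which is consistent with the abstract's note that features like inverse text normalization and timestamp prediction carry over unchanged.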

Updated: 2023-12-15