Cross-Lingual Transfer Learning in Drug-Related Information Extraction from User-Generated Texts
Programming and Computer Software (IF 0.7), Pub Date: 2023-12-07, DOI: 10.1134/s036176882307006x
A. S. Sakhovskiy, E. V. Tutubalina

Abstract

Aggregating knowledge about drug, disease, and drug reaction entities across a broad range of domains and languages is critical for information extraction applications. In this work, we present a fine-grained evaluation intended to assess the efficiency of multilingual BERT-based models for biomedical named entity recognition (NER) and multi-label sentence classification. We investigate the role of transfer learning strategies between two English corpora and a novel annotated corpus of Russian reviews about drug therapy. In these corpora, sentence-level labels indicate the presence or absence of health-related issues. Sentences that belong to a certain class are additionally annotated at the entity level to identify fine-grained subtypes such as drug names, drug indications, and drug reactions. The evaluation results demonstrate that pretraining BERT on raw Russian and English reviews (5M in total) provides the best transfer capabilities for the adverse drug reaction detection task on the Russian data. Our RuDR-BERT model achieved a macro F1 score of 74.85% on the NER task. For the classification task, our EnRuDR-BERT model achieved a macro F1 score of 70%, a gain of 8.64% over a general-domain BERT model.
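As an illustration of the kind of pipeline evaluated here, the sketch below fine-tunes a multilingual BERT model for multi-label sentence classification and scores its predictions with macro-averaged F1. It is a minimal example assuming the Hugging Face Transformers and scikit-learn APIs; the model name (bert-base-multilingual-cased), the label set, and the toy sentences are illustrative placeholders rather than the authors' actual configuration.

```python
# Minimal sketch of multi-label sentence classification with a multilingual BERT.
# The label set, model checkpoint, and example sentences are hypothetical and only
# illustrate the setup described in the abstract; they are not the authors' configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score

LABELS = ["adverse_drug_reaction", "drug_indication", "drug_name"]  # hypothetical classes

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # per-label sigmoid loss (BCEWithLogitsLoss)
)

# A toy batch: each sentence may carry several labels at once, encoded as multi-hot targets.
sentences = [
    "После приема препарата появилась сильная головная боль.",
    "The drug was prescribed for seasonal allergies.",
]
targets = torch.tensor([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])

batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=targets)  # returns the loss and per-label logits

# Threshold sigmoid scores at 0.5 to obtain multi-hot predictions,
# then evaluate them with macro-averaged F1, the metric reported in the abstract.
preds = (torch.sigmoid(outputs.logits) > 0.5).int()
print("loss:", outputs.loss.item())
print("macro F1:", f1_score(targets.int(), preds, average="macro", zero_division=0))
```

The problem_type="multi_label_classification" setting makes the model treat each label independently, so a sentence can simultaneously mention, for example, a drug name and an adverse reaction, matching the multi-label setup described above.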




Updated: 2023-12-08