An Analysis of BERT (NLP) for Assisted Subject Indexing for Project Gutenberg
Cataloging & Classification Quarterly Pub Date : 2022-11-25 , DOI: 10.1080/01639374.2022.2138666
Charlene Chou, Tony Chu

Abstract

In light of AI (Artificial Intelligence) and NLP (Natural Language Processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing of the Project Gutenberg collection, by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
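The abstract gives no implementation details, but the workflow it describes, suggesting Library of Congress Subject Headings (LCSH) for a text while restricting candidates to headings associated with a given Library of Congress Classification (LCC) subclass, can be sketched as multi-label text classification with a BERT encoder. The sketch below is an illustration under stated assumptions, not the authors' code: the Hugging Face bert-base-uncased checkpoint, the toy LCSH-by-LCC lookup table, the suggest_headings helper, and the 0.5 score threshold are all hypothetical, and the classification head would need fine-tuning on labeled catalog records before its scores are meaningful.

# Illustrative sketch only (assumptions noted above), not the method reported in the article.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical candidate LCSH headings grouped by LCC subclass; a real system
# would load these from authority data rather than hard-coding them.
LCSH_BY_LCC_SUBCLASS = {
    "PR": ["English fiction", "Poetry", "Drama"],
    "QA": ["Mathematics", "Computer science", "Machine learning"],
}

def suggest_headings(text, lcc_subclass, threshold=0.5):
    """Return (heading, score) pairs whose predicted score exceeds the threshold."""
    labels = LCSH_BY_LCC_SUBCLASS[lcc_subclass]
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",  # untuned here; fine-tune on labeled records in practice
        num_labels=len(labels),
        problem_type="multi_label_classification",
    )
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    scores = torch.sigmoid(logits).squeeze(0)  # independent per-heading probabilities
    return [(label, float(score)) for label, score in zip(labels, scores) if score >= threshold]

# Example: score a short excerpt against headings filtered to the "QA" subclass.
print(suggest_headings("An introduction to algorithms and data structures.", "QA", threshold=0.3))

In practice, the classifier would be trained on bibliographic records that already carry LCSH and LCC assignments, so the LCC subclass filter narrows the label space before the model ranks headings for a new Project Gutenberg text.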




Updated: 2022-11-25