Deep Metric Learning: Loss Functions Comparison
Doklady Mathematics (IF 0.6), Pub Date: 2024-02-09, DOI: 10.1134/s1064562423701053
R. L. Vasilev, A. G. D’yakonov

Abstract

An overview of deep metric learning methods is presented. Although these methods have appeared only in recent years, they have so far been compared only with their predecessors, using neural networks of outdated architectures to learn the representations on which the metric is computed. Here, the described methods are compared on datasets from several domains, using pre-trained neural networks of near state-of-the-art (SotA) quality: ConvNeXt for images and DistilBERT for texts. Labeled datasets were used, each split into train and test parts with disjoint class sets (i.e., for each class, all of its objects fall entirely in train or entirely in test). Such a large-scale fair comparison is carried out for the first time and leads to unexpected conclusions: some “old” methods, for example, Tuplet Margin Loss, outperform both their modern modifications and methods proposed in very recent works.
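The class-disjoint evaluation protocol described above (every class lands entirely in train or entirely in test) is central to the comparison being fair: at test time the model must embed objects of classes it has never seen. The abstract gives no code; the following is a minimal sketch of such a split, with the function name, `test_frac` parameter, and fixed seed being illustrative assumptions rather than details from the paper.

```python
import random

def class_disjoint_split(labels, test_frac=0.5, seed=0):
    """Split sample indices into train/test so that no class appears in both.

    labels: list of class labels, one per sample (hypothetical input format).
    Classes (not samples) are partitioned; all objects of a class go to one side.
    """
    classes = sorted(set(labels))
    rng = random.Random(seed)
    rng.shuffle(classes)
    n_test = int(len(classes) * test_frac)
    test_classes = set(classes[:n_test])
    train_idx = [i for i, y in enumerate(labels) if y not in test_classes]
    test_idx = [i for i, y in enumerate(labels) if y in test_classes]
    return train_idx, test_idx
```

Because the split is over classes, test-set metrics (e.g., retrieval recall) measure how well the learned metric generalizes to unseen classes, not how well it memorized the training classes.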
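For concreteness, the Tuplet Margin Loss singled out above (Yu and Tao, 2019) compares an anchor–positive angle, slackened by a margin β, against anchor–negative cosine similarities under a scaling factor s. The sketch below is a plain-NumPy rendition of that published formula, not code from the compared paper; the default values of `beta` and `s` are illustrative assumptions.

```python
import numpy as np

def tuplet_margin_loss(anchor, positive, negatives, beta=0.1, s=64.0):
    """Tuplet Margin Loss for one tuple (anchor, positive, k-1 negatives).

    loss = log(1 + sum_i exp(s * (cos(theta_an_i) - cos(theta_ap - beta))))
    All embeddings are L2-normalized so dot products are cosines.
    """
    a = anchor / np.linalg.norm(anchor)
    p = positive / np.linalg.norm(positive)
    ns = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    # angle between anchor and positive, relaxed by the slack margin beta
    theta_ap = np.arccos(np.clip(a @ p, -1.0, 1.0))
    cos_an = ns @ a  # cosine similarity of anchor to each negative
    logits = s * (cos_an - np.cos(theta_ap - beta))
    return float(np.log1p(np.sum(np.exp(logits))))
```

When negatives are far from the anchor the sum inside `log1p` is tiny and the loss is near zero; a negative close to the anchor makes its term dominate, which is the intended hard-negative emphasis.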



