Ultra Low-Cost Two-Stage Multimodal System for Non-Normative Behavior Detection
arXiv - CS - Information Retrieval. Pub Date: 2024-03-24, DOI: arXiv-2403.16151
Albert Lu, Stephen Cranefield

Online communities have increasingly been inundated by a toxic wave of harmful comments. In response to this growing challenge, we introduce a two-stage, ultra-low-cost multimodal harmful behavior detection method designed to identify harmful comments and images with high precision and recall. We first utilize the CLIP-ViT model to transform tweets and images into embeddings, effectively capturing the intricate interplay of semantic meaning and subtle contextual cues within texts and images. In the second stage, the system feeds these embeddings into a conventional machine learning classifier such as an SVM or logistic regression, enabling the system to be trained rapidly and to perform inference at an ultra-low cost. By converting tweets into rich multimodal embeddings through the CLIP-ViT model and using them to train conventional machine learning classifiers, our system not only detects harmful textual information with near-perfect performance, achieving precision and recall above 99%, but also classifies harmful images zero-shot, without additional training, thanks to its multimodal embedding input. This capability allows our system to identify unseen harmful images without requiring extensive and costly image datasets. Additionally, our system adapts quickly to new harmful content: if a new harmful content pattern is identified, we can fine-tune the classifier with the corresponding tweets' embeddings to promptly update the system. This makes it well suited to the ever-evolving nature of online harm, providing online communities with a robust, generalizable, and cost-effective tool to safeguard themselves.
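The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: stage one (CLIP-ViT embedding) is stubbed out with synthetic vectors standing in for real CLIP embeddings, and the `fake_clip_embed` helper is a hypothetical placeholder; in practice one would embed tweets and images with an actual CLIP-ViT checkpoint. Stage two is a standard scikit-learn logistic regression on the embedding vectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
DIM = 512  # embedding dimension of CLIP ViT-B/32

def fake_clip_embed(n, harmful):
    # Stand-in for stage 1 (CLIP-ViT): we assume harmful and benign
    # items occupy slightly shifted regions of the embedding space.
    center = np.full(DIM, 0.1 if harmful else -0.1)
    return center + rng.normal(size=(n, DIM))

# Synthetic "tweet embeddings" with binary harmful/benign labels.
X = np.vstack([fake_clip_embed(200, True), fake_clip_embed(200, False)])
y = np.array([1] * 200 + [0] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 2: a cheap conventional classifier trained on the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Because CLIP places text and images in a shared embedding space, the same trained `clf` could in principle score image embeddings it never saw during training, which is the zero-shot image-detection property the abstract claims.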

Updated: 2024-03-27