The robustness of popular multiclass machine learning models against poisoning attacks: Lessons and insights
International Journal of Distributed Sensor Networks (IF 2.3), Pub Date: 2022-07-14, DOI: 10.1177/15501329221105159
Majdi Maabreh, Arwa Maabreh, Basheer Qolomany, Ala Al-Fuqaha

Despite the encouraging outcomes of machine learning and artificial intelligence applications, the safety of artificial intelligence–based systems remains one of the most severe challenges and needs further exploration. Data set poisoning is a serious problem that can corrupt machine learning models: the attacker injects faulty or mislabeled samples into the training data by flipping true labels to incorrect ones. The word “robustness” refers to a machine learning algorithm’s ability to cope with such adversarial conditions. Here, instead of flipping labels at random, we use a clustering approach to choose the training samples whose labels are changed, with the aim of degrading the classifiers’ performance while evading the distance-based anomaly detection used to quarantine poisoned samples. According to our experiments on a benchmark data set, random label flipping may have a short-term negative impact on the classifier’s accuracy, yet an anomaly filter would discover on average 63% of the flipped samples. In contrast, the proposed clustering-based flipping can inject dormant poisoned samples that accumulate until there are enough of them to severely degrade the classifiers’ performance; on average, the same anomaly filter would discover only 25% of them. We also highlight important lessons and observations from this experiment about the performance and robustness of popular multiclass learners against training data set–poisoning attacks, including: trade-offs, complexity, categories, poisoning resistance, and hyperparameter optimization.
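The abstract does not spell out the exact selection procedure, but the following is a minimal sketch of one plausible clustering-based flipping strategy consistent with the idea described: compute per-class centroids, then flip the labels of the samples that lie closest to an *opposing* class centroid, so that under their new label the poisoned points still look "normal" to a distance-based anomaly filter. The function name, the centroid-based clustering, and the selection rule are illustrative assumptions, not the paper's verified method.

```python
import math

def centroid(points):
    # Mean of a list of equal-length numeric tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist(a, b):
    # Euclidean distance between two points.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def clustering_based_flip(X, y, n_flips):
    """Hypothetical sketch: flip the labels of the n_flips samples that sit
    closest to some other class's centroid. Under the new (poisoned) label,
    these points remain near that class's centroid, so a distance-based
    anomaly filter is less likely to quarantine them."""
    classes = sorted(set(y))
    cents = {c: centroid([x for x, lbl in zip(X, y) if lbl == c]) for c in classes}
    # Score each sample by its distance to the nearest *other* class centroid.
    candidates = []
    for i, (x, lbl) in enumerate(zip(X, y)):
        other = min((c for c in classes if c != lbl), key=lambda c: dist(x, cents[c]))
        candidates.append((dist(x, cents[other]), i, other))
    candidates.sort()  # closest to an opposing centroid first
    y_poisoned = list(y)
    for _, i, new_label in candidates[:n_flips]:
        y_poisoned[i] = new_label
    return y_poisoned
```

A random-flipping baseline, by contrast, would pick indices uniformly and often land on points deep inside their true cluster, which is exactly what makes those poisoned samples easy for a distance-based filter to flag.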




Updated: 2022-07-18