Learning preference representations based on Choquet integrals for multicriteria decision making
Annals of Mathematics and Artificial Intelligence ( IF 1.2 ) Pub Date : 2024-02-07 , DOI: 10.1007/s10472-024-09930-0
Margot Herin , Patrice Perny , Nataliya Sokolovska

This paper concerns preference elicitation and the learning of decision models in the context of multicriteria decision making. We propose an approach to learning a representation of preferences by a non-additive multiattribute utility function, namely a Choquet or bi-Choquet integral. This preference model is parameterized by one-dimensional utility functions measuring the attractiveness of consequences w.r.t. various points of view, and by one or two set functions (capacities) used to weight the coalitions of criteria and control the intensity of interactions among them, on the positive and possibly the negative sides of the utility scale. Our aim is to show how we can successively learn marginal utilities from properly chosen preference examples and then learn where the interactions matter in the overall model. We first present a preference elicitation method to learn spline representations of the marginal utilities on every component of the model. Then we propose a sparse learning approach based on adaptive \(L_1\)-regularization for determining a compact Möbius representation fitted to the observed preferences. We present numerical tests comparing different regularization methods. We also show the advantages of our approach over basic methods that do not seek sparsity or that force sparsity a priori by requiring k-additivity.
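In the Möbius representation mentioned in the abstract, the Choquet integral of a vector of marginal utilities \(u\) takes the compact form \(C(u)=\sum_{\emptyset\neq S\subseteq N} m(S)\,\min_{i\in S} u_i\), so a sparse Möbius vector directly yields a cheap and interpretable aggregation. The following Python sketch (not taken from the paper; the coefficients and the three-criterion example are purely illustrative) shows this computation under the assumption that only a few coalitions carry nonzero Möbius coefficients.

```python
def choquet_from_mobius(utilities, mobius):
    """Choquet integral of a marginal-utility vector, expressed through the
    Mobius representation of the capacity:
        C(u) = sum over nonempty coalitions S of m(S) * min_{i in S} u_i.
    `mobius` maps frozensets of criterion indices to Mobius coefficients;
    coalitions absent from the dict are treated as zero, which is exactly
    where a sparse representation pays off."""
    return sum(m * min(utilities[i] for i in S) for S, m in mobius.items() if S)


# Hypothetical 3-criterion model: additive weights on the singletons plus a
# single positive interaction between criteria 0 and 1 (complementarity).
mobius = {
    frozenset({0}): 0.3,
    frozenset({1}): 0.3,
    frozenset({2}): 0.2,
    frozenset({0, 1}): 0.2,  # the only nonzero pairwise term (sparse model)
}
u = [0.8, 0.5, 0.9]          # marginal utilities of one alternative
print(choquet_from_mobius(u, mobius))
# 0.3*0.8 + 0.3*0.5 + 0.2*0.9 + 0.2*min(0.8, 0.5) = 0.67
```

A learning scheme in the spirit of the paper would fit the values in `mobius` to observed pairwise preference comparisons while an adaptive \(L_1\) penalty drives most coalition coefficients to zero; the evaluation routine above is unchanged regardless of how the coefficients were obtained.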




Updated: 2024-02-07