Evaluating the complexity and falsifiability of psychological models.
Psychological Review (IF 5.4), Pub Date: 2023-03-09, DOI: 10.1037/rev0000421
Manuel Villarreal, Alexander Etz, Michael D. Lee

Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations and develop a new measure. KL-delta uses Kullback-Leibler divergence to compare the prior predictive distributions of models to the data prior that formalizes knowledge about the plausibility of different experimental outcomes. Using introductory conceptual examples and applications with existing models and experiments, we show that KL-delta challenges widely held scientific intuitions about model complexity and falsifiability. In a psychophysics application, we show that hierarchical models with more parameters are often more falsifiable than the original nonhierarchical model. This counters the intuition that adding parameters always makes a model more complex. In a decision-making application, we show that a choice model incorporating response determinism can be harder to falsify than its special case of probability matching. This counters the intuition that if one model is a special case of another, the special case must be less complex. In a memory recall application, we show that using informative data priors based on the serial position curve allows KL-delta to distinguish models that otherwise would be indistinguishable. This shows the value in model evaluation of extending the notion of possible falsifiability, in which all data are considered equally likely, to the more general notion of plausible falsifiability, in which some data are more likely than others. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
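To make the comparison the abstract describes concrete, below is a minimal illustrative sketch in Python of the kind of computation involved: a Kullback-Leibler divergence between a data prior over a discrete set of possible experimental outcomes and each model's prior predictive distribution over those same outcomes. The probability vectors `data_prior`, `model_a`, and `model_b` are invented for illustration, and the exact definition of KL-delta in the article may differ from this sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) over a discrete outcome space."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical data prior: plausibility of five possible experimental outcomes.
data_prior = np.array([0.05, 0.15, 0.40, 0.30, 0.10])

# Hypothetical prior predictive distributions of two models over the same outcomes.
model_a = np.array([0.20, 0.20, 0.20, 0.20, 0.20])  # spreads mass over all outcomes
model_b = np.array([0.02, 0.08, 0.55, 0.30, 0.05])  # concentrates mass on plausible outcomes

# One illustrative reading of the KL-delta idea: measure how far each model's
# prior predictive distribution is from the data prior.
for name, pred in [("model_a", model_a), ("model_b", model_b)]:
    print(name, kl_divergence(data_prior, pred))
```

Under this reading, a model whose prior predictive mass lines up closely with the data prior makes predictions that plausible data could not easily falsify, whereas a model that places mass on implausible outcomes exposes itself to falsification; the arrays above are only placeholders for that contrast.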

Updated: 2023-03-09