Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence
Minds and Machines (IF 7.4) Pub Date: 2022-03-05, DOI: 10.1007/s11023-022-09596-9
Hajo Greif

The problem of epistemic opacity in Artificial Intelligence (AI) is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degree of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms, but as a measure of the model's intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first contrast computer models and their claims to algorithm-based universality with cybernetics-style analogue models and their claims to structural isomorphism between elements of model and target system (Black, Models and Metaphors, 1962). While analogue models aim at perceptually or conceptually accessible model-target relations, computer models give rise to a specific kind of underdetermination in these relations that needs to be addressed in specific ways. I then undertake a comparison between two contemporary AI approaches that, although related, distinctly align with the above modelling paradigms and represent distinct strategies towards model intelligibility: Deep Neural Networks and Predictive Processing. I conclude that their respective degrees of epistemic transparency primarily depend on the underlying purposes of modelling, not on their computational properties.



