Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution?
Minds and Machines (IF 7.4). Pub Date: 2023-07-17. DOI: 10.1007/s11023-023-09642-0
Jesse Lopes

The representations of deep convolutional neural networks (CNNs) are formed by generalizing over similarities and abstracting from differences, in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues that these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine's "apparatus of identity and quantification" in order to (1) achieve concepts and (2) represent objects, as opposed to "half-entities" corresponding to similarity amalgams (Quine, Quintessence. Cambridge, 2004, p. 107). Similarity amalgams are also called "approximate meaning[s]" (Marcus & Davis, Rebooting AI. Pantheon, 2019, p. 132). Although Husserl inferred the "complete abandonment of the empiricist theory of abstraction" (and thus, a fortiori, of deep CNN representation) from the infinite regress and circularity arguments examined in this paper, I argue that the statistical learning of deep CNNs may be incorporated into a Fodorian hybrid account that supports Quine's "sortal predicates, negation, plurals, identity, pronouns, and quantifiers," which are representationally necessary to overcome the regress/circularity in content constitution and to achieve objective (as opposed to similarity-subjective) representation (Burge, Origins of Objectivity. Oxford, 2010, p. 238). I begin from Yoshimi's (New Frontiers in Psychology, 2011) attempt to explain Husserlian phenomenology with neural networks, but depart from it on the strength of these arguments, proposing a two-system view that converges with Weiskopf's proposal ("Observational Concepts." The Conceptual Mind. MIT, 2015, pp. 223–248).
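The two-system picture the abstract gestures at can be made concrete with a toy contrast. The Python sketch below is purely illustrative and not from the paper: all names, data, and functions are hypothetical. A fixed random projection stands in for a learned CNN embedding, prototype averaging stands in for empiricist abstraction (yielding graded "similarity amalgams"), and a small symbolic layer supplies the Quinean apparatus of sortal predication, identity, and quantification over discrete objects.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- System 1: similarity-based abstraction (stand-in for a deep CNN) ---
# A fixed random projection plays the role of a learned embedding; this is
# a hypothetical toy, not the paper's model.
W = rng.standard_normal((8, 16))

def embed(x):
    """Map a raw input into a graded similarity space."""
    return W @ x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Prototypes formed by averaging exemplar embeddings: abstraction by
# generalizing similarities and discarding differences.
dog_exemplars = [rng.standard_normal(16) + 2.0 for _ in range(5)]
cat_exemplars = [rng.standard_normal(16) - 2.0 for _ in range(5)]
prototypes = {
    "dog-like": np.mean([embed(x) for x in dog_exemplars], axis=0),
    "cat-like": np.mean([embed(x) for x in cat_exemplars], axis=0),
}

def similarity_judgment(x):
    """Graded resemblance to a prototype: a 'half-entity', not a sortal."""
    return max(prototypes, key=lambda k: cosine(embed(x), prototypes[k]))

# --- System 2: symbolic apparatus of identity and quantification ---
# Discrete object files with sortal predicates, identity, and a quantifier;
# this is the supplementation the similarity system alone cannot provide.
objects = {"o1": "dog", "o2": "dog", "o3": "cat"}  # object -> sortal

def is_a(obj, sortal):
    """Sortal predication over a discrete object."""
    return objects[obj] == sortal

def identical(a, b):
    """Strict identity, as opposed to mere graded resemblance."""
    return a == b

def count(sortal):
    """Quantification: how many objects fall under the sortal?"""
    return sum(1 for o in objects if is_a(o, sortal))

probe = rng.standard_normal(16) + 2.0
print(similarity_judgment(probe))           # graded verdict, e.g. 'dog-like'
print(count("dog"), identical("o1", "o1"))  # 2 True (discrete, objective)
```

The design point of the sketch: System 1 can only ever return a degree of resemblance to an amalgam, whereas System 2 individuates, re-identifies, and counts objects, which is the representational work the abstract assigns to Quine's apparatus.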




Updated: 2023-07-18