Abstract
To explain why, in scientific problem solving, a diagram can be “worth ten thousand words,” Jill Larkin and Herbert Simon (1987) relied on a computer model: two representations can be “informationally” equivalent but differ “computationally,” just as the same data can be encoded in a computer in multiple ways, more or less suited to different kinds of processing. The roots of this proposal lay in cognitive psychology, more precisely in the “imagery debate” of the 1970s on whether there are image-like mental representations. Simon (1972, 1978) hoped to solve this debate by thoroughly reducing the differences between forms of mental representations (e.g., between images and sentences) to differences in computational efficiency; to carry out this reduction, he borrowed from computer science the concepts of data type and of data structure. I argue that, in the end, his account amounted to nothing more than characterizing representations by the fast operations on them. This analysis then allows me to assess what Simon’s approach actually achieves when transported from psychology to the study of scientific representations, as in Larkin and Simon (1987): it allows comparing, not representations in and of themselves, but rather the computational roles they play in particular problem-solving processes—that is, representations together with a particular way of using them.
Notes
Larkin and Simon (1987) remains one of the most cited papers ever published by the journal Cognitive Science. For recent examples of references to their analysis in the philosophy of science, see Vorms (2011, 2012) or Kulvicki (2010, p. 300). Humphreys (2004, pp. 95–100) does not mention Simon explicitly, but his discussion of representations is very much in Simon’s spirit: in fact, although Humphreys seems to have learned of it indirectly, the most striking example he uses—a game that an appropriate change of representation shows to be a variant of tic-tac-toe—was initially designed by Simon (1969, p. 76), who called it “number scrabble” (see also the latest edition: Simon, 1996, p. 131).
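The representational point of “number scrabble” can be made concrete. In the game, players alternately pick numbers from 1 to 9, and a player wins by holding three numbers that sum to 15; arranging 1–9 in a 3×3 magic square (every row, column, and diagonal summing to 15) reveals the game to be tic-tac-toe. The following sketch (all function names illustrative, not from the sources) checks the winning condition under both representations:

```python
from itertools import combinations

# A 3x3 magic square: every row, column, and diagonal sums to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

def wins(hand):
    """Number-scrabble form: win by holding three numbers summing to 15."""
    return any(sum(triple) == 15 for triple in combinations(hand, 3))

def wins_as_tictactoe(hand):
    """Magic-square form: the same condition is three-in-a-row on the grid."""
    pos = {MAGIC[r][c]: (r, c) for r in range(3) for c in range(3)}
    cells = {pos[n] for n in hand}
    lines = ([{(r, c) for c in range(3)} for r in range(3)] +    # rows
             [{(r, c) for r in range(3)} for c in range(3)] +    # columns
             [{(i, i) for i in range(3)}, {(i, 2 - i) for i in range(3)}])  # diagonals
    return any(line <= cells for line in lines)

# The two predicates agree on every possible hand: the representations are
# informationally equivalent, but the second makes the winning lines visible.
assert all(wins(set(h)) == wins_as_tictactoe(set(h))
           for h in combinations(range(1, 10), 3))
```

The agreement holds because the eight triples from 1–9 summing to 15 are exactly the eight lines of the magic square—which is why the change of representation works.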
Goodman (1968).
Haugeland (1998).
It is common to speak of “cognitivism” (Haugeland, 1978) or of a cognitive “paradigm” unifying researchers from various disciplines. The best starting point to explore the field thus delimited is probably Haugeland (1981). From a historical point of view, however, one should keep in mind that the unity of what eventually became “cognitive science” was constructed progressively and did not become fully apparent until the 1970s. In addition, the choice to isolate as essential the postulates that I list here (following usual practice) is tied to a philosophically oriented vision of cognitive science that gives theoretical pride of place to AI.
Simon (1978, p. 3).
For an introduction to this debate, see Thomas (2014, section 4.4).
See, e.g., Newell and Simon (1972).
Pylyshyn (1973, p. 14).
See Simon (1972, pp. 197–200).
Simon (1991, p. 192, note).
See Simon (1989, pp. 383–384).
For an overview of Simon’s broad and multidisciplinary career, see Crowther-Heyck (2005). On his AI work, there are numerous sources, including his autobiography (Simon, 1991), Pamela McCorduck’s collection of interviews (2004), and Crevier (1993). MacKenzie (2001, chap. 3) gives an excellent introduction to Simon’s work on automated proof in the 1950s and 1960s. See also Mirowski (2002, pp. 452–479, 529–533) for a stimulating reevaluation.
Simon (1947, p. 79).
Newell, Shaw, and Simon (1963, p. 50).
Simon (1991, p. 199).
Crevier (1993, p. 258).
See footnote 10 above.
Simon (1991, p. 205).
Retrospective narratives often describe this conference as the founding of AI. For accounts of it by several participants, see McCorduck (2004, chap. 5).
His overarching vision was broader still: he envisioned unified “sciences of the artificial” encompassing not just psychology and problem solving, but also, for instance, the study of organizations, with computer simulation as their central method (Simon, 1996).
Note that Simon is not worried, as mathematicians would be, about whether drawing the conclusion on the sole basis of a diagram is legitimate.
Simon (1978, pp. 4–5), his emphasis.
This use of “information” is in the spirit of Shannon’s “mathematical theory of communication” (which, indeed, is often called “information theory”): “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. The semantic aspects of communication are irrelevant to the engineering problem.” (Shannon, 1948, p. 379, his emphasis). For a clear introduction, see Markowsky (2017).
“Modern computers,” he writes, “have freed our thinking about representations.” (Simon, 1978, p. 9).
“Since we are concerned with representation at the symbolic or information-processing level, it will matter little whether the memory we are talking about resides in a human head or in a computer.” (Simon, 1978, p. 4).
Simon (1978, pp. 7–8).
The paradigmatic example of such a methodology is Edsger Dijkstra’s “structured programming.” MacKenzie (2001, chap. 2) situates Dijkstra’s and related research programs in a wider context; for a history richer in technical detail, see Priestley (2011, chap. 9–11). Parnas (2002) gives a clear retrospective statement of his motivations.
Liskov & Zilles (1974, p. 51).
“It is possible […] to define representations in terms of the processes that operate on them rather than the notations that express them. This is the point of view I want to adopt in this paper. It is the same idea as underlies the concept of data types in computer science.” (Anderson, 1982, p. 3).
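Anderson’s point—that a representation can be characterized by the operations on it rather than by its notation—is precisely what an abstract data type does in programming. A minimal sketch (class names illustrative): the same stack interface admits two internal “notations,” and client code cannot tell them apart.

```python
class ArrayStack:
    """Stack backed by a contiguous array (a Python list)."""
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class LinkedStack:
    """Stack backed by chained (value, rest) pairs."""
    def __init__(self):
        self._top = None
    def push(self, x):
        self._top = (x, self._top)
    def pop(self):
        x, self._top = self._top
        return x

# Client code sees only push and pop: from the outside, the two internal
# representations are interchangeable -- the type is defined by its operations.
for Stack in (ArrayStack, LinkedStack):
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2 and s.pop() == 1
```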
Simon (1978, p. 8).
See Knuth (1997, section 2.2.1). (Simon referred to the first, 1968 edition.)
Knuth (1997, p. 232); he also speaks of “structural information” (see below, footnote 44).
What we have just described is usually called a “linked list” (see Knuth, 1997, section 2.2.1).
In the 1950s already, Newell, Shaw, and Simon had designed—for their Logic Theory Machine—the first programming language having data manipulation primitives, based on lists, that were far removed from the physical organization of the underlying computers. Their work had a significant impact on the later history of programming languages, especially on what we now call “functional” programming (see Priestley, 2017).
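The contrast drawn in the preceding notes—between a contiguous (array) encoding and a linked encoding of the same sequence—can be sketched in code (illustrative, not from the sources). Both structures carry the same information, but inserting at the front costs constant time for the linked list and requires shifting every element of the array:

```python
class Node:
    """One cell of a singly linked list: a value plus a pointer to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def to_python_list(head):
    """Read off the sequence encoded by a chain of cells."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

# The same sequence, two encodings:
array_form = [2, 3, 4]                   # contiguous cells
linked_form = Node(2, Node(3, Node(4)))  # chained cells

# Inserting 1 at the front:
array_form.insert(0, 1)             # shifts every element: O(n)
linked_form = Node(1, linked_form)  # rewires one pointer: O(1)

# Informationally equivalent, computationally different.
assert array_form == to_python_list(linked_form) == [1, 2, 3, 4]
```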
Simon (1978, p. 8), his emphases.
See also Clark (1992).
For instance, Knuth (1997) locates his entire discussion in the context of computers with memory addresses, going so far as to introduce a particular machine language from the outset.
See, e.g., Simon (1989, pp. 383–384).
Simon (1989, p. 383).
Larkin and Simon (1987, p. 72).
Larkin and Simon (1987, p. 74).
Larkin and Simon (1987, p. 88).
Larkin and Simon (1987, p. 92).
Larkin and Simon (1987, p. 88).
References
Anderson, J. R. (1978). Arguments concerning representations for mental imagery. Psychological Review, 85(4), 249–277.
Anderson, J. R. (1982). Representational types: A Tricode proposal. Technical Report 82-1. Office of Naval Research, June 10, 1982. https://apps.dtic.mil/dtic/tr/fulltext/u2/a116887.pdf.
Clark, A. (1992). Presence of a symbol. Connection Science, 4, 193–205.
Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. Basic Books.
Crowther-Heyck, H. (2005). Herbert A. Simon: The bounds of reason in modern America. The Johns Hopkins University Press.
Dick, S. (2015). Of models and machines: Implementing bounded rationality. Isis, 106(3), 623–634.
Gelernter, H. (1963). Realization of a geometry-theorem proving machine. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 134–152). McGraw-Hill. (Repr. from Information processing. Proceedings of the International Conference on Information Processing, UNESCO, Paris, 15–20 June 1959 (pp. 273–282). UNESCO, Oldenbourg, and Butterworths, 1959.)
Goodman, N. (1968). Languages of art. An approach to a theory of symbols. Bobbs-Merrill.
Haugeland, J. (1978). The nature and plausibility of cognitivism. Behavioral and Brain Sciences, 1(2), 215–260.
Haugeland, J. (Ed.). (1981). Mind design. Philosophy, psychology, artificial intelligence. Bradford Books–MIT Press.
Haugeland, J. (1998). Representational genera. In Having thought. Essays in the metaphysics of mind (pp. 171–206). Harvard University Press. (Repr. from W. Ramsey, S. Stich, & D. Rumelhart (Eds.), Philosophy and connectionist theory (pp. 61–89). Lawrence Erlbaum Associates, 1991.)
Hoare, C. A. R. (1972). Proof of correctness of data representations. Acta Informatica, 1(4), 271–281.
Humphreys, P. (2004). Extending ourselves. Computational science, empiricism, and scientific method. Oxford University Press.
Kirsh, D. (1990). When is information explicitly represented? In P. P. Hanson (Ed.), Information, language and cognition (pp. 340–365). University of British Columbia Press.
Knuth, D. E. (1997). The art of computer programming: Vol. 1. Fundamental algorithms (3rd ed.). Addison-Wesley.
Kosslyn, S. M. (1980). Image and mind. Harvard University Press.
Kulvicki, J. (2010). Knowing with images: Medium and message. Philosophy of Science, 77(2), 295–313.
Kutzler, B. & Lichtenberger, F. (1983). Bibliography on abstract data types. Springer-Verlag.
Larkin, J. H. (1983). The role of problem representation in physics. In D. Gentner & A.L. Stevens (Eds.), Mental models (pp. 75–98). Lawrence Erlbaum Associates.
Larkin, J. H. (1989). Display-based problem solving. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 319–341). Lawrence Erlbaum Associates.
Larkin, J. H. & Simon, H. A. (1987). Why a diagram is (sometimes) worth ten thousand words. Cognitive Science, 11(1), 65–100.
Liskov, B. & Zilles, S. (1974). Programming with abstract data types. ACM SIGPLAN Notices, 9(4), 50–59.
MacKenzie, D. (2001). Mechanizing proof. Computing, risk and trust. MIT Press.
Markowsky, G. (2017). Information theory. In Encyclopedia Britannica. June 16, 2017. https://www.britannica.com/science/information-theory.
McCorduck, P. (2004). Machines who think. A personal inquiry into the history and prospects of artificial intelligence (2nd ed.). A K Peters. (1st ed. W. H. Freeman, 1979).
Mirowski, P. (2002). Machine dreams. Economics becomes a cyborg science. Cambridge University Press.
Newell, A., Shaw, J. C., & Simon, H. A. (1963). Chess playing programs and the problem of complexity. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 39–70). McGraw-Hill. (Repr. from IBM Journal of Research and Development, 2(4), 320–335, 1958).
Newell, A. & Simon, H. A. (1956). The logic theory machine. A complex information processing system. IRE Transactions on Information Theory, 2(3), 61–79.
Newell, A. & Simon, H. A. (1963). Empirical explorations with the Logic Theory Machine. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought (pp. 109–133). McGraw-Hill. (Repr. from Proceedings of the Western joint computer conference (pp. 218–230). Institute of Radio Engineers, 1957).
Newell, A. & Simon, H. A. (1972). Human problem solving. Prentice-Hall.
Newell, A. & Simon, H. A. (1976). Computer science as empirical inquiry. Symbols and search. Communications of the ACM, 19(3), 113–126.
Núñez, R., et al. (2019). What happened to cognitive science? Nature Human Behaviour, 3, 782–791.
Parnas, D. L. (1972a). A technique for software module specification with examples. Communications of the ACM, 15(5), 330–336.
Parnas, D. L. (1972b). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12), 1053–1058.
Parnas, D. L. (2002). The secret history of information hiding. In M. Broy & E. Denert (Eds.), Software pioneers (pp. 399–409). Springer.
Priestley, M. (2011). A science of operations. Machines, logic and the invention of programming. Springer.
Priestley, M. (2017). AI and the origins of the functional programming language style. Minds & Machines, 27(3), 449–472.
Pylyshyn, Z. W. (1973). What the mind’s eye tells the mind’s brain. A critique of mental imagery. Psychological Bulletin, 80(1), 1–24.
Qin, Y. & Simon, H. A. (1995). Imagery and mental models. In J. Glasgow, N. H. Narayanan, & B. Chandrasekaran (Eds.), Diagrammatic reasoning. Cognitive and computational perspectives (pp. 403–434). AAAI Press and MIT Press.
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.
Shepard, R. N. & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171(3972), 701–703.
Simon, H. A. (1947). Administrative behavior. A Study of decision-making processes in administrative organization. The Free Press–Macmillan. (2nd ed. 1957; 3rd ed. Simon (1976); 4th ed. 1997).
Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106(6), 467–482. (Repr. in Simon (1969), chap. 4; 2nd ed., chap. 7; 3rd ed. Simon 1996, chap. 8).
Simon, H. A. (1969). The sciences of the artificial. MIT Press. (2nd ed. 1981; 3rd ed. Simon 1996).
Simon, H. A. (1972). What is visual imagery? An information processing interpretation. In L. W. Gregg (Ed.), Cognition in learning and memory (pp. 183–204). Wiley.
Simon, H. A. (1976). Administrative behavior. A Study of decision-making processes in administrative organization (3rd ed.). The Free Press–Macmillan.
Simon, H. A. (1978). On the forms of mental representation. In C. W. Savage (Ed.), Perception and cognition. Issues in the foundations of psychology (pp. 3–18). University of Minnesota Press.
Simon, H. A. (1989). The scientist as problem solver. In D. Klahr & K. Kotovsky (Eds.), Complex information Processing: The impact of Herbert A. Simon (pp. 375–398). Lawrence Erlbaum Associates.
Simon, H. A. (1991). Models of my life. Basic Books.
Simon, H. A. (1996). The sciences of the artificial (3rd ed.). MIT Press.
Tabachneck-Schijf, H. J. M., Leonardo, A. M., & Simon, H. A. (1997). CaMeRa: A computational model of multiple representations. Cognitive Science, 21(3), 305–350.
Thomas, N. J. T. (2014). Mental imagery. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2014 edition). https://plato.stanford.edu/archives/fall2014/entries/mental-imagery/.
Vorms, M. (2011). Representing with imaginary models: Formats matter. Studies in History and Philosophy of Science Part A, 42(2), 287–295.
Vorms, M. (2012). Formats of representation in scientific theorizing. In P. Humphreys & C. Imbert (Eds.), Models, simulations, and representations (pp. 250–269). Routledge.
Acknowledgements
For comments and discussion, I would like to thank my anonymous reviewers as well as Jeremy Avigad, Gianni Gastaldi, Valeria Giardino, Yacin Hamami, Nicolas Michel, John Mumma, Dirk Schlimm, Gisele Secco, and Henri Stephanou. I would also like to thank the audience at HaPoC 2021, and in particular Simone Martini, for insightful remarks.
Cite this article
Waszek, D. Informational Equivalence but Computational Differences? Herbert Simon on Representations in Scientific Practice. Minds & Machines 34 (Suppl 1), 93–116 (2024). https://doi.org/10.1007/s11023-023-09630-4