
Reclaiming AI as a Theoretical Tool for Cognitive Science

Submitted by Style Pass, 2024-10-05

The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems, and the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science. In reclaiming this older idea of AI, however, it is important not to repeat conceptual mistakes of the past (and present) that brought us to where we are today.

The term ‘Artificial Intelligence’ (AI) means many things to many people (see Table 1 for different types of meanings). Sometimes, the term ‘AI’ is used to refer to the idea that intelligence can be recreated in artificial systems (Russell & Norvig, 2010). Other times, it refers to an artificial system believed to implement some form of intelligence (i.e., ‘an AI’). Some claim that such an AI can only implement domain-specific intelligence. An example could be an AI playing chess, where there is a fixed problem space defined by a fixed board, a limited number of pieces, and a small set of well-defined rules. Such a domain-specific AI can play chess, but it cannot do the dishes or perform medical diagnosis. Others believe that domain-general AIs—also known as artificial general intelligence (AGI)—can exist (Bubeck et al., 2023; cf. Birhane, 2021). This domain generality can be seen as a key property of human intelligence, so that AGI would be human-level AI, able to incorporate arbitrary beliefs to solve arbitrary new problems. A person can not only play a game of chess, but also reason about why their opponent has angrily left the room, and later draw on that event when writing a novel. In the history of both cognitive science and AI, it is generally understood that this domain generality is what makes human-like cognition so hard to explain, model, and replicate computationally (Fodor, 2000; Pylyshyn, 1987; van Rooij et al., 2019).
