Internet Encyclopedia of Philosophy


The Chinese room argument is a thought experiment devised by John Searle. It is one of the best-known and most widely credited counters to claims of artificial intelligence (AI), that is, to claims that computers do or at least can (or someday might) think. According to Searle’s original presentation, the argument rests on two key claims: brains cause minds, and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a). Searle contrasts strong AI with “weak AI.” According to weak AI, computers merely simulate thought: their seeming understanding is not real understanding (just as-if); their seeming calculation is only as-if calculation; and so forth. Nevertheless, computer simulation is useful for studying the mind, just as it is for studying the weather and other phenomena.

Against “strong AI,” Searle (1980a) asks you to imagine yourself a monolingual English speaker “locked in a room, and given a large batch of Chinese writing” plus “a second batch of Chinese script” and “a set of rules” in English “for correlating the second batch with the first batch.” The rules “correlate one set of formal symbols with another set of formal symbols”; “formal” (or “syntactic”) meaning you “can identify the symbols entirely by their shapes.” A third batch of Chinese symbols and more instructions in English enable you “to correlate elements of this third batch with elements of the first two batches” and instruct you, thereby, “to give back certain sorts of Chinese symbols with certain sorts of shapes in response.” Those giving you the symbols “call the first batch ‘a script’” (a data structure with natural-language-processing applications), “they call the second batch ‘a story’,” and “they call the third batch ‘questions’”; the symbols you give back “they call . . . ‘answers to the questions’”; “the set of rules in English . . . they call ‘the program’”: you yourself know none of this. Nevertheless, you “get so good at following the instructions” that “from the point of view of someone outside the room” your responses are “absolutely indistinguishable from those of Chinese speakers.” Just by looking at your answers, nobody can tell you “don’t speak a word of Chinese.” Producing answers “by manipulating uninterpreted formal symbols,” it seems “[a]s far as the Chinese is concerned,” you “simply behave like a computer”; specifically, like a computer running Schank and Abelson’s (1977) “Script Applier Mechanism” story-understanding program (SAM), which Searle takes as his example.
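To make the purely syntactic character of the room’s operation concrete, here is a minimal sketch in Python. It is not Searle’s own formulation and bears no resemblance to Schank and Abelson’s actual SAM program; every symbol, rule pairing, and name below is invented for illustration. The only operation it performs is matching the shape of an input string against a rule book and copying out the paired response:

```python
# A toy "Chinese room": the operator matches uninterpreted symbol strings
# against a rule book and hands back the paired response. The rules are
# purely shape-based (syntactic); nothing in the program models meaning.
# All symbols and rule pairings here are invented for illustration only.

# Hypothetical rule book: question shapes paired with answer shapes.
# The operator never knows what any of these strings mean.
RULE_BOOK = {
    "你吃了吗": "吃了",
    "汉堡好吃吗": "好吃",
}

def operator(symbols: str) -> str:
    """Return the answer shape paired with the given question shape.

    The operator identifies symbols "entirely by their shapes":
    string equality is the only operation, so no understanding
    of Chinese (or of anything else) is involved.
    """
    # A default shape to hand back when no rule matches the input.
    return RULE_BOOK.get(symbols, "不知道")

if __name__ == "__main__":
    for question in ["你吃了吗", "汉堡好吃吗"]:
        print(question, "->", operator(question))
```

The sketch makes vivid what the thought experiment turns on: the lookup operates on string shapes alone, so replacing every Chinese character with an arbitrary meaningless glyph would change nothing about how the program runs. That is just Searle’s point that manipulating uninterpreted formal symbols, however fluently, does not by itself amount to understanding.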
