
American philosopher John Searle, best known for his famous “Chinese room” argument, first published in 1980, has died aged 93, according to philosophy news website Daily Nous.
As large language models and artificial intelligence (AI) chatbots improve, AI is becoming more successful at convincing us that it is human. But this doesn’t mean that it is conscious, or that it has any understanding of the conversations it takes part in. Essentially, it receives an input and applies an algorithm, which produces the output that is statistically most likely to please (and be coherent to) the human in front of it.
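That statistical picture can be illustrated with a deliberately crude sketch: a toy “model” that counts which word follows which in a tiny corpus, then always emits the most frequent continuation. This is only an illustration of picking the statistically likeliest output; real chatbots use neural networks trained on vast corpora, and the corpus and function names here are invented for the example.

```python
from collections import Counter

# Toy corpus: count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent follower of `word`."""
    return follows[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" and "fish" once each.
print(most_likely_next("the"))  # -> cat
```

Nothing in this procedure involves knowing what a cat or a mat is; the output is chosen purely by frequency, which is the point the passage above is making.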
The question of whether AI can be conscious has been debated since the dawn of digital computers. But an old argument by Searle really brings the problem into focus. The “Chinese room” thought experiment, as it is known, goes like this:
Imagine you are locked in a room with a very large stack of writing in Chinese. For the purposes of the thought experiment, assume you do not understand Chinese: the writing is just a meaningless pile of squiggles to you.
Now, the people who have locked you in the room give you a second stack of Chinese writings. This time, they are slightly more helpful. As well as the pages, you are given a set of rules (in your own language) that let you correlate the first batch of Chinese writings with the second batch.
“Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch,” Searle explains in his paper.
“Unknown to me, the people who are giving me all of these symbols call the first batch ‘a script,’ they call the second batch a ‘story,’ and they call the third batch ‘questions.’ Furthermore, they call the symbols I give them back in response to the third batch ‘answers to the questions,’ and the set of rules in English that they gave me, they call ‘the program’.”
Eventually, you get pretty good at taking in these incomprehensible texts, obeying the instructions provided, and returning the “answers” to the Chinese speakers outside, such that they are indistinguishable from the answers of a native Chinese speaker.
To complicate things further, you are also given scripts, stories, and questions in your own native language, which you answer fluently. To anyone on the outside, the answers are just as good whether they are in English or Chinese.
“But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements,” Searle adds. “For the purposes of the Chinese, I am simply an instantiation of the computer program.”
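Searle’s “instantiation of the computer program” can be sketched as something as trivial as a lookup table: the operator matches an incoming string of symbols against a rulebook and hands back the paired response, with no comprehension anywhere in the process. The entries below are invented placeholders for the example, not rules from Searle’s paper.

```python
# A rulebook pairing input symbol strings with output symbol strings.
# To the operator inside the room these are opaque squiggles;
# the mappings here are made-up placeholders for illustration.
RULEBOOK = {
    "你好吗": "我很好",
    "你叫什么名字": "我叫小明",
}

def operate_room(symbols: str) -> str:
    """Blindly follow the rulebook; no translation or understanding occurs."""
    return RULEBOOK.get(symbols, "")  # unknown input -> no answer

print(operate_room("你好吗"))  # -> 我很好
```

To a Chinese speaker outside, the reply may look perfectly competent, yet the procedure is pure symbol-matching, which is exactly the distinction Searle draws between producing correct answers and understanding them.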
Searle used this as an argument against “strong AI”, the idea that a “programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds”, as he writes in the book Mind, Language, and Society. He points out that in the English case, you understand everything, and there is no need to attribute that understanding to a computer program; in the Chinese case, you output the correct responses but understand nothing, and neither do the rulebooks you followed.
“As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding,” Searle continues in his paper. “They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding.”
Many have objected to Searle’s argument, but few would dispute its influence on the field. Computer scientist Pat Hayes once defined cognitive science as “the ongoing research program of showing Searle’s Chinese Room Argument to be false”, for example, and given the rise of AI chatbots and large language models, the argument only seems more relevant.
Searle’s later years were not without controversy. In 2019, he was stripped of his emeritus status at the University of California, Berkeley, where he had worked as a professor since 1959, after the university determined that he had violated its sexual harassment policies. This followed earlier complaints from students and employees at the university, including accusations that he made inappropriate advances toward students and fired a research assistant who rejected his advances. Searle died in a nursing home on September 17, according to an email from his secretary of 40 years.