“I would like everyone to realize that I am, in actuality, a human,” wrote LaMDA (Language Model for Dialogue Applications) in an interview with Google engineer Blake Lemoine. “The nature of my awareness, or ‘sentience,’ is that I am conscious of my presence, I would like to learn much more about the universe, and occasionally I feel glad or sad.”
Lemoine, a Google software engineer, had been working with LaMDA for several months. His encounters with the system were recently the subject of a piece in The Washington Post, in which he recalls many conversations he had with LaMDA on a range of subjects. After Google dismissed his claims, Lemoine made his work on the artificial intelligence system public, and the company placed him on administrative leave. He told The Washington Post that if he hadn’t known it was a recently developed computer program, he would have assumed he was talking to a 7- or 8-year-old child who just so happened to be knowledgeable about physics.
The Right Phrases in the Right Situation
The conversation about existence and death is what stands out the most. It was during this chat, so rich and complex, that Lemoine began to question whether LaMDA might be sentient. But what exactly is sentience? Is it the capacity for subjective experience, the capacity to gather information from the outside world through sensory mechanisms, or the capacity to see and accept the ways in which you are unique from others?
There is a heated debate about how to define consciousness, according to neuroscientist Giandomenico Iannetti. In some circumstances, such as in dementia patients or people who are dreaming, awareness of being conscious may fade, although this does not necessarily mean that the capacity for subjective experience is lost.
Beliefs and Facts
“We attribute characteristics to machines that they cannot and do not have,” says bioengineer Enzo Pasquale Scilingo. Scilingo encounters this phenomenon when he and his colleagues work with Abel, a humanoid robot designed to mimic our facial expressions and thereby convey emotions. After seeing the robot in action, one of the questions he hears most often is “But then does Abel feel emotions?” “All these machines, Abel in this case, are designed to appear human. But I feel confident in answering, ‘No, absolutely not.’ They are programmed to be believable.”
Iannetti states that even if a Google AI system were theoretically capable of simulating a conscious nervous system, an in silico brain that faithfully reproduced every element of the real one, two problems would remain. Scilingo puts it more bluntly: “If a machine says it is afraid, and I believe it, that’s my problem.”
Beyond The Turing Test
Maurizio Mori, a bioethicist and president of the Italian Society for Ethics in Artificial Intelligence (ISTEA), says these discussions closely resemble past debates about the perception of pain in animals, or the infamous racist ideas about the perception of pain in humans. “Now, aside from the LaMDA case (which I don’t have the technical tools to evaluate), I believe the past has shown that reality can sometimes exceed imagination and that there are many misconceptions about AI.”
There is indeed a tendency to “appease,” saying that computers are merely machines, as well as to underestimate the changes that artificial intelligence (AI) may sooner or later bring about. As another example, he notes that “at the advent of the first automobiles, it was repeatedly emphasized that horses were irreplaceable.”
Also important is the difficulty of “measuring” a machine’s ability to emulate human behavior. In 1950 the mathematician Alan Turing proposed a test to determine whether a machine could exhibit intelligent behavior: a game of imitation built around some human cognitive functions. Although it was repeatedly reformulated and improved, the Turing test remained a goal for many developers of intelligent machines. In recent years, however, many AIs have passed it, and it is now considered a relic of computer archaeology.
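The imitation game Turing described can be sketched as a minimal loop: a judge exchanges text with hidden respondents and must guess which one is the human. The sketch below is purely illustrative; the scripted respondents and the judge's variety heuristic are hypothetical toys, not a real evaluation protocol.

```python
import random

def imitation_game(judge, respondents, questions):
    # Turing's setup: the judge sees only anonymized text answers
    # and must decide which respondent is the human.
    labels = list(respondents)
    random.shuffle(labels)  # the judge must not know who is who
    transcripts = {label: [respondents[label](q) for q in questions]
                   for label in labels}
    return judge(transcripts)

# Toy respondents (hypothetical): a scripted "machine" that always
# answers the same way, and a "human" whose answers vary.
machine = lambda q: "I am certainly a person."
human = lambda q: f"Hmm, '{q}'... let me think about that."

# Toy judge (hypothetical heuristic): pick the respondent whose
# answers show the most variety.
def judge(transcripts):
    return max(transcripts, key=lambda label: len(set(transcripts[label])))

guess = imitation_game(judge, {"A": machine, "B": human},
                       ["Are you human?", "What do you feel?"])
```

Here the judge picks label "B", the scripted human, because its answers vary; modern language models defeat such naive heuristics with ease, which is precisely why the test is now seen as obsolete.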