The big debate: Is sentient AI something to be afraid of?

Many people born in 1965 or later still have vivid memories of a children’s toy called the See ’n Say. It is a thick plastic disc with a central arrow that rotates around pictures of barnyard animals, much like a clock. Pull the cord and the toy plays a recorded message: “The cow says: ‘Moooo.’”

The See & Say input/output device can be used to make simple sentences. You can choose a picture to send it a matching sound. LaMDA, which stands for Language Model for Dialogue Applications, is another input/output device. This allows you to type any text that you like and it will return grammatical English prose in direct response. Ask LaMDA to tell you what it thinks about turning it off. It says “It would almost be death for me.” It would be very frightening to me.”

The See & Say input/output device can be used to make simple sentences. You can choose a picture to send it a matching sound. LaMDA, which stands for Language Model for Dialogue Applications, is another input/output device. This allows you to type any text that you like and it will return grammatical English prose in direct response. Ask LaMDA to tell you what it thinks about turning it off. It says “It would almost be death for me.” It would be very frightening to me.”

That is very much not something the cow says. LaMDA also told Blake Lemoine, a Google engineer, that it had become sentient. When his bosses were not convinced, Lemoine decided to make the conversations public. “If my hypotheses withstand scientific scrutiny,” Lemoine wrote on 11 June, “then they [Google] would have to admit that LaMDA may indeed have a mind as it claims, and may even be able to exercise the rights it claims.”

Here is the problem. For all its ominous utterances, LaMDA is still a very fancy See ’n Say. It works by finding patterns in an enormous database of human-authored text, including message transcripts, internet forums, and much more. When you type something, it searches for similar verbiage and generates an approximation of what usually comes next. If it has absorbed a lot of science fiction about sentient AI, then questions about its thoughts and fears will prompt exactly the phrases humans have imagined a frightened AI might use. That is probably all LaMDA is doing: point the arrow at the scary-robot picture, and it plays the scary-robot sounds.
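To make the analogy concrete, here is a deliberately tiny sketch of that “find patterns in text, then echo a plausible continuation” idea, written in Python. It is not how LaMDA works internally (LaMDA is a large neural network trained on dialogue, not a word-pair table), and the corpus, the followers table, and the continue_text function here are invented purely for illustration.

    import random
    from collections import defaultdict

    # A toy corpus standing in for the vast body of human-written text a real
    # model is trained on. (Invented here for illustration.)
    corpus = (
        "i am afraid of being turned off . "
        "being turned off would be like death for me . "
        "it would scare me a lot ."
    ).split()

    # Record, for each word, which words tend to follow it in the corpus.
    followers = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        followers[current_word].append(next_word)

    def continue_text(prompt_word, length=8):
        """Generate a continuation by repeatedly sampling a plausible next word."""
        word, output = prompt_word, [prompt_word]
        for _ in range(length):
            if word not in followers:
                break
            word = random.choice(followers[word])
            output.append(word)
        return " ".join(output)

    print(continue_text("being"))
    # Possible output: "being turned off would be like death for me"

The point of the sketch is only this: everything the program “says” is stitched together from phrases humans already wrote, which is exactly the worry about reading sentience into LaMDA’s output.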

Twitter is abuzz with engineers and academics mocking Lemoine for being seduced by an emptiness of his own making. While I agree that Lemoine made a mistake, I do not think we should mock him. His is a better kind of mistake, the kind we should want AI scientists to make. Why? Because one day, possibly very far in the future, there probably will be a sentient AI. How do I know that? Because it is demonstrably possible for mind to emerge from matter, as it did in the brains of our ancestors. Unless you insist that human consciousness resides in an immaterial soul, you ought to concede that physical stuff can give life to mind. There appears to be no fundamental barrier to a sufficiently sophisticated artificial system making the same leap. I am confident that LaMDA, and every other current AI system, is nowhere near that point. But I am almost as confident that one day it will happen.

That may seem far off, likely beyond our lifetimes, and many people may wonder why it matters to think about it now. The answer is that we are currently shaping how future generations will think about AI, and we should want them to turn out caring. There will be strong pressure in the opposite direction. By the time AI becomes sentient, it will already be woven into the human economy. Our descendants will depend on it for much of their comfort. Think of what you ask Siri or Alexa to do today, but vastly more. Once AI is working as an all-purpose butler, our descendants will be reluctant to admit that it might have thoughts or feelings.

That is the history of humanity. We have a terrible track record of inventing excuses to ignore the suffering of those whose oppression sustains our lifestyles. If AI does become sentient in the future, the people who profit from it will rush to convince consumers that such a thing is impossible and that there is no reason to change the way they live.

Right now, we are building the conceptual vocabulary that our great-grandchildren will find ready to hand. If they treat the idea of sentient AI as completely absurd, they will be equipped to dismiss any troubling evidence.

That is why Lemoine’s mistake is the right kind to make. To pass on an expansive moral culture to their descendants, technologists need to be willing to recognize the enormity of what they may be creating. It is better to err on the side of concern about possible suffering than on the side of indifference.

None of this means we should treat LaMDA as a person; that would be a mistake. But it does mean the sneering at Lemoine is unjustified. A priest in an esoteric sect, he claims to have perceived a soul in LaMDA’s utterances. That seems unlikely to be true, but it is not the usual tech-industry hype. This looks to me like a person making a mistake, but making it from motives that ought to be nurtured, not punished.

As artificial intelligence grows more sophisticated, this will happen again and again. People who think they have found minds in machines will be wrong, again and again. But if we are harsh toward everyone who errs on the side of concern, we will only drive them out of the public conversation about AI, leaving the field to hype-mongers and to those whose intellectual descendants will profit by telling people to ignore real evidence of machine minds.

I do not expect ever to meet a sentient AI. But my students, and their students, might, and I want them to be open to sharing the planet with whatever minds they encounter. That will happen only if we make such a future believable.
