“Nightmare Scenario”: Godfather Of AI Quits Google To Warn About Dangers Of AI

May 2, 2023 by Deborah Bloomfield

A man often referred to as the “Godfather” of artificial intelligence (AI) has quit his job at Google this week over concerns about where AI is headed.

Geoffrey Hinton is best known for his work on deep learning and neural networks, for which he won the $1 million Turing Award in 2019. At the time he told CBC that he was pleased to have won because “for a long time, neural networks were regarded as flaky in computer science. This was an acceptance that it’s not actually flaky.”

Since then, with the rise of AI programs like ChatGPT and AI image generators among many others, deep learning has gone mainstream. Hinton, who began working at Google in 2013 after the tech giant acquired his company, now believes that the technology could pose dangers to society and has quit the company in order to be able to talk about it openly, the New York Times reports.

In the short term, Hinton is worried about the possibility of AI-generated text, images, and videos leading to people not being able “to know what is true anymore”, as well as potential disruption to the jobs market. In the longer term, he has much bigger concerns about AI eclipsing human intelligence and AI learning unexpected behaviors.

“Right now, they’re not more intelligent than us, as far as I can tell,” he told the BBC. “But I think they soon may be.”

Hinton, 75, told the BBC that AI is currently behind humans in terms of reasoning, though it does already do “simple” reasoning. In terms of general knowledge, however, he believes it is already far ahead of humans.

Part of that advantage, he explained, comes from the fact that many identical copies of the same model can run on different hardware. “All these copies can learn separately but share their knowledge instantly,” he told the BBC. “So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it.”

One concern, he told the New York Times, is that “bad actors” could use AI for their own nefarious ends. Unlike nuclear weapons, which require specialized infrastructure and monitored substances such as uranium, AI research by such actors would be far more difficult to track.

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he told the BBC. “You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

Sub-goals such as “I need to get more power” could lead to problems we haven’t yet imagined, let alone worked out how to deal with. As philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, explained in 2014, even a simple instruction to maximize the number of paperclips in the world could generate unintended sub-goals with terrible consequences for the AI’s creator.

“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off,” he explained to HuffPost in 2014. “Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Hinton, who told the BBC that his age played a part in his decision to retire, believes the potential upsides of AI are huge, but that the technology needs regulating, especially given the current rate of progress in the field. Though he left in order to speak freely about its potential pitfalls, he believes Google has so far acted responsibly. With a global race underway to develop the technology, it is probable that, left unregulated, not every developer will act so responsibly, and humanity will have to deal with the consequences.
