
Medical Market Report


“Nightmare Scenario”: Godfather Of AI Quits Google To Warn About Dangers Of AI

May 2, 2023 by Deborah Bloomfield

A man often referred to as the “Godfather” of artificial intelligence (AI) has quit his job at Google this week over concerns about where AI is headed.

Geoffrey Hinton is best known for his work on deep learning and neural networks, for which he won the $1 million Turing Award in 2019. At the time he told CBC that he was pleased to have won because “for a long time, neural networks were regarded as flaky in computer science. This was an acceptance that it’s not actually flaky.”


Since then, with the rise of AI programs like ChatGPT and AI image generators, among many others, these methods have gone mainstream. Hinton, who began working at Google in 2013 after the tech giant acquired his company, now believes the technology could pose dangers to society and has quit the company so that he can talk about them openly, the New York Times reports.

In the short term, Hinton is worried that AI-generated text, images, and videos could leave people unable “to know what is true anymore”, and that the technology could disrupt the job market. In the longer term, he has much bigger concerns about AI eclipsing human intelligence and learning unexpected behaviors.

“Right now, they’re not more intelligent than us, as far as I can tell,” he told the BBC. “But I think they soon may be.”

Hinton, 75, told the BBC that AI is currently behind humans in terms of reasoning, though it does already do “simple” reasoning. In terms of general knowledge, however, he believes it is already far ahead of humans.


“All these copies can learn separately but share their knowledge instantly,” he told the BBC. “So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it.”

One concern, he told the New York Times, is that “bad actors” could use AI for their own nefarious ends. Unlike threats such as nuclear weapons, which require infrastructure and monitored substances like uranium, AI research by such actors would be much more difficult to monitor.

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he told the BBC. “You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

Sub-goals such as “I need to get more power” could lead to problems we haven’t yet imagined, let alone worked out how to deal with. As Nick Bostrom, the AI-focused philosopher and director of the Future of Humanity Institute at the University of Oxford, explained in 2014, even a simple instruction to maximize the number of paperclips in the world could give rise to unintended sub-goals with terrible consequences for the AI’s creators.


“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off,” he explained to HuffPost in 2014. “Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Hinton, who told the BBC that his age played a part in his decision to retire, believes the potential upsides of AI are huge, but that the technology needs regulating, especially given the current rate of progress in the field. Though he left in order to speak about its potential pitfalls, he believes Google has so far acted responsibly in the field. With a global race underway to develop the technology, however, it’s probable that, left unregulated, not everyone developing it will act as responsibly, and humanity will have to deal with the consequences.



Source Link: "Nightmare Scenario": Godfather Of AI Quits Google To Warn About Dangers Of AI

