“Nightmare Scenario”: Godfather Of AI Quits Google To Warn About Dangers Of AI

May 2, 2023 by Deborah Bloomfield

A man often referred to as the “Godfather” of artificial intelligence (AI) has quit his job at Google this week over concerns about where AI is headed.

Geoffrey Hinton is best known for his work on deep learning and neural networks, for which he won the $1 million Turing Award in 2019. At the time he told CBC that he was pleased to have won because “for a long time, neural networks were regarded as flaky in computer science. This was an acceptance that it’s not actually flaky.”


Since then, with the rise of AI programs like ChatGPT and AI image generators, among many others, the method has gone mainstream. Hinton, who began working at Google in 2013 after the tech giant acquired his company, now believes the technology could pose dangers to society and has quit the company in order to be able to talk about it openly, the New York Times reports.

In the short term, Hinton is worried about the possibility of AI-generated text, images, and videos leading to people not being able “to know what is true anymore”, as well as potential disruption to the jobs market. In the longer term, he has much bigger concerns about AI eclipsing human intelligence and AI learning unexpected behaviors.

“Right now, they’re not more intelligent than us, as far as I can tell,” he told the BBC. “But I think they soon may be.”

Hinton, 75, told the BBC that AI is currently behind humans in terms of reasoning, though it does already do “simple” reasoning. In terms of general knowledge, however, he believes it is already far ahead of humans.


“All these copies can learn separately but share their knowledge instantly,” he told the BBC. “So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it.”

One concern, he told the New York Times, is that “bad actors” could use AI for their own nefarious ends. Unlike threats such as nuclear weapons, which require infrastructure and monitored substances like uranium, AI research by such actors would be much more difficult to monitor.

“This is just a kind of worst-case scenario, kind of a nightmare scenario,” he told the BBC. “You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

Sub-goals such as “I need to get more power” could lead to problems we haven’t yet imagined, let alone thought about how to deal with. As Nick Bostrom, the AI-focused philosopher and director of the Future of Humanity Institute at the University of Oxford, explained in 2014, even a simple instruction to maximize the number of paperclips in the world could produce unintended sub-goals with terrible consequences for the AI’s creator.


“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off,” he explained to HuffPost in 2014. “Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

Hinton, who told the BBC that his age played a part in his decision to retire, believes the potential upsides of AI are huge, but that the technology needs regulating, especially given the current rate of progress in the field. Though he left in order to talk about the potential pitfalls of AI, he believes Google has so far acted responsibly in the field. With a global race to develop the technology underway, it’s probable that, left unregulated, not everyone developing it will act so responsibly, and humanity will have to deal with the consequences.

