Should We Be Worried About AGI?

October 18, 2024 by Deborah Bloomfield

The concept of the singularity, the moment when machines become smarter than humans, has been discussed and debated for decades. But now that machine learning algorithms have produced software that can easily pass the Turing test, the question has become more pressing. How far away are we from an artificial general intelligence (AGI), and what are the risks?



Current artificial intelligence (AI) is largely based on large language models, or LLMs. These text-based AIs are not really thinking about an answer or doing research – they are doing probability and statistics. Using their training data, they work out the most likely next word (or sometimes just the next letter) given the text that came before. This can produce very reasonable results, but also wrong and dangerous ones, along with the occasional hilarious response that a human would never give. As we said, the machine is not thinking.
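
To make that concrete, here is a minimal, hypothetical sketch of next-word prediction using a simple bigram (word-pair) count model. Real LLMs use neural networks over "tokens" rather than raw word counts, and the training sentence below is invented for illustration, but the underlying idea – pick the statistically most likely continuation – is the same.

```python
from collections import Counter, defaultdict

# Toy training data, invented purely for illustration.
training_text = "the cat sat on the mat the cat ate the fish"

# Count how often each word follows each other word (a simple bigram model).
follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the training data."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat" – it follows "the" more often than "mat" or "fish"
print(most_likely_next("cat"))  # "sat" – ties go to whichever pair was counted first
```

Nothing here understands the sentence; it only counts what tends to come next. Scaled up enormously, that is the statistical trick described above.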

These models are specialized for specific tasks, each requiring specific training data, but in the field there is a growing expectation that AGIs are coming. Such algorithms would perform many tasks, not just a single one, and would do them on par with humans. While artificial consciousness might still be a long way off, the development of AGI is seen as the stepping stone towards it. In some corners of the industry, it has been argued that we are merely years away.

“In the next decade or two [it] seems likely an individual computer will have roughly the computing power of a human brain by 2029, 2030. Then you add another 10/15 years on that, an individual computer would have roughly the computing power of all of human society,” Ben Goertzel – who founded SingularityNET, which aims to create a “decentralized, democratic, inclusive and beneficial Artificial General Intelligence” – said in a talk at the Beneficial AGI Summit 2024.

There are two immediate questions arising from this belief. The first question is, how correct is this assessment? Detractors of today’s AI have argued that announcing imminent AGI is just a way to hype up current AIs and inflate the AI bubble even more before the eventual burst. Newly minted Nobel Laureate and “founding father of AI” Geoffrey Hinton believes that we are less than 20 years from AGI. Yoshua Bengio, who shared the Turing Award with Hinton and Yann LeCun in 2018, instead argues that we do not know how long it will take to get there.  

The second question is about the dangers. Hinton quit Google last year out of concern over the possible dangers of AI, and one survey found that a third of AI researchers believe AI could cause catastrophic outcomes. Still, we shouldn't assume the inevitability of some sort of Terminator-esque future, with killing machines hunting humans. The dangers might be a lot more mundane.

Already, AI models have faced allegations of being trained using stolen art. Earlier this year, OpenAI begged the British Parliament to allow it to use copyrighted works (for free) because it would be impossible to train LLMs (and make money) without accessing them. There are also environmental risks. Current AIs are associated with an astounding use of water and an “alarming” carbon footprint. More powerful AIs will require more resources in a world with a rapidly changing climate.

Another threat is the use – and more importantly the misuse – of AI to create false material with the intention of spreading misinformation. Creating fake images for propaganda (or other nefarious ends) is as easy as pie, and while there are ways to spot these fakes today, doing so will only get harder and harder.

Rules and regulations around AI have not been massively forthcoming, so concerns about the here and now are important. Still, some studies argue that we shouldn't be overly worried: the more bad AI output there is on the web, the more of it ends up in the training data for new AI, which then produces even worse material, and so on until AI stops being useful. We might not be close to creating true artificial intelligence, but we might be close to creating artificial stupidity.
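
That feedback loop, often called "model collapse", can be illustrated with a deliberately crude toy simulation. The sketch below is an assumption-laden illustration, not a model of any real training pipeline: the 0.8 "narrowing" factor and the half-and-half mixing are arbitrary choices, and standard deviation stands in for the "diversity" of the training data.

```python
import random

random.seed(0)

# Stand-in for a diverse corpus of human writing: values spread widely around 0.
web = [random.gauss(0, 1) for _ in range(10_000)]

def spread(data):
    """Standard deviation, used here as a crude proxy for the data's diversity."""
    mean = sum(data) / len(data)
    return (sum((x - mean) ** 2 for x in data) / len(data)) ** 0.5

for generation in range(1, 6):
    diversity = spread(web)
    print(f"generation {generation}: training-data diversity = {diversity:.2f}")
    # "Generate": this toy model's output is assumed to be a bit less varied
    # than the data it was trained on.
    synthetic = [random.gauss(0, diversity * 0.8) for _ in range(5_000)]
    # The next model trains on a web mixing older text with the machine-made text.
    web = random.sample(web, 5_000) + synthetic
```

In this toy setup the printed "diversity" shrinks a little with every generation, mirroring the argument that AI trained on AI output drifts towards blander, less useful results.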
