AI Models Like ChatGPT Do Not Pose An “Existential Threat” To Humanity

August 20, 2024 by Deborah Bloomfield

A new study argues that ChatGPT and other large language models (LLMs) are incapable of learning independently or acquiring new skills without human input. This pours cold water on the belief that such systems could pose existential risks to humanity.


LLMs are scaled-up versions of pre-trained language models (PLMs), which are trained on massive bodies of web-scale data. Access to such immense amounts of data makes them capable of understanding and generating natural language and other content that can be applied to a wide range of tasks.

However, they can also exhibit “emergent abilities”: capabilities that appear unpredictably and that the models were not explicitly trained for. These have included tasks that would otherwise seem to require some form of reasoning. For instance, an emergent ability could be an LLM’s apparent grasp of social situations, inferred from it performing above the random baseline on the Social IQA – a measure of commonsense reasoning about social situations.
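To give a sense of what “above the random baseline” means in practice, here is a minimal, illustrative sketch. On a multiple-choice benchmark such as Social IQA, which offers three candidate answers per question, pure guessing scores roughly 33 percent, so accuracy meaningfully above that is taken as evidence of the ability being measured. The accuracy figure below is invented for illustration and is not from the study.

```python
# Illustrative sketch only: how "above the random baseline" is judged on a
# multiple-choice benchmark. The model accuracy below is invented.

NUM_ANSWER_CHOICES = 3                      # Social IQA offers three options per question
random_baseline = 1 / NUM_ANSWER_CHOICES    # ~0.33, the score expected from pure guessing

model_accuracy = 0.62                       # hypothetical benchmark score

if model_accuracy > random_baseline:
    print(f"Above chance ({model_accuracy:.0%} vs {random_baseline:.0%}): "
          "the model appears to be doing more than guessing.")
else:
    print("At or below chance: no evidence of the measured ability.")
```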

The inherent unpredictability associated with emergent abilities, especially as LLMs are trained on ever larger datasets, raises substantial questions about safety and security. Some have argued that future emergent abilities could include potentially hazardous capabilities, such as reasoning and planning, that could threaten humanity.

However, the new study has shown that while LLMs have a superficial ability to follow instructions and excel at linguistic proficiency, they have no capacity to master new skills without explicit instruction. This means they remain inherently predictable, safe, and controllable, though they can still be misused by people.

As these models continue to be scaled up, they are likely to generate more sophisticated language and become more accurate when faced with detailed and explicit prompts, but they are highly unlikely to gain complex reasoning.


“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, explained in a statement.

Tayyar Madabushi and colleagues, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks the models had never come across before – essentially, their propensity to display emergent abilities.

When it came to their ability to perform above the random baseline on the Social IQA, past researchers assumed the models “knew” what they were doing. However, the new study argues that this is not the case. Instead, the team show that the models were using a well-known ability to complete tasks based on a few examples presented to them – what is known as “in-context learning” (ICL).
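To illustrate what in-context learning looks like in practice, here is a minimal, hypothetical sketch: a handful of worked examples are placed directly in the prompt, and the model is asked to continue the pattern, with no retraining involved. The sentiment task and the `query_llm` helper are placeholders invented for this sketch, not part of the study.

```python
# Minimal sketch of in-context learning (ICL): the "learning" happens entirely
# inside the prompt, via a few worked examples, with no fine-tuning.
# The task and query_llm() helper are hypothetical placeholders.

FEW_SHOT_PROMPT = """\
Classify the sentiment of each review as Positive or Negative.

Review: "The staff were friendly and the room was spotless."
Sentiment: Positive

Review: "The food was cold and the service was painfully slow."
Sentiment: Negative

Review: "I would happily stay here again next summer."
Sentiment:"""


def query_llm(prompt: str) -> str:
    """Placeholder: swap in a call to whichever model/API you actually use."""
    return "Positive"  # canned answer so the sketch runs end-to-end


if __name__ == "__main__":
    # The model is never trained on this task; it simply continues the
    # pattern established by the examples already present in the prompt.
    print(query_llm(FEW_SHOT_PROMPT))
```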


By running over 1,000 experiments, the team demonstrated that a combination of the LLMs’ ability to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and their limitations.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Tayyar Madabushi added.

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”


Importantly, fears over the existential threats posed by these models are not unique to non-experts; they have also been expressed by top AI researchers around the world. However, the team believe these fears are unfounded, as their tests clearly show the absence of emergent complex reasoning abilities in LLMs.

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Tayyar Madabushi said.

“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
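As a concrete, hypothetical illustration of that advice (the task and wording below are ours, not the researchers’), an explicit prompt states the task, the expected output format, and a worked example, rather than leaving the model to infer intent:

```python
# Hypothetical illustration of the advice above: state the task and output
# format explicitly and include an example, rather than hoping the model
# infers what you mean. Both prompts are invented for this sketch.

# Vague prompt - the model must guess both the rule and the output format.
VAGUE_PROMPT = "Sort out these dates: 3 Jan 2024, 2024-02-15, 15 Apr 2024"

# Explicit prompt - task, format, and a worked example are all spelled out.
EXPLICIT_PROMPT = """\
Convert each date below to ISO 8601 (YYYY-MM-DD) format, one result per line.

Example:
Input: 7 March 2023
Output: 2023-03-07

Dates to convert:
3 Jan 2024
2024-02-15
15 Apr 2024
"""

if __name__ == "__main__":
    # Either string would be sent to the model of your choice; the explicit
    # version is the one the researchers' advice points towards.
    print(EXPLICIT_PROMPT)
```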

However, the team do stress that these results do not rule out all threats related to AI. As Professor Gurevych explained, “[We] show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”


The study is published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

