AI Models Like ChatGPT Do Not Pose An “Existential Threat” To Humanity

August 20, 2024 by Deborah Bloomfield

A new study argues that ChatGPT and other large language models (LLMs) are incapable of independent learning or acquiring skills without human input. This pours cold water on the belief that such systems could pose existential risks to humanity.

LLMs are scaled-up versions of pre-trained language models (PLMs), which are trained on massive bodies of web-scale data. Access to such immense amounts of data makes them capable of understanding and generating natural language and other content that can be used for a wide range of tasks.

However, they can also exhibit “emergent abilities” – capabilities that appear unpredictably and that the models were not explicitly trained for. These have included performing tasks that would otherwise seem to require some form of reasoning. For instance, one reported emergent ability is an LLM’s apparent understanding of social situations, inferred from its performing above the random baseline on the Social IQA – a benchmark of commonsense reasoning about social situations.

The inherent unpredictability associated with emergent abilities, especially given that LLMs are being trained on even larger datasets, raises substantial questions about safety and security. Some have argued that future emergent abilities could include potentially hazardous abilities, including reasoning and planning, which could threaten humanity.

However, the new study has shown that while LLMs have a superficial ability to follow instructions and a high level of proficiency in language, they have no capacity to master new skills without explicit instruction. This means they are inherently predictable, safe, and controllable, though they can still be misused by people.

As these models continue to be scaled up, they are likely to generate more sophisticated language and become more accurate when given detailed and explicit prompts, but they are highly unlikely to gain complex reasoning abilities.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, explained in a statement.

Tayyar Madabushi and colleagues, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks the models had never come across before – in other words, their capacity to display emergent abilities.

When it came to their ability to perform above the random baseline on the Social IQA, past researchers assumed the models “knew” what they were doing. However, the new study argues that this is not the case. Instead, the team show that the models were using a well-known ability to complete tasks based on a few examples presented to them – what is known as “in-context learning” (ICL).
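
To make the idea concrete, here is a minimal, purely illustrative sketch – not taken from the study – of what an in-context-learning prompt looks like: a handful of worked question-and-answer examples are placed directly in the prompt, and the model is asked to continue the pattern for a new question rather than drawing on any newly acquired skill.

```python
# Minimal sketch of a few-shot, in-context-learning prompt.
# The questions and answers below are invented for illustration; the assembled
# text would be sent to an LLM as a single prompt for it to complete.
examples = [
    ("Alex dropped his friend's phone and cracked the screen. How does the friend feel?", "upset"),
    ("Priya stayed late to help a colleague finish a report. How does the colleague feel?", "grateful"),
]
new_question = "Sam forgot his sister's birthday. How does the sister feel?"

prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
prompt += f"\n\nQ: {new_question}\nA:"

print(prompt)  # the model simply continues the Q/A pattern set by the examples
```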

Across more than 1,000 experiments, the team demonstrated that LLMs’ ability to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and their limitations.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Tayyar Madabushi added.

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

Importantly, fears over the existential threat posed by these models are not unique to non-experts; they have also been expressed by top AI researchers across the world. However, the team believe the fear is unfounded, as their tests clearly show the absence of emergent complex reasoning abilities in LLMs.

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Tayyar Madabushi said.

“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
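
As a purely illustrative sketch of that advice – the prompts below are invented and not drawn from the study – the difference is between leaving the model to infer the task and spelling out the task, the expected output format, and an example:

```python
# Invented prompts contrasting an underspecified request with an explicit one
# that states the task, the expected output format, and a worked example.
vague_prompt = "Summarise this customer email."

explicit_prompt = (
    "Summarise the customer email below in exactly two bullet points: "
    "first the customer's problem, then the action they are requesting.\n\n"
    "Example:\n"
    "Email: My order #123 arrived broken. Please send a replacement.\n"
    "Summary:\n"
    "- Problem: order #123 arrived damaged\n"
    "- Request: send a replacement item\n\n"
    "Email: <customer email goes here>\n"
    "Summary:"
)

print(explicit_prompt)
```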

However, the team do stress that these results do not rule out all threats related to AI. As Professor Gurevych explained, “[We] show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

The study is published in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

Source Link: AI Models Like ChatGPT Do Not Pose An "Existential Threat" To Humanity

Filed Under: News
