AI Models Like ChatGPT Do Not Pose An “Existential Threat” To Humanity

August 20, 2024 by Deborah Bloomfield

A new study argues that ChatGPT and other large language models (LLMs) are incapable of independent learning or acquiring skills without human input. This pours cold water on the belief that such systems could pose existential risks to humanity.

LLMs are scaled-up versions of pre-trained language models (PLMs), trained on massive bodies of web-scale data. Access to such immense amounts of data makes them capable of understanding and generating natural language and other content, which can be applied to a wide range of tasks.

However, they can also exhibit “emergent abilities”: unpredictable capabilities that they were not explicitly trained for. These have included performing tasks that would otherwise seem to require some form of reasoning. For instance, an emergent ability could include an LLM’s apparent grasp of social situations, inferred from it performing above the random baseline on Social IQA – a measure of commonsense reasoning about social situations.

The inherent unpredictability of emergent abilities, especially as LLMs are trained on ever larger datasets, raises substantial questions about safety and security. Some have argued that future emergent abilities could include hazardous ones, such as reasoning and planning, that could threaten humanity.

However, the new study shows that while LLMs have a superficial ability to follow instructions and excel at linguistic proficiency, they have no capacity to master new skills without explicit instruction. This means they are inherently predictable, safe, and controllable, though they can still be misused by people.

As these models continue to be scaled up, they are likely to generate more sophisticated language and become more accurate when given detailed and explicit prompts, but they are highly unlikely to acquire complex reasoning skills.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, explained in a statement.

Tayyar Madabushi and colleagues, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks the models had never come across before – in other words, their capacity for emergent abilities.

When it came to their ability to perform above the random baseline on Social IQA, past researchers assumed the models “knew” what they were doing. However, the new study argues that this is not the case. Instead, the team show that the models were drawing on a well-known ability to complete tasks based on a few examples presented to them in the prompt – what is known as “in-context learning” (ICL).
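To make the distinction concrete, in-context learning amounts to showing the model a few worked examples inside the prompt and letting it continue the pattern, with no retraining involved. The sketch below (in Python) is a minimal, hypothetical illustration of how such a few-shot prompt might be assembled; the questions and answers are invented for demonstration and are not drawn from the Social IQA benchmark itself.

# Minimal sketch of a few-shot (in-context learning) prompt.
# The model's weights are never updated; the worked examples inside
# the prompt are all it gets, and it simply continues the pattern.

few_shot_examples = [
    {
        "question": "Alex forgot Jordan's birthday. How does Jordan most likely feel?",
        "options": ["hurt", "thrilled", "indifferent"],
        "answer": "hurt",
    },
    {
        "question": "Sam shared their umbrella with a stranger in the rain. Why?",
        "options": ["to be kind", "to get wet", "to sell it"],
        "answer": "to be kind",
    },
]

new_query = {
    "question": "Riley stayed late to help a colleague finish a report. How will the colleague likely react?",
    "options": ["gratefully", "angrily", "not at all"],
}

def build_icl_prompt(examples, query):
    """Concatenate worked examples, then the unanswered query."""
    parts = []
    for ex in examples:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Options: {', '.join(ex['options'])}\n"
            f"Answer: {ex['answer']}\n"
        )
    parts.append(
        f"Question: {query['question']}\n"
        f"Options: {', '.join(query['options'])}\n"
        f"Answer:"
    )
    return "\n".join(parts)

print(build_icl_prompt(few_shot_examples, new_query))

The study's point is that above-baseline performance of this kind reflects pattern completion over the examples supplied in the prompt, not a newly acquired reasoning skill.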

By running over 1,000 experiments, the team demonstrated that the ability of LLMs to follow instructions (ICL), their memory, and their linguistic proficiency can explain both their capabilities and their limitations.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Tayyar Madabushi added.

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

Importantly, fears over the existential threat posed by these models are not unique to non-experts; they have also been expressed by top AI researchers around the world. However, the team believe the fear is unfounded, as their tests clearly show the absence of emergent complex reasoning abilities in LLMs.

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Tayyar Madabushi said.

“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
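As a purely illustrative sketch of that advice, the Python snippet below sends an explicit instruction plus one worked example to a chat-style model using the OpenAI Python client; the model name, the extraction task, and the expected output are assumptions made for demonstration, and any comparable API would serve.

# Illustrative sketch only: give the model an explicit instruction and a
# worked example rather than hoping it infers the task on its own.
# Assumes the `openai` Python package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.

from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "Extract the drug name and dose from the sentence. "
            "Reply in the exact format: drug=<name>; dose=<amount>."
        ),
    },
    # A worked example so the model can pattern-match the required format.
    {"role": "user", "content": "The patient was given 500 mg of amoxicillin twice daily."},
    {"role": "assistant", "content": "drug=amoxicillin; dose=500 mg"},
    # The actual query.
    {"role": "user", "content": "She continues on 75 mg of clopidogrel each morning."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected: drug=clopidogrel; dose=75 mg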

However, the team do stress that these results do not rule out all threats related to AI. As Professor Gurevych explained, “[We] show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”

The study is published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.
