AI Models Like ChatGPT Do Not Pose An “Existential Threat” To Humanity

August 20, 2024 by Deborah Bloomfield

A new study argues that ChatGPT and other large language models (LLMs) are incapable of independent learning or acquiring skills without human input. This pours cold water on the belief that such systems could pose existential risks to humanity.


LLMs are scaled-up versions of pre-trained language models (PLMs), which are trained on massive bodies of web-scale data. Access to such immense amounts of data makes them capable of understanding and generating natural language and other content that can be applied to a wide range of tasks.

However, they can also exhibit “emergent abilities” – capabilities that appear unpredictably and that the models were not explicitly trained for. These have included tasks that would otherwise seem to require some form of reasoning. For instance, an emergent ability could include an LLM’s apparent understanding of social situations, inferred from it performing above the random baseline on the Social IQA – a measure of commonsense reasoning about social situations.

The inherent unpredictability of emergent abilities, especially as LLMs are trained on ever larger datasets, raises substantial questions about safety and security. Some have argued that future emergent abilities could include potentially hazardous ones, such as reasoning and planning, which could threaten humanity.

However, a new study has shown that while LLMs have a superficial ability to follow instructions and excel at linguistic proficiency, they have no potential to master new skills without explicit instruction. This means they are inherently predictable, safe, and controllable, though they can still be misused by people.

As these models continue to be scaled up, they are likely to generate more sophisticated language and become more accurate when given detailed and explicit prompts, but they are highly unlikely to acquire complex reasoning abilities.


“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath, explained in a statement.

Tayyar Madabushi and colleagues, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, ran experiments to test the ability of LLMs to complete tasks the models had never come across before – in essence, their propensity to develop emergent abilities.

When it came to their ability to perform above the random baseline on the Social IQA, past researchers assumed the models “knew” what they were doing. However, the new study argues that this is not the case. Instead, the team show that the models were relying on a well-known ability to complete tasks based on a few examples presented to them – what is known as “in-context learning” (ICL).
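To make that distinction concrete, below is a minimal, hypothetical sketch of what in-context learning looks like in practice: the model’s weights are never updated; instead, a handful of worked examples are placed directly in the prompt and the model is asked to continue the pattern. The helper function and the questions are illustrative placeholders, not items from the study or from the Social IQA benchmark.

```python
# A minimal sketch of in-context learning (ICL): no retraining happens; the
# "learning" is just worked examples included in the prompt text itself.
# The questions and answers below are invented placeholders.

def build_few_shot_prompt(examples, query):
    """Assemble a prompt: a few worked examples followed by a new query."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in examples]
    parts.append(f"Question: {query}\nAnswer:")  # the model completes this line
    return "\n\n".join(parts)

examples = [
    ("Alex lent Sam an umbrella in the rain. How does Sam likely feel?", "Grateful"),
    ("Jo forgot Kim's birthday. How does Kim likely feel?", "Disappointed"),
]

prompt = build_few_shot_prompt(
    examples,
    "Lee shared their lunch with a hungry coworker. How does the coworker likely feel?",
)
print(prompt)  # this text would be sent to whichever LLM is being evaluated
```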


By running over 1,000 experiments, the team demonstrated that the ability of LLMs to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and their limitations.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Tayyar Madabushi added.

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”


Importantly, fears over the existential threats posed by these models are not unique to non-experts; they have also been expressed by top AI researchers around the world. However, the team believe these fears are unfounded, as their tests clearly show the absence of emergent complex reasoning abilities in LLMs.

“While it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” Tayyar Madabushi said.

“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake. Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
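In practical terms, that advice comes down to prompting style. The sketch below – using an entirely invented task and prompts – contrasts a vague request with one that states the task, the allowed outputs, and a worked example, the pattern the researchers recommend for all but the simplest tasks.

```python
# A hypothetical illustration of the researchers' advice to end users: spell
# out the task and include an example, rather than relying on the model to
# infer complex requirements on its own. Both prompts are invented.

vague_prompt = "Here are some customer emails. Deal with them appropriately."

explicit_prompt = """You are sorting customer emails into exactly one of three labels:
BILLING, TECHNICAL, or OTHER. Reply with the label only.

Example:
Email: "I was charged twice for my subscription this month."
Label: BILLING

Email: "The app crashes every time I open the settings page."
Label:"""

# The explicit version specifies the task, the permitted outputs, and a worked
# example; the vague version leaves the model to guess what is wanted.
print(explicit_prompt)
```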

However, the team do stress that these results do not rule out all threats related to AI. As Professor Gurevych explained, “[We] show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”


The study is published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

Source Link: AI Models Like ChatGPT Do Not Pose An "Existential Threat" To Humanity
