Medical Market Report

Can We Legally Require AI Chatbots To Tell The Truth?

August 7, 2024 by Deborah Bloomfield

In recent months, many people will have experimented with a chatbot like ChatGPT. Useful though they can be, there's also no shortage of examples of them producing, ahem, erroneous information. Now, a group of scientists from the University of Oxford is asking: is there a legal pathway by which we could require these chatbots to tell us the truth?


The rise of large language models

Amid all the buzz around artificial intelligence (AI), which seems to have reached new heights in the past couple of years, one branch of the field has garnered more attention than any other – at least among those of us who aren't machine learning experts. It's the large language models (LLMs), which leverage generative AI to produce often eerily human-sounding responses to almost any query you can dream up.

The likes of ChatGPT and Google’s Gemini are based on models trained on huge amounts of data – which itself raises numerous questions around privacy and intellectual property – to allow them to understand natural language queries and generate coherent and relevant responses. Unlike with a search engine, you don’t need to learn any syntax to help you narrow down your results. Theoretically, you simply ask a question as if you were speaking out loud.



Their capabilities are no doubt impressive, and they certainly sound confident in their answers. There’s just one tiny problem – these chatbots tend to sound equally confident when they’re dead wrong. Which might be okay, if we humans could just remember not to trust everything they’re telling us.

“While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying,” write the authors of the new paper, referring to a situation that anyone who’s ever had an argument with Alexa or Siri will know well.


“LLMs are not designed to tell the truth in any overriding sense.”

It's easy to tap out a question for ChatGPT and assume that it is "thinking" of the answer in the same way a human would. That's how it appears, but that's not actually how these models work.

Don’t believe everything you read

As the authors explain, LLMs "are text-generation engines designed to predict which string of words comes next in a piece of text." The truthfulness of their responses is only one metric by which the models are judged during development. In an effort to produce the most "helpful" answer, the authors argue, they can all too frequently stray towards oversimplification, bias, and just making stuff up.
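The mechanism the authors describe can be sketched with a toy bigram model. This is a deliberately crude stand-in for a real LLM, which uses a neural network trained on vastly more data, but the underlying principle is the same: the model emits whatever continuation its training statistics favor, with no built-in notion of whether that continuation is true.

```python
# Toy illustration (not a real LLM): next-word prediction from counts.
# A real model learns probabilities with a neural network, but like this
# sketch, it has no internal concept of truth -- only of what usually
# comes next in its training data.
from collections import Counter

corpus = "the cat sat on the mat . the cat sat by the window . the cat slept .".split()

# Count which word follows each word (a bigram "model").
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in the training data."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- the statistically favored continuation
print(predict_next("cat"))  # "sat" -- fluent, regardless of truth
```

Scaled up by many orders of magnitude, this is why an LLM's output reads as confident and fluent even when it is wrong: fluency is exactly what the training objective rewards.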

The authors of this study are by no means the first to raise the alarm about this, with one paper going so far as to call the models “bullshitters”. Professor Robin Emsley, the editor of the journal Schizophrenia, published an account of an experience with ChatGPT in 2023, stating, “What I experienced were fabrications and falsifications.” The chatbot had produced citations for scholarly papers that did not exist, as well as several that were irrelevant to the query. Others have reported the same thing.  


They do okay with questions that have a clear, factual answer where – and this bit’s important – that answer has appeared a lot within their training data. These models are only as good as the data they’re trained on. And, unless you’re prepared to carefully fact-check any answer you get from an LLM, it can be very difficult to tell how accurate the information is – especially as many don’t provide links to their source material or give any measure of confidence.

“Unlike human speakers, LLMs do not have any internal conceptions of expertise or confidence, instead always ‘doing their best’ to be helpful and persuasively respond to the prompt posed,” writes the team at Oxford.

They were particularly concerned about the impact of what they call "careless speech", and of the harm that could be done by such responses from LLMs leaching into offline human conversation. This led them to ask the question of whether there could be a legal obligation imposed on LLM providers to ensure that their models tell the truth.

What did the new study conclude?

Focusing on current European Union (EU) legislation, the authors found that there are few explicit scenarios where a duty is placed on an organization or individual to tell the truth. Those that do exist are limited to specific sectors or institutions, and very rarely apply to the private sector. Since LLMs operate on relatively new technology, the majority of existing regulations were not drawn up with these models in mind.


So, the authors propose a new framework, “the creation of a legal duty to minimize careless speech for providers of both narrow- and general-purpose LLMs.”

You might naturally ask, “Who is the arbiter of truth?”, and the authors do address this by saying that the aim is not to force LLMs down one particular path, but rather to require “plurality and representativeness of sources”. Fundamentally, they propose that makers redress the balance between truthfulness and “helpfulness”, which the authors argue is too much in favor of the latter. It’s not simple to do, but it might be possible.

There are no easy answers to these questions (disclaimer: we have not tried asking ChatGPT), but as this technology continues to advance they are things that developers will have to grapple with. In the meantime, when you’re working with an LLM, it may be worth remembering this sobering statement from the authors: “They are designed to participate in natural language conversations with people and offer answers that are convincing and feel helpful, regardless of the truth of the matter at hand.”

The study is published in the journal Royal Society Open Science.

