Can We Legally Require AI Chatbots To Tell The Truth?

August 7, 2024 by Deborah Bloomfield

In recent months, many people will have experimented with a chatbot like ChatGPT. Useful though they can be, there’s no shortage of examples of them producing, ahem, erroneous information. Now, a group of scientists from the University of Oxford is asking: is there a legal pathway by which we could require these chatbots to tell us the truth?


The rise of large language models

Amid all the buzz around artificial intelligence (AI), which seems to have reached new heights in the past couple of years, one branch of the field has garnered more attention than any other – at least among those of us who aren’t machine learning experts. That branch is large language models (LLMs), which leverage generative AI to produce often eerily human-sounding responses to almost any query you can dream up.

The likes of ChatGPT and Google’s Gemini are based on models trained on huge amounts of data – which itself raises numerous questions around privacy and intellectual property – to allow them to understand natural language queries and generate coherent and relevant responses. Unlike with a search engine, you don’t need to learn any syntax to help you narrow down your results. Theoretically, you simply ask a question as if you were speaking out loud.

Their capabilities are no doubt impressive, and they certainly sound confident in their answers. There’s just one tiny problem – these chatbots tend to sound equally confident when they’re dead wrong. That might be okay, if we humans could just remember not to trust everything they’re telling us.

“While problems arising from our tendency to anthropomorphize machines are well established, our vulnerability to treating LLMs as human-like truth tellers is uniquely worrying,” write the authors of the new paper, referring to a situation that anyone who’s ever had an argument with Alexa or Siri will know well.


“LLMs are not designed to tell the truth in any overriding sense.”

It’s easy to tap out a question for ChatGPT and assume that it is “thinking” of the answer in the same way a human would. That’s how it appears, but that’s not actually how these models work.

Don’t believe everything you read

As the authors explain, LLMs “are text-generation engines designed to predict which string of words comes next in a piece of text.” The truthfulness of their responses is only one metric by which the models are judged during development. In an effort to produce the most “helpful” answer, the authors argue, they can all too frequently stray towards oversimplification, bias, and just making stuff up.
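
To make that distinction concrete, here is a minimal sketch of what “predicting the next string of words” looks like in practice. It is not code from the study; it assumes the openly available GPT-2 model and the Hugging Face transformers library purely for illustration. The model assigns probabilities to candidate next tokens and nothing more – there is no separate check on whether the most probable continuation is true.

```python
# Minimal sketch (illustrative only, not from the study): next-token prediction
# with GPT-2 via the Hugging Face "transformers" library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The first person to walk on the Moon was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities over the vocabulary for the next token only.
# Nothing here scores factual accuracy; a fluent falsehood can rank highly.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i))!r:>12}  p={p.item():.3f}")
```

Chat-style systems layer further training on top of this, but the underlying objective remains picking plausible continuations rather than verified facts.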

The authors of this study are by no means the first to raise the alarm about this, with one paper going so far as to call the models “bullshitters”. Professor Robin Emsley, the editor of the journal Schizophrenia, published an account of an experience with ChatGPT in 2023, stating, “What I experienced were fabrications and falsifications.” The chatbot had produced citations for scholarly papers that did not exist, as well as several that were irrelevant to the query. Others have reported the same thing.  


They do okay with questions that have a clear, factual answer where – and this bit’s important – that answer has appeared a lot within their training data. These models are only as good as the data they’re trained on. And, unless you’re prepared to carefully fact-check any answer you get from an LLM, it can be very difficult to tell how accurate the information is – especially as many don’t provide links to their source material or give any measure of confidence.

“Unlike human speakers, LLMs do not have any internal conceptions of expertise or confidence, instead always ‘doing their best’ to be helpful and persuasively respond to the prompt posed,” writes the team at Oxford.

They were particularly concerned about the impact of what they call “careless speech”, and about the harm that could be done if such responses from LLMs leach into offline human conversation. This led them to ask whether a legal obligation could be imposed on LLM providers to ensure that their models tell the truth.

What did the new study conclude?

Focusing on current European Union (EU) legislation, the authors found that there are few explicit scenarios where a duty is placed on an organization or individual to tell the truth. Those that do exist are limited to specific sectors or institutions, and very rarely apply to the private sector. Since LLMs operate on relatively new technology, the majority of existing regulations were not drawn up with these models in mind.


So, the authors propose a new framework, “the creation of a legal duty to minimize careless speech for providers of both narrow- and general-purpose LLMs.”

You might naturally ask, “Who is the arbiter of truth?”, and the authors address this by saying that the aim is not to force LLMs down one particular path, but rather to require “plurality and representativeness of sources”. Fundamentally, they propose that developers redress the balance between truthfulness and “helpfulness”, which the authors argue currently tips too far towards the latter. It’s not simple to do, but it might be possible.

There are no easy answers to these questions (disclaimer: we have not tried asking ChatGPT), but as this technology continues to advance, these are questions that developers will have to grapple with. In the meantime, when you’re working with an LLM, it may be worth remembering this sobering statement from the authors: “They are designed to participate in natural language conversations with people and offer answers that are convincing and feel helpful, regardless of the truth of the matter at hand.”

The study is published in the journal Royal Society Open Science.



