Medical Market Report

Today’s Top AI Went Up Against Expert Mathematicians. It Lost Badly.

November 23, 2024 by Deborah Bloomfield

While AI may be more commonly used for stealing art and hallucinating bullshit – that’s a technical term, by the way – the last couple of years have also seen what seem to be some genuinely extraordinary feats from the nascent technology. And that’s particularly true in the field of math: where computers were once confined to the category of blunt force instruments, today they can apparently not just solve complex problems, but can come up with novel proof strategies all of their own. 

But just how smart are they, really? In a new paper, expert mathematicians set forth a new challenge for today’s top level AI programs. The result? Abject failure.

“Recent AI systems have demonstrated remarkable proficiency in tackling challenging mathematical tasks, from achieving olympiad-level performance in geometry to improving upon existing research results in combinatorics,” begins the paper, currently published on the arXiv preprint server. “However, existing benchmarks face some limitations.”

For example, the authors write, while it’s certainly impressive that AI systems can tackle challenges like the GSM8K problem set or the International Mathematical Olympiad, neither of those are exactly cutting-edge math – they’re more like “advanced high school” level than “limit of human invention”.

On top of that – and also reminiscent of high school math – we’re running out of things to ask our various AI programs. “A significant challenge in evaluating large language models (LLMs) is data contamination,” the authors explain – in other words, “the inadvertent inclusion of benchmark problems in training data.”

Like a student acing a test they already saw the answer key to, “this issue leads to artificially inflated performance metrics that mask models’ true reasoning capabilities,” they write.

The solution: FrontierMath – described by the team as “a benchmark of original, exceptionally challenging mathematical problems created in collaboration with over 60 mathematicians from leading institutions.” It’s no empty boast: multiple Fields Medal winners are involved in the project, including one who contributed problems to the dataset; other problems came from mathematicians at graduate level and above, from universities across the world.

Problems submitted had to meet four criteria: they had to be original – to “[ensure] that solving them requires genuine mathematical insight rather than pattern matching against known problems,” the paper explains; they had to be guessproof; they had to be “computationally tractable” – that is, they had to be relatively straightforward if you know what you’re doing; and they had to be quickly and automatically verifiable. Once all these boxes were checked, the questions were even peer-reviewed, rated for difficulty, and handled securely to prevent dataset contamination.

It was, in other words, no small feat. But could today’s AI programs beat it?

Well… no. “Current state-of-the-art AI models solve[d] under 2 percent of problems,” the authors write, “revealing a vast gap between AI capabilities and the prowess of the mathematical community.”

Now, AI shouldn’t take this too hard – the problems were very difficult. “[They] are extremely challenging,” Fields Medal winner Terence Tao said, requiring extensive training data that is, in practice, “almost nonexistent.” 

But it does mean that, for now at least, the FrontierMath dataset is somewhat hoist by its own petard. “Current AI models cannot solve even a small fraction of the problems in our benchmark,” the authors write. “While this demonstrates the high difficulty level of our problems, it temporarily limits FrontierMath’s usefulness in evaluating relative performance of models.”

“However, we expect this limitation to resolve as AI systems improve,” they add.

The paper – which includes sample problems and solutions from the dataset – is published on the arXiv.

Deborah Bloomfield

