Today’s Top AI Went Up Against Expert Mathematicians. It Lost Badly.

November 23, 2024 by Deborah Bloomfield

While AI may be more commonly used for stealing art and hallucinating bullshit – that’s a technical term, by the way – the last couple of years have also seen what appear to be some genuinely extraordinary feats from the nascent technology. And that’s particularly true in the field of math: where computers were once confined to the category of blunt-force instruments, today they can apparently not only solve complex problems but also come up with novel proof strategies all their own.

But just how smart are they, really? In a new paper, expert mathematicians set a fresh challenge for today’s top-level AI programs. The result? Abject failure.

“Recent AI systems have demonstrated remarkable proficiency in tackling challenging mathematical tasks, from achieving olympiad-level performance in geometry to improving upon existing research results in combinatorics,” begins the paper, currently published on the arXiv preprint server. “However, existing benchmarks face some limitations.”

For example, the authors write, while it’s certainly impressive that AI systems can tackle challenges like the GSM8K problem set or the International Mathematical Olympiad, neither of those is exactly cutting-edge math – they’re more like “advanced high school” level than “limit of human invention”.

On top of that – and also reminiscent of high school math – we’re running out of things to ask our various AI programs. “A significant challenge in evaluating large language models (LLMs) is data contamination,” the authors explain – in other words, “the inadvertent inclusion of benchmark problems in training data.”

Like a student acing a test they already saw the answer key to, “this issue leads to artificially inflated performance metrics that mask models’ true reasoning capabilities,” they write.
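
For readers wondering what that looks like under the hood, below is a minimal, purely illustrative sketch of one common contamination heuristic – word-level n-gram overlap between a benchmark question and a document from a training corpus. It is not the FrontierMath team’s own method (their safeguard, per the paper, is handling the problems securely so they never end up in training data in the first place), and the example texts are invented.

```python
# Illustrative only: flag a benchmark problem whose wording overlaps heavily
# with a training document. This is a generic heuristic, not the FrontierMath
# authors' actual procedure; the example strings below are made up.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word-level n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(problem: str, training_doc: str, n: int = 8) -> float:
    """Fraction of the problem's n-grams that also appear in the training document."""
    problem_grams = ngrams(problem, n)
    if not problem_grams:
        return 0.0
    return len(problem_grams & ngrams(training_doc, n)) / len(problem_grams)

# Hypothetical benchmark question that also happens to sit on a scraped webpage.
benchmark_problem = ("Find the smallest positive integer n such that "
                     "n squared plus n plus 41 is not prime.")
scraped_page = ("Homework help: find the smallest positive integer n such that "
                "n squared plus n plus 41 is not prime. Answer in the comments.")

if overlap_score(benchmark_problem, scraped_page) > 0.5:
    print("Possible contamination: the problem may already be in the training data.")
```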

The solution: FrontierMath – described by the team as “a benchmark of original, exceptionally challenging mathematical problems created in collaboration with over 60 mathematicians from leading institutions.” It’s no empty boast: multiple Fields Medal winners are involved in the project, including one who contributed problems to the dataset; the other problems came from mathematicians at graduate level and above, from universities around the world.

Problems submitted had to meet four criteria. They had to be original – to “[ensure] that solving them requires genuine mathematical insight rather than pattern matching against known problems,” the paper explains; they had to be guessproof, meaning near-impossible to answer correctly by luck; they had to be “computationally tractable” – that is, relatively straightforward if you know what you’re doing; and they had to be quickly and automatically verifiable. Once all those boxes were checked, the questions were then peer-reviewed, rated for difficulty, and handled securely to prevent dataset contamination.
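
To make that last criterion concrete, here is a minimal sketch of what “quickly and automatically verifiable” can mean in practice: the model returns a single definitive value, and a short script checks it exactly, with no human marker and no partial credit. The problem, answer format, and helper function are invented for illustration and are not the benchmark’s actual verification harness.

```python
# A toy illustration of automated answer-checking, assuming each problem has a
# single exact reference answer. FrontierMath's real harness may differ; the
# example problem and values here are hypothetical.

from fractions import Fraction

def verify(submitted: str, reference: Fraction) -> bool:
    """Parse a model's final answer and compare it exactly – no partial credit."""
    try:
        value = Fraction(submitted.strip())  # exact rational arithmetic, no float rounding
    except (ValueError, ZeroDivisionError):
        return False                         # unparseable answers simply count as wrong
    return value == reference

# Hypothetical problem: "What is the probability of rolling a total of 7 with two fair dice?"
reference_answer = Fraction(1, 6)

print(verify("1/6", reference_answer))     # True  – exact match, solved
print(verify("0.1667", reference_answer))  # False – close, but not exact
```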

It was, in other words, no small feat. But could today’s AI programs beat it?

Well… no. “Current state-of-the-art AI models solve[d] under 2 percent of problems,” the authors write, “revealing a vast gap between AI capabilities and the prowess of the mathematical community.”

Now, AI shouldn’t take this too hard – the problems were very difficult. “[They] are extremely challenging,” Fields Medal winner Terence Tao said, noting that the kind of training data they would require is, in practice, “almost nonexistent.”

But it does mean that, for now at least, the FrontierMath dataset is kind of hoisted by its own petard. “Current AI models cannot solve even a small fraction of the problems in our benchmark,” the authors write. “While this demonstrates the high difficulty level of our problems, it temporarily limits FrontierMath’s usefulness in evaluating relative performance of models.” 

“However, we expect this limitation to resolve as AI systems improve,” they add.

The paper – which includes sample problems and solutions from the dataset – is published on arXiv.
