Medical Market Report

Google’s Newest AI Beats All But The Best Math Olympians

January 20, 2024 by Deborah Bloomfield

It must be tough being a kid these days. Born too late to actually enjoy the internet, too early to declare yourself god-emperor of a desert wasteland run on water scarcity and guzzoline – and should you try to numb the pain with a little light math, you’ll most likely have to put up with coming second to a robot.

“The International Mathematical Olympiad is a modern-day arena for the world’s brightest high-school mathematicians,” write Trieu Trinh and Thang Luong, research scientists at Google DeepMind, in a new blog post about their breakthrough artificial intelligence (AI) system, AlphaGeometry.

AlphaGeometry is “an AI system that solves complex geometry problems at a level approaching a human Olympiad gold-medalist – a breakthrough in AI performance,” they announce. “In a benchmarking test of 30 Olympiad geometry problems, AlphaGeometry solved 25 within the standard Olympiad time limit. For comparison […] the average human gold medalist solved 25.9 problems.”

It’s not just the system’s score in the contest that’s impressive. It’s been almost 50 years since the first-ever mathematical proof by computer – essentially a brute-force working-through of the four-color theorem – and since then, the admittedly controversial realm of computer-assisted proofs has come on in leaps and bounds.

But very recently, with the dawn of things like big data and advanced machine learning techniques, we’ve started to see a shift – however slight – away from using computers as simple number-crunchers, and towards artificial intelligence that can produce genuinely creative proofs.

The fact that AlphaGeometry can tackle the kinds of complex mathematical problems faced by Olympiad mathletes may signal a key milestone in AI research, Trinh and Luong believe. 

Until now, such a program would have faced at least two major hurdles. Firstly, computers are, well, computers; as anybody who’s ever written out 50 pages of code only to have the whole thing foiled by one mistyped semicolon in line 337 can tell you, they’re not great at things like reasoning or deduction. Secondly, math is kind of difficult to teach to even the most cutting-edge machine learning system.

“Learning systems like neural networks are quite bad at doing ‘algebraic reasoning’,” David Saxton, also of DeepMind, told New Scientist back in 2019.

“Humans are good at [math],” he added, “but they are using general reasoning skills that current artificial learning systems don’t possess.”

AlphaGeometry, however, takes on these challenges by combining a neural language model – good at making quick predictions, but rubbish at making actual sense – with a symbolic deduction engine. Such engines are “based on formal logic and use clear rules to arrive at conclusions,” Trinh and Luong write, making them better at rational deduction, but also slow and inflexible – “especially when dealing with large, complex problems on their own.”

Together, the two systems worked in a sort of loop: the symbolic deduction engine would chug away at the problem until it got stuck, at which point the language model would suggest a tweak to the argument. It was a great theory – there was just one problem. What would they train the language model on? 
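In outline, that loop looks something like the following minimal Python sketch. To be clear, this is a toy illustration, not DeepMind’s code: the function names, the “rules,” and the stand-ins for the two components are all invented for the example – the real system pairs a full geometric deduction engine with a trained neural language model.

```python
def symbolic_deduce(known, rules):
    """Stand-in for the symbolic deduction engine: apply rules to the
    known facts until no new fact can be derived (a fixed point)."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def language_model_suggest(known, candidate_constructs):
    """Stand-in for the neural language model: when the engine is stuck,
    propose one auxiliary construct that isn't in play yet."""
    for construct in candidate_constructs:
        if construct not in known:
            return construct
    return None

def solve(premises, rules, goal, candidate_constructs, max_steps=10):
    """Alternate the two components: deduce until stuck, then let the
    'language model' add a construct, then deduce again."""
    known = set(premises)
    for _ in range(max_steps):
        known = symbolic_deduce(known, rules)
        if goal in known:
            return True, known
        suggestion = language_model_suggest(known, candidate_constructs)
        if suggestion is None:
            return False, known
        known.add(suggestion)
    return False, known

# Toy problem: the goal only becomes derivable once the auxiliary
# construct "midpoint_M" has been suggested and added.
rules = [
    (frozenset({"A", "B"}), "AB"),
    (frozenset({"AB", "midpoint_M"}), "goal"),
]
solved, facts = solve(
    premises={"A", "B"}, rules=rules, goal="goal",
    candidate_constructs=["midpoint_M"],
)
print(solved)  # True
```

The pure deduction pass stalls after deriving “AB”; only the suggested construct lets the second rule fire – which is exactly the division of labor the researchers describe.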

Ideally, the program would be fed millions if not billions of human-made geometric proofs, which it could then chew up and spit back out in varying levels of gobbledegook. But “human-made” and “geometric” don’t exactly work well with “computer program” – “[AlphaGeometry] does not ‘see’ anything about the problems that it solves,” Stanislas Dehaene, a cognitive neuroscientist at the Collège de France who studies foundational geometric knowledge, told the New York Times. “There is absolutely no spatial perception of the circles, lines and triangles that the system learns to manipulate.”

So the team had to come up with a different solution. “Using highly parallelized computing, the system started by generating one billion random diagrams of geometric objects and exhaustively derived all the relationships between the points and lines in each diagram,” Trinh and Luong explain. 

“AlphaGeometry found all the proofs contained in each diagram, then worked backwards to find out what additional constructs, if any, were needed to arrive at those proofs,” they continue. They call this process “symbolic deduction and traceback”.
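The generate-then-work-backwards idea can be sketched in a few lines of Python. Again, this is a hypothetical toy, not the actual pipeline: it forward-chains over invented “rules” to derive every reachable fact while recording each fact’s dependencies, then traces back from one conclusion to recover the minimal set of facts its proof actually needed – discarding anything irrelevant.

```python
def derive_all(premises, rules):
    """Forward-chain over the rules, recording for every derived fact
    which facts it came from (the 'symbolic deduction' half)."""
    parents = {p: [] for p in premises}
    changed = True
    while changed:
        changed = False
        for pre, conclusion in rules:
            if conclusion not in parents and all(p in parents for p in pre):
                parents[conclusion] = list(pre)
                changed = True
    return parents

def traceback(fact, parents):
    """Walk backwards from a conclusion to everything it depends on
    (the 'traceback' half): the result is one minimal proof."""
    needed, stack = set(), [fact]
    while stack:
        f = stack.pop()
        if f in needed:
            continue
        needed.add(f)
        stack.extend(parents[f])
    return needed

rules = [
    (("P", "Q"), "R"),
    (("R",), "S"),
    (("X",), "Y"),  # an irrelevant branch, dropped by the traceback
]
parents = derive_all({"P", "Q", "X"}, rules)
proof = traceback("S", parents)
print(sorted(proof))  # ['P', 'Q', 'R', 'S']
```

Run at scale over a billion random diagrams, this kind of derive-everything-then-prune procedure is what let the team manufacture training proofs without any human-written examples.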

And it was evidently successful: not only was the AI nearly as good as the average human IMO gold medalist, but it was 2.5 times as successful as the previous state-of-the-art system to attempt the challenge. “Its geometry capability alone makes it the first AI model in the world capable of passing the bronze medal threshold of the IMO in 2000 and 2015,” the pair note.

While the system is currently confined to geometry problems, Trinh and Luong hope to expand the capabilities of math AI across far more disciplines. 

“We’re not making incremental improvement,” Trinh told the Times. “We’re making a big jump, a big breakthrough in terms of the result.”

“Just don’t overhype it,” he added.

Deborah Bloomfield

Source Link: Google's Newest AI Beats All But The Best Math Olympians
