Google and OpenAI Chatbots Claim Gold at International Math Olympiad

Artificial intelligence models developed by Google’s DeepMind team and OpenAI have a new accolade to add to their list of achievements: they have defeated some high schoolers in math. Both companies claim to have achieved a gold-medal score at this year’s International Mathematical Olympiad (IMO), one of the toughest competitions for high school students looking to prove their mathematical prowess.

The Olympiad invites top students from across the world to sit an exam made up of complex, multi-step math problems. The students take two four-and-a-half-hour exams across two days and must solve a total of six problems, each worth up to seven points, with partial credit awarded for completing parts of a problem. Models from DeepMind and OpenAI both solved five of the six problems perfectly, scoring 35 out of a possible 42 points, which was enough for gold. Of the 630 human participants, 67 also took home the honor of gold.

There’s one little tidbit that doesn’t really have anything to do with the results, just with the behavior of the companies. DeepMind was invited to participate in the IMO and announced its gold on Monday in a blog post, after the organization released the official results for student participants. According to Implicator.ai, OpenAI didn’t actually enter the IMO. Instead, it took the problems, which are made public so others can take a crack at solving them, and tackled them on its own. OpenAI announced it had achieved a gold-level performance, which can’t actually be verified by the IMO because the company didn’t participate. It also announced its score over the weekend instead of waiting for Monday, when the official scores are posted, against the wishes of the IMO, which had asked companies not to steal the spotlight from students.

The models solved the problems under the same conditions as the students: they were given four and a half hours for each exam and were not allowed to use any external tools or access the internet. Notably, it seems both companies used general-purpose AI rather than the specialized math models that previously fared much better at this kind of task than the do-it-all models.

A noteworthy fact about these companies’ claims to the top spot: Neither model that achieved gold (or, you know, a self-administered gold) is publicly available. In fact, public models did a pretty terrible job at the task. Researchers ran the questions through Gemini 2.5 Pro, Grok-4, and OpenAI o4, and none of them were able to score higher than 13 points, which is short of the 19 needed to take home a bronze medal.

There is still plenty of skepticism about the results, and the fact that publicly available models did so poorly suggests there’s a gap between the tools we have access to and what a more finely tuned model can do, which rightly raises questions about why those smarter models can’t be scaled or made widely available. But there are still two important takeaways here: lab models are getting better at reasoning problems, and OpenAI is run by a bunch of lames who couldn’t wait to steal glory from some teenagers.
