With the growing adoption of artificial intelligence (AI), several industries are incorporating AI tools to improve efficiency. However, a group of teenagers at the International Mathematical Olympiad (IMO) outperformed leading AI models, including Google’s Gemini and OpenAI’s ChatGPT.

Held in Queensland, Australia, the 2025 edition of the global competition brought together 641 young mathematicians under the age of 20 from 112 countries. Five of them achieved perfect scores of 42 points, something neither AI model could replicate, a report in Popular Science stated.

Google announced that its advanced Gemini chatbot solved five of the six problems presented at the competition, earning 35 out of 42 points, a gold-medal score.

“We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score,” IMO president Gregor Dolinar said in a statement shared by the tech giant, the report said.

“Their solutions were astonishing in many respects. IMO graders found them to be clear, precise, and most of them easy to follow,” he added.

OpenAI, the creator of ChatGPT, also confirmed that its latest experimental reasoning model achieved a score of 35 points, the report added.

According to OpenAI researcher Alexander Wei, the company evaluated its models under the same rules as the teen competitors. “We evaluated our models on the 2025 IMO problems under the same rules as human contestants. For each problem, three former IMO medalists independently graded the model’s submitted proof,” Wei wrote on social media, as per the report.

This year marks a significant leap for AI in math competitions. In 2024, Google’s model earned a silver medal in Bath, UK, solving four of the six problems, but that attempt took two to three days. In contrast, the latest Gemini model completed this year’s test within the official 4.5-hour time limit.

The IMO acknowledged that technology firms had “privately tested closed-source AI models on this year’s problems,” which were the same as those faced by the human contestants.