Google's Advanced AI Model Is Now Available to Try—for $250 a Month

Google just made one of its most advanced AI reasoning models available to the public. Like other advanced models, Gemini 2.5 Deep Think uses more than one AI agent to brainstorm answers, boosting accuracy and producing more creative solutions.

Google says the model outperformed several of its competitors in key AI benchmark tests, and a variation of the model even achieved the gold-medal standard at this year’s International Mathematical Olympiad (IMO), solving five of the six IMO problems perfectly. That research model took hours to produce solutions, but the version available today is designed for everyday use and works much faster, while still achieving bronze-level IMO performance.

However, to try out the new model, users will need to hand over $250 a month for a Google AI Ultra subscription. Starting today, subscribers get access to a fixed set of prompts for the new model. They can enable Deep Think by selecting Gemini 2.5 Pro from the model dropdown menu in the Gemini app, then toggling “Deep Think” in the prompt bar.

Google first previewed Gemini 2.5 Deep Think back in May at its I/O developer conference, but the company says the version released today is a “significant improvement,” thanks to tester feedback and key benchmark improvements.

Google says Deep Think uses parallel thinking techniques to tackle complex problems the way a human would, weighing different angles and potential solutions.

“This approach lets Gemini generate many ideas at once and consider them simultaneously, even revising or combining different ideas over time, before arriving at the best answer,” the company said in a blog post.
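Google hasn’t published Deep Think’s internals, but the pattern it describes resembles the widely used best-of-N parallel sampling approach, where a model generates several candidate answers at once and a scoring step picks the strongest. The Python sketch below is a toy illustration under that assumption; generate_idea and score are hypothetical stand-ins, not real Gemini APIs.

```python
import concurrent.futures

def generate_idea(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for one model call producing a candidate answer."""
    # In a real system, this would sample one reasoning path from the model.
    return f"candidate solution #{seed} for: {prompt}"

def score(candidate: str) -> float:
    """Hypothetical stand-in for a verifier or reward model ranking candidates."""
    # Placeholder signal; a real scorer would judge correctness or quality.
    return float(len(candidate))

def best_of_n(prompt: str, n: int = 8) -> str:
    # Generate many ideas in parallel...
    with concurrent.futures.ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: generate_idea(prompt, s), range(n)))
    # ...then keep the highest-scoring one.
    return max(candidates, key=score)

print(best_of_n("Prove the statement in IMO problem 3"))
```

Deep Think, as Google describes it, goes further than this simple select-the-best step by revising and combining candidate ideas over time, but the core idea of exploring many reasoning paths in parallel is the same.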

Additionally, Google says it developed new reinforcement learning techniques that push the model to explore extended reasoning paths, helping Deep Think become a stronger and more intuitive problem-solver over time.

This, Google claims, makes the model particularly useful for things like coding, web development, and scientific research.

According to Google, Gemini 2.5 Deep Think outperformed rival models on the Humanity’s Last Exam (HLE), a 2,500-question expertise benchmark spanning subjects like math, science, and the humanities. The model achieved a score of 34.8% on the test, compared with OpenAI o3’s 20.3% and Grok 4’s 25.4% scores.

Google also said it will share the gold-medal version of Gemini 2.5 Deep Think with a small group of mathematicians and academics, hoping to see how it might aid their research. The company plans to use that feedback to refine future versions of the model.
