At the recent Google I/O 2025 event, the tech giant revealed significant enhancements to its Gemini AI models, specifically introducing a new reasoning capability called Deep Think for the Gemini 2.5 Pro model. This feature enables the AI to evaluate multiple potential answers before finalising a response, thereby improving its performance on various challenging benchmarks.
Demis Hassabis, head of Google DeepMind, said Deep Think pushes the model's performance further by incorporating advanced research techniques, including parallel reasoning, though Google has not detailed exactly how the feature works. Speculation suggests it may operate similarly to OpenAI's o1-pro and forthcoming o3-pro models, which search for and combine the best candidate solutions to complex problems.
Google announced that Deep Think enabled Gemini 2.5 Pro to excel on LiveCodeBench, a demanding coding evaluation, and to outperform OpenAI's o3 model on MMMU, a benchmark that assesses skills such as perception and reasoning. The feature is currently accessible through the Gemini API to a select group of "trusted testers", with broader availability pending completion of safety testing.
In addition to Deep Think, Google is launching an updated version of its budget-friendly Gemini 2.5 Flash model. This iteration is reported to offer enhancements in coding, multimodality, reasoning, and long-context tasks, coupled with increased efficiency compared to its predecessor. A preview of the new model is available through Google’s AI Studio and Vertex AI platforms, along with Gemini applications. General availability for developers is expected in June.
Finally, Google introduced Gemini Diffusion, a new model that the company claims delivers results four to five times faster than comparable models while matching the performance of much larger competitors. Like Deep Think, Gemini Diffusion is currently available only to trusted testers.