Google just leapfrogged every competitor with mind-blowing AI that can think deeper, shop smarter, and create videos with dialogue

Google announced a sweeping set of artificial intelligence developments on Tuesday at its annual developer conference, unveiling more powerful AI models, expanded search capabilities, and new creative tools that push the limits of what its technology can accomplish.
The Mountain View-based company revealed updates to its Gemini 2.5 models, rolled out AI Mode in Search to all users in the United States, introduced new generative models, and launched a $249.99-per-month subscription tier called Google AI Ultra for power users, moves that reflect Google's accelerating AI momentum across its ecosystem.
"More intelligence, for everyone, everywhere. The world is responding and adopting AI faster than ever," Sundar Pichai, CEO of Google and Alphabet, said at a press briefing ahead of the conference. "What all this progress means is that we're in a new phase of the AI platform shift, where decades of research are now becoming reality for people, businesses and communities around the world."
Augmented reasoning: Gemini 2.5 models gain "Deep Think" capabilities
At the center of Google's announcements is the continued evolution of its Gemini large language models, with significant improvements to both the Pro and Flash variants. The updated Gemini 2.5 Flash will be generally available in early June, with Pro following shortly after.
Most notable is the introduction of "Deep Think," an enhanced reasoning mode for the Pro model that Google says delivers strong performance on complex tasks using parallel thinking techniques. The company says this approach lets the model consider multiple possibilities at once, similar to how AlphaGo revolutionized game-playing AI.
"Deep Think pushes model performance to its limits and achieves groundbreaking results," said Demis Hassabis, CEO of Google DeepMind. "It gets an impressive score on USAMO 2025, one of the hardest math benchmarks. It also leads on LiveCodeBench, a benchmark for competition-level coding."
The company is proceeding cautiously with Deep Think, planning to make it available first to trusted testers for feedback before a broader release. This measured approach reflects Google's focus on responsible AI deployment, especially for frontier capabilities that push the limits of what AI can accomplish.
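Google has not published how Deep Think's parallel reasoning works internally, but the idea it describes, exploring several reasoning paths at once and keeping the answer most paths agree on (often called self-consistency voting), can be sketched roughly as follows. The `reason_once` function and its simulated mistakes are purely illustrative stand-ins, not Google's implementation:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def reason_once(question: str, path_id: int) -> str:
    # Toy stand-in for one independent reasoning pass. A real system
    # would sample a fresh chain of thought from the model; here we
    # deterministically simulate an occasional precedence mistake
    # when evaluating "2 + 2 * 3".
    return "12" if path_id % 5 == 0 else "8"

def parallel_think(question: str, n_paths: int = 16) -> str:
    # Explore several reasoning paths concurrently, then keep the
    # answer that the most paths agree on (majority vote).
    with ThreadPoolExecutor(max_workers=8) as pool:
        answers = list(pool.map(lambda i: reason_once(question, i), range(n_paths)))
    return Counter(answers).most_common(1)[0][0]

print(parallel_think("What is 2 + 2 * 3?"))  # -> 8 (12 of 16 paths agree)
```

The key property is that a single wrong path no longer decides the final answer; errors have to dominate the vote to surface.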
Search reimagined: AI Mode expands with personalization and agents
Google is bringing AI deeper into its core search product, rolling out "AI Mode" to all U.S. users after previously limiting it to Labs testers. This alternative search experience uses a technique called "query fan-out" to break questions into subtopics and issue multiple searches simultaneously, delivering more comprehensive results than traditional search.
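Google hasn't detailed the internals of query fan-out, but the behavior it describes, decomposing one question into sub-queries and searching them concurrently before merging results, can be sketched like this. The `decompose` and `search_one` functions are hypothetical stand-ins (a real system would use an LLM for decomposition and a search backend for retrieval):

```python
import asyncio

def decompose(query: str) -> list[str]:
    # Hypothetical sub-topic decomposition; a production system would
    # generate these sub-queries with a language model.
    return [f"{query} overview", f"{query} pricing", f"{query} reviews"]

async def search_one(subquery: str) -> list[str]:
    # Stand-in for a single search backend call.
    await asyncio.sleep(0.01)  # simulate network latency
    return [f"result for '{subquery}'"]

async def fan_out(query: str) -> list[str]:
    # Issue all sub-queries concurrently, then merge the result lists.
    subqueries = decompose(query)
    result_lists = await asyncio.gather(*(search_one(q) for q in subqueries))
    return [r for results in result_lists for r in results]

results = asyncio.run(fan_out("mirrorless cameras"))
print(len(results))  # -> 3
```

Because the sub-searches run concurrently rather than sequentially, total latency stays close to that of a single search even as coverage broadens.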
"AI Mode is our most powerful AI search, with more advanced reasoning and multimodality, and the ability to go deeper through follow-up questions and helpful links to the web," said Liz Reid, vice president and head of Google Search.
The company also shared impressive numbers for its existing AI Overviews feature, which now reaches more than 1.5 billion users. "In our biggest markets like the U.S. and India, AI Overviews is driving over a 10% increase in usage of Google for the types of queries that show AI Overviews," Reid noted during the briefing.
New features coming to AI Mode include Deep Search for comprehensive research reports, live visual assistance capabilities, and personalization options that can incorporate data from users' Google accounts. This personalization, which requires explicit user consent, aims to deliver more relevant results by understanding individual preferences and context.
Google is also pushing hard into agentic AI shopping experiences, introducing a virtual try-on feature that lets users see how clothes will look on them using just a single photo. The technology represents significant progress in making online shopping easier and more personalized.
"This is the spot where I found five dresses I love, and I can see how they look on the site and on the models there. However, I don't look like these models, and I wonder which one will actually work for me," said Vidhya Srinivasan, vice president and general manager of ads and commerce.
The system is powered by an image generation model built specifically for fashion applications. According to Srinivasan, it has "a really deep understanding of 3D shapes and fabrics, allowing it to render how clothing drapes realistically and fits different body types."
Beyond try-on, Google is also introducing agentic checkout capabilities that can automatically complete a purchase when an item reaches a price point the user has specified. The feature handles the entire checkout process through Google Pay, illustrating how Google is applying agentic AI to simplify everyday tasks.
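Google exposes no public API for this agentic checkout, but the core logic described, buying only when an item hits a user-set price, reduces to a simple threshold check gating a payment call. The `WatchedItem` type and `checkout` callback below are illustrative assumptions, not Google's interface:

```python
from dataclasses import dataclass

@dataclass
class WatchedItem:
    name: str
    target_price: float  # user-specified price threshold

def check_and_buy(item: WatchedItem, current_price: float, checkout) -> bool:
    # Trigger checkout only once the price reaches the user's target.
    # `checkout` stands in for the payment step (e.g. via Google Pay);
    # the agent never buys above the threshold.
    if current_price <= item.target_price:
        checkout(item.name, current_price)
        return True
    return False

purchases = []
item = WatchedItem(name="denim jacket", target_price=60.00)
check_and_buy(item, 74.99, lambda n, p: purchases.append((n, p)))  # above target: no buy
check_and_buy(item, 59.99, lambda n, p: purchases.append((n, p)))  # at/below target: buys
print(purchases)  # -> [('denim jacket', 59.99)]
```

In practice the hard parts are upstream of this check (reliable price monitoring and user authorization), which is presumably why the purchase itself runs through Google Pay's existing consent flow.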
Google also unveiled major upgrades to its generative models, introducing Veo 3 for video generation and Imagen 4 for images. The most dramatic advance is Veo 3's ability to create videos with synchronized audio, including ambient sounds, effects and character dialogue.
"For the first time, we're emerging from the silent era of video generation," said Hassabis. "Veo 3 not only delivers stunning visual quality, it can also generate sound effects, background noise and even dialogue."
These advanced models power Flow, Google's new filmmaking tool designed for creative professionals. Flow combines Google's best AI models to help storytellers create clips and cinematic scenes through a simpler interface.
Flow is "inspired by that feeling when time slows down and creativity flows effortlessly, iteratively and full of possibility," according to the company's statement. The tool has already been tested with several filmmakers, who created short films combining the technology with traditional techniques.
Meanwhile, Imagen 4 delivers improvements in image quality, with particular attention to typography and text rendering, making it especially valuable for creating marketing materials, presentations and other content that combines visuals and text.
Immersive communication: Google Beam evolves from Project Starline research
The company announced that Project Starline, the experimental 3D video conferencing technology it first showed several years ago, is evolving into a commercial product called Google Beam. The technology creates the sensation of being in the same room with someone, even when communicating remotely.
"Google Beam will be a new AI-first video communications platform," Pichai explained. "Beam uses a new video model to transform standard video streams into a realistic 3D experience."
The system uses an array of cameras to capture participants from different angles, then uses AI to merge those streams and render them on a 3D light field display with precise head tracking. The result is a deeply immersive conversation experience that goes well beyond traditional video calls.
Google has partnered with HP to bring the first Beam devices to market for select customers later this year. The technology also offers speech translation capabilities that preserve a speaker's voice and expression, enabling natural conversations across language barriers, a feature that is also coming to Google Meet.
Premium access: new subscription tier targets power users and professionals
To monetize its most advanced AI offerings, Google introduced a premium subscription tier called Google AI Ultra, priced at $249.99 per month. The tier provides access to Google's most powerful models, the highest usage limits, and early access to experimental features.
"If you're a filmmaker, developer, creative professional, or simply demand the best of Google AI with the highest level of access, the Google AI Ultra plan is built for you. Think of it as your VIP pass to Google AI," the company said in its press materials.
The Ultra plan includes access to Veo 3 with audio generation, Deep Think when it becomes available, the Flow filmmaking tool, the agentic Project Mariner, and 30 TB of storage. It also comes with YouTube Premium.
"The way to think about the Google AI Ultra plan is almost as your ticket to the best of Google AI. It will have special features and the highest rate limits, and we put early access to products and features there too," explained Josh Woodward, vice president of Google Labs and Gemini.
Google's standard AI Pro subscription continues at $19.99 per month, with some features from the higher Ultra tier making their way to this more affordable option.
Where vision becomes reality: Google's AI strategy takes shape
Google's I/O announcements reflect a company at an inflection point, successfully converting its extensive research investments into products that could reshape how people interact with technology. The focus on agentic capabilities, AI that can take action on users' behalf, signals a significant evolution beyond the current generation of AI assistants.
"One of the things I found magical, in search in particular … people intuitively adapt to the power of what's possible," said Pichai. "I think the big thing people get excited about is when you make [the interaction] more natural and intuitive."
For companies and developers weighing AI strategies, Google's expansive ecosystem offers powerful tools but demands careful consideration of integration, costs and data privacy implications. The company's dual approach, embedding AI in core products while developing premium offerings, suggests a long-term strategy of defending existing markets while creating new revenue streams.
As these technologies move from the lab into everyday use, they underscore Pichai's observation about the current AI moment: theoretical capabilities are becoming practical tools that fit naturally into how people work, create and communicate. The race is no longer just about building AI; it is about delivering intelligence in the moments people need it most, in ways that make using the technology feel natural and effective.
2025-05-20 17:45:00