At Google I/O this week, amid the usual parade of dazzling product demos and AI-powered announcements, something unusual happened: Google quietly declared war in the race to build artificial general intelligence (AGI).
"We fully intend that Gemini will be the very first AGI," said Google co-founder Sergey Brin, who made a surprise, unscheduled appearance at what was originally planned as a solo fireside chat with Demis Hassabis, CEO of DeepMind, Google's AI research powerhouse. The conversation, hosted by Big Technology founder Alex Kantrowitz, pressed both men on the future of intelligence, scale, and the evolving definition of what it means for a machine to think.
The moment was fleeting but unmistakable. In a field where most players hedge their talk of AGI with caveats, or avoid the term altogether, Brin's remark stood out. It marked the first time a Google executive has explicitly stated an intent to win the AGI race, a contest more often associated with Silicon Valley rivals like OpenAI and Elon Musk than with the search giant.
Yet Brin's boldness contrasted sharply with the caution expressed by Hassabis, a former neuroscientist and game developer whose vision has long steered DeepMind's approach to AI. While Brin framed AGI as an imminent milestone and competitive target, Hassabis called for clarity, restraint, and scientific precision.
"What I'm interested in, and what I would call AGI, is really a more theoretical construct, which is, what is the human brain as an architecture able to do?" Hassabis explained. "It's clear to me today, systems don't have that. And then the other thing that why I think it's sort of overblown the hype today on AGI is that our systems are not consistent enough to be considered to be fully General. Yet they're quite general."
This philosophical tension between Brin and Hassabis, one chasing scale and first-mover advantage, the other warning against overreach, could define Google's future as much as any product launch.
Inside Google’s AGI timeline: Why Brin and Hassabis disagree on when superintelligence will arrive
The contrast between the two executives became even more apparent when Kantrowitz posed a simple question: AGI before or after 2030?
"Before," Brin answered without hesitation.
“Just after,” Hassabis countered with a smile, prompting Brin to joke that Hassabis was “sandbagging.”
This five-second exchange encapsulates the subtle but significant tension in Google's AGI strategy. While both men clearly believe powerful AI systems are coming this decade, their differing timelines reflect fundamentally different approaches to the technology's development.
Hassabis took pains throughout the conversation to establish a more rigorous definition of AGI than is typically used in industry discussions. For him, the human brain serves as "an important reference point, because it's the only evidence we have, maybe in the universe, that general intelligence is possible."
True AGI, in his view, would require demonstrating that "your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. It's not one brain but the same brain architecture. So what Einstein did, what Mozart was able to do, what Marie Curie and so on."
By contrast, Brin's focus appeared more oriented toward competitive positioning than scientific precision. When asked about his return to day-to-day technical work at Google, Brin explained: "As a computer scientist, it's a very unique time in history, like, honestly, anybody who's a computer scientist should not be retired right now. Should be working on AI."
DeepMind's scientific roadmap clashes with Google's aggressive AGI strategy
Despite their different emphases, both leaders outlined similar technical challenges that must be solved on the path to more advanced AI.
Hassabis identified several specific barriers, noting that "to get all the way to something like AGI, I think may require one or two more new breakthroughs." He pointed to limitations in current systems' reasoning abilities, creative invention, and the accuracy of their "world models."
"For me, for something to be called AGI, it would need to be consistent, much more consistent across the board than it is today," Hassabis explained. "It should take, like, a couple of months for maybe a team of experts to find a hole in it, an obvious hole in it, whereas today, it takes an individual minutes to find that."
Both executives agreed on the importance of "thinking" capabilities in AI systems. Google's newly announced "deep think" feature, which allows AI models to engage in parallel reasoning processes that check one another, represents a step in this direction.
"We've always been big believers in what we're now calling this thinking paradigm," Hassabis said, referencing DeepMind's early work on systems like AlphaGo. "If you look at a game like chess or go… we had versions of AlphaGo and AlphaZero with the thinking turned off. So it was just the model telling you its first idea. And, you know, it's not bad. It's maybe like master level… But then if you turn the thinking on, it's been way beyond World Champion level."
Brin concurred, adding: "Most of us, we get some benefit by thinking before we speak. And although not always, I was reminded to do that, but I think that the AIs obviously, are much stronger once you add that capability."
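For readers who want a concrete picture of the "thinking" pattern both men describe, the toy Python sketch below shows one common way the idea is illustrated: sample several candidate answers in parallel, run a checker over them, and return the answer the surviving candidates agree on. Everything in it, the stand-in generator, the verifier, and the toy squaring task, is a hypothetical illustration of the general pattern, not a description of how Google's deep think feature is actually built.

```python
# Toy illustration of "parallel reasoning with cross-checking".
# The generator and verifier below are stand-ins for an AI model's
# reasoning traces and a checker pass; they do NOT reflect Google's
# actual Deep Think implementation.

import random
from collections import Counter


def sample_candidate_answer(question: int) -> int:
    """Stand-in for one independent reasoning trace.

    The 'task' here is squaring a number; most traces get it right,
    some return a plausible-looking wrong answer.
    """
    if random.random() < 0.8:
        return question * question
    return question * question + random.choice([-1, 1])


def verify(question: int, answer: int) -> bool:
    """Stand-in for a checker pass that audits a candidate answer."""
    return answer == question * question


def think_then_answer(question: int, n_traces: int = 8) -> int:
    """Sample several traces, keep only the ones the checker accepts,
    and return the majority answer among the survivors."""
    candidates = [sample_candidate_answer(question) for _ in range(n_traces)]
    survivors = [c for c in candidates if verify(question, c)] or candidates
    return Counter(survivors).most_common(1)[0][0]


if __name__ == "__main__":
    print(think_then_answer(12))  # prints 144 with high probability
```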
Beyond scale: How Google is betting on algorithmic breakthroughs to win the AGI race
When pressed on whether scaling current models or developing new algorithmic approaches would drive progress, both leaders emphasized the need for both, though with slightly different emphases.
"I've always been of the opinion you need both," Hassabis said. "You need to scale to the maximum the techniques that you know about. You want to exploit them to the limit, whether that's data or compute, scale, and at the same time, you want to spend a bunch of effort on what's coming next."
Brin agreed but added a notable historical perspective: "If you look at things like the N-body problem and simulating just gravitational bodies… as you plot it, the algorithmic advances have actually beaten out the computational advances, even with Moore's law. If I had to guess, I would say the algorithmic advances are probably going to be even more significant than the computational advances."
This emphasis on algorithmic innovation over pure computational scale aligns with Google's recent research focus, including the AlphaEvolve system announced last week, which uses AI to improve AI algorithms.
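To see why Brin's N-body example is striking, consider a rough back-of-the-envelope comparison, our own illustration rather than a figure cited in the talk: moving from direct pairwise force summation, which costs on the order of N squared operations, to a Barnes-Hut-style tree method at roughly N log N changes the cost dramatically at large N. The particle count and the "two years per doubling" rule of thumb below are assumptions chosen only to make the comparison concrete.

```python
# Back-of-the-envelope comparison of algorithmic vs. hardware gains
# for an N-body simulation. Assumed numbers, for illustration only.

import math

N = 1_000_000                      # assumed number of simulated bodies
direct_ops = N ** 2                # direct pairwise force evaluations, O(N^2)
tree_ops = N * math.log2(N)        # Barnes-Hut-style tree code, ~O(N log N)

algorithmic_speedup = direct_ops / tree_ops
moore_doublings = math.log2(algorithmic_speedup)   # equivalent hardware doublings
years_of_moore = moore_doublings * 2               # assuming ~2 years per doubling

print(f"Algorithmic speedup: ~{algorithmic_speedup:,.0f}x")
print(f"Equivalent to ~{years_of_moore:.0f} years of Moore's-law scaling")
```

On these assumed numbers, the change of algorithm alone is worth roughly three decades of Moore's-law doublings, which is the shape of the comparison Brin was gesturing at.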
Google's multimodal vision: Why camera-first AI gives Gemini a strategic advantage
One area of clear alignment between the two executives was the importance of AI systems that can process and generate multiple modalities, particularly visual information.
Unlike rivals whose AI demos often emphasize voice assistants or text-based interactions, Google's vision for AI leans heavily on cameras and visual processing. This was evident in the company's announcement of new smart glasses and the emphasis on computer vision throughout its I/O presentations.
"Gemini was built from the beginning, even the earliest versions, to be multimodal," Hassabis explained. "That made it harder at the start… but in the end, I think we're reaping the benefits of those decisions now."
Hassabis identified two key applications for vision-capable AI: "a truly useful assistant that can come around with you in your daily life, not just stuck on your computer or one device," and robotics, where he believes the bottleneck has always been the "software intelligence" rather than the hardware.
"I've always felt that the universal assistant is the killer app for smart glasses," Hassabis added, a statement that positions Google's newly announced device as central to its AI strategy.
Navigating AI safety: How Google plans to build AGI without breaking the internet
Both executives acknowledged the risks that come with rapid AI development, particularly around generative capabilities.
When asked about video generation and the potential for model degradation from training on AI-generated content, a phenomenon some researchers call "model collapse," Hassabis outlined Google's approach to responsible development.
"We're very rigorous with our data quality management and curation," he said. "For all of our generative models, we attach SynthID to them, so there's this invisible AI-made watermark that is pretty, very robust, has held up now for a year, 18 months since we released it."
The concern about responsible development extends to AGI itself. When asked whether one company would dominate the landscape, Hassabis suggested that after the first systems are built, "we can imagine using them to shard off many systems that have safe architectures, sort of built under… provably underneath them."
From simulation theory to AGI: The philosophical divide between Google's AI leaders
Perhaps the most revealing moment came at the end of the conversation, when Kantrowitz asked a lighthearted question about whether we live in a simulation, inspired by a cryptic tweet from Hassabis.
Nature to simulation at the press of a button, does make you wonder… ♾? https://t.co/lU77WHio4L
— Demis Hassabis (@demishassabis) May 7, 2025
Even here, the philosophical differences between the two executives were apparent. Hassabis offered a nuanced perspective: "I don't think this is some kind of game, even though I wrote a lot of games. I do think that ultimately, underlying physics is information theory. So I do think we're in a computational universe, but it's not just a straightforward simulation."
Brin, meanwhile, approached the question with logical precision: "If we're in a simulation, then by the same argument, whatever beings are making the simulation are themselves in a simulation for roughly the same reasons, and so on so forth. So I think you're going to have to either accept that we're in an infinite stack of simulations or that there's got to be some stopping criteria."
The exchange captured the essential dynamic between the two: Hassabis the philosopher-scientist, approaching questions with nuance and from first principles; Brin the pragmatic engineer, breaking problems down into logical components.
Brin's declaration during his Google I/O appearance marks a seismic shift in the AGI race. By explicitly stating Google's intent to win, he has abandoned the company's previous restraint and directly challenged OpenAI's position as the perceived AGI frontrunner.
This is no small matter. For years, OpenAI has owned the AGI narrative while Google carefully avoided such bold proclamations. Sam Altman has relentlessly positioned OpenAI's entire existence around the pursuit of artificial general intelligence, turning what was once an esoteric technical concept into both a corporate mission and a cultural touchstone. His frequent hints about GPT-5's capabilities and vague but tantalizing comments about artificial superintelligence have kept OpenAI in headlines and investor decks.
OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:
We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings.
We want AI to "just work" for you; we realize how complicated our model and product offerings have gotten.
We hate…
— Sam Altman (@sama) February 12, 2025
By deploying Brin, not just any executive but a founder with near-mythic status in Silicon Valley, Google has effectively announced it won't cede this territory without a fight. The move carries particular weight coming from Brin, who rarely makes public appearances but commands extraordinary respect among engineers and investors alike.
The timing couldn't be more significant. With Microsoft's backing giving OpenAI seemingly unlimited resources, and Meta's aggressive open-source strategy threatening to commoditize certain aspects of AI development, Google needed to reassert its position at the vanguard of AI research. Brin's statement does exactly that, serving as both a rallying cry for Google's AI talent and a shot across the bow to rivals.
What makes this three-way contest particularly fascinating is how differently each company approaches the AGI challenge. OpenAI has bet on tight secrecy around training methods paired with splashy consumer products. Meta emphasizes open research and democratized access. Google, with this new positioning, appears to be staking out middle ground: the scientific rigor of DeepMind combined with the competitive urgency embodied by Brin's return.
What Google's AGI gambit means for the future of AI innovation
As Google continues its push toward more powerful AI systems, the balance between these approaches will likely determine its success in what has become an increasingly competitive field.
Google's decision to bring Brin back into day-to-day operations while maintaining Hassabis's leadership at DeepMind suggests an understanding that both competitive drive and scientific rigor are necessary components of its AI strategy.
Whether Gemini will indeed become "the very first AGI," as Brin confidently predicted, remains to be seen. But the conversation at I/O made clear that Google is now openly competing in a race it had previously approached with more caution.
For an industry watching every signal from AI's leading players, Brin's declaration represents a significant shift in tone, one that may pressure rivals to accelerate their own timelines, even as voices like Hassabis's continue to advocate for careful definitions and responsible development.
In this tension between speed and science, Google may have found its unique position in the AGI race: ambitious enough to compete, careful enough to do it right.

