Google I/O was an AI evolution, not a revolution


At its I/O developer conference, Google made its case to developers — and, to some extent, consumers — for why its bets on AI put it ahead of rivals. The company unveiled a revamped AI-powered search engine, an AI model with an expanded context window of 2 million tokens, AI helpers across its suite of Workspace apps like Gmail, Drive and Docs, tools to integrate its AI into developers’ apps, and even a future vision for AI, codenamed Project Astra, which can respond to sight, sound, voice and text combined.

While each advance on its own was promising, the onslaught of AI news was overwhelming. Though obviously aimed at developers, these big events are also an opportunity to wow end users with the technology. But after the flood of news, even somewhat tech-savvy consumers may be asking themselves: wait, what’s Astra again? Is it the thing powering Gemini Live? Is Gemini Live sort of like Google Lens? How is it different from Gemini Flash? Is Google actually making AI glasses, or is that vaporware? What’s Gemma, what’s LearnLM…what are Gems? When is Gemini coming to your inbox, your docs? How do I use these things?

If you know the answers to those, congratulations, you’re a TechCrunch reader. (If you don’t, click the links to get caught up.)


What was missing from the overall presentation, despite the enthusiasm of the individual presenters and the whooping cheers from the Google employees in the crowd, was a sense of the coming AI revolution. If AI will ultimately lead to a product that profoundly reshapes the direction of technology the way the iPhone reshaped personal computing, this was not the event where it debuted.

Instead, the takeaway was that we’re still very much in the early days of AI development. 

On the sidelines of the event, there was a sense that even Googlers knew the work was unfinished. During a demo of how AI could compile a student’s study guide and quiz within moments of uploading a multihundred-page document — an impressive feat — we noticed that the quiz answers weren’t annotated with their sources. When asked about accuracy, an employee admitted that the AI gets things mostly right and that a future version would point to sources so people could fact-check its answers. But if you have to fact-check, how reliable is an AI study guide at preparing you for the test in the first place?

In the Astra demo, a camera mounted over a table and linked to a large touchscreen let you do things like play Pictionary with the AI, show it objects, ask questions about those objects, have it tell a story and more. But the use cases for how these abilities will apply to everyday life weren’t readily apparent, despite the technical advances that, on their own, are impressive. 

For example, you could ask the AI to describe objects using alliteration. In the livestreamed keynote, Astra saw a set of crayons and responded “creative crayons colored cheerfully.” Neat party trick.

When we challenged Astra in a private demo to guess the object in a scribbled drawing, it correctly identified the flower and house we drew on the touchscreen right away. When we drew a bug — one bigger circle for the body, one smaller circle for the head, little legs off the sides of the big circle — the AI stumbled. Is it a flower? No. Is it the sun? No. The employee guided the AI to guess something that was alive. We added two more legs for a total of eight. Is it a spider? Yes. A human would have seen the bug immediately, despite our lack of artistic ability.

No, you were not supposed to record. But here’s a similar demo posted on X.

To give you a sense of where the technology is today, Google staff didn’t allow recording or photographs in the Astra demo room. They also had Astra running on an Android smartphone, but you couldn’t see the app or hold the phone. The demos were fun, and certainly the tech that made them possible is worth exploring, but Google missed an opportunity to showcase how its AI technology will impact your everyday life.

When are you going to need to ask an AI to come up with a band name based on an image of your dog and a stuffed tiger, for example? Do you really need an AI to help you find your glasses? (These were other Astra demos from the keynote.)

Image Credits: Google demo video

This is hardly the first time we’ve watched a technology event filled with demos of an advanced future that lack real-world applications, or that pitch minor conveniences as significant upgrades. Google, for instance, has teased its AR glasses in previous years, too. (It even parachuted skydivers into I/O wearing Google Glass, a project launched over a decade ago that has since been killed off.)

After watching I/O, it feels like Google sees AI as just another means to generate additional revenue: Pay for Google One AI Premium if you want its product upgrades. Perhaps, then, Google won’t make the first huge consumer AI breakthrough. As OpenAI CEO Sam Altman recently mused, the original idea for OpenAI was to develop the technology and “create all sorts of benefits for the world.”

“Instead,” he said, “it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.” 

Google seems to be in the same boat.

Still, there were times when Google’s Astra AI seemed more promising. If it can correctly identify code or suggest how to improve a system based on a diagram, it’s easier to see how it could become a useful work companion. (Clippy, evolved!)

Gemini in Gmail.
Image Credits: Google

There were other moments when the real-world practicality of AI shone through, too. A better search tool for Google Photos, for instance. Plus, having Gemini’s AI in your inbox to summarize emails, draft responses or list action items could help you finally get to inbox zero, or some approximation of it, more quickly. But can it clear out your unwanted but non-spam emails, smartly organize messages into labels, make sure you never miss an important message, and show you everything that needs your attention the moment you log in? Can it summarize the most important news from your email newsletters? Not quite. Not yet.

In addition, some of the more complex features, like AI-powered workflows or the receipt organization that was demoed, won’t roll out to Labs until September.

When thinking about how AI will impact the Android ecosystem — Google’s pitch for the developers in attendance — there was a sense that even Google can’t yet make the case that AI will help Android woo users away from Apple’s ecosystem. “When is the best time to switch from iPhone to Android?” we asked Googlers of varying ranks. “This fall” was the general response — in other words, at Google’s fall hardware event, which should coincide with Apple’s embrace of RCS, an upgrade to SMS that will make Android messaging more competitive with iMessage.

Simply put, consumers’ adoption of AI in personal computing devices may require new hardware developments — maybe AR glasses? a smarter smartwatch? Gemini-powered Pixel Buds? — but Google isn’t yet ready to reveal its hardware updates or even tease them. And, as we’ve already seen with the underwhelming launches of Humane’s Ai Pin and Rabbit’s R1, hardware is still hard.


Though much can be done today with Google’s AI technology on Android devices, Google’s accessories like the Pixel Watch and the system that powers it, Wear OS, were largely overlooked at I/O beyond some minor performance improvements. Its Pixel Buds earbuds didn’t even get a shout-out. In Apple’s world, these accessories help lock users into its ecosystem and could someday connect them with an AI-powered Siri. They are critical pieces of its overall strategy, not optional add-ons.

Meanwhile, there’s a sense of waiting for the other shoe to drop: that is, Apple’s WWDC. The tech giant’s Worldwide Developers Conference promises to unveil Apple’s own AI agenda, perhaps through a partnership with OpenAI or even Google. Will it be competitive? How can it be if the AI can’t deeply integrate into the OS the way Gemini can on Android? The world is waiting for Apple’s response.

With a fall hardware event, Google has time to review Apple’s launches and then attempt to craft its own AI moment that’s as powerful, and as immediately understandable, as Steve Jobs’ introduction of the iPhone: “An iPod, a phone, and an Internet communicator. An iPod, a phone… are you getting it?” 

People got it. But when will they get Google’s AI in the same way? Not from this I/O, at least.


 




