It’s hard to miss AI at corporate announcements like Google I/O these days. While the sheer volume of announcements from these events can feel like drinking from a firehose, it does give us an insight into what at least the next six months look like for consumer AI. With so many announcements, we can’t cover everything, but there were some key head-turning moments that are worth a closer look.
Google didn’t hold much back in the lead-up to I/O, with updates to their leading Gemini models arriving over a week before the event, leading to some speculation that they might have let the air out of the balloon too soon. We needn’t have worried, though; there was still plenty up their sleeve. Some of the highlights are covered below.
Google’s push to integrate Gemini into every user experience they can is pretty clear. Workspace has been supercharged, with most apps now having some sort of Gemini capability; Android has its own Gemini rollout; and previous research projects (more on those later) are starting to become production features. Google painted a pretty clear picture of their vision for consumer AI - a proactive assistant with a deep understanding of your context and needs, that’s there all the time.
Their push for personalisation is a double-edged sword, though. We know that personalisation is a key tool for improving user journeys and experiences, reducing unwanted friction and smoothing the purchasing journey - but we’re increasingly seeing an undercurrent of distaste for AI-based personalisation in particular.
Where an innocent {name} placeholder in an email might be appreciated, AI has the capability to understand users on a level they may not even be aware of. Given huge datasets and highly granular tracking, some users have a negative gut reaction to how intimate and intense that hyper-personalisation can feel. Finding the right balance between optimisation and respectful distance is likely to be a key focus for businesses in the future. It’ll be interesting to see how events like Google I/O shift the wider conversation around data privacy and the ‘Big Brother’ effect.
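To make that contrast concrete, here’s a minimal sketch of the ‘innocent’ mail-merge style of personalisation: a template filled in only from fields the user knowingly handed over. (The template and field names are our own illustration, not any particular email platform’s API.) AI hyper-personalisation differs in kind - it infers things the user never explicitly supplied.

```python
# A minimal sketch of mail-merge personalisation: placeholders are filled
# only from fields the user knowingly provided. The template and field
# names are illustrative assumptions, not a real platform's API.

EMAIL_TEMPLATE = "Hi {name}, your order #{order_id} has shipped."

def render_email(template: str, **fields: str) -> str:
    """Substitute known placeholders into an email template."""
    return template.format(**fields)

print(render_email(EMAIL_TEMPLATE, name="Sam", order_id="1042"))
# -> Hi Sam, your order #1042 has shipped.
```

Nothing here is inferred: the recipient can see exactly where the data came from, which is precisely why this style rarely triggers the unease that inferred, behavioural personalisation can.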
A big focus for the event was how Google is re-shaping its search offering. Particularly in light of how controversial AI Overviews have been (inaccuracies and controversies abound), it’s interesting to see them double down on AI-synthesised answers becoming the default response for many queries.
The goal is clear: to move beyond just summarising web pages that already exist and shift towards providing direct, comprehensive answers, layered with ‘agentic’ capabilities that mean you can actually do things from the Search Engine Results Page (SERP), not just see things.
The big news is the wider rollout of ‘AI Mode’ - a system designed to replace the traditional search engine experience and put AI answers front and centre. Released to everyone in the US as of 20th May, AI Mode features more advanced reasoning capabilities that allow users to ask longer and more complex queries. It’s multimodal too, incorporating image search technology like Google Lens alongside the voice features of Gemini Live. It even takes the fundamentals of Gemini’s ‘Deep Research’ features to create ‘Deep Search’, which can perform hundreds of searches across the web and produce a cited report of its findings.
Google is bullish about how successful AI Overviews have been, particularly in their biggest markets of the US and India. That confidence hasn’t been shared by everyone, however, with content producers and publishers feeling particularly hard done by. If Google aims to keep users within its own AI bubble, never leaving to actually visit a website to get their answers, there’s a lot less reason for some companies to make the kind of content they do. That’s without digging into the legal ramifications of content ownership, where Google is essentially ‘stealing’ website content to produce its own product without offering much in return.
While Google is keen to stress that they send more traffic to websites than ever before, studies by tools like Ahrefs show that AI Overviews reduce clickthrough rates by roughly 35%. That’s a huge hit from a single SERP feature, especially when they don’t appear in isolation. Organic real estate on SERPs has been in decline for many years now, with Featured Snippets, Shopping, Map Packs, Flights, paid ads and more all crowding out those (now geriatric) traditional ten blue links.
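To put that 35% figure in perspective, here’s a quick back-of-envelope calculation. The impression and CTR numbers below are hypothetical assumptions for illustration, not figures from the Ahrefs study - only the ~35% relative reduction comes from the research mentioned above.

```python
# Back-of-envelope impact of a ~35% relative CTR reduction.
# The impression and baseline CTR figures are illustrative assumptions.

monthly_impressions = 100_000   # assumed impressions for a keyword set
baseline_ctr = 0.05             # assumed 5% organic CTR without an AI Overview
relative_drop = 0.35            # ~35% relative reduction per Ahrefs-style studies

clicks_before = monthly_impressions * baseline_ctr
clicks_after = clicks_before * (1 - relative_drop)

print(f"Clicks before: {clicks_before:,.0f}")                            # 5,000
print(f"Clicks after:  {clicks_after:,.0f}")                             # 3,250
print(f"Clicks lost:   {clicks_before - clicks_after:,.0f} per month")   # 1,750
```

On those assumed numbers, a page earning 5,000 organic clicks a month drops to 3,250 - and that’s before the other SERP features mentioned above take their own share.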
We’ve seen in our own research, SearchPulse, that adoption of AI tools and increasingly personalised social media search is starting to eat away at the ‘Google It’ paradigm we’ve been living in for almost 20 years. That market share hasn’t yet reached critical mass, though - will putting more potentially incorrect answers in front of users actually accelerate that growth? We don’t think so, certainly not until citation quality and the ease of double-checking get a significant boost.
Google has teased many an AI research project over the last two years, and this I/O delivered on getting some of those features into the real world for more users. For example, Project Mariner was Google’s big push into the ‘agentic’ space - which just means giving AI the ability to take actions, not just work with information. Imagine asking an AI tool to find you the best-priced flights and hotels, then getting it to book them for you. That capability has now been deployed into a new ‘Agent Mode’ in the Gemini app, where the tool itself accesses websites and interacts with on-site functionality such as filters and site search. They haven’t talked about retail or bookings with this mode yet, so it seems Google is still holding back from giving AI access to people’s credit cards for now - which seems wise.
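To unpack what ‘agentic’ means in practice, here’s a rough sketch of the observe-decide-act loop such tools are generally built around. Everything below is a hypothetical illustration of the general pattern: the function names, the planner stub and the hard-coded decisions are our own, not Google’s Agent Mode internals.

```python
# A hypothetical agentic loop: the model doesn't just answer, it chooses
# actions (open a page, set a filter, search) against a live site until
# the goal is met. All names here are illustrative, not a real product API.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str    # e.g. "open_url", "set_filter", "site_search", "done"
    target: str  # a URL, filter value, or search phrase

def plan_next_action(goal: str, page_text: str, history: list[Action]) -> Action:
    """Stand-in for an LLM call that picks the next step towards the goal."""
    if not history:
        return Action("open_url", "https://example-flights.test")
    if "set_filter" not in {a.kind for a in history}:
        return Action("set_filter", "price: low to high")
    return Action("done", "cheapest option found")

def run_agent(goal: str, max_steps: int = 10) -> list[Action]:
    history: list[Action] = []
    page_text = ""  # would hold the rendered page in a real agent
    for _ in range(max_steps):
        action = plan_next_action(goal, page_text, history)
        history.append(action)
        if action.kind == "done":
            break
        # a real agent would execute the action in a browser here
        page_text = f"page after {action.kind}: {action.target}"
    return history

for step in run_agent("find the cheapest flight to Lisbon"):
    print(step.kind, "->", step.target)
```

The key difference from a chatbot is that loop: each response feeds back into the next decision, which is also why payment and booking steps are where the risk concentrates.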
That wasn’t the only project on show, though - the other big talking point was Project Astra, which has been partially incorporated into Gemini Live, where it powers the camera and screen-sharing capabilities. The visual and contextual understanding tech that Google has built seems to be a growing area of expertise, with multimodal support becoming pretty much universal across their tool suite. Their vision for Gemini is to make it a ‘universal AI assistant’.
The real application for Project Astra was talked about elsewhere, though - visual understanding and contextual awareness are fundamental to their new Android XR software that powers headsets and glasses. It’s an area that has been slow to make its way into the real world, but Google looks committed to bringing an AI-integrated wearable experience to market - beyond the more widely adopted watches we’ve seen from plenty of companies. It’s been 13 years since the announcement of ‘Google Glass’, their first attempt at this space - so it remains to be seen whether it’s even something people actually want to use.
For businesses trying to keep up with all the change and development in the AI space at the moment, it can be hard to work out what’s actually going to make a difference, and what’s just marketing fluff. The message from Google is that everything is changing fast - whether that’s the case depends on who we’re talking about.
There has certainly been growth in early adopters taking advantage of what is genuinely revolutionary tech, but that by no means represents the fundamental shift in user behaviour that Google might have you believe. Most users are still following the same patterns they have for years, and we’re a little way off the watershed moment for Search.
Announcements at events like I/O give us a trajectory to follow, and we need to be aware of the potential implications. For end users, the promise of a more intuitive, seamless and personalised digital experience that transcends individual websites or search types is a potentially huge opportunity. There are problems with that approach, though. If AI begins to curate our every experience, personalising it to our tastes, are we just creating isolated bubbles that reinforce existing choices and biases? How do we show users new products if they’re already entrenched in an ecosystem? How much of our data are we comfortable with an “always-on” AI having access to?
The ease and utility that AI offers might also lead to a resurgence in physical, distinctly human experiences that have been suffering blow after blow from the digital space for decades. It might be that physical retail and human recommendations become an all-the-more valuable commodity.
If AI Overviews and summarised results are the future of Search, then we expect to see big changes in how businesses approach digital content and the trade-off between the value they offer Google and what they get in return. That’s not where we are now - and the fact that Google has launched their new experience as an opt-in mode shows that even they appreciate that it’s not ready for everyone - yet.
Matt is a data and spreadsheet nerd. Having worked in data pipeline engineering, business intelligence and data analysis, he helps us manage and understand data to generate interesting and actionable insights. He helps drive efficiencies both internally and for clients, creating innovative solutions using automation, machine learning and AI.
More about Matt