There's no beating around the bush on this: it takes too long for Google's Generative AI search to generate results, and like its compatriots in the generative AI space it too often gets stuff wrong.
I've been enrolled in the beta test for Google Search Labs' "Search Generative Experience," and the experience has been less than stellar. The biggest issue is that it's too slow. Unlike ChatGPT, which typically starts visibly generating a response in less than a second, Google's AI search summaries often take 3-5 seconds or more to show their results. Granted, when the summary does appear it arrives all at once rather than streaming in word by word, but it takes a while to get there.
Generative AI doesn't "know" anything, it's basically just a very advanced next-word prediction engine.
This is just down to the nature of generative AI. It doesn't know things, it's just cross-referencing a massive web of words and weights to build what is the most likely response to your query. And it does this cross-referencing prediction one. word. at. a. time. So yeah, it takes a bit of time to come up with a result compared to the traditional Google Search experience.
Considering that most search users immediately start drilling down into the regular results that were delivered to them in a fraction of a second, that's not great, since the AI results sit at the top. To make matters worse, when the AI content loads it pushes down the existing page content, which is really not great when you've already scrolled down and suddenly the page shifts. There's a term for that: cumulative layout shift, and it's a metric that Google rightfully harps on in the page performance signals that feed into its search rankings.
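For the curious, a single layout shift's score is the fraction of the viewport that moved (the impact fraction) multiplied by how far it moved relative to the viewport (the distance fraction), and cumulative layout shift adds those scores up. Here's a simplified sketch of that math in Python; the element geometry is made up for illustration, and real browsers compute this per layout-shift event with session windowing that this toy version ignores:

```python
# Simplified sketch of how a cumulative layout shift (CLS) score builds up.
# The impact-fraction x distance-fraction formula is the documented one;
# the example numbers below are invented for illustration.

def layout_shift_score(impact_fraction: float, distance_fraction: float) -> float:
    """Score for a single unexpected shift: impact fraction x distance fraction."""
    return impact_fraction * distance_fraction

def cumulative_layout_shift(shifts: list[tuple[float, float]]) -> float:
    """Sum the scores of all unexpected shifts (ignoring session windowing)."""
    return sum(layout_shift_score(i, d) for i, d in shifts)

# A late-loading AI summary that moves 75% of the viewport's content
# down by 25% of the viewport height scores 0.75 * 0.25 = 0.1875.
score = cumulative_layout_shift([(0.75, 0.25)])
print(f"CLS: {score:.4f}")  # CLS: 0.1875
```

One big late-loading block shoving most of the page down a single time is already well past the 0.1 threshold Google considers "good" for CLS, which is exactly what a slow AI summary at the top of the results does.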
The AI-generated results themselves actually look rather nice when they load. The text is clearly formatted and easy to read, a few card-style links to websites on the topic are displayed, and there are options for follow-up questions along the bottom. Overall, it's nice, and more useful than the straight text results ChatGPT provides. And because it's Google, searching for something like restaurants means it can display a map showing where those places are.
Buuuuuuuut… there are problems. Google's Search AI appears to have no real insight into the rankings that power Google's regular search results, and it almost invariably displays different links in the AI-generated section than in the traditional results that follow. For example, when I did a search for "cumulative layout shift," one of the websites it recommended had terrible layout shift on every page. Nothing like every image shoving the text around as it finally loaded while you scrolled by.
It seems like there is always a "but" when it comes to generative AI…
Worse, like ChatGPT it doesn't know how to say "I don't know," because it knows nothing. It's a predictive model that creates passages of text that pass for human-written, but it doesn't understand the context of what it's been asked. I did a search for the model number of my dream monitor, announced at CES 2023, and the AI text got the first parts right: the name and most of the specs.
And then it went off the rails, providing dimensions, weight, and pricing along with a claim that the monitor has since been discounted. None of that is known; the monitor hasn't even been released yet, much less received a discount. Oh, and in the second sentence it said the monitor "is one of the best selling high-end monitors in 2023". It hasn't been released, and given the likely exorbitant price tag it absolutely will not be a top seller. Yet here's Google's Search Generative Experience spitting out completely wrong information.
Google did tack a notice at the top of the AI results saying that "Generative AI is experimental. Info quality may vary." But is that really good enough?
Billions of users every day trust Google to help them find answers to whatever question they have. It's used for shopping, for personal medical research, for learning more about the world. And its role for years has been to surface the best pages that match your search query. It hasn't always delivered on that promise, but at its core it's always been "Here are the top 10 links for that search term, go check them for yourself." But if AI-generated text that may or may not be correct is going to sit at the very top of the page, often taking up more screen real estate than most users have (especially on mobile devices), then it pushes the results we've long trusted down the screen and puts a machine-generated response of questionable quality in their place.
Even with all Google's data there are still blind spots… and the generative AI doesn't know that.
The problem with generative AI like this is two-fold. It doesn't know what it doesn't know (again, it doesn't really "know" anything) and it's simply a very advanced predictive model for what word comes next. It's basically a super-amped-up version of repeatedly hitting the middle button in the next-word prediction provided by your phone's keyboard. And while feeding more and more data into the training set used to make those predictions will improve the quality of the results, there will always be quality problems because of how it works.
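That "middle button" analogy can be made concrete. Here's a deliberately tiny sketch in Python: a bigram model that always emits the most frequent word it has ever seen follow the current one. The corpus and function names are my own invention, and a real LLM predicts over tokens using billions of learned weights rather than raw counts, but the loop (pick the likeliest continuation, append it, repeat) is the same shape:

```python
from collections import Counter, defaultdict

# Toy "middle button" text generator: for each word, remember which words
# followed it in the training text, then always pick the most frequent one.

def train_bigrams(text: str) -> dict:
    """Count, for every word, which words come next and how often."""
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, steps: int) -> list:
    """Greedily append the most likely next word, one word at a time."""
    out = [start]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break  # no known continuation; unlike an LLM, this toy just stops
        out.append(options.most_common(1)[0][0])
    return out

model = train_bigrams("the cat sat on the mat so the cat sat on the rug")
print(" ".join(generate(model, "the", 4)))  # the cat sat on the
```

The output is fluent-looking but the model has no idea what a cat or a mat is; it only knows which words tend to follow which. Scale the counts up to a neural network trained on much of the internet and you get text that's far more convincing, built on exactly the same kind of ignorance.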
And because the neural network basically trained itself, there's no good way for Google engineers to pull it apart and explain why it spat out the result that it did. To be fair, that's also how your brain works: we might be able to remember where we heard a fact, but we can't tell you why we said a sentence the way we did. But humans are fallible and most of us know it, even about ourselves; we'll admit when we don't know things, and most of us know how to admit when we're wrong (unlike some AI chat assistants). Generative AI doesn't do that, it is simply incapable of doing that, and so we end up with tools like this that confidently state untrue things. And for a product as foundational to the modern internet as Google Search, that's potentially a very bad thing.