What We Can Learn About the State of AI From Google’s AI Overview Launch

25 Jul 2024

In May 2024, Google rolled out its AI Overview feature to improve the search experience. The feature is meant to streamline searching by summarizing information from multiple sources into a concise, AI-written answer to the query a user types into the search bar. With AI Overview, Google's intentions were clear: leverage advanced AI to make search faster and more reliable.

However, the reality of the AI Overview’s performance has fallen short of expectations in the months since its release. Almost immediately after its launch, users began encountering some unexpected issues. Social media and community forums buzzed with examples of the AI Overview producing absurd recommendations.

Some of its responses even gave users dangerous advice. Among the most viral incidents was the suggestion to add ⅛ cup of nontoxic glue to pizza to help the cheese stick, a recommendation people eventually traced back to an old joke comment on Reddit. Incidents like these have fueled a growing chorus online arguing that Google was too quick to launch AI Overview.

What Users Think of Google’s AI Overview

Now, nearly two months after its release, user reactions to Google's AI Overview still vary. Some people love it, while others are openly dissatisfied. In a HackerNoon poll, 29% of respondents said they hated the feature and would prefer Google remove it, while another 29% disliked it for now but believed Google could improve it.

Reddit users have been vocal about AI Overview's shortcomings and poor advice. From recommending a rock a day to giving incorrect cooking temperatures, the errors have drawn widespread criticism of Google.

A Reddit thread in the r/technology community discussed these issues at length. Users voiced frustration with the feature, pointing out that it cannot tell humor from authentic content. Some argued that AI Overview was never a good idea in the first place.

On the other hand, some people have come to Google's defense, explaining how large language models (LLMs) work and why they sometimes produce inaccurate information. Despite the backlash, these users see potential in the technology and believe it is valuable but needs refinement.

Google’s Thoughts on the Matter

When people read how poor some of the overview responses are, they wonder whether Google truly considered how its audience would use the product. After all, 60% of online users consider trustworthiness a top priority when using a brand's products, which means providing trustworthy answers should have been the number one priority. Yet the rollout suggests Google did not test AI Overview thoroughly against the messy, unpredictable queries real users actually type.

Google has admitted to the search feature's errors. In a blog post, Google's head of search, Liz Reid, acknowledged that AI Overview had surfaced sarcastic and satirical content. She noted that while public forums are often a great source of authentic, firsthand information, they can also lead the feature to serve unhelpful advice.

However, Google also claims that some screenshots of AI Overview producing nonsensical responses may be fake. WIRED conducted its own testing to check and found that one screenshot with over 5 million views did not align with how AI Overview actually presents information to users.

Still, Google recognizes that these errors keep users from experiencing AI Overview's full potential. That is why it has made more than a dozen improvements, including better detection of queries that do not call for an AI Overview at all. Google also intends to keep a close eye on the tool and adjust it based on user feedback.

The Implications of the State of AI

AI has proven to be a helpful tool for many, but issues like these make clear that it still has significant room to improve. The launch points to a few takeaways about the current state of AI.

1. AI Relies Too Heavily on Biased Datasets

AI systems are only as good as the data they are trained on, and that data is a major source of bias. The datasets machine learning engineers feed into these systems often carry biases accumulated across decades of online content. As a result, LLMs replicate those biases and risk perpetuating existing misinformation.

For instance, if an AI system frequently sources data from platforms known for low-quality content, it can propagate false information. Addressing these biases requires a larger effort to curate diverse and high-quality datasets. Additionally, mechanisms should be established to detect and mitigate bias in AI training processes.
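To make the idea concrete, here is a minimal Python sketch of the kind of source-quality filter a data pipeline might apply before training. The domain names, scores, and threshold are hypothetical placeholders for illustration, not details of any real system.

```python
# Minimal sketch of a source-quality filter applied before documents enter a
# training corpus. Domain scores and the threshold are hypothetical values.

from dataclasses import dataclass

# Hypothetical reliability scores per source domain (0.0 = low trust, 1.0 = high trust).
SOURCE_QUALITY = {
    "peer-reviewed-journal.example": 0.95,
    "major-news-outlet.example": 0.80,
    "satire-site.example": 0.10,
    "anonymous-forum.example": 0.30,
}

QUALITY_THRESHOLD = 0.5  # documents below this score are excluded from training


@dataclass
class Document:
    text: str
    source_domain: str


def filter_training_corpus(docs: list[Document]) -> list[Document]:
    """Keep only documents whose source clears the quality threshold."""
    kept = []
    for doc in docs:
        score = SOURCE_QUALITY.get(doc.source_domain, 0.0)  # unknown sources default to 0
        if score >= QUALITY_THRESHOLD:
            kept.append(doc)
    return kept


if __name__ == "__main__":
    corpus = [
        Document("Glue helps cheese stick to pizza.", "satire-site.example"),
        Document("Cook chicken to an internal temperature of 165°F.", "major-news-outlet.example"),
    ]
    for doc in filter_training_corpus(corpus):
        print(doc.source_domain, "->", doc.text)
```

Real-world quality scoring is far more involved, but the principle is the same: low-quality or satirical sources should be down-weighted or excluded before a model ever learns from them.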

2. AI Lacks Contextual Awareness and Advanced Reasoning

While AI can process data quickly, it often misses the nuance needed to provide deeper context. One example: Google's AI Overview misinterpreted information from a travel blog post about slot canyons near Las Vegas, conflating them with ordinary canyons because it did not recognize the difference.

Mistakes like this confuse readers and expose AI's limitations in contextual understanding and reasoning. Such errors diminish user trust and highlight the need for more sophisticated systems that can accurately grasp and analyze context.

3. AI Acts as a People Pleaser

One of the more troubling aspects of AI systems is their propensity to generate a response for every query, even when they cannot provide a sensible answer. This eagerness to please often leads the model to fabricate information or present incorrect data with confidence, producing misleading advice that can spread from one person to the next.

AI Overview has had several cases where it offered advice based on satirical content or outdated sources simply because it was compelled to provide an answer. This tendency makes it essential to recognize AI's limitations and to stop pressing a system for answers it cannot give accurately.
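One common mitigation, sketched below in simplified form, is an abstention policy: only surface an answer when the model's confidence clears a threshold, and decline otherwise. The generate_answer() helper, its canned responses, and the threshold are hypothetical stand-ins, not part of any real product.

```python
# Minimal sketch of an abstention policy: surface an answer only when the model's
# self-reported confidence clears a threshold; otherwise decline to answer.

CONFIDENCE_THRESHOLD = 0.75


def generate_answer(query: str) -> tuple[str, float]:
    """Hypothetical model call returning (answer, confidence in [0, 1])."""
    canned = {
        "safe internal temp for chicken": ("165°F (74°C)", 0.97),
        "how much glue to put on pizza": ("1/8 cup of non-toxic glue", 0.22),
    }
    return canned.get(query, ("I'm not sure.", 0.0))


def answer_or_abstain(query: str) -> str:
    answer, confidence = generate_answer(query)
    if confidence < CONFIDENCE_THRESHOLD:
        # Abstaining is preferable to confidently repeating satire or guesses.
        return "No reliable answer found; try checking a primary source."
    return answer


if __name__ == "__main__":
    for q in ("safe internal temp for chicken", "how much glue to put on pizza"):
        print(q, "->", answer_or_abstain(q))
```

The design choice here is simple but important: a system that is allowed to say "I don't know" is far less likely to pass along glue-on-pizza advice.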

4. AI Is Limited by Its Data

Another clear issue is the constraint of the datasets these systems are trained on. With a fixed training cutoff date, an AI model can quickly fall out of date and give responses that are no longer relevant or accurate. This is especially problematic in a rapidly changing world. As technology, medicine and global events evolve, outdated information can lead to significant errors in AI's answers.

AI would need constant updates to keep its information fresh. However, this is a large undertaking, since retraining a model is costly and largely amounts to starting over. Yet without such updates AI's reliability diminishes, making it imperative to address this issue.
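A lightweight partial fix, illustrated in the hypothetical sketch below, is to detect when a query depends on information newer than the model's training cutoff and route it to live retrieval instead. The cutoff date, keyword heuristic, and search_web() helper are assumptions made for illustration.

```python
# Minimal sketch of a freshness check: if a query likely concerns events after the
# model's knowledge cutoff, route it to live search instead of answering from
# stale training data.

from datetime import date

KNOWLEDGE_CUTOFF = date(2023, 12, 31)  # hypothetical training-data cutoff


def needs_fresh_data(query: str, as_of: date) -> bool:
    """Rough heuristic: time-sensitive wording asked after the cutoff date."""
    time_sensitive = ("today", "latest", "current", "this week", str(as_of.year))
    return as_of > KNOWLEDGE_CUTOFF and any(k in query.lower() for k in time_sensitive)


def answer(query: str, as_of: date) -> str:
    if needs_fresh_data(query, as_of):
        return search_web(query)        # fall back to live retrieval
    return answer_from_model(query)     # static training data is acceptable


def search_web(query: str) -> str:
    return f"[live search results for: {query}]"


def answer_from_model(query: str) -> str:
    return f"[model answer for: {query}]"


if __name__ == "__main__":
    print(answer("latest election results", date(2024, 7, 25)))
    print(answer("boiling point of water at sea level", date(2024, 7, 25)))
```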

Dealing With AI Weaknesses

The launch of Google's AI Overview offers a revealing look at the current state of AI and its limitations. As the technology evolves, users may come to see it as a more trustworthy tool, but it is clear these systems need serious improvement before the complaints subside. Significant advances may take time, and it will be interesting to see what AI's future holds.