Google faces backlash over AI-generated search summaries; scrambles to make manual changes

Last week, Google launched AI Overviews, a new search feature intended to streamline results and enhance the user experience. Instead of improving search, however, the feature ignited a wave of criticism after producing bizarre AI-generated answers that spread quickly across social media.

In response to the public outcry, Google is now working to manually disable AI Overviews for specific searches after an outpouring of the strange and inaccurate responses went viral.

Despite a year of testing and over a billion queries handled, AI Overviews, introduced at last week's Google I/O event, have not performed as expected in every case. Google asserts that the problematic examples circulating online involve uncommon queries or have been manipulated. Nevertheless, the AI-generated summaries, which appear above the standard search results, have drawn significant attention for their errors.

Google Scrambles to Manually Remove Weird AI Search Results

Some of the most notable inaccuracies include the AI suggesting glue as a pizza topping and recommending the consumption of rocks. Google maintains that most AI Overview queries provide “high-quality information,” emphasizing that the widely shared examples are rare anomalies.


“The incorrect answers in the feature, called AI Overview, have undermined trust in a search engine that more than two billion people turn to for authoritative information. And while other A.I. chatbots tell lies and act weird, the backlash demonstrated that Google is under more pressure to safely incorporate A.I. into its search engine,” The New York Times wrote.

In one widely shared post on the social media platform X yesterday, Tim Keck, founder of The Onion, showcased a glaring error in the new feature. Keck included a screenshot of a search asking, “How many rocks should I eat each day?” The AI Overview response read, “According to UC Berkeley geologists, people should eat at least one small rock per day.”


Why does this matter? This situation echoes a similar controversy earlier this year involving viral AI-generated images, placing Google's AI tools under scrutiny once again. The backlash from this rocky rollout threatens to undermine trust in the tech giant's search results and casts a shadow over the other AI advancements showcased at the I/O event.