
Glue pizza? Gasoline spaghetti? Google explains what happened with its wonky AI search results

May 31, 2024 | Hi-network.com
Artie Beaty

If you were on social media over the past week, you probably saw them. Screenshots of Google's new AI-powered search summaries went viral, mainly because Google was allegedly making wild recommendations like adding glue to your pizza, cooking spaghetti with gasoline, or eating rocks for optimal health.

That was just the beginning.

Also: How to avoid AI Overviews in Google Search: Three easy ways

Other particularly egregious examples also went viral, seemingly showing the rogue AI feature suggesting that users mix bleach and vinegar to clean a washing machine (a combination that produces potentially deadly chlorine gas) or jump off the Golden Gate Bridge in response to the query "I'm feeling depressed."

So what happened, and why did Google's AI Overview recommend those things?

First, Google says, the majority of what went viral wasn't real.

Many screenshots were simply fake: "Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression." Those AI Overviews never appeared, Google says.

Second, numerous screenshots were from people deliberately trying to get silly search results -- like the ones about eating rocks. "Prior to these screenshots going viral," Google said, "practically no one asked Google that question." If nobody is googling a given topic, it probably means there isn't much information available about it -- what Google calls a data void. In those cases, the only content available was satirical, and the AI interpreted it as accurate.

Also: 7 ways to supercharge your Google searches with AI

Google admits that a few odd or inaccurate results did appear. Even those came from unusual queries, but they did expose some areas that needed improvement. The company identified a pattern in what went wrong and made more than a dozen technical improvements (sketched in simplified form after the list below), including:

  • Better detection for nonsensical queries that shouldn't show an AI Overview and limited inclusion of satire and humor content

  • Limited use of user-generated content in responses that could offer misleading advice

  • Triggering restrictions for queries where AI Overviews were not proving to be helpful

  • Not showing AI Overviews for hard news topics where freshness and factuality are important and for most health topics
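To make those changes a little more concrete, here is a minimal, purely illustrative Python sketch of what that kind of gating logic could look like. Google has not published its implementation; the class names, signals, and threshold below are invented for this example and stand in for what would, in reality, be sophisticated classifiers.

```python
# Hypothetical illustration only: Google has not published its AI Overview
# gating code. The signals below are invented stand-ins for real classifiers.

from dataclasses import dataclass

@dataclass
class QuerySignals:
    """Assumed per-query signals; names are made up for this example."""
    looks_nonsensical: bool        # e.g. "how many rocks should I eat"
    is_hard_news: bool             # breaking news where freshness/factuality matter
    is_health_topic: bool          # most health topics are excluded
    overview_quality_score: float  # 0..1, how helpful an Overview is expected to be

@dataclass
class SourceSignals:
    """Assumed per-source signals for the retrieved pages."""
    is_satire_or_humor: bool
    is_user_generated: bool        # forum posts and similar content that could mislead

def should_show_overview(q: QuerySignals, sources: list[SourceSignals]) -> bool:
    """Return True if an AI Overview should be generated for this query."""
    # Don't trigger on nonsensical queries -- data voids invite bad summaries.
    if q.looks_nonsensical:
        return False
    # Suppress Overviews for hard news and most health topics.
    if q.is_hard_news or q.is_health_topic:
        return False
    # Restrict triggering where Overviews haven't proven helpful.
    if q.overview_quality_score < 0.5:  # threshold is arbitrary in this sketch
        return False
    # Limit reliance on satire/humor and user-generated content:
    # require at least one source that is neither.
    reliable = [s for s in sources
                if not (s.is_satire_or_humor or s.is_user_generated)]
    return len(reliable) > 0

if __name__ == "__main__":
    # Example: a data-void style query backed only by a satirical article.
    query = QuerySignals(looks_nonsensical=True, is_hard_news=False,
                         is_health_topic=False, overview_quality_score=0.2)
    sources = [SourceSignals(is_satire_or_humor=True, is_user_generated=False)]
    print(should_show_overview(query, sources))  # -> False
```

The point of the sketch is simply that each of the listed improvements maps to a check that can veto an Overview before it is ever generated, rather than trying to fix a bad summary after the fact.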

With billions of queries coming in every day, Google says, things will get weird sometimes. The company says it's learning from the errors, and promises to keep working to strengthen AI Overviews.

