
Think you can spot a fake AI-generated news story? Take this quiz to find out

Dec. 1, 2023 Hi-network.com
FreshSplash/Getty Images

You spot a news story on the internet with the headline "Revolutionary Breakthrough Claims AI Can Now Accurately Predict the Weather 2 Months in Advance." Sounds intriguing. But is it true? Hmm, not necessarily.

Figuring out whether the information you find online is real or fake has always been challenging, and AI has made the task even trickier. A recent survey conducted by cybersecurity provider Netskope found that many people were unable to distinguish between real news stories and those cooked up by AI.

Also: 6 AI tools to supercharge your work and everyday life

As a test, Netskope showed 1,000 people in the US and 500 in the UK a fake AI news story alongside a real one. Some 84% of those in the UK boasted that they were skilled at identifying a fake story, yet half of them chose the fake story as real. And in the US, 88% felt confident that they could spot a fake story, but 44% of them thought the AI-generated story was legit.

To see how you would fare at spotting a fake AI story versus a real one, Netskope invites you to take its Fake news quiz. You're presented with 12 different stories and challenged to guess whether each one is phony or legit.

The people surveyed by Netskope said that they use social media and are interested in the news. Asked to pinpoint their most trusted source for news stories, most named newspapers and tabloids, with video-based platforms like TikTok and Snapchat coming in second place.

As part of its research, Netskope also looked at the most widespread fake news stories and photos of 2023 based on social views, engagement, and other factors to determine their impact. The top item was an image of Pope Francis wearing an oversized white puffer coat. Though the photo was fake, it racked up more than 20 million views on social media and was highlighted by more than 300 media publications.

Another AI-generated item that gained traction was an image of Donald Trump being arrested in Washington DC this past March. Though Trump has found himself in trouble with the law on several occasions, this particular image was created with the AI image generator Midjourney. The fake photo grabbed more than 10 million views on Twitter (now X) and was covered by 671 news publications.

Also: How to get a perfect face swap using Midjourney AI

These phony photos show how easy it can be to fool people with deceptive images. Using tools like Midjourney and DALL-E, virtually anyone can cook up an image so realistic that it can not only con the average person but also trick news publishers and professionals who otherwise would be more discerning.

And now more AI tools are capable of generating fake videos. Some of the phony ones highlighted by Netskope for 2023 included one of Hillary Clinton endorsing Republican Ron DeSantis for President and another of Elon Musk touting the benefits of eating a cannabis edible.

Once a fake news story or photo has been promoted online, identifying it as phony can take a while. On average, such items took around six days to be spotted and refuted. But an AI-edited video of Bill Gates ending an interview with ABC News journalist Sarah Ferguson over questions about his involvement in COVID-19 vaccine distribution took a whopping 15 days to be labeled as fake.

Also: Elections 2024: How AI will fool voters if we don't do something now

To help people better detect real versus phony, Netskope offers the following tips and tricks:

For news stories

  • Try to find the original source of the story. If you see a story making unusual or outlandish claims, check the source. You may be able to find out where it originated by scanning social media. If it's an image, run a reverse image search using Google Reverse Image Search, TinEye, or Yandex (a rough programmatic sketch follows below).
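
If you want to check locally whether a viral image matches a known original, a perceptual-hash comparison is one rough complement to the web services named above. This is not part of Netskope's advice; it's a minimal sketch assuming Python with the Pillow and imagehash libraries, and the file names are placeholders.

```python
# Compare a suspicious image against a candidate original using perceptual
# hashing. A small Hamming distance suggests one image is a resized or lightly
# edited copy of the other; a large distance suggests different sources.
from PIL import Image
import imagehash

def likely_same_source(path_a, path_b, threshold=8):
    """Return True if the two images are perceptually similar."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= threshold

if __name__ == "__main__":
    # Placeholder file names for illustration only.
    print(likely_same_source("viral_photo.jpg", "original_from_agency.jpg"))
```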

For image-based stories

  • Enlarge the image to check for errors. Enlarging the image may reveal details that are either inaccurate or of poor quality, indicating possible AI involvement (see the sketch after this list).
  • Check the image's proportions. AI-generated images often err with the proportions of hands, fingers, teeth, ears, glasses, and other body parts and objects.
  • Scrutinize the background. With fake or altered images, the background is often skewed, repeated, or otherwise lacking in detail.
  • Check for missing imperfections. In AI images, features that would typically show texture or rough detail are often smooth and perfect. Skin, hair, teeth, and faces often look too flawless to be real.
  • Scan the details. Inconsistencies can often point to AI fakery. Maybe the color of a person's eyes doesn't match across different images, or a pattern in clothing or the background changes from one image to another.
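
To make the first tip in this list concrete, here is a minimal sketch of "enlarge and inspect" assuming Python and Pillow; the crop box, zoom factor, and file name are arbitrary examples, not values from the article.

```python
# Crop a region of interest and upscale it without smoothing so that
# artifacts (warped fingers, repeated background patterns, mismatched
# textures) remain visible for manual inspection.
from PIL import Image

def zoom_region(path, box, factor=4):
    """Crop a (left, upper, right, lower) box and enlarge it for inspection."""
    img = Image.open(path)
    region = img.crop(box)
    width, height = region.size
    return region.resize((width * factor, height * factor), Image.NEAREST)

if __name__ == "__main__":
    # Example: zoom in on the hands, a common trouble spot for AI generators.
    zoom_region("suspect_image.jpg", (200, 400, 360, 560)).show()
```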

For video-based stories

  • Check the size of the video. Videos that are smaller and of lower resolution than they should be are sometimes a sign of AI generation (a quick sketch for checking this follows the list).
  • Look at any subtitles. With phony videos, subtitles are often positioned to cover faces so that it's more difficult to see that the audio doesn't match the lip movements.
  • Check for misaligned lip shapes. Misaligned lip shapes are another sign of AI manipulation, especially if a spliced area is visible near the middle of the mouth.
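
To make the first video tip concrete, here is a small sketch assuming Python and OpenCV (cv2); the file name and the 480-pixel threshold are illustrative assumptions, not figures from Netskope.

```python
# Read basic properties of a video file as a quick sanity check on its
# resolution and length before trusting it as "broadcast" footage.
import cv2

def video_stats(path):
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f"Could not open {path}")
    stats = {
        "width": int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
        "fps": cap.get(cv2.CAP_PROP_FPS),
        "frames": int(cap.get(cv2.CAP_PROP_FRAME_COUNT)),
    }
    cap.release()
    return stats

if __name__ == "__main__":
    s = video_stats("suspect_clip.mp4")
    print(s)
    # An unusually low resolution for supposedly professional footage is a red flag.
    if s["height"] < 480:
        print("Resolution is lower than expected for professional footage.")
```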


Tags: Artificial Intelligence, Innovation
