Yuya SHIBUYA

澁谷遊野


New Paper: How do people evaluate the accuracy of video posts when a warning indicates they were generated by AI?

20 March 2025


Excited to finally share our paper about GenAI and misinformation!

Abstract

Given rising concerns about misinformation powered by Generative Artificial Intelligence (GenAI), major platforms such as Google, Meta, and TikTok have implemented new policies to warn users about AI-generated content. However, the impact of such user interface designs that disclose AI-generated content on user perceptions is not yet fully understood. This study investigates how people assess the accuracy of video content when they are warned that it was created by GenAI. We conducted an online experiment in the U.S. (14,930 observations): half of the participants were shown warning messages about AI before and after viewing mockups of true and false video content on social media, while the other half viewed the same videos without the warning message. The results indicated that the warning message improved the ability to discern true from false content only among participants with a positive perception of AI. In contrast, those with a negative perception of AI tended to rate all AI-generated video posts, including those containing no false information, as less accurate once they knew that GenAI had created the videos. These results highlight the limitations of relying solely on simple warnings to mitigate GenAI-based misinformation. Continued research on interface designs that go beyond simple warnings is needed.

Shibuya, Y., Nakazato, T., & Takagi, S. (2025). How do people evaluate the accuracy of video posts when a warning indicates they were generated by AI? International Journal of Human-Computer Studies, 103485. https://doi.org/10.1016/j.ijhcs.2025.103485




©2020-2025 yuya shibuya