20 March 2025
Excited to finally share our paper about GenAI and misinformation!
Given rising concerns about misinformation powered by Generative Artificial Intelligence (GenAI), major platforms like Google, Meta, and TikTok have implemented new policies to warn users about AI-generated content. However, the effects of such user interface designs, which disclose that content is AI-made, on user perceptions are not yet fully understood. This study investigates how people assess the accuracy of video content when they are warned that it was created by GenAI. We conducted an online experiment in the U.S. (14,930 observations) in which half of the participants saw warning messages about AI before and after viewing mockups of true and false video posts on social media, while the other half viewed the same videos without any warning.

The results indicated that the warning message affected the ability to discern between true and false content only among participants with a positive perception of AI. In contrast, those with a negative perception of AI tended to rate all AI-made video posts, including those containing no false information, as less accurate once they knew the videos were created by GenAI. These results point to the limitations of relying solely on simple warnings to mitigate GenAI-based misinformation, and future research should continue to investigate interface designs that go beyond simple warnings.
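For readers curious how discernment is typically quantified in a design like this, below is a minimal sketch in Python. To be clear, this is not the paper's analysis code: the column names, the simulated effect sizes, and the OLS interaction model are all illustrative assumptions. The idea is that discernment is the gap in perceived accuracy between true and false posts, so the warning's effect on discernment appears as a warning-by-veracity interaction.

    # Minimal sketch of a discernment analysis for a 2 (warning vs. no warning)
    # x 2 (true vs. false post) design. Not the paper's code: column names,
    # simulated effect sizes, and the model are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 14_930  # matches the study's number of observations

    df = pd.DataFrame({
        "warned": rng.integers(0, 2, n),    # 1 = participant saw the AI warning
        "is_true": rng.integers(0, 2, n),   # 1 = the post is factually accurate
    })
    # Hypothetical 1-7 accuracy ratings: true posts rated higher, with the
    # warning assumed to widen the true/false gap (i.e., improve discernment).
    df["rating"] = (
        4.0
        + 0.8 * df["is_true"]
        - 0.2 * df["warned"]
        + 0.3 * df["warned"] * df["is_true"]
        + rng.normal(0, 1, n)
    )

    # Discernment = mean rating of true posts minus mean rating of false posts,
    # computed separately for the warning and control conditions.
    means = df.groupby(["warned", "is_true"])["rating"].mean().unstack("is_true")
    print(means[1] - means[0])  # discernment per condition

    # In a regression, the warning's effect on discernment is captured by
    # the warned:is_true interaction term.
    model = smf.ols("rating ~ warned * is_true", data=df).fit()
    print(model.summary().tables[1])

In the paper's setting, perceptions of AI moderate this effect, which would add a third factor (and its interactions) to a model like the one above.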