News consumers are averse to AI-generated headlines, which they perceive as potentially inaccurate. As AI-generated content proliferates online, social media companies have started to label it. Sacha Altay and Fabrizio Gilardi conducted two preregistered online experiments among 4,976 US and UK participants to investigate the effect of labeling headlines as AI-generated. Respondents rated 16 headlines that varied on two dimensions: true or false, and AI- or human-generated.

In Study 1, participants were randomly assigned to one of four conditions: (i) no headlines were labeled as AI-generated, (ii) AI-generated headlines were labeled as AI-generated, (iii) human-generated headlines were labeled as AI-generated, or (iv) false headlines were labeled as false. Respondents rated headlines labeled as AI-generated as less accurate and were less willing to share them, regardless of whether the headlines were true or false and regardless of whether they were created by humans or AI. The effect of labeling headlines as AI-generated was three times smaller than the effect of labeling headlines as false.

The authors also investigated the mechanisms behind this AI aversion by experimentally manipulating the definition of "AI-generated" shown to participants. They found that the aversion stems from the expectation that headlines labeled as AI-generated were written entirely by AI, with no human supervision. Although labeling AI-generated content enjoys wide public support, the authors argue that transparency about what the labels mean is needed, because the labels may have unintended negative consequences. To maximize impact, they conclude, false AI-generated content should be labeled as false rather than solely as AI-generated.
Journal
PNAS Nexus
Article Title
People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation
Article Publication Date
1-Oct-2024