News Release

Study finds readers trust news less when AI is involved, even when they don't understand to what extent

Two studies found AI involvement increased distrust; readers disliked reduced humanness in the production of news

Peer-Reviewed Publication

University of Kansas

LAWRENCE — As artificial intelligence becomes more involved in journalism, journalists and editors are grappling not only with how to use the technology, but also with how to disclose its use to readers. New research from the University of Kansas has found that when readers think AI is involved in some way in news production, they have lower trust in the credibility of the news, even when they don’t fully understand what it contributed.

The findings show that readers are aware of the use of AI in creating news and tend to view it negatively. But understanding what the technology contributed, and how, can be complicated, and disclosing that to readers in a way they understand is a problem that needs to be addressed clearly, according to the researchers.

“The growing concentration of AI in journalism is a question we know journalists and educators are talking about, but we were interested in how readers are perceiving it. So we wanted to know more about media byline perceptions and their influence, or what people think about news generated by AI,” said Alyssa Appelman, associate professor in the William Allen White School of Journalism and Mass Communications, and co-author of two studies on the topic.

Appelman and Steve Bien-Aimé, assistant professor in the William Allen White School of Journalism and Mass Communications, helped lead an experiment in which they showed readers a news story about the artificial sweetener aspartame and its safety for human consumption. Readers were randomly assigned one of five bylines: “written by staff writer,” “written by staff writer with artificial intelligence tool,” “written by staff writer with artificial intelligence assistance,” “written by staff writer with artificial intelligence collaboration” and “written by artificial intelligence.” The article text was otherwise identical in all cases.

The findings were published in two research papers. Both were written by Appelman and Bien-Aimé of KU, along with Haiyan Jia of Lehigh University and Mu Wu of California State University, Los Angeles.

One paper focused on how readers made sense of AI bylines. After reading the article, readers were surveyed about what their specific byline meant and whether they agreed with several statements intended to measure their media literacy and attitudes toward AI. Findings showed that, regardless of the byline they received, participants held a wide range of views about what the technology did. Most reported they felt humans were the primary contributors, while some thought AI might have been used for research assistance or to write a first draft that a human then edited.

Results showed that participants understood what AI technology can do and that humans guide it with prompts. However, the different byline conditions left much open to interpretation about how specifically AI may have contributed to the article they read. When AI contribution was mentioned in the byline, it negatively affected readers’ perceptions of source and author credibility. Even the byline “written by staff writer” was interpreted to mean the story was at least partially written by AI, because no human’s name was attached to it.

Readers used sensemaking to interpret the contributions of AI, the authors wrote. The tactic is a way of drawing on information people have already learned to make sense of unfamiliar situations.

“People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did,” Appelman said.

The results showed that, regardless of what readers thought AI contributed to the story, their opinions of the news’ credibility were negatively affected.

The findings were published in the journal Communication Reports.

A second research paper explored how perceptions of humanness mediated the relationship between perceived AI contribution and credibility judgments. It found that acknowledging AI use enhanced transparency and that readers felt human contribution to the news improved its trustworthiness.

Regardless of which byline condition they received, participants estimated what percentage of the article’s creation they attributed to AI. The higher the percentage they gave, the lower they judged the article’s credibility. Even those who read “written by staff writer” reported they felt AI was involved to some degree.

“The big thing was not between whether it was AI or human: It was how much work they thought the human did,” Bien-Aimé said. “This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not.”

The findings suggest that people grant higher credibility to human contributions in fields like journalism that have traditionally been performed by humans. When AI replaces that human work, perceptions of credibility can suffer, whereas they might not for tasks that were never traditionally human, such as YouTube recommending videos based on a person’s previous viewing, the authors said.

While it can be construed as positive that readers tend to perceive human-written news as more credible, journalists and educators should also understand that they need to be clear in disclosing whether and how they use AI. Transparency is a sound practice, as shown by a scandal earlier this year in which Sports Illustrated was alleged to have published AI-generated articles presented as written by people. However, the researchers argue, simply stating that AI was used may not be clear enough for people to understand what it did, and if readers feel it contributed more than a human did, that perception could negatively influence judgments of credibility.

The findings on perceived authorship and humanness were published in the journal Computers in Human Behavior: Artificial Humans.

Both journal articles indicate that further research should continue to explore how readers perceive AI’s contributions to journalism, the authors said, and suggest that the field can benefit from clearer disclosure of such practices. Appelman and Bien-Aimé study how readers understand journalism practices and have found that readers often do not interpret disclosures such as corrections, bylines, ethics training or the use of AI the way journalists intend.

“Part of our research framework has always been assessing if readers know what journalists do,” Bien-Aimé said. “And we want to continue to better understand how people view the work of journalists.”
