A new industry report has found audiences and journalists are increasingly concerned about the use of generative artificial intelligence (AI) in journalism.
Summarising three years of research, the RMIT-led report, Generative AI & Journalism, was launched today at the ARC Centre of Excellence for Automated Decision-Making and Society.
Report lead author, Dr T.J. Thomson from RMIT University in Melbourne, Australia, said the potential of AI-generated or edited content to mislead or deceive was of most concern.
“Concern about AI being used to spread misleading or deceptive content topped the list of challenges for both journalists and news audiences,” he said.
“We found journalists are poorly equipped to identify AI-generated or edited content, leaving them open to unknowingly propelling this content to their audiences.”
This is partly because few newsrooms have systematic processes in place for vetting user-generated or community-contributed visual material.
Most journalists interviewed were not aware of the extent to which AI is increasingly, and often invisibly, being integrated into cameras and into image- and video-editing and processing software.
“AI is sometimes being used without the journalist or the news outlet even knowing,” Thomson said.
While only one quarter of news audiences surveyed thought they had encountered generative AI in journalism, about half were unsure or suspected they had.
“This points to a potential lack of transparency from news organisations when they use generative AI or to a lack of trust between news outlets and audiences,” Thomson said.
News audiences were found to be more comfortable with journalists using AI when they themselves had used it for similar purposes, such as blurring parts of an image.
“The people we interviewed mentioned how they used similar tools when on video conferencing apps or when using the portrait mode on smartphones,” Thomson said.
“We also found this with journalists using AI to add keywords to media, since audiences had themselves experienced AI describing images in word-processing software.”
Thomson said news audiences and journalists alike were overall concerned about how news organisations are – and could be – using generative AI.
“Most of our participants were comfortable with turning to AI to create icons for an infographic but quite uncomfortable with the idea of an AI avatar presenting the news, for example,” he said.
Part-problem, part-opportunity
The technology, which has advanced significantly in recent years, was found to be both an opportunity and a threat to journalism.
For example, Apple recently suspended its automatically generated news notification feature after it produced false claims about high-profile individuals, including fabricated deaths and arrests, and attributed these claims to reputable outlets, including BBC News and The New York Times.
While AI can perform tasks like sorting and generating captions for photographs, it has well-known biases against, for example, women and people of colour.
But the research also identified lesser-known biases, such as favouring urban over non-urban environments, showing women less often in more specialised roles, and ignoring people living with disabilities.
“These biases exist because of human biases embedded in training data and/or the conscious or unconscious biases of those who develop AI algorithms and models,” Thomson said.
But not all AI tools are equal. The study found tools that explain their decisions, disclose their source material, and make their use transparent in outputs are less risky for journalists than tools lacking these features.
Journalists and audience members were also concerned about generative AI replacing humans in newsrooms, leading to fewer jobs and a loss of skills in the industry.
“These fears reflect a long history of technologies impacting on human labour forces in journalism production,” Thomson said.
The report, designed for the media industry, identifies dozens of ways journalists and news organisations can use generative AI and summarises how comfortable news audiences are with each.
It summarises several of the team’s research studies, including the latest peer-reviewed study, published in Journalism Practice.
Portions of the underlying research in the report were financially supported by the Design and Creative Practice, Information in Society, and Social Change Enabling Impact Platforms at RMIT University, the Weizenbaum Institute for the Networked Society / German Internet Institute, the Centre for Advanced Internet Studies, the Global Journalism Innovation Lab, the QUT Digital Media Research Centre, and the Australian Research Council through DE230101233 and CE200100005.
Generative AI and Journalism: Content, Journalistic Perceptions, and Audience Experiences is published by RMIT University (DOI: 10.6084/m9.figshare.28068008).
Old Threats, New Name? Generative AI and Visual Journalism is published in Journalism Practice (DOI: 10.1080/17512786.2025.2451677).
Journal: Journalism Practice
Method of Research: Survey
Subject of Research: People
Article Title: Old Threats, New Name? Generative AI and Visual Journalism
Article Publication Date: 11-Jan-2025