News Release

Balancing quantity and quality: How X/Twitter's algorithm influences our consumption of news

Reports and Proceedings

University of Pennsylvania School of Engineering and Applied Science

[Image: The paper’s authors, Shengchun Huang (left) and Stephanie Wang (right). Credit: Penn Engineering]

Are we only seeing the kind of news we want to see on social media? What effects do personalized algorithms have on our perception of news quality? Do algorithms help us serendipitously encounter information that we didn’t expect? These are the questions researchers are now asking as AI and algorithms infiltrate the information environments we turn to for political news. 

Now more than ever, people rely on social media to keep up with what is happening in the world. Today, half of U.S. adults at least sometimes get their news from social media, and 52% of X/Twitter users regularly turn to the platform for news updates. While digital dissemination can make news more accessible, turning to algorithmically mediated platforms presents its own set of challenges.

As the upcoming election season nears, voters will be looking to news and political information to make informed decisions. It is important to understand whether and how these algorithms exacerbate political polarization or inequality in political learning. Are they wreaking havoc on how we make decisions as individuals and function as a society?

To understand whether social media algorithms exacerbate political polarization, Penn Engineering researchers and collaborators at the Annenberg School for Communication (ASC) conducted a study of 243 X/Twitter users and over 800,000 tweets during a three-week period in late 2023. The research team, including Danaë Metaxa, Raj and Neera Singh Term Assistant Professor in Computer and Information Science (CIS), Penn doctoral candidates Stephanie Wang (CIS) and Shengchun Huang (ASC), and Alvin Zhou, Assistant Professor at the University of Minnesota Twin Cities, set out to investigate how news in users’ algorithmic feeds differs from news in their chronological X/Twitter feeds, which are composed only of posts from accounts users follow.

Their study, to be presented at the 27th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), performed a sociotechnical audit. This type of audit combines traditional auditing methods, collecting feed content and browsing history directly from real users, with user experience surveys, gaining insight into both the information users see and their perceptions of it at multiple points in time.

At the highest level, the researchers confirmed that the X/Twitter algorithm significantly influences what users do and do not see, relative to what they would see based solely on the accounts they choose to follow. Why is this an issue?

“As we consume information, we form opinions and take actions based on those opinions,” says Huang. “When you only consume news aligned with what you believe, you miss a lot of what is happening in the world and are less able to see things from other perspectives. It is possible that algorithms of X/Twitter and other social platforms are filtering information and contributing to incomplete views of the world. One obvious issue with this is how those views will inform people’s voting behavior in our upcoming election.” 

But how exactly are these algorithms influencing the information we see? The researchers hypothesized that each user’s algorithmic feed would contain more extreme political content aligned with their beliefs, content that could push them toward the farther ends of the political spectrum. However, when they compared news content in the personalized feed to that in the chronological timeline, they found the opposite.

“It turned out that, during the time we performed this audit, X/Twitter’s algorithmic feeds were presenting users with information that was milder and overall less polarizing than the chronological timeline,” says Wang. “We also found that the algorithmically curated feeds presented users with less news content in general, specifically less content containing links to news articles and more content featuring other types of information.”

While personalized algorithms were not observed to push polarizing or noticeably controversial news, their significant influence on the type of content users see has implications for how X/Twitter shapes information at any point in time.

“During that particular moment, the information may not have been very extreme or disruptive, but this doesn’t mean we can rely on these algorithms to continue to operate in that way,” says Metaxa. “What concerns me is that users of these platforms have very little control over the algorithms. The lack of transparency, restricted APIs and the current controversies surrounding the direction and ownership of X/Twitter make it a challenging space for people to find and trust quality news.”

And even when users were able to find news content from legitimate sources, they tended to question its credibility.  

“Users reported that just the fact that they read the news on social media made it less credible,” says Zhou. “Additionally, if that news content expressed opposing views or opinions, users reported it as even less reliable. It’s a very interesting phenomenon that touches on our natural human instinct and behavior. We don’t like to see things we don’t agree with, so we tend to doubt their credibility. Without the surveys in this audit, we would not have captured this insight.”

This is the first time sociotechnical auditing has been used in a social media news consumption study, and Metaxa plans to continue using this tool to investigate other social media platforms.

“These types of audit studies are very important for any system that aims to instigate human action or behavior change,” says Metaxa. “Search engines, generative AI, targeted advertising and social media all fundamentally rely on human interaction, and they influence people and society. We need to incorporate users’ experiences into our audits to evaluate how well these systems work.”

The data gained from the sociotechnical audit in this study shows just how sensitive our perceptions are to the news we see on social media. But rather than putting all of the responsibility on users, the research team believes platforms such as X/Twitter should take measures to provide a safe, reliable and informative media environment. In today’s era of “fake news,” the team believes effective solutions will be made at the institutional level rather than the individual level.

This study was supported by research funds from the University of Pennsylvania and the University of Minnesota, in addition to an Amazon Web Services AI ASSET award, which supported lead author Stephanie Wang’s doctoral studies.

