As artificial intelligence (AI) becomes more prevalent, a new Rutgers University-New Brunswick survey sheds light on public attitudes, revealing widespread concerns about its impact on politics and media alongside growing adoption of AI tools in daily life.
More than half of the respondents expressed worry about AI’s impact on politics (58%) and news media (53%), with researchers suggesting these concerns may stem from fears of misinformation and manipulation, particularly during the 2024 election cycle when the survey was conducted.
The survey is part of the National AI Opinion Monitor (NAIOM), a new Rutgers-led, long-term project monitoring public attitudes toward AI. Researchers found 41% of Americans said AI does more harm than good in protecting personal information.
Despite these concerns, the findings show one-third of Americans have used generative AI to ask health-related questions or seek information – a finding that underscores both the opportunities and challenges as these tools evolve. The researchers define AI as a collection of advanced technologies that allow machines to perform tasks typically requiring human intelligence, such as understanding language, making decisions and recognizing images. Generative AI is a subset of those technologies that focuses on creating original content, including text, images, audio and video.
To gather these insights, the researchers surveyed nearly 5,000 people between Oct. 25 and Nov. 8, 2024, about AI usage and attitudes across demographic groups, including differences by gender, age, socioeconomic status and geographic location.
Katherine Ognyanova, an associate professor of communication at the Rutgers School of Communication and Information and a coauthor of the report, explained that the project was launched in response to the increasing prevalence of AI.
“These tools have the potential to transform a wide range of industries – technology, media, entertainment, marketing, education and health care,” said Ognyanova. “It’s critical to understand how Americans are using and perceiving AI now, as trust in these technologies will shape their adoption, development and regulation. We are at a pivotal moment where public opinion about AI is being formed and rapidly changing as people engage with it firsthand and encounter related narratives in the news.”
“AI development and adoption are accelerating at an unprecedented pace,” said Vivek Singh, an associate professor at the Rutgers School of Communication and Information, a coauthor of the report and an expert in AI and algorithmic fairness. “Today, AI is no longer confined to the algorithms of tech companies; it has become an integral part of everyday life.”
According to survey results, more than half of Americans (53%) have used a generative AI service such as ChatGPT, Google Gemini or Microsoft Copilot, further demonstrating the increasing influence of these technologies.
Among other findings:
- Knowledge gaps: While 90% of Americans have heard of AI, only 51% recognize the term “generative AI,” and just 12% are familiar with “large language models.”
- Demographic disparities: Younger, male, better-educated and higher-income Americans are more likely to use and show interest in AI tools.
- Task-specific approval: While 48% of Americans support AI for household chores, majorities disapprove of AI performing surgery (57%) or driving vehicles (53%).
- Daily interactions: Nearly 30% of respondents encounter AI-generated text or summaries daily, with 86% finding them helpful.
“These findings raise critical questions about inclusion and equity,” said Ognyanova, who also directs the Rutgers Computational Social Science Lab and is a founder and principal investigator of the COVID States Project and the Civic Health and Institutions Project, multi-university initiatives exploring public attitudes toward politics and health. “Older Americans and those with lower educational attainment may be less likely to benefit from these tools, which risks creating a new digital divide.”
The NAIOM survey provides baseline data on how Americans perceive and use generative AI, creating a foundation for monitoring changes over time. The researchers stressed this ongoing tracking is vital as public attitudes toward AI continue to evolve.
To capture evolving trends, the researchers plan to conduct national surveys three times a year, each with a sample of 5,000 respondents. The sample follows nationally representative quotas and oversamples groups such as individuals under 25, adults older than 65, and Hispanic and Black respondents, allowing the researchers to examine AI’s impact on young people, older adults and minority communities.
Reports will explore themes including AI adoption, trust, attitudes toward AI-generated content, regulation and AI’s role in jobs.
“Both of us share a keen interest in understanding how people evaluate information and misinformation, whether it comes from human or nonhuman sources,” said Singh, who is the director of the Behavioral Informatics Lab at Rutgers. “We’ve consulted with experts and hope to expand our advisory board as the project grows.”
The researchers hope NAIOM will serve as a valuable resource for policymakers, media and the public, offering data-driven insights into the evolving role of AI in society.