Feature Story | 19-Mar-2025

Navigating trust in an age of increasing AI influence

“AI isn’t magic. It’s math,” says Journalism Professor Meredith Broussard, whose work outlines what tech tools get right and wrong—and how we can manage the differences

New York University

In 2025, it can seem as though the future that generations of AI advocates promised has finally arrived. We see the benefits of artificial intelligence on a daily basis—we use it to help us navigate traffic, to identify new drug combinations in order to treat disease, and to quickly locate scholarship online.

But with its growing prevalence and sophistication come new levels of unease about AI’s impact on culture and society. When Coca-Cola released a promotional Christmas video created by generative AI models in November 2024, the work was derided as “devoid of any actual creativity” and seen by many as an example of the replacement of human workers by a technology trained on artists’ work, without compensation or attribution. 

Recently, Europe saw the potential impact of AI-created reality on electoral politics. Germany’s far-right Alternative for Germany party, or AfD, developed a campaign ad using AI-generated video and images to depict “a country that never actually existed,” wrote Politico ahead of the nation’s February 23 election.  

“AI-generated content like this is helping…(the) anti-migration, populist Alternative for Germany party…make both sides of its vision—the idyllic, nostalgia-driven future it promises to bring as well as the dystopian one it’s warning about should others win the election—look startlingly real,” it added of the AfD, which doubled its support to 21 percent of the vote last month.

In early March, the Los Angeles Times launched a “bias meter”—an AI tool purportedly aimed at detecting the political slant of the paper’s opinion pieces and providing additional content and context to achieve “balance.” But the paper pulled the tool from one of its pieces after the meter generated a response that many saw as downplaying the Ku Klux Klan’s racist agenda.

Meredith Broussard, an associate professor at NYU’s Arthur L. Carter Journalism Institute, has been tracking the technology’s drawbacks, especially around racial, gender, and other biases that are often built into AI tools. In her 2023 book, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (MIT Press), Broussard warns against assuming technology’s superiority—especially in what she describes as high-risk scenarios, including those involving legal, financial, or medical decisions. 

“Let’s not default to tech when it’s not necessary,” writes Broussard, who is also research director of the NYU Alliance for Public Interest Technology. “Let’s not trust in it when such trust is unfounded and unexamined.” 

In the midst of AI’s acceleration, NYU News spoke with Broussard to better understand the technology’s foundational elements and how to use it to our benefit—cautiously.

You’ve said that “AI systems discriminate by default.” What do you mean by that?

The way that AI systems work is this: they take in a whole bunch of data and make a model, and we say the model is trained on this data. Then, the model can be used to make predictions, or decisions, or generate new material. “Generative AI” creates new text, images, audio, or video based on its training data. The problem is that the training data that we’re using is data that comes from the real world. The training data is largely scraped from the internet—which we all know is an often wonderful, but often toxic place.

There’s no such thing as unbiased data. There’s no such thing as data that does not reflect all of the existing problems of the real world. So, the data that we’re feeding into AI systems has the same biases as the real world. And therefore, the material that the models generate or the decisions that the AI models make are going to be biased. So instead of assuming that AI decisions are unbiased or neutral, it’s more useful to assume that the AI decisions are going to be biased or discriminatory in some way. Then, we can work to prevent AI from replicating historical problems and historical inequalities.
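To make the mechanism concrete, here is a minimal sketch—not from Broussard’s work, and using entirely invented data—of how a predictive model trained on historically biased decisions carries that bias forward. The data below gives one hypothetical group of applicants worse past hiring outcomes at the same experience level, and the trained model learns exactly that pattern.

```python
# A minimal sketch with invented data: a model trained on biased historical
# decisions reproduces that bias. Illustrative only; not Broussard's code.
from sklearn.linear_model import LogisticRegression

# Hypothetical past hiring records: [years_of_experience, group_membership]
# Group 1 applicants were hired less often than group 0 at equal experience.
X = [[4, 0], [5, 0], [6, 0], [7, 0],
     [4, 1], [5, 1], [6, 1], [7, 1]]
y = [1, 1, 1, 1,   # group 0: mostly hired
     0, 0, 0, 1]   # group 1: mostly rejected

model = LogisticRegression().fit(X, y)

# Two applicants with identical experience, differing only in group membership.
# The model tends to predict different outcomes, mirroring the historical bias.
print(model.predict([[6, 0], [6, 1]]))
```

The point is not the particular library or numbers: nothing in the training step asks whether the historical outcomes were fair, so the model simply learns and repeats them.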

You’ve encouraged us to “use the right tool for the task.” Sometimes that may involve technology, but at other times not. How should we make such determinations? 

One thing I like to keep in mind is the difference between mathematical fairness and social fairness. Something that is divided equally mathematically is not the same as something that is divided equally socially. I give an example in my book of a cookie and kids. If you have one chocolate chip cookie left and you have two children, you want to divide it in half, right? So mathematically we would divide the cookie 50-50. But in the real world, when you break a cookie, there’s a big half and a small half, so then there’s some negotiation over who gets the big half and who gets the small half—and you want both kids to come out feeling like the division is fair.

When it comes to AI, we need to think about the context, especially when AI is making social decisions—like who gets hired or fired, who gets a loan, or who gets certain kinds of healthcare. AI is really great at math, but it’s not so good at social—and the social context matters. Social determinants of health, for example, directly affect individual outcomes and the health of our communities—and these are usually not factors that AI takes into consideration. 

We can think about distinguishing between high-risk and low-risk uses of AI. This is a distinction made in the EU’s new AI Act. Consider facial recognition, which is a kind of AI. A low-risk use might be using facial recognition to open your phone, which I do 500,000 times a day. It only works half of those times. It’s not a big deal. I put in my passcode and I move on. A high-risk use might be something like police using facial recognition on real-time video surveillance feeds. It’s high-risk because one of the things we know about facial recognition is that it’s biased. It’s better at detecting light skin than dark skin. It’s better at identifying men than women. It generally does not take into account trans and non-binary folks. It’s best at recognizing men with light skin and it’s worst at recognizing women with dark skin. So, people with darker skin are going to be disproportionately flagged by facial recognition systems, especially when used by police. So facial recognition used on a real-time surveillance feed would be a high-risk use, which I would argue we should ban.

You’ve often said that “technochauvinism,” or the thinking that the technological solution is superior to the human one, may not be good for business. Why not?

When you assume that technology is superior and that technological solutions are superior, you can waste a lot of money implementing computational solutions that simply don’t work. Sometimes it’s just a lot easier to do things manually than to try and get the computer to do it. For example, you can think about endless back and forths over email. One rule of thumb in business is if you have an issue that takes more than two emails, you should just pick up the phone and have a five-minute phone call because it’s more efficient. Unfortunately, not everyone does this. If you’re just doing this endless back and forth using technology, you’re not going to get stuff resolved as efficiently. People are very excited about using AI, specifically large language models (LLMs) like ChatGPT, to accelerate business nowadays—but putting chatbots into everything has not yet proven to be useful.

You and others have remarked that AI makes more vulnerable the already-vulnerable members of society—through, for instance, mortgage-approval algorithms that encourage discriminatory lending. What kinds of safeguards could reverse this effect?

I think we need to first look at each technology in terms of what it does and the social context in which it’s being used, because technology is a tool. Think about how we select tools. If I want to cut paper, I’m going to use some scissors. If I want to cut wood, I’m going to use a handsaw if it’s small wood, but I’m going to use a circular saw if it’s big wood. We make these decisions about cutting tools effortlessly because we have expertise.

For some reason, people don’t view computers in the same way. Computers have become these mysterious objects that developers often portray as magical. And that has happened because things like science fiction and fantasy are very, very popular among the mainstream software development community. Of course, it’s fun to think that you are doing magic or that you are making something that’s incredibly powerful. But we need to back off of magical thinking when it comes to AI and think about what’s real and what’s true—because truth really matters. AI is not magic. It’s just math. 

One question is, what do we do with these new technologies from a legal standpoint? I’m really concerned with regulation of technology. Technology companies have self-regulated for a very long time, which they’ve advocated for, and it has not worked. So we are at a point where we need regulation and enforcement at the governmental level.

Most of the laws that we have in place in the US around technology were put into place when the telephone was still the dominant method of communicating. Section 230 of the Communications Decency Act, for example, was created at a time when we didn’t have social media platforms. So we really need to update our laws around technology in the same way that we need to iterate on software. Law can be iterative. Policy can be iterative. We just need to catch up.

Is there a model for regulation of earlier technologies that may be instructive? 

I think that automobile seat belts are a good example. 

It used to be that cars were manufactured without seat belts. Then safety advocates said, “We’re going to get fewer deaths if it’s mandatory to put seat belts into cars.” Then we had to have legislation that said, “It is mandatory that you wear a seat belt in a car” because there were seat belts, but people were not wearing them. That was an important change. 

Then researchers realized that the majority of seat-belt research was done on men—on male-sized crash test dummies—and women were getting hurt by the design of seat belts. So we had to refine the design of seat belts. Kids also were getting really hurt because of seat-belt design. Now we have rules that say kids need to be restrained in car seats and they need to be in the back seat until a certain age.

There are always unintended consequences of new technologies. The responsible thing to do is to update our technologies to make them safer as we realize what the problems are. I’d like to see us do this around AI.
