News Release

UTSA researchers develop energy-efficient AI with $2 million NSF grant

Grant and Award Announcement

University of Texas at San Antonio

Image caption: UTSA Professor Fidel Santamaria is leading a team of researchers who will develop new artificial intelligence (AI) applications in the most energy-efficient manner yet. Credit: The University of Texas at San Antonio

Fidel Santamaria, a professor in the UTSA College of Sciences’ neuroscience, developmental and regenerative biology department, received a $2 million grant through the National Science Foundation’s Emerging Frontiers in Research and Innovation (EFRI) program to develop new artificial intelligence (AI) applications in the most energy-efficient manner yet.

For machine-learning tools to analyze new data, they must first sort the data into categories. For example, if a tool is sorting photos by color, it needs to recognize which photos are red, yellow or blue to classify them accurately. While this is an easy chore for a human, it is a complicated and energy-hungry job for a machine.

“We want really powerful AI, but to train that AI requires a lot of energy. That’s a national challenge. That’s the challenge of AI,” Santamaria said. “We, as humans, have a good abstraction processing of information with very, very little energy consumption compared to computers. On top of that, we are very flexible in learning. We can process things and use history to retrofit or use as feedback. We’re history dependent.”

Santamaria is collaborating with University of Tennessee, Knoxville Professor Stephen Sarles, Portland State University Professor Christof Teuscher and University of South Carolina Professor Yuriy V. Pershin. The research team is combining its expertise in biology, physics, computer science and engineering to test the least energy-hungry electronic models.

One objective of neuroscience is to build better computers by modeling them on the function of the brain. In this project, the research team will use a mathematical theory that explains how real neurons adapt their responses based on their previous activity and apply it to design electric circuits that are both computationally and energetically efficient.

Neuromorphic computing, computer engineering modeled after systems in the human brain, relies on connectivity between neurons that are assumed to be identical. But Santamaria says this could not be further from the biological reality of neurons.

“A single neuron can change because it’s history dependent. It can react differently on its own with no signal from other neurons. That adaptation is history dependence, which is a computational property that has been overlooked, and it underlines efficient computation,” explained Santamaria.

From these biological observations, a set of equations called fractional-order differential equations can be written. These equations are the natural mathematical language for describing history-dependent processes, and they explain how neurons behave.
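To make that idea concrete, the sketch below is a minimal, hypothetical example, not the team's published model: a fractional-order leaky integrator discretized with the Grünwald-Letnikov scheme, in which power-law weights force every update to depend on the entire voltage history rather than only the most recent state.

```python
# Minimal sketch, for illustration only (not the team's actual model): a
# fractional-order leaky integrator discretized with the Grunwald-Letnikov
# scheme. The power-law weights make every update depend on the full voltage
# history, which is the "history dependence" the equations capture.
import numpy as np

def fractional_leaky_integrator(current, alpha=0.8, dt=0.1, tau=10.0, v_rest=0.0):
    """Integrate d^alpha V / dt^alpha = (-(V - v_rest) + I) / tau for an input current array."""
    n = len(current)
    v = np.zeros(n)
    v[0] = v_rest
    # Grunwald-Letnikov weights: w[0] = 1, w[k] = w[k-1] * (1 - (alpha + 1) / k)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    for t in range(1, n):
        memory = np.dot(w[1:t + 1], v[t - 1::-1])  # weighted sum over the entire past
        v[t] = dt ** alpha * (-(v[t - 1] - v_rest) + current[t - 1]) / tau - memory
    return v

# A step current produces a response with a power-law tail: the model "remembers"
# its past instead of relaxing with the single exponential of an ordinary RC circuit.
step_current = np.concatenate([np.zeros(100), np.ones(400)])
voltage_trace = fractional_leaky_integrator(step_current)
```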

“All the electronics we are using right now are based on resistors. Instead, math told us that if we can find a capacitor—a device for storing electrical energy—that is history dependent, then we will be able to translate what we know from neuroscience to actually build a circuit that will have the same properties as the real neurons we have been investigating,” Santamaria said.

While resistors limit current flow, capacitors store energy in an electric field until it is needed. The team will focus on building small circuits with capacitors that mimic biology, then link them, scaling from a single model neuron to a full neural network. That will allow the team to study how such networks can be trained and challenged, with the goal of creating an AI application that is as good as or better than current AI but 100 times more energy-efficient.
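As a rough illustration of what a history-dependent capacitor could look like in simulation, the following sketch implements a generic voltage-controlled memcapacitive model; the state equation, the parameter values and the function name are assumptions made for demonstration, not the project's device design.

```python
# Minimal sketch, for illustration only: a generic voltage-controlled memcapacitive
# model in which the capacitance depends on an internal state that integrates the
# applied voltage history. Equations and parameter values are illustrative assumptions.
import numpy as np

def memcapacitor_charge(voltage, dt, c_min=1e-9, c_max=5e-9, beta=2.0):
    """Return charge q(t) = C(x(t)) * v(t), where the state obeys dx/dt = beta * v, clipped to [0, 1]."""
    x = 0.0                      # internal state that encodes the device's history
    charge = np.zeros_like(voltage)
    for i, v in enumerate(voltage):
        x = np.clip(x + beta * v * dt, 0.0, 1.0)   # state driven by the voltage history
        c = c_min + (c_max - c_min) * x            # capacitance depends on that history
        charge[i] = c * v
    return charge

# The same sine-wave drive traces a pinched hysteresis loop in the charge-voltage
# plane (the signature of a memory device) instead of the straight line an ordinary
# fixed capacitor would give.
t = np.linspace(0.0, 2.0, 2000)
drive = np.sin(2.0 * np.pi * 2.0 * t)
q = memcapacitor_charge(drive, dt=t[1] - t[0])
```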

“That will be disruptive if we are successful. There is not enough energy on the planet to train AI all day the way we want,” Santamaria said. “Everybody’s looking for energy efficiency. But they’ve been looking at it from an algorithmic point of view, coding or traditional neuromorphic approaches, and those approaches are still using the same hardware, which is resistor-based.”

He added, “We’re saying we need to change the computation, and we also need the physical hardware to be different in order to implement actual abstract neuronal behavior that right now no one is using because the physics of their materials don’t allow that.”

In addition to the research, the grant will support four years of training for students and the development of workshops with leading philosophers and bioethicists working on these types of AI applications.

“These technologies, if successful, will be transformative in how we build and use things, and the NSF is very interested in trying to understand the ethical consequences of that. For that reason, we have an exciting collaboration with a philosopher here at UTSA, Christopher Stratman, on bioethics,” Santamaria said.

Stratman will be launching ethical and philosophical studies of the effects of AI adoption, which will include the design and implementation of professional workshops and embedded ethics modules in both undergraduate and graduate STEM courses.

“How can we make better science? That requires a philosopher and a scientist working together. It’s important to have both out there and be a fundamental part of the training of students. And we’re actually going to do that,” Santamaria said.
