News Release

UVA and the Toyota Research Institute aim to give your car the power to reason

Grant and Award Announcement

University of Virginia School of Engineering and Applied Science

UVA Link Lab Driving Simulator

Yen-Ling Kuo, an assistant professor of computer science, is building a driving simulator, similar to this one in UVA Engineering’s Link Lab, to collect data on driving behavior. She’ll use the data to enable a robot’s AI to associate the meaning of words with what it sees, either by watching how humans interact with the environment or through its own interactions with it.

Credit: Graeme Jenvey/University of Virginia School of Engineering and Applied Science

Self-driving cars are coming, but will you really be OK sitting passively while a 2,000-pound autonomous robot motors you and your family around town?

Would you feel more secure if, while autonomous technology is perfected over the next few years, your semi-autonomous car could explain to you what it’s doing — for example, why it suddenly braked when you didn’t? 

Better yet, what if it could help your teenager not only learn to drive, but also drive more safely? 

Yen-Ling Kuo, the Anita Jones Faculty Fellow and assistant professor of computer science at the University of Virginia School of Engineering and Applied Science, is training machines to use human language and reasoning to be capable of doing all of that and more. The work is funded by a two-year Young Faculty Researcher grant from the Toyota Research Institute.

“This project is about how artificial intelligence can understand the meaning of drivers’ actions through language modeling and use this understanding to augment our human capabilities,” Kuo said.

“By themselves, robots aren’t perfect, and neither are we. We don’t necessarily want machines to take over for us, but we can work with them for better outcomes.”

Eliminating the Need to Program Every Scenario

To reach that level of cooperation, you need machine learning models that imbue robots with generalizable reasoning skills.

That’s “as opposed to collecting large datasets to train for every scenario, which will be expensive, if not impossible,” Kuo said.

Kuo is collaborating with a team at the Toyota Research Institute to build language representations of driving behavior, enabling a robot to associate the meaning of words with what it sees, either by watching how humans interact with the environment or through its own interactions with it.

Let’s say you’re an inexperienced driver, or maybe you grew up in Miami and moved to Boston. A car that helps you drive on icy roads would be handy, right?

This new intelligence will be especially important for handling out-of-the-ordinary circumstances, such as helping inexperienced drivers adjust to road conditions or guiding them through challenging situations.

“We would like to apply the learned representations in shared autonomy. For example, the AI can describe a high-level intention of turning right without skidding and give guidance to slow to a certain speed while turning right,” Kuo said. “If the driver doesn’t slow enough, the AI will adjust the speed further, or if the driver’s turn is too sharp, the AI will correct for it.”
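The speed-guidance behavior Kuo describes can be pictured as a simple control loop that blends the driver's input with the AI's recommendation. The sketch below is purely illustrative, assuming a hypothetical `shared_autonomy_speed` function and an arbitrary assist gain; none of these names or numbers come from Kuo's actual system.

```python
# Hypothetical sketch of a shared-autonomy speed adjustment, as in the
# "slow to a certain speed while turning" example. All names, units,
# and thresholds here are illustrative assumptions, not Kuo's design.

def shared_autonomy_speed(driver_speed_mph: float,
                          advised_speed_mph: float,
                          assist_gain: float = 0.5) -> float:
    """Blend the driver's chosen speed toward the AI's advised speed.

    If the driver is already at or below the advised speed, their input
    is respected unchanged. Otherwise the AI nudges the speed partway
    toward its recommendation: assist_gain = 0.0 means no assistance,
    1.0 means a full correction to the advised speed.
    """
    if driver_speed_mph <= advised_speed_mph:
        return driver_speed_mph
    excess = driver_speed_mph - advised_speed_mph
    return driver_speed_mph - assist_gain * excess

# Example: the AI advises 15 mph for an icy right turn, but the driver
# holds 25 mph; with a 0.5 gain the car settles at 20 mph.
print(shared_autonomy_speed(25.0, 15.0))  # 20.0
```

The gain parameter captures the "work with them for better outcomes" idea from earlier in the piece: neither the human nor the machine fully takes over, and the correction only engages when the driver exceeds the AI's guidance.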

Kuo will develop the language representations from a variety of data sources, including a driving simulator she is building for her lab this summer.

Her work is being noticed. Kuo recently gave an invited talk on related research at the Association for the Advancement of Artificial Intelligence’s New Faculty Highlights 2024 program. She also has a forthcoming paper, “Learning Representations for Robust Human-Robot Interaction,” slated for publication in AI Magazine.

Advancing Human-Centered AI

Kuo’s proposal closely aligns with the Toyota Research Institute’s goals for advancing human-centered AI, interactive driving and robotics. 

“Once language-based representations are learned, their semantics can be used to share autonomy between humans and vehicles or robots, promoting usability and teaming,” said Kuo’s co-investigator, Guy Rosman, who manages the institute’s Human Aware Interaction and Learning team.

“This harnesses the power of language-based reasoning into driver-vehicle interactions that better generalize our notion of common sense, well beyond existing approaches,” Rosman said.

That means if you ever do hand the proverbial keys over to your car, the trust enabled by Kuo’s research should help you steer clear of any worries.

