As artificial intelligence (AI) increasingly affects people's everyday lives, Hoda Eldardiry, associate professor in the Department of Computer Science and core faculty at the Sanghani Center for Artificial Intelligence and Data Analytics, is conducting research in engineering and computing education that will help students in majors such as computer science, computer engineering, and data science bridge the gap between the classroom and the job site.
Recently, she received a $349,360 grant from the National Science Foundation’s (NSF) Engineering Education program to support her work.
“We want to ensure that every student is adequately prepared to not only confront but act on the challenges that new AI technologies pose to humans and society,” said Eldardiry.
Her team for the estimated three-year project includes co-principal investigators Qin Zhu, associate professor, and Dayoung Kim, assistant professor, both in the Department of Engineering Education; James Weichert, a master’s degree student in computer science advised by Eldardiry; and two Ph.D. students in engineering education, Yixiang Sun advised by Zhu and Emad Ali advised by Kim.
Eldardiry said their research — which includes AI ethics issues related to autonomous vehicles, privacy, and bias — differs from theoretical AI ethics research because their approach is to improve AI ethics education from the perspective of industry professionals currently working in AI and AI policy.
They have already interviewed a group of these professionals to get a better sense of how they view the AI policy landscape and, more crucially, what skills they need to apply their technical backgrounds to real-world problems involving the ethical use of AI. With this project, they aim to engage practicing AI engineers to better understand how those engineers translate AI ethics principles into practical applications when designing AI systems.
“We call these skills ‘translational competencies,’ and this is really the heart of our research,” Eldardiry said. “A curriculum shaped by this research can help cultivate the competencies needed for students to apply often vague ethical principles to concrete decision-making in the development and use of AI systems.”
In reviewing current curricula, Eldardiry said, one ethical concern that arises with increasingly powerful AI tools and vast amounts of data is the privacy of user data. This is especially important when AI technologies can leverage that data to find connections or identify users in ways that humans cannot. The social media platform TikTok is a good example: because it collects so much data about which videos users watch, it is adept at inferring their interests, and perhaps more personal attributes such as political ideology or sexual orientation.
“When we talk about privacy in a computer science ethics class, it is brought up as a fundamental ethical principle, but then the conversation normally stops there. Our current curriculum does not go further into depth about what specific kind of privacy we want to guarantee or the technical details required to build a system that does actually preserve user privacy,” she said. “This is seen as an ‘advanced topic’ that is outside the scope of an undergraduate or even graduate ethics course, but the reality is that these details might be key when a student graduates and is in charge of using or developing an AI system.”
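The project itself does not prescribe specific techniques, but one standard example of the kind of "technical detail" Eldardiry describes is differential privacy, which answers aggregate queries about user data while mathematically limiting what can be learned about any individual. The sketch below is purely illustrative (the function names and the TikTok-style watch-history scenario are hypothetical, not part of the research):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one user
    changes the count by at most 1), so noise with scale 1/epsilon
    satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: report roughly how many users watched
# "political" videos without exposing any individual's exact history.
watch_labels = ["sports", "political", "cooking", "political", "music"]
noisy_total = dp_count(watch_labels, lambda label: label == "political",
                       epsilon=1.0)
```

Choosing the privacy budget `epsilon`, and knowing that a plain average or count leaks information without such noise, is exactly the sort of decision a graduate "in charge of using or developing an AI system" would face.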
Another example is self-driving cars and how they should be programmed to prioritize human life. While it is easy to say that the car should always avoid harm to humans, there are inevitably situations where that is not possible and the car must make a split-second decision. So what should the car be programmed to do in that case? Perhaps there is no single “correct” answer, but this is a realistic scenario worth discussing in an ethics class, Eldardiry said.
“Ultimately, we would like to see a paradigm shift in AI ethics education away from a hands-off approach where students are not engaging with the course material to a very hands-on approach where students are taught and expected to apply the ethical principles they learn or develop to their engineering work,” said Eldardiry. “These translational skills are something that future AI engineers will undoubtedly need in their toolkit and will form a growing part of their job expectations as even the development of AI programs becomes more automated.”