An expert's take on why we should not fear AI
University of Texas at Dallas
Image: Dr. Sriraam Natarajan, professor of computer science at The University of Texas at Dallas and fellow of the Association for the Advancement of Artificial Intelligence (AAAI). Credit: The University of Texas at Dallas
Movies like “The Terminator,” in which an artificial intelligence system goes rogue and tries to wipe out humanity, depict our worst fears about AI.
But outside of science fiction, there’s no need to be afraid of the technology becoming self-aware like the AI in the movies anytime soon, said Dr. Sriraam Natarajan, a professor of computer science in the Erik Jonsson School of Engineering and Computer Science at The University of Texas at Dallas.
“I want to reassure everyone that AI-driven Armageddon is not happening,” Natarajan said. “‘The Terminator’ is a great movie. ‘The Matrix’ is great, but they are fiction and are not going to happen in reality.”
Natarajan, who recently was named a fellow of the Association for the Advancement of Artificial Intelligence, said fear of AI often stems from misconceptions about the technology. There are a number of reasons not to be afraid of AI, he said.
AI cannot ‘think.’
AI does not have consciousness. Instead, AI mimics and predicts, Natarajan said.
“There’s this idea of artificial intelligence becoming all-knowing, pervasive and understanding everything, but we are not even close to having that type of technology,” he said. “It’s science fiction, a fantasy created by humans.”
AI is not close to being as smart as humans, he said. For example, an AI system would need vastly more training data before it could interpret visual cues such as eye contact, a nod or a wave from another driver, cues that humans easily understand as signals to go first at a four-way stop.
Could AI achieve human-like thinking in the future?
“Not in the frameworks that we have today, not in the next few decades,” Natarajan said.
AI’s knowledge is limited.
AI systems are trained on data and cannot generate new knowledge beyond the scope of their training information, which humans control. Claims that AI can “learn” refer to its ability to identify patterns and relationships in that data, draw insights from it and make predictions.
“Whatever data is being used to train an AI system is all that it can learn from,” Natarajan said.
But what if humans use AI to harm rather than help people, for example by developing toxins instead of treatments? That concern is why much of current AI research focuses on safeguarding the technology from tampering and malicious use, Natarajan said.
“Safety is of paramount importance as it is with any critical invention, all the way from transportation to nuclear energy,” Natarajan said. “We need to make sure that the deployment of AI systems is done with human safety in mind.
“However, I don’t fear AI; I fear people who misuse AI. That’s why guardrails are needed to keep AI from falling into the wrong hands.”
AI cannot do most jobs.
Natarajan sees AI as a technology breakthrough that will help increase productivity rather than make people’s jobs obsolete.
“The goal of AI is not to replace jobs but to train people to more effectively do things they are good at,” Natarajan said. “The mundane aspects of a job can be offloaded to AI. The creativity of these jobs will still rely on humans.”
AI has the potential to help humans solve pressing problems, Natarajan said.
“With AI, we have the potential to help cure diseases. We have the potential to better understand the impact of climate change on our environment. We have the potential to predict the next big forest fires and develop strategies to mitigate them,” he said. “There is a lot of potential with AI, and we should first understand the robustness of the AI systems before deploying them.”