News Release

Physics can assist with key challenges in artificial intelligence

A physical mechanism reveals, a priori, how many examples deep learning requires to achieve a desired test accuracy. Surprisingly, it indicates that learning each example once is almost equivalent to learning examples repeatedly.

Peer-Reviewed Publication

Bar-Ilan University

Image: Rapid decision making: a deep learning neural network in which each handwritten digit is presented only once to the trained network.

Credit: Prof. Ido Kanter, Bar-Ilan University

Current research and applications in the field of artificial intelligence (AI) face several key challenges. These include: (a) a priori estimation of the dataset size required to achieve a desired test accuracy. For example, how many handwritten digits must a machine learn before it can predict a new one with a success rate of 99%? Similarly, how many specific types of situations must an autonomous vehicle learn before its reaction will not lead to an accident? (b) Reliable decision-making from a limited number of examples, where each example is trained only once, i.e., observed only for a short period. Such fast online decision-making is representative of many aspects of human activity, robotic control and network optimization.
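To make challenge (a) concrete, one common approach is to fit the measured test error to a power law of dataset size, eps(N) ~ a * N^(-b), on a few small pilot runs and then extrapolate the size needed for a target error. The sketch below is a minimal illustration under that assumed functional form; the pilot numbers, variable names and fitted values are invented for illustration, not taken from the study.

import numpy as np

# Hypothetical pilot runs: (training-set size, measured test error).
# These numbers are illustrative assumptions, not data from the study.
sizes = np.array([1_000, 2_000, 4_000, 8_000])
errors = np.array([0.120, 0.085, 0.060, 0.042])

# Assume eps(N) ~ a * N**(-b); fit log(eps) = log(a) - b*log(N) by least squares.
slope, intercept = np.polyfit(np.log(sizes), np.log(errors), 1)
a, b = np.exp(intercept), -slope

# Extrapolate the dataset size needed for a 1% test error (99% accuracy).
eps_target = 0.01
n_required = (a / eps_target) ** (1.0 / b)

print(f"fit: eps(N) = {a:.2f} * N^(-{b:.2f})")
print(f"estimated examples for {eps_target:.0%} test error: {n_required:,.0f}")

For these invented pilot numbers the fit gives an exponent b of about 0.5 and an estimate of roughly 1.4 x 10^5 examples; with real measurements, the same extrapolation would answer the "how many examples?" question before the full dataset is collected.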

In an article published today in the journal Scientific Reports, researchers show how these two challenges are solved by adopting a physical concept introduced a century ago to describe the formation of a magnet during the cooling of bulk iron.

Using a careful optimization procedure and exhaustive simulations, a group of scientists from Bar-Ilan University has demonstrated the usefulness of the physical concept of power-law scaling for deep learning. This central concept in physics, which arises in diverse phenomena, including the timing and magnitude of earthquakes, Internet topology and social networks, stock price fluctuations, word frequencies in linguistics, and signal amplitudes in brain activity, has also been found to apply to the ever-growing field of AI, and especially deep learning.

"Test errors with online learning, where each example is trained only once, are in close agreement with state-of-the-art algorithms consisting of a very large number of epochs, where each example is trained many times. This result has an important implication on rapid decision making such as robotic control," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldshmied) Multidisciplinary Brain Research Center, who led the research. "The power-law scaling, governing different dynamical rules and network architectures, enables the classification and hierarchy creation among the different examined classification or decision problems," he added.

"One of the important ingredients of the advanced deep learning algorithm is the recent new bridge between experimental neuroscience and advanced artificial intelligence learning algorithms," said PhD student Shira Sardi, a co-author of the study. Our new type of experiments on neuronal cultures indicate that an increase in the training frequency enables us to significantly accelerate the neuronal adaptation process. "This accelerated brain-inspired mechanism enables building advanced deep learning algorithms which outperform existing ones," said PhD student Yuval Meir, another co-author.

This reconstructed bridge from physics and experimental neuroscience to machine learning is expected to advance artificial intelligence, especially ultrafast decision-making under limited training examples, and to contribute to the formation of a theoretical framework for the field of deep learning.

###

