There are, without a doubt, two broad technological fields that have been developing at an increasingly fast pace over the past decade: artificial intelligence (AI) and the Internet of Things (IoT). By excelling at tasks such as data analysis, image recognition, and natural language processing, AI systems have become undeniably powerful tools in both academic and industry settings. Meanwhile, miniaturization and advances in electronics have made it possible to massively reduce the size of functional devices capable of connecting to the Internet. Engineers and researchers alike foresee a future in which IoT devices are ubiquitous, forming the foundation of a highly interconnected world.
However, bringing AI capabilities to IoT edge devices presents a significant challenge. Artificial neural networks (ANNs), one of the most important AI technologies, require substantial computational resources, whereas IoT edge devices are inherently small, with limited power, processing speed, and circuit space. Developing ANNs that can be trained, deployed, and operated efficiently on such devices remains a major hurdle.
In response, Professor Takayuki Kawahara and Mr. Yuya Fujiwara from Tokyo University of Science are working hard toward finding elegant solutions to this challenge. In their latest study, published in IEEE Access on October 8, 2024, they introduced a novel training algorithm for a special type of ANN called a binarized neural network (BNN), as well as an innovative implementation of this algorithm in a cutting-edge computing-in-memory (CiM) architecture suitable for IoT devices.
“BNNs are ANNs that employ weights and activation values of only -1 and +1, and they can minimize the computing resources required by the network by reducing the smallest unit of information to just one bit,” explains Kawahara. “However, although weights and activation values can be stored in a single bit during inference, weights and gradients are real numbers during learning, and most calculations performed during learning are real-number calculations as well. For this reason, it has been difficult to provide learning capabilities to BNNs on the IoT edge side.”
To overcome this, the researchers developed a new training algorithm called ternarized gradient BNN (TGBNN), featuring three key innovations. First, it employs ternary gradients during training, while keeping weights and activations binary. Second, they enhanced the Straight Through Estimator (STE), improving the control of gradient backpropagation to ensure efficient learning. Third, they adopted a probabilistic approach for updating parameters by leveraging the behavior of MRAM cells.
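The Python snippet below is a minimal, illustrative sketch of these ideas on a toy single-layer classifier: binary weights in the forward pass, a straight-through estimator to carry gradients past the sign function, and gradients quantized to {-1, 0, +1} before each update. The hinge-style loss, the fixed ternarization threshold, and all function names are assumptions made for illustration, not the exact TGBNN formulation or its probabilistic MRAM-based update rule.

```python
# Illustrative sketch only: a single-layer binarized classifier trained with
# ternarized gradients and a straight-through estimator (STE). Details such as
# the loss and the threshold are assumptions, not the TGBNN algorithm itself.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    # Map real values to {-1, +1}; ties at zero go to +1.
    return np.where(x >= 0.0, 1.0, -1.0)

def ternarize(g, threshold=0.05):
    # Quantize a real-valued gradient to {-1, 0, +1} with a fixed threshold.
    return np.sign(g) * (np.abs(g) > threshold)

# Toy problem: recover a hidden binary weight vector from real-valued inputs.
n_samples, n_features = 200, 16
X = rng.standard_normal((n_samples, n_features))
W_true = binarize(rng.standard_normal(n_features))
y = binarize(X @ W_true)

W_real = rng.standard_normal(n_features) * 0.1  # latent real-valued weights
lr = 0.05

for step in range(200):
    W_bin = binarize(W_real)                  # binary weights in the forward pass
    z = X @ W_bin                             # real-valued pre-activation
    grad_z = np.where(y * z < 1.0, -y, 0.0)   # hinge-style error signal
    grad_Wbin = X.T @ grad_z / n_samples
    # STE: copy the gradient from the binary weights to the latent real weights,
    # masking positions where the latent weight has already saturated.
    grad_Wreal = grad_Wbin * (np.abs(W_real) <= 1.0)
    W_real -= lr * ternarize(grad_Wreal)      # update using a {-1, 0, +1} gradient

accuracy = np.mean(binarize(X @ binarize(W_real)) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is that the weight-update path only ever handles values from a three-level alphabet, which is what makes a compact in-memory learning circuit conceivable in the first place.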
Afterwards, the research team implemented this novel TGBNN algorithm in a CiM architecture—a modern design paradigm where calculations are performed directly in memory, rather than in a dedicated processor, to save circuit space and power. To realize this, they developed a completely new XNOR logic gate as the building block for a Magnetic Random Access Memory (MRAM) array. This gate uses a magnetic tunnel junction to store information in its magnetization state.
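For context, the multiply-accumulate operation at the heart of BNN inference reduces to an XNOR followed by a bit count, which is why an XNOR gate is the natural building block for such an array. The short Python sketch below only emulates that well-known identity in software; the encoding and function names are illustrative assumptions, and the MRAM array described in the study performs the operation physically in memory rather than in code.

```python
# Emulation of the XNOR/popcount identity used by BNNs: with -1 encoded as
# bit 0 and +1 as bit 1, a dot product over {-1, +1} vectors becomes an XNOR
# followed by a bit count. This is a software illustration, not the hardware.
import numpy as np

def to_bits(v):
    # Encode a {-1, +1} vector as 0/1 bits.
    return (v > 0).astype(np.uint8)

def xnor_dot(a_bits, b_bits):
    # XNOR the bit vectors, count matching positions, then map the count
    # back to the equivalent {-1, +1} dot product: 2 * matches - length.
    matches = int(np.sum(~(a_bits ^ b_bits) & 1))
    return 2 * matches - a_bits.size

rng = np.random.default_rng(1)
a = np.where(rng.random(64) > 0.5, 1, -1)
b = np.where(rng.random(64) > 0.5, 1, -1)

print("conventional dot product:", int(a @ b))
print("XNOR/popcount result:    ", xnor_dot(to_bits(a), to_bits(b)))
```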
To change the stored value of an individual MRAM cell, the researchers leveraged two different mechanisms. The first was spin-orbit torque—the force that occurs when an electron spin current is injected into a material. The second was voltage-controlled magnetic anisotropy, which refers to the manipulation of the energy barrier that exists between different magnetic states in a material. Thanks to these methods, the size of the product-of-sum calculation circuit was reduced to half of that of conventional units.
The team tested the performance of their proposed MRAM-based CiM system for BNNs using the MNIST handwriting dataset, which contains images of individual handwritten digits that ANNs have to recognize. “The results showed that our ternarized gradient BNN achieved an accuracy of over 88% using Error-Correcting Output Codes (ECOC)-based learning, while matching the accuracy of regular BNNs with the same structure and achieving faster convergence during training,” notes Kawahara. “We believe our design will enable efficient BNNs on edge devices, preserving their ability to learn and adapt.”
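For readers unfamiliar with ECOC, the idea is to assign each digit class a binary codeword and train the network's binary outputs to approximate that codeword; classification then picks the class whose codeword is nearest in Hamming distance, so a few erroneous output bits can be tolerated. The sketch below uses a Hadamard-derived codebook purely for illustration; the actual codebook and code length used in the study may differ.

```python
# Illustrative ECOC decoding: each class owns a {-1, +1} codeword, and a noisy
# network output is decoded to the nearest codeword in Hamming distance.
# The Hadamard-derived codebook is an assumption made for this example.
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n {+1, -1} Hadamard matrix (n a power of 2).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# One 15-bit codeword per digit class; any two codewords differ in 8 positions,
# so up to 3 flipped output bits can still be corrected.
codebook = hadamard(16)[:10, 1:]

def decode(output_bits):
    # Choose the class whose codeword is closest in Hamming distance.
    distances = np.sum(codebook != output_bits, axis=1)
    return int(np.argmin(distances))

# Simulate a network output for digit 7 with two erroneous bits.
noisy = codebook[7].copy()
noisy[[3, 11]] *= -1
print("decoded class:", decode(noisy))  # -> 7
```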
This breakthrough could pave the way to powerful IoT devices capable of leveraging AI to a greater extent. This has notable implications for many rapidly developing fields. For example, wearable health monitoring devices could become more efficient, smaller, and more reliable without requiring constant cloud connectivity to function. Similarly, smart homes would be able to perform more complex tasks and operate more responsively. Across these and other possible use cases, the proposed design could also reduce energy consumption, contributing to sustainability goals.
Let us hope further studies lead to a seamless integration of AI into IoT devices!
***
Reference
DOI: https://doi.org/10.1109/ACCESS.2024.3476417
About The Tokyo University of Science
Tokyo University of Science (TUS) is a well-known and respected university, and the largest science-specialized private research university in Japan, with four campuses in central Tokyo and its suburbs and in Hokkaido. Established in 1881, the university has continually contributed to Japan's development in science by instilling a love of science in researchers, technicians, and educators.
With a mission of “Creating science and technology for the harmonious development of nature, human beings, and society,” TUS has undertaken a wide range of research from basic to applied science. TUS has embraced a multidisciplinary approach to research and undertaken intensive study in some of today's most vital fields. TUS is a meritocracy where the best in science is recognized and nurtured. It is the only private university in Japan that has produced a Nobel Prize winner and the only private university in Asia to produce Nobel Prize winners within the natural sciences field.
Website: https://www.tus.ac.jp/en/mediarelations/
About Professor Takayuki Kawahara from Tokyo University of Science
Takayuki Kawahara received B.S. and M.S. degrees in physics and a Ph.D. in electronics from Kyushu University in 1983, 1985, and 1993, respectively. After an extensive research career at Hitachi Central Research Laboratory, he joined Tokyo University of Science in 2014, where he currently serves as a Professor. His lab focuses on developing sustainable electronics, including low-power artificial intelligence (AI) devices and circuits, sensors and AI signal processing, spin current applications, and quantum computing techniques. He has published over 45 refereed papers and participated in nearly a hundred refereed proceedings.
Journal
IEEE Access
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
TGBNN: Training Algorithm of Binarized Neural Network with Ternary Gradients for MRAM-based Computing-in-Memory Architecture
Article Publication Date
8-Oct-2024