ITHACA, N.Y. – A Cornell University-led research team has developed an artificial intelligence-powered ring equipped with micro-sonar technology that can continuously track fingerspelling in American Sign Language (ASL) in real time.
In its current form, SpellRing could be used to enter text into computers or smartphones via fingerspelling, which is used in ASL to spell out words without corresponding signs, such as proper nouns, names and technical terms. With further development, the device – believed to be the first of its kind – could revolutionize ASL translation by continuously tracking entire signed words and sentences.
“Many other technologies that recognize fingerspelling in ASL have not been adopted by the deaf and hard-of-hearing community because the hardware is bulky and impractical,” said Hyunchul Lim, a doctoral student in the field of information science. “We sought to develop a single ring to capture all of the subtle and complex finger movement in ASL.”
Lim is lead author of “SpellRing: Recognizing Continuous Fingerspelling in American Sign Language using a Ring,” which will be presented at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI), April 26-May 1 in Yokohama, Japan.
SpellRing is worn on the thumb and equipped with a microphone and speaker. Together they send and receive inaudible sound waves that track the wearer’s hand and finger movements, while a mini gyroscope tracks the hand’s motion.
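The release does not detail the signal processing, but as a rough, hypothetical illustration of how this kind of active acoustic sensing can work, the sketch below plays an inaudible frequency sweep from a speaker and cross-correlates the microphone recording against it to form an “echo profile,” whose peaks correspond to reflections from the hand and fingers at different distances. The 18-22 kHz band, 48 kHz sample rate, and 10 ms frame length are assumptions for illustration, not SpellRing’s actual parameters.

```python
# A minimal, hypothetical sketch of active acoustic sensing (not SpellRing's
# actual signal chain): the speaker plays a near-inaudible frequency sweep and
# the microphone recording is cross-correlated with it to form an "echo profile."
# Peaks in the profile correspond to reflections from the hand and fingers.
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000          # assumed audio sample rate (Hz)
FRAME = 0.01         # assumed 10 ms sensing frame

t = np.arange(0, FRAME, 1 / FS)
tx = chirp(t, f0=18_000, f1=22_000, t1=FRAME, method="linear")  # transmitted sweep

def echo_profile(rx_frame: np.ndarray) -> np.ndarray:
    """Cross-correlate one received frame with the transmitted sweep."""
    corr = correlate(rx_frame, tx, mode="full")
    return np.abs(corr[len(tx) - 1:])   # keep non-negative lags: one bin per sample delay

# Toy example: the "received" frame is just the sweep plus noise.
rx = tx + 0.05 * np.random.randn(len(tx))
print(echo_profile(rx).shape)           # (480,) range bins for this frame
```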
A proprietary deep-learning algorithm then processes the sonar images and predicts the fingerspelled ASL letters in real time, with accuracy comparable to that of many existing systems that require more hardware.
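The team’s model is not described in the release, so the following is only a plausible sketch of a recognizer of this type: per-frame echo profiles and gyroscope readings are encoded, fused, and trained with a CTC objective so continuous fingerspelling does not have to be segmented letter by letter. The layer sizes, the CTC choice, and the input shapes are all assumptions.

```python
# A plausible sketch (not the authors' model): map a sequence of sonar
# echo-profile frames plus gyroscope readings to per-frame letter probabilities,
# trained with CTC so continuous fingerspelling needs no per-letter segmentation.
import torch
import torch.nn as nn

NUM_LETTERS = 26            # A-Z fingerspelled letters
BLANK = NUM_LETTERS         # CTC blank token

class FingerspellingNet(nn.Module):
    def __init__(self, sonar_bins=64, gyro_dim=3, hidden=128):
        super().__init__()
        # Per-frame encoder for the 1-D sonar echo profile.
        self.sonar_encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),          # -> 32 * 8 = 256 features
        )
        # Temporal model over fused sonar + gyroscope features.
        self.rnn = nn.GRU(256 + gyro_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, NUM_LETTERS + 1)   # +1 for the CTC blank

    def forward(self, sonar, gyro):
        # sonar: (batch, time, sonar_bins); gyro: (batch, time, gyro_dim)
        b, t, n = sonar.shape
        feats = self.sonar_encoder(sonar.reshape(b * t, 1, n)).reshape(b, t, -1)
        fused, _ = self.rnn(torch.cat([feats, gyro], dim=-1))
        return self.head(fused).log_softmax(-1)              # (batch, time, 27)

# Toy forward pass and CTC loss to show the training signal.
model = FingerspellingNet()
sonar = torch.randn(2, 100, 64)                 # 2 sequences, 100 frames, 64 range bins
gyro = torch.randn(2, 100, 3)
log_probs = model(sonar, gyro).transpose(0, 1)  # CTC expects (time, batch, classes)
targets = torch.randint(0, NUM_LETTERS, (2, 8)) # two 8-letter fingerspelled words
loss = nn.CTCLoss(blank=BLANK)(log_probs, targets,
                               torch.tensor([100, 100]), torch.tensor([8, 8]))
```

At inference time, a greedy or beam-search CTC decoder over the per-frame probabilities, optionally combined with a word-level language model, would yield the predicted letter sequence.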
Developers evaluated SpellRing with 20 experienced and novice ASL signers, having them naturally and continuously fingerspell a total of more than 20,000 words of varying lengths. SpellRing’s accuracy rate was between 82% and 92%, depending on the difficulty of the words.
“There’s always a gap between the technical community who develop tools and the target community who use them,” said Cheng Zhang, assistant professor of information science and a paper co-author. “We’ve bridged some of that gap. We designed SpellRing for target users who evaluated it.”
Lim’s future work will include integrating the micro-sonar system into eyeglasses to capture upper body movements and facial expressions, for a more comprehensive ASL translation system.
“Deaf and hard-of-hearing people use more than their hands for ASL. They use facial expressions, upper body movements and head gestures,” said Lim, who completed basic and intermediate ASL courses at Cornell as part of his SpellRing research. “ASL is a very complicated, complex visual language.”
This research was funded by the National Science Foundation.
For additional information, see this Cornell Chronicle story.
Media note: Video can be viewed and downloaded here: https://cornell.box.com/v/SpellRing-ASL
-30-