A smart neckband allows wearers to monitor their dietary intake. Automatically monitoring food and fluid intake can be useful for managing conditions such as diabetes and obesity, or for maximizing fitness. But wearable technologies must be able to distinguish eating and drinking from similar movements, such as speaking and walking. Chi Hwan Lee and colleagues propose a machine-learning-enabled neckband that can differentiate body movements, speech, and food and fluid intake.

The neckband’s sensor module includes a surface electromyography sensor, a three-axis accelerometer, and a microphone. Together, these sensors capture muscle activation patterns in the thyrohyoid muscle of the neck, along with body movements and acoustic signals. In a study of six volunteers, the machine-learning algorithm correctly identified eating and drinking with an accuracy of about 96% for individual activities and 89% for concurrent activities. The neckband is made of a stretchable, twistable, breathable, mesh-structured textile loaded with 47 active and passive components, and it can run on battery power for more than 18 hours between charges.

According to the authors, the neckband could be used in a closed-loop system with a continuous glucose monitor and an insulin pump to calculate insulin dosages for patients with diabetes by identifying meal timings, or to aid athletes and other individuals interested in improving their overall health and wellness.
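The summary describes a sensor-fusion classification approach: windowed signals from the surface electromyography sensor, accelerometer, and microphone are combined and fed to a machine-learning model that labels the activity. The Python sketch below illustrates one way such feature-level fusion could work; the sampling rate, window length, feature set, class labels, synthetic data, and random-forest model are illustrative assumptions, not the authors' published pipeline.

```python
# Hypothetical sketch of feature-level fusion of sEMG, 3-axis accelerometer,
# and microphone windows feeding an off-the-shelf classifier.
# All parameters and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 1000            # assumed sampling rate (Hz)
WINDOW = 2 * FS      # assumed 2-second analysis window
LABELS = ["eating", "drinking", "speaking", "walking"]  # example classes

def extract_features(emg, acc, mic):
    """Fuse simple per-window statistics from the three sensor streams."""
    return np.concatenate([
        [np.sqrt(np.mean(emg ** 2))],            # sEMG RMS (muscle activation)
        np.mean(np.abs(acc), axis=0),            # mean |acceleration| per axis
        [np.std(acc)],                           # overall movement variability
        [np.sqrt(np.mean(mic ** 2))],            # acoustic energy
        [np.mean(np.diff(np.sign(mic)) != 0)],   # microphone zero-crossing rate
    ])

# Synthetic stand-in data: one feature row per labeled window.
X, y = [], []
for _ in range(400):
    label = int(rng.integers(len(LABELS)))
    emg = rng.normal(0, 1 + 0.5 * label, WINDOW)
    acc = rng.normal(0, 1 + 0.2 * label, (WINDOW, 3))
    mic = rng.normal(0, 1 + 0.3 * label, WINDOW)
    X.append(extract_features(emg, acc, mic))
    y.append(label)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a real device the same idea would run on streamed sensor windows rather than synthetic arrays, with the classifier's output (e.g., a detected meal) passed downstream, for instance to a closed-loop glucose-management system as the authors suggest.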
Journal
PNAS Nexus
Article Title
A machine-learning-enabled smart neckband for monitoring dietary intake
Article Publication Date
7-May-2024