Feature Story | 19-Nov-2024

Leveraging machine learning to detect middle ear diseases

Leveraging machine learning and cutting-edge imaging, USC students and otolaryngology professor pioneer new methods for detecting middle ear diseases

University of Southern California

Understanding the middle ear is essential—not only for hearing but also for balance and quality of life. According to the National Institutes of Health, in the U.S., one in eight adults has hearing loss, and nearly 28% of those with moderate to severe loss face challenges in daily activities.  

Among children, five out of six experience ear infections, and recurring infections increase the risk of permanent hearing loss. 

Yet currently, doctors can only look at the surface of the eardrum, which makes it harder to detect deeper issues inside the ear until they have advanced. 

Now imagine if you could walk into a clinic, get a quick 3D scan of your ear, and have the machine immediately help your physician diagnose the problem. 

Fortunately, this might become reality sooner rather than later.  

At USC, a group of undergraduate students is working with Brian Applegate, a professor of otolaryngology-head and neck surgery and biomedical engineering, to develop a machine learning model that quickly and accurately identifies specific ear problems.  

The interdisciplinary team, all members of CAIS++, the student branch of the USC Center for Artificial Intelligence in Society (CAIS), includes majors in computer science, human biology, politics, philosophy and law, healthcare data science and applied mathematics.  

Team members are Claude Yoo, Will Dolan, Matthew Rodriguez, Lucia Zhang, Irika Katiyar, Lauren Sun, Seena Pourzand, and Sana Jayaswal. 

Interpreting scans of the middle ear  

The team’s novel method uses deep learning tools to automate the process of interpreting optical coherence tomography (OCT) scans of the middle ear.  

OCT is an imaging technique that creates high-resolution cross-sectional images of tissue, such as the eardrum and middle ear, to identify problems. But interpreting these detailed images can be difficult and time-consuming, especially for primary care providers, who may not have specialist experience. 

3D rendering of a tympanic membrane with a retraction pocket, imaged by optical coherence tomography (OCT)

“What really underscores this research is our desire to help develop easier, more efficient preventative care for people suffering from middle ear diseases,” said Matthew Rodriguez, a human biology major with a minor in applied analytics.   

Yoo added, “Hopefully, it will lead to more quantitative and accessible diagnoses.”  

More efficient and effective disease detection  

Current methods for diagnosing middle ear diseases are qualitative and limited to examining the surface of the eardrum with a special tool called an otoscope. However, otoscopy only provides a limited view past the eardrum, which can restrict its diagnostic effectiveness. 

Instead, the USC team is using OCT, a non-invasive, quantitative imaging technique that creates a 3D image of the middle ear. OCT is typically used to examine the eye, including the retinal layers and optic nerve fibers; Applegate’s research group is one of the first to apply the technology to middle ear examinations.  

Using OCT scans, doctors can view a more in-depth 3D reconstruction of the ear without invasive methods, increasing the efficiency and precision of their diagnoses.  

“With an OCT scan device, we can catch middle ear disorders during annual physicals before significant hearing loss happens, and automation can increase the efficiency of these diagnoses,” said Applegate, an expert in functional imaging of the middle and inner ear who first explored OCT to better understand cochlear mechanics. 

Generating results in seconds  

The CAIS++ students worked with Applegate to train a machine learning model to recognize ear conditions from OCT scans. The model picks up on features in the OCT scan to detect signs of disease that might be missed by less experienced clinicians.  

When trained, the model can predict diagnoses of middle ear diseases from OCT scans, generating results in a few seconds or less. 

“Doctors get a lot of extra information from OCT scans, such as visuals of abnormal cell growths, small holes in the eardrum, or even retraction pockets – which are deformations of the eardrum,” Dolan said.  

“We currently use neural networks which take in 2D images, filtering and extracting specific features that point to the presence of middle ear diseases. To train the machine learning model, we have to slice the 3D ear scans into 2D images, label the relevant data on the scan with specific diagnoses, then feed the images to the model.”  
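As an illustration of what such a pipeline might look like, here is a minimal sketch in Python, assuming PyTorch: a placeholder 3D volume is sliced along its depth axis into labeled 2D images and passed through a small convolutional network. The array shapes, the three-class labeling, and the network itself are illustrative assumptions, not the team's actual model.

```python
# Illustrative sketch only: the volume shape, labels, and network below are
# assumptions, not the CAIS++ team's actual pipeline.
import numpy as np
import torch
import torch.nn as nn

# A 3D OCT volume (depth x height x width) is sliced along the depth axis
# into 2D cross-sections, each labeled with a diagnosis for training.
volume = np.random.rand(64, 256, 256).astype(np.float32)   # placeholder scan
slices = torch.from_numpy(volume).unsqueeze(1)              # (64, 1, 256, 256)
labels = torch.zeros(64, dtype=torch.long)                  # e.g. 0 = "normal"

# A minimal 2D convolutional classifier: the filters extract local features
# (edges, thickenings, pockets) that may indicate middle ear disease.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 64 * 64, 3),   # 3 hypothetical classes of ear condition
)

loss = nn.CrossEntropyLoss()(model(slices), labels)
loss.backward()   # one illustrative training step on the sliced images
print(f"training loss: {loss.item():.3f}")
```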

The challenge: handling large datasets  

One of the biggest challenges the CAIS++ student team faced was handling the sheer amount of data generated from the OCT scans.  

“OCT scans are very large due to their 3D nature,” Yoo said. “So, we experimented with data augmentation techniques to create more manageable samples for lightweight models. We had to develop ways to optimize our 2D image slices as our computers’ memory could not store that much data.”  
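One way to keep memory use manageable is sketched below, under assumptions about the file layout: a PyTorch Dataset that memory-maps a saved 3D scan and serves one augmented 2D slice at a time, so the full volume never has to sit in RAM. The file name, flip augmentation, and batch size are hypothetical, not the team's actual approach.

```python
# Illustrative sketch only: file layout, memory-mapping, and the flip
# augmentation are assumptions about one way to keep memory use manageable.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class OCTSliceDataset(Dataset):
    """Serves individual 2D slices from a large 3D OCT volume on disk."""

    def __init__(self, volume_path: str, label: int):
        # np.load with mmap_mode reads slices lazily instead of loading
        # the entire 3D scan into RAM at once.
        self.volume = np.load(volume_path, mmap_mode="r")
        self.label = label

    def __len__(self) -> int:
        return self.volume.shape[0]

    def __getitem__(self, idx: int):
        img = np.array(self.volume[idx], dtype=np.float32)  # copy one slice
        if np.random.rand() < 0.5:                          # simple augmentation
            img = np.flip(img, axis=1).copy()               # horizontal flip
        return torch.from_numpy(img).unsqueeze(0), self.label

# Hypothetical usage: stream slices in small batches to a lightweight model.
# dataset = OCTSliceDataset("scan_001.npy", label=1)
# loader = DataLoader(dataset, batch_size=16, shuffle=True)
```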

The team solved this by pooling their scientific knowledge with Applegate’s expertise in applying new imaging technologies to otology.  

“It was very helpful to combine our domain knowledge, especially since we’re from all kinds of backgrounds from health to biology to computer science,” Dolan said. “We came up with solutions that overlapped different disciplines and drew from our past project experiences.”  

What’s in store for automated OCT scans?   

Looking to the future, the CAIS++ team hopes that their machine learning discoveries can be applied to the medical field as a supplementary tool for physicians to quickly obtain quantitative diagnoses of middle ear diseases.  

“Hopefully, by automating healthcare diagnoses, we can direct physicians’ time and effort away from such laborious tasks to more creative tasks,” Yoo said.  

The team and Applegate also have a vision to one day deploy their automated diagnostic device in the hearing clinic at USC.

“This system would make it much easier for someone without as much expertise to at least get the initial diagnosis and then refer the patient to an expert otologist,” said Applegate. 

“Ultimately, our long-term plan is to get a diagnostic tool in the hands of a primary care physician, so we catch problems earlier.”  

 
