News Release

Novel Dice loss functions for improved image segmentation

Researchers develop new Dice loss functions that improve accuracy in image segmentation tasks through a simple transformation of the loss formula

Peer-Reviewed Publication

Meijo University

Image: Qualitative evaluation results using endoscopic images (Credit: Elsevier / Kazuhiro Hotta)

In recent years, the use of computer algorithms to identify specific objects within images, a task called “image segmentation,” has become widespread in medical image analysis, helping doctors diagnose diseases faster. One key mathematical tool for training these algorithms is the Dice loss, which measures how accurately an algorithm identifies the various parts of an image. During training, the deep learning algorithm predicts the outline of each object category, called a “class,” in the image, and this prediction is then compared with the annotated “ground truth.”

 

An important quantity in this process is the Dice similarity coefficient (DSC), which measures the performance of the algorithm during training. Based on the amount of overlap between the predicted outline and the ground truth, the DSC ranges from 0 (no overlap) to 1 (an exact match). While this method is quite effective, there is still room for improvement, particularly in multi-class segmentation tasks, where imbalanced DSC values can occur: classes covering large areas tend to achieve higher DSC values than smaller classes and are also prone to overtraining.
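To make the metric concrete, the sketch below shows how a soft Dice score and the corresponding Dice loss are commonly computed in Python with NumPy. The variable names and the smoothing term are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def dice_score(pred, target, eps=1e-7):
        # pred and target are flattened arrays of per-pixel probabilities / labels for one class
        intersection = np.sum(pred * target)
        return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

    def dice_loss(pred, target):
        # The loss is simply 1 minus the score: ~0 for a perfect match, ~1 for no overlap
        return 1.0 - dice_score(pred, target)

    # Example: a prediction that matches the ground truth exactly gives a score close to 1
    pred = np.array([0.0, 1.0, 1.0, 0.0])
    target = np.array([0.0, 1.0, 1.0, 0.0])
    print(dice_score(pred, target))  # ≈ 1.0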

 

To address this problem, a research duo from Meijo University in Japan, comprising Professor Kazuhiro Hotta and Sota Kato, has now developed a novel loss function called “t-vMF Dice loss.” It was obtained by replacing the cosine similarity at the core of the original Dice loss with an adaptive t-vMF similarity. “We discovered that the cosine similarity function, which is an important component of the Dice loss function, is used indiscriminately for all classes, resulting in imbalanced segmentation,” explains Prof. Hotta. Their findings will be published in the journal Computers in Biology and Medicine in January 2024.
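The sketch below illustrates the general idea of a t-vMF-style similarity: it reduces to the plain cosine similarity when the concentration parameter kappa is 0 and becomes increasingly compact (stricter) as kappa grows. The function and parameter names are illustrative; readers should consult the paper for the exact form used in the t-vMF Dice loss.

    import numpy as np

    def cosine_similarity(p, g, eps=1e-7):
        # Standard cosine similarity between a predicted vector and a ground-truth vector
        return np.dot(p, g) / (np.linalg.norm(p) * np.linalg.norm(g) + eps)

    def t_vmf_similarity(p, g, kappa=2.0):
        # t-vMF similarity: equals the cosine similarity when kappa = 0,
        # and is pushed toward lower values (a more compact similarity) as kappa increases
        cos = cosine_similarity(p, g)
        return (1.0 + cos) / (1.0 + kappa * (1.0 - cos)) - 1.0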

 

In their study, the duo showed that the t-vMF similarity, an extension of the cosine similarity, yields a more compact similarity and therefore a stricter loss function. They also developed a second algorithm, called “Adaptive t-vMF Dice loss,” which automatically determines the parameter controlling the compactness of the similarity using the DSC measured on the validation set. This allows compact similarities to be used for easier classes and wider similarities for more difficult classes in the image.
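One plausible reading of this adaptive scheme is sketched below: each class's compactness parameter is scaled by that class's validation DSC, so well-segmented (easy) classes receive a larger, more compact similarity and poorly segmented (difficult) classes receive a smaller, wider one. The scaling rule and the constant lambda_max are assumptions for illustration; the exact update rule is defined in the paper.

    import numpy as np

    def adaptive_kappas(validation_dsc_per_class, lambda_max=32.0):
        # Hypothetical rule: scale each class's compactness parameter by its validation DSC.
        # High DSC (easy class)  -> large kappa -> compact similarity;
        # low DSC (hard class)   -> small kappa -> wider similarity.
        return lambda_max * np.asarray(validation_dsc_per_class)

    # Example with three classes whose validation DSCs differ
    print(adaptive_kappas([0.95, 0.70, 0.40]))  # -> [30.4, 22.4, 12.8]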

 

They tested their method on four datasets: the binary segmentation datasets CVC-ClinicDB and Kvasir-SEG, and the multi-class segmentation datasets from the Automated Cardiac Diagnosis Challenge and the Synapse multi-organ segmentation benchmark. The results showed that both the t-vMF Dice loss and the Adaptive t-vMF Dice loss were considerably more accurate than the conventional Dice loss, while requiring only a single manually chosen parameter.

 

“Our method is not only applicable to medical images but is also general and can be used for segmentation tasks on cell images, material images, and autonomous driving images,” highlights Prof. Hotta.

 

Overall, this study marks a significant step forward in refining image segmentation analysis and holds great potential for critical fields like medical imaging and diagnosis.

