News Release

Acquiring weak annotations for tumor localization in temporal and volumetric data

Peer-Reviewed Publication

Beijing Zhongke Journal Publishing Co. Ltd.

Annotation and propagation process of the proposed Drag&Drop

The pancreas and tumor are represented by the colors orange and green, respectively. Given the weak label, we expand it to a lesion marker (teal green) and background markers (red). We then utilize the marker-based watershed algorithm to generate the initial segmentation area (blue), following which dilated tumors (pink) are applied to compute the masked back-propagation.

Credit: Beijing Zhongke Journal Publishing Co. Ltd.

Tumor detection and localization are often approached as a semantic segmentation task, a strategy known as detection by segmentation. The hypothesis is that identifying and delineating tumor boundaries improves the tumor detection rate. However, this idea may not apply to all medical scenarios, particularly screening, where predicting the approximate location and size of a tumor matters more than accurately segmenting its boundary. For instance, polyp detection only requires identifying the polyp, which can then be removed during the colonoscopy procedure; accurate segmentation of the polyp's boundary may be unnecessary. Nevertheless, many public datasets for polyp detection provide per-pixel annotations for every polyp, which is exceptionally time-consuming and costly. Similar issues arise in other medical scenarios that focus on tumor detection yet allocate annotations at the pixel level. This highlights the potential waste of resources when the detection-by-segmentation strategy is used to create large-scale annotated datasets for tumor detection. The researchers posit that, for certain detection tasks, high precision in boundary segmentation is not crucial, and per-pixel annotations may therefore be unnecessary.

 

In contrast, weak annotations are more cost-effective and less time-consuming than per-pixel annotations. The researchers hypothesize that weak annotations are more appropriate for tumor detection and localization than the detection-by-segmentation strategy, and they justify this from three perspectives. First, under a fixed budget, per-pixel annotations inevitably sacrifice data diversity and population size because of their high annotation cost; weak annotations allow for greater diversity and thus improve the tumor detection rate in minority cases, such as underrepresented age groups. Second, formulating the problem as tumor segmentation can generate numerous false positives: pixel-wise annotated datasets, e.g., KiTS, only provide images that contain tumors, which can bias AI algorithms toward predicting a tumor in every unseen image. Third, per-pixel annotations require significant time and resources. Specifically, per-pixel annotations for pancreatic tumors in 3D volumetric CT scans require four minutes per subject, whereas weak annotations require an average of only two seconds per subject. Similarly, for polyp detection, weak annotations are eight times faster to perform than per-pixel annotations (2 s vs. 16 s).

 

While per-pixel annotation is dauntingly expensive and time-consuming, it is still widely adopted to train and test AI algorithms for tumor detection and localization. The researchers design a new weak annotation strategy for high-dimensional data, such as temporal and volumetric medical images, by exploiting contextual information across dimensions. They call this strategy “Drag&Drop” because it involves clicking on the tumor and then dragging and dropping to indicate the tumor's approximate radius. This annotation is sufficient to capture the size and location of each tumor without requiring precise boundary segmentation. To utilize Drag&Drop annotations, the researchers further develop a weakly supervised framework based on the classical watershed algorithm, optimized using the approximate tumor size and location constraints provided by Drag&Drop. This framework significantly reduces the impact of noisy labels, which commonly occur at tumor boundaries in per-pixel annotations. Experiments demonstrate that, when trained on Drag&Drop weak annotations, AI algorithms perform similarly to those trained on pixel-wise annotations in tumor detection and localization tasks. The researchers also show the superiority of Drag&Drop over previous weak annotation strategies, such as scribbles, points, bounding boxes, and ellipses, in terms of tumor detection and localization efficacy.
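The marker expansion and watershed step described above (and in the figure caption) can be sketched roughly as follows. This is an illustrative assumption of how a click-and-drag label might be expanded, not the authors' implementation: the function name, the marker radii (0.3× and 1.5× the dragged radius), and the use of SciPy's `watershed_ift` are all hypothetical choices.

```python
import numpy as np
from scipy import ndimage


def drag_and_drop_to_mask(image, center, radius):
    """Expand a click (center) and drag (radius) into an initial tumor mask.

    A small disk around the click becomes the lesion marker, a ring well
    outside the dragged radius becomes the background marker, and
    marker-based watershed on the gradient image grows the initial region.
    """
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - center[0], xx - center[1])

    markers = np.zeros((h, w), dtype=np.int16)
    markers[dist <= 0.3 * radius] = 2      # lesion marker
    markers[dist >= 1.5 * radius] = 1      # background markers

    # Watershed floods outward from the markers over the gradient
    # magnitude, so region growth stops at strong intensity edges
    # (i.e., the apparent tumor boundary).
    grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=1.0)
    grad = (255.0 * grad / (grad.max() + 1e-8)).astype(np.uint8)
    labels = ndimage.watershed_ift(grad, markers)
    return labels == 2                     # initial segmentation mask
```

On a synthetic image containing a bright disk, a single click near the disk's center plus an approximate radius is enough for the watershed to recover the disk interior without any boundary tracing.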

 

Section 2 reviews the related work. In contrast to prior strategies, the researchers propose that Drag&Drop is the first weak annotation strategy focused on high-dimensional data, such as temporal and volumetric medical images. Unlike low-dimensional data, high-dimensional data requires significant effort and time for pixel-wise annotation because of its temporal or spatial dimensions. By leveraging contextual information across dimensions, Drag&Drop enables labeling based on a single 2D annotation in high-dimensional volumetric data, eliminating the need to annotate slice by slice.
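The idea of growing a single 2D annotation into the neighboring slices of a volume could be sketched as below. The overlap-and-intensity rule, the dilation amount, the tolerance, and all names here are illustrative assumptions standing in for the paper's 3D annotation propagation, which is not detailed in this release.

```python
import numpy as np
from scipy import ndimage


def propagate_slice_mask(volume, mask2d, z0, tol=50.0):
    """Propagate a single-slice tumor mask through a 3D volume.

    Starting from the annotated slice z0, each neighboring slice reuses
    the dilated mask of the previous slice as a search region and keeps
    only voxels whose intensity stays close to the tumor's mean intensity.
    Propagation stops in each direction once no voxels survive.
    """
    depth = volume.shape[0]
    masks = {z0: mask2d.astype(bool)}
    mean_val = volume[z0][mask2d.astype(bool)].mean()

    for direction in (-1, 1):
        prev = masks[z0]
        z = z0 + direction
        while 0 <= z < depth:
            region = ndimage.binary_dilation(prev, iterations=2)
            cur = region & (np.abs(volume[z].astype(float) - mean_val) < tol)
            if not cur.any():
                break
            masks[z] = cur
            prev, z = cur, z + direction
    return masks
```

For a synthetic sphere, annotating only its central cross-section yields per-slice masks for every slice the sphere intersects, and propagation halts automatically where the lesion ends.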

 

Section 3 introduces the method of this study. To reduce manual annotation effort, the researchers propose a novel annotation strategy termed Drag&Drop and a weakly supervised learning framework, consisting of 3D annotation propagation and noise reduction, to achieve a better cost-accuracy trade-off.
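The noise-reduction idea, excluding uncertain boundary pixels from the gradient computation as in the masked back-propagation mentioned in the figure caption, might be sketched as a loss averaged only over trusted pixels. This simplified NumPy illustration is an assumption about the mechanism, not the paper's implementation.

```python
import numpy as np


def masked_bce(pred, target, trust_mask, eps=1e-7):
    """Binary cross-entropy averaged only over trusted pixels.

    Pixels inside the uncertain band around the estimated tumor
    boundary (trust_mask == 0) are excluded, so noisy boundary labels
    contribute no gradient during back-propagation.
    """
    p = np.clip(pred, eps, 1.0 - eps)
    ce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    n = max(trust_mask.sum(), 1)
    return float((ce * trust_mask).sum() / n)
```

Masking out a pixel whose label is wrong leaves the loss unaffected by that pixel, which is the point: the network is never penalized for disagreeing with an unreliable boundary label.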

 

Section 4 details the experiments and results. The experimental results demonstrate that the proposed framework achieves a tumor detection rate comparable to per-pixel annotations and higher than alternative weak annotation strategies. More importantly, the researchers show that, given a fixed annotation budget, allocating weak annotations across a larger data population improves model generalizability to minority cases compared with per-pixel annotations on a small dataset.

 

In this paper, to simplify the annotation of temporal and volumetric medical images, the researchers propose a novel annotation strategy called Drag&Drop and a weakly supervised framework that exploits these annotations. They hope the Drag&Drop strategy can streamline and accelerate the annotation procedure for tumor detection and localization across various medical modalities.

 

 

See the article:

Acquiring Weak Annotations for Tumor Localization in Temporal and Volumetric Data

http://doi.org/10.1007/s11633-023-1380-5


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.