Learning from crowds for automated histopathological image segmentation
Metadata
Author
López Pérez, Miguel; Morales Álvarez, Pablo; Cooper, Lee A.D.; Felicelli, Christopher; Goldstein, Jeffery; Vadasz, Brian; Molina Soriano, Rafael; Katsaggelos, Aggelos
Publisher
Elsevier
Subject
Segmentation; Histopathology; Crowdsourcing
Date
2024
Bibliographic reference
Computerized Medical Imaging and Graphics 112 (2024) 102327 [10.1016/j.compmedimag.2024.102327]
Sponsor
Spanish Ministry of Science and Innovation under Project PID2022-140189OB-C22; FEDER/Junta de Andalucía under Project P20_00286; FEDER/Junta de Andalucía, and Universidad de Granada under Project B-TIC-324-UGR20; "Contrato puente" of the University of Granada; Funding for open access charge: Universidad de Granada / CBUA.
Abstract
Automated semantic segmentation of histopathological images is an essential task in Computational Pathology
(CPATH). The main limitation of Deep Learning (DL) to address this task is the scarcity of expert annotations.
Crowdsourcing (CR) has emerged as a promising solution to reduce the individual (expert) annotation cost
by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this
scenario is challenging, as it involves noisy annotations. Jointly learning the underlying (expert) segmentation
and the annotators’ expertise is currently a commonly used approach. Unfortunately, this approach is frequently
carried out by learning a different neural network for each annotator, which scales poorly when the number
of annotators grows. For this reason, this strategy cannot be easily applied to real-world CPATH segmentation.
This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach
consists of two coupled networks: a segmentation network (for learning the expert segmentation) and an
annotator network (for learning the annotators’ expertise). We propose to estimate the annotators’ behavior
with only one network that receives the annotator ID as input, achieving scalability with respect to the number of
annotators. Our family is composed of three different models for the annotator network. Within this family,
we propose a modeling of the annotator network that is novel in the CR segmentation literature, as it considers
the global features of the image. We validate our methods on a real-world dataset of Triple Negative Breast
Cancer images labeled by several medical students. Our new CR modeling achieves a Dice coefficient of 0.7827,
outperforming the well-known STAPLE (0.7039) and remaining competitive with the supervised method trained on expert
labels (0.7723). The code is available at https://github.com/wizmik12/CRowd_Seg.
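The core idea described in the abstract can be illustrated with a minimal NumPy sketch: an "expert" segmentation model produces per-pixel class probabilities, and a single annotator model, indexed by annotator ID, maps them to the distribution of labels each annotator would produce. Here the annotator model is reduced to a lookup table of per-annotator confusion matrices; this is an illustrative stand-in for the paper's annotator network, not the authors' actual architecture, and all names (`make_confusion`, `noisy_label_probs`, the skill values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes = 3          # e.g. tumor / stroma / background (hypothetical labels)
h, w = 8, 8            # tiny image for illustration

# "Segmentation network" output: per-pixel expert class probabilities.
expert_probs = rng.dirichlet(np.ones(n_classes), size=(h, w))  # (h, w, C)

def make_confusion(skill):
    # Rows: true class, columns: observed class; the diagonal holds the
    # probability that the annotator reports the true class.
    cm = np.full((n_classes, n_classes), (1 - skill) / (n_classes - 1))
    np.fill_diagonal(cm, skill)
    return cm

# One shared table indexed by annotator ID (instead of one network per
# annotator), which is what makes the approach scale with many annotators.
confusions = np.stack([make_confusion(s) for s in [0.9, 0.8, 0.7, 0.6]])

def noisy_label_probs(probs, annotator_id):
    """Per-pixel distribution over the labels a given annotator would give:
    p(observed = j) = sum_i p(true = i) * cm[i, j]."""
    return probs @ confusions[annotator_id]  # (h, w, C) @ (C, C) -> (h, w, C)

def dice(pred_mask, true_mask):
    """Dice coefficient between two binary masks, as reported in the paper."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + true_mask.sum())

# The least reliable annotator flattens the expert distribution toward uniform:
p_expert = expert_probs.max(axis=-1).mean()
p_noisy = noisy_label_probs(expert_probs, 3).max(axis=-1).mean()
```

During training, such a model would be fit by maximizing the likelihood of the observed noisy annotations under `noisy_label_probs`, so that the expert segmentation and the annotators' reliabilities are learned jointly, as the abstract describes.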