Show simple item record

dc.contributor.author  López Pérez, Miguel
dc.contributor.author  Morales Álvarez, Pablo
dc.contributor.author  Cooper, Lee A.D.
dc.contributor.author  Felicelli, Christopher
dc.contributor.author  Goldstein, Jeffery
dc.contributor.author  Vadasz, Brian
dc.contributor.author  Molina Soriano, Rafael
dc.contributor.author  Katsaggelos, Aggelos
dc.date.accessioned  2024-04-09T10:04:47Z
dc.date.available  2024-04-09T10:04:47Z
dc.date.issued  2024
dc.identifier.citation  Computerized Medical Imaging and Graphics 112 (2024) 102327 [10.1016/j.compmedimag.2024.102327]  es_ES
dc.identifier.uri  https://hdl.handle.net/10481/90541
dc.description.abstract  Automated semantic segmentation of histopathological images is an essential task in Computational Pathology (CPATH). The main limitation of Deep Learning (DL) to address this task is the scarcity of expert annotations. Crowdsourcing (CR) has emerged as a promising solution to reduce the individual (expert) annotation cost by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this scenario is challenging, as it involves noisy annotations. Jointly learning the underlying (expert) segmentation and the annotators’ expertise is currently a commonly used approach. Unfortunately, this approach is frequently carried out by learning a different neural network for each annotator, which scales poorly when the number of annotators grows. For this reason, this strategy cannot be easily applied to real-world CPATH segmentation. This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach consists of two coupled networks: a segmentation network (for learning the expert segmentation) and an annotator network (for learning the annotators’ expertise). We propose to estimate the annotators’ behavior with only one network that receives the annotator ID as input, achieving scalability in the number of annotators. Our family is composed of three different models for the annotator network. Within this family, we propose a novel modeling of the annotator network in the CR segmentation literature, which considers the global features of the image. We validate our methods on a real-world dataset of Triple Negative Breast Cancer images labeled by several medical students. Our new CR modeling achieves a Dice coefficient of 0.7827, outperforming the well-known STAPLE (0.7039) and being competitive with the supervised method with expert labels (0.7723). The code is available at https://github.com/wizmik12/CRowd_Seg.  es_ES
dc.description.sponsorship  Spanish Ministry of Science and Innovation under Project PID2022-140189OB-C22  es_ES
dc.description.sponsorship  FEDER/Junta de Andalucía under Project P20_00286  es_ES
dc.description.sponsorship  FEDER/Junta de Andalucía and Universidad de Granada under Project B-TIC-324-UGR20  es_ES
dc.description.sponsorship  "Contrato puente" of the University of Granada  es_ES
dc.description.sponsorship  Funding for open access charge: Universidad de Granada / CBUA  es_ES
dc.language.iso  eng  es_ES
dc.publisher  Elsevier  es_ES
dc.rights  Attribution 4.0 International  *
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/  *
dc.subject  Segmentation  es_ES
dc.subject  Histopathology  es_ES
dc.subject  Crowdsourcing  es_ES
dc.title  Learning from crowds for automated histopathological image segmentation  es_ES
dc.type  journal article  es_ES
dc.rights.accessRights  open access  es_ES
dc.identifier.doi  10.1016/j.compmedimag.2024.102327
dc.type.hasVersion  VoR  es_ES
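The abstract describes two coupled networks, where a single annotator network receives the annotator ID as input so that parameters scale with the number of annotators instead of training one network per annotator. Below is a minimal NumPy sketch of that kind of crowdsourcing forward model; all names and shapes are illustrative assumptions, not the authors' CRowd_Seg implementation. Each annotator ID indexes a shared table of confusion-matrix parameters that maps the segmentation network's per-pixel class probabilities to that annotator's expected noisy labels.

```python
import numpy as np

N_CLASSES = 3
N_ANNOTATORS = 5
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Shared parameter table: one (C x C) confusion-matrix logit block per
# annotator. Adding an annotator adds one block of parameters, not a
# whole new network (hypothetical stand-in for the annotator network).
conf_logits = rng.normal(size=(N_ANNOTATORS, N_CLASSES, N_CLASSES))

def noisy_label_probs(seg_probs, annotator_id):
    """Combine per-pixel class probabilities (H, W, C) from a segmentation
    model with the annotator-specific confusion matrix to obtain the
    distribution over that annotator's observed (noisy) labels."""
    # Rows of cm: true class -> probability of the observed class.
    cm = softmax(conf_logits[annotator_id], axis=-1)  # (C, C)
    return seg_probs @ cm  # (H, W, C) @ (C, C) -> (H, W, C)

# Toy 4x4 "image": per-pixel class probabilities from a segmentation model.
seg_probs = softmax(rng.normal(size=(4, 4, N_CLASSES)))
out = noisy_label_probs(seg_probs, annotator_id=2)
```

Because both the true-class probabilities and the confusion-matrix rows sum to one, `out` remains a valid per-pixel distribution; in training one would maximize the likelihood of each annotator's labels under this output.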


Files in this item

[PDF]

This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International