Show simple item record

dc.contributor.author: Yu, Xiang
dc.contributor.author: Gorriz Sáez, Juan Manuel
dc.date.accessioned: 2022-03-25T12:30:32Z
dc.date.available: 2022-03-25T12:30:32Z
dc.date.issued: 2022-01-14
dc.identifier.citation: Yu, X... [et al.]. PeMNet for Pectoral Muscle Segmentation. Biology 2022, 11, 134. [https://doi.org/10.3390/biology11010134]  es_ES
dc.identifier.uri: http://hdl.handle.net/10481/73769
dc.description: X.Y. holds a CSC scholarship with the University of Leicester. The authors declare that there is no conflict of interest. This paper is partially supported by Royal Society International Exchanges Cost Share Award, UK (RP202G0230); Medical Research Council Confidence in Concept Award, UK (MC_PC_17171); Hope Foundation for Cancer Research, UK (RM60G0680); Sino-UK Industrial Fund, UK (RP202G0289); Global Challenges Research Fund (GCRF), UK (P202PF11); British Heart Foundation Accelerator Award, UK (AA/18/3/34220); Guangxi Key Laboratory of Trusted Software (kx201901); MCIN/AEI/10.13039/501100011033/ and FEDER Una manera de hacer Europa under the RTI2018-098913-B100 project; and by the Consejeria de Economia, Innovacion, Ciencia y Empleo (Junta de Andalucia) and FEDER under the CV20-45250, A-TIC-080-UGR18, B-TIC-586-UGR20 and P20-00525 projects.  es_ES
dc.description.abstract: As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-Aided Diagnosis (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which could otherwise take weeks if only radiologists were involved. In some of these CAD systems, pectoral muscle segmentation is required to partition the breast region from the pectoral muscle for specific analysis tasks. Therefore, accurate and efficient breast pectoral muscle segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, which we code-named PeMNet, for breast pectoral muscle segmentation in mammography images. In the proposed PeMNet, we integrated a novel attention module called the Global Channel Attention Module (GCAM), which can effectively improve the segmentation performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after paralleled global average pooling and global maximum pooling operations. CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs at the next feature level. By iteratively repeating this procedure, the global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from the early stages of a deep convolutional network can be effectively passed on to later stages of the network and therefore lead to better information usage. Experiments on a merged dataset derived from two datasets, INbreast and OPTIMAM, showed that PeMNet greatly outperformed state-of-the-art methods, achieving an IoU of 97.46%, a global pixel accuracy of 99.48%, a Dice similarity coefficient of 96.30%, and a Jaccard index of 93.33%.  es_ES
dc.description.sponsorship: CSC  es_ES
dc.description.sponsorship: Royal Society International Exchanges Cost Share Award, UK RP202G0230  es_ES
dc.description.sponsorship: Medical Research Council Confidence in Concept Award, UK MC_PC_17171  es_ES
dc.description.sponsorship: Hope Foundation for Cancer Research, UK RM60G0680  es_ES
dc.description.sponsorship: Sino-UK Industrial Fund, UK RP202G0289  es_ES
dc.description.sponsorship: Global Challenges Research Fund (GCRF), UK P202PF11  es_ES
dc.description.sponsorship: British Heart Foundation Accelerator Award, UK AA/18/3/34220  es_ES
dc.description.sponsorship: Guangxi Key Laboratory of Trusted Software kx201901  es_ES
dc.description.sponsorship: FEDER Una manera de hacer Europa RTI2018-098913-B100  es_ES
dc.description.sponsorship: Junta de Andalucia  es_ES
dc.description.sponsorship: European Commission CV20-45250 A-TIC-080-UGR18 B-TIC-586-UGR20 P20-00525  es_ES
dc.description.sponsorship: MCIN/AEI/10.13039/501100011033/  es_ES
dc.language.iso: eng  es_ES
dc.publisher: MDPI  es_ES
dc.rights: Atribución 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/es/
dc.subject: Pectoral segmentation  es_ES
dc.subject: Deep learning  es_ES
dc.subject: Global channel attention module  es_ES
dc.title: PeMNet for Pectoral Muscle Segmentation  es_ES
dc.type: journal article  es_ES
dc.rights.accessRights: open access  es_ES
dc.identifier.doi: 10.3390/biology11010134
dc.type.hasVersion: VoR  es_ES
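The abstract describes the GCAM mechanism: channel descriptors from paralleled global average pooling and global maximum pooling are concatenated, refined by an MLP, and used to rescale feature maps elementwise. A minimal numpy sketch of one such channel-attention step follows; the shapes, the reduction ratio, and the sigmoid gate are illustrative assumptions, not the authors' exact PeMNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """One CAM step: feat (C, H, W) -> attention-rescaled maps (C, H, W)."""
    gap = feat.mean(axis=(1, 2))          # global average pooling, shape (C,)
    gmp = feat.max(axis=(1, 2))           # global maximum pooling, shape (C,)
    desc = np.concatenate([gap, gmp])     # concatenated descriptor, shape (2C,)
    hidden = np.maximum(desc @ w1, 0.0)   # MLP hidden layer with ReLU
    cam = sigmoid(hidden @ w2)            # channel attention map in (0, 1), shape (C,)
    return feat * cam[:, None, None]      # elementwise rescaling per channel

C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2 * C, C // 2))  # assumed channel reduction ratio of 4
w2 = rng.standard_normal((C // 2, C))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In the full GCAM, the abstract indicates this CAM would then be multiplied elementwise with the CAM at the next feature level, iterating until the global attention map modulates the final feature maps.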


Files in this item

[PDF]

This item appears in the following collection(s)


Atribución 3.0 España
Except where otherwise noted, this item's license is described as Atribución 3.0 España