PeMNet for Pectoral Muscle Segmentation
Metadata
Publisher
MDPI
Subject
Pectoral segmentation; Deep learning; Global channel attention module
Date
2022-01-14
Bibliographic reference
Yu, X... [et al.]. PeMNet for Pectoral Muscle Segmentation. Biology 2022, 11, 134. [https://doi.org/10.3390/biology11010134]
Sponsor
CSC; Royal Society International Exchanges Cost Share Award, UK RP202G0230; Medical Research Council Confidence in Concept Award, UK MC_PC_17171; Hope Foundation for Cancer Research, UK RM60G0680; Sino-UK Industrial Fund, UK RP202G0289; Global Challenges Research Fund (GCRF), UK P202PF11; British Heart Foundation Accelerator Award, UK AA/18/3/34220; Guangxi Key Laboratory of Trusted Software kx201901; FEDER Una manera de hacer Europa RTI2018-098913-B100; Junta de Andalucía; European Commission CV20-45250, A-TIC-080-UGR18, B-TIC-586-UGR20, P20-00525; MCIN/AEI/10.13039/501100011033
Abstract
As an important imaging modality, mammography is considered to be the global gold standard
for early detection of breast cancer. Computer-Aided Diagnosis (CAD) systems have played a crucial
role in speeding up diagnostic procedures, which otherwise could take weeks if only radiologists
were involved. Some of these CAD systems require pectoral muscle segmentation to separate the
breast region from the pectoral muscle for specific analysis tasks. Accurate and efficient
pectoral muscle segmentation frameworks are therefore in high demand. Here, we propose a novel
deep learning framework, code-named PeMNet, for pectoral muscle segmentation
in mammography images. PeMNet integrates a novel attention module, the
Global Channel Attention Module (GCAM), which effectively improves the segmentation
performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps
(CAMs) are first extracted by concatenating feature maps after parallel global average pooling and
global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer
perceptron (MLP) for elementwise multiplication with the CAMs at the next feature level. By
iterating this procedure, global CAMs (GCAMs) are formed and multiplied elementwise with the
final feature maps to produce the final segmentation. In this way, CAMs from early stages of a deep
convolutional network are effectively passed on to later stages, leading to better
information usage. The experiments on a merged dataset derived from two datasets, INbreast and
OPTIMAM, showed that PeMNet greatly outperformed state-of-the-art methods, achieving an
IoU of 97.46%, a global pixel accuracy of 99.48%, a Dice similarity coefficient of 96.30%, and a
Jaccard index of 93.33%.
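The attention procedure described in the abstract (parallel global average and maximum pooling, concatenation, an MLP with a sigmoid gate, and elementwise fusion of the resulting CAM with the CAM of the next stage) can be sketched in a few lines. This is a minimal NumPy illustration, not the PeMNet implementation: the channel count, the MLP weights `w1`/`w2`, and the simplifying assumption that adjacent stages share the same channel dimension are all hypothetical.

```python
import numpy as np

def channel_attention(feature, w1, w2, prev_cam=None):
    """One illustrative GCAM-style stage: pool -> concat -> MLP -> sigmoid -> fuse."""
    # Global average and maximum pooling over the spatial dimensions.
    gap = feature.mean(axis=(1, 2))          # shape (C,)
    gmp = feature.max(axis=(1, 2))           # shape (C,)
    z = np.concatenate([gap, gmp])           # shape (2C,)
    h = np.maximum(w1 @ z, 0.0)              # MLP hidden layer with ReLU
    cam = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # sigmoid gate, shape (C,)
    if prev_cam is not None:
        # Elementwise fusion with the CAM carried over from the earlier stage.
        cam = cam * prev_cam
    return cam

rng = np.random.default_rng(0)
C = 4
w1 = rng.standard_normal((C, 2 * C)) * 0.1   # illustrative MLP weights (assumed shapes)
w2 = rng.standard_normal((C, C)) * 0.1

feat1 = rng.standard_normal((C, 8, 8))       # early-stage feature maps
cam1 = channel_attention(feat1, w1, w2)

feat2 = rng.standard_normal((C, 4, 4))       # later-stage feature maps
gcam = channel_attention(feat2, w1, w2, prev_cam=cam1)  # "global" CAM after fusion

out = feat2 * gcam[:, None, None]            # channel-wise scaling of the final features
print(out.shape)
```

Because each stage's gate is multiplied into the next stage's gate, attention computed early in the network keeps influencing the channel weighting applied to the final feature maps, which is the information-propagation idea the abstract attributes to GCAM.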