Improving ductal carcinoma in situ classification by convolutional neural network with exponential linear unit and rank-based weighted pooling
Metadata
Publisher
Springer Nature
Subject
Ductal carcinoma in situ; Thermal images; Deep learning; Convolutional neural networks; Breast thermography; Exponential linear unit; Rank-based weighted pooling; Data augmentation; Color jittering; Visual question answering
Date
2020-11
Bibliographic reference
Zhang, Y. D., Satapathy, S. C., Wu, D., Guttery, D. S., Górriz, J. M., & Wang, S. H. (2020). Improving ductal carcinoma in situ classification by convolutional neural network with exponential linear unit and rank-based weighted pooling. Complex & Intelligent Systems, 1–16. https://doi.org/10.1007/s40747-020-00218-4
Sponsorship
British Heart Foundation Accelerator Award, UK; Royal Society International Exchanges Cost Share Award, UK RP202G0230; Hope Foundation for Cancer Research, UK RM60G0680; Medical Research Council Confidence in Concept Award, UK MC_PC_17171; MINECO/FEDER, Spain/Europe RTI2018-098913-B100; A-TIC-080-UGR18
Abstract
Ductal carcinoma in situ (DCIS) is a pre-cancerous lesion in the ducts of the breast, and early diagnosis is crucial for optimal therapeutic intervention. Thermography is a non-invasive imaging tool that can be used to detect DCIS, and although it has high accuracy (~88%), its sensitivity can still be improved. Hence, we aimed to develop an automated artificial intelligence-based system for improved detection of DCIS in thermographs. This study proposed a novel convolutional neural network (CNN)-based system, termed CNN-BDER, evaluated on a multisource dataset containing 240 DCIS images and 240 healthy breast images. The CNN integrates batch normalization, dropout, exponential linear units, and rank-based weighted pooling, along with L-way data augmentation. Ten runs of tenfold cross-validation were used to report unbiased performance. Our proposed method achieved a sensitivity of 94.08±1.22%, a specificity of 93.58±1.49%, and an accuracy of 93.83±0.96%. The proposed method outperforms eight state-of-the-art approaches and manual diagnosis. The trained model could serve as a visual question answering system and improve diagnostic accuracy.
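The abstract names two architectural components, the exponential linear unit (ELU) and rank-based weighted pooling. The PyTorch sketch below illustrates both under stated assumptions: the paper's exact rank-weighting scheme is not reproduced here, so the linearly decaying weights and the class name `RankWeightedPool2d` are illustrative choices, not the authors' formulation.

```python
# Minimal sketch of ELU and a rank-based weighted pooling layer.
# ASSUMPTION: weights decay linearly with the rank of each value inside
# a pooling window; the paper's actual weighting may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def elu(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise."""
    return torch.where(x > 0, x, alpha * (torch.exp(x) - 1))


class RankWeightedPool2d(nn.Module):
    """Pools each k x k window by a rank-weighted sum of its sorted values.

    With weights w_1 >= w_2 >= ... >= w_{k*k}, this interpolates between
    max pooling (w = [1, 0, ..., 0]) and average pooling (uniform w).
    """

    def __init__(self, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        self.k, self.s = kernel_size, stride
        n = kernel_size * kernel_size
        # Assumed weighting: linearly decaying with rank, summing to 1.
        w = torch.arange(n, 0, -1, dtype=torch.float32)
        self.register_buffer("weights", w / w.sum())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Unfold into one column per pooling window: (B, C*n, L).
        cols = F.unfold(x, self.k, stride=self.s)
        cols = cols.view(b, c, self.k * self.k, -1)        # (B, C, n, L)
        # Sort window values in descending order, then weight by rank.
        sorted_vals, _ = cols.sort(dim=2, descending=True)
        pooled = (sorted_vals * self.weights.view(1, 1, -1, 1)).sum(dim=2)
        out_h = (h - self.k) // self.s + 1
        out_w = (w - self.k) // self.s + 1
        return pooled.view(b, c, out_h, out_w)


# Usage example on a dummy feature map.
pool = RankWeightedPool2d(kernel_size=2, stride=2)
feat = elu(torch.randn(1, 16, 28, 28))
print(pool(feat).shape)  # torch.Size([1, 16, 14, 14])
```

The design choice behind such a pooling layer is that max pooling discards all but one activation per window while average pooling dilutes strong responses; a rank-based weighting keeps the strongest activation dominant without ignoring the rest of the window.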