Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists

[PDF] Enhancing_Interpretability_of_Deep_Learning_Techniques_for_Oncological_Diseases__Current_Trends_and_Future_Horizons copia.pdf (2.036Mb)
Identifiers
URI: https://hdl.handle.net/10481/87830
DOI: 10.1109/rbme.2021.3131358
Authors
Amorim, José P.; Abreu, Pedro H.; Fernández Hilario, Alberto Luis; Reyes, Mauricio; Santos, Joao; Abreu, Miguel H.
Publisher
IEEE Reviews in Biomedical Engineering
Date
2023-01
Bibliographic citation
J. P. Amorim, P. H. Abreu, A. Fernández, M. Reyes, J. Santos and M. H. Abreu, "Interpreting Deep Machine Learning Models: An Easy Guide for Oncologists," in IEEE Reviews in Biomedical Engineering, vol. 16, pp. 192-207, 2023, doi: 10.1109/RBME.2021.3131358.
Keywords: Cancer; Neurons; Tumors; Training; Feature extraction; Shape; Breast cancer; Big Data; interpretability; deep learning; decision-support systems; oncology
Abstract
Healthcare agents, particularly in the oncology field, currently collect vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical use. Despite all the efforts to introduce artificial intelligence methods into clinicians' workflows, their lack of interpretability (i.e., understanding how the methods reach their decisions) still inhibits their adoption in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies used to explain them. Theoretical concepts are illustrated with oncological examples, and a literature review of research works was performed in PubMed covering January 2014 to September 2020, using "deep learning techniques," "interpretability," and "oncology" as keywords. Overall, more than 60% of the works relate to breast, skin, or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g., dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability while maintaining their performance remains one of the greatest challenges of artificial intelligence.
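The feature-importance style of explanation mentioned in the abstract can be sketched with an input-gradient computation: the gradient of a model's prediction with respect to each input feature indicates how strongly that feature influences the prediction. The toy model, weights, and feature names below are purely illustrative assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical toy "model": logistic regression over three tumor features
# (dimension, shape irregularity, density). Weights are invented for
# illustration only.
weights = np.array([1.5, 2.0, -0.5])
bias = -1.0

def predict(x):
    """Probability of the positive class (e.g., malignant)."""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

def input_gradient_saliency(x):
    """Gradient of the prediction w.r.t. each input feature.

    For a sigmoid output p over a linear score, dp/dx_i = p * (1 - p) * w_i,
    so features with larger |gradient| influence this prediction more.
    """
    p = predict(x)
    return p * (1.0 - p) * weights

patient = np.array([0.8, 0.6, 0.3])  # normalized feature values
for name, s in zip(["dimension", "shape", "density"],
                   input_gradient_saliency(patient)):
    print(f"{name}: {s:+.3f}")
```

For deep networks the same idea is applied via automatic differentiation (e.g., saliency maps over images), but the principle is identical: rank input features by the magnitude of their effect on a single prediction.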
Collections
  • DCCIA - Artículos
