
Enhancing multimodal patterns in neuroimaging by siamese neural networks with self-attention mechanism

[PDF] manuscript_ijns copia.pdf (7.361Mb)
Identifiers
URI: https://hdl.handle.net/10481/85816
DOI: 10.1142/S0129065723500193
Author
Arco Martín, Juan Eloy; Ortiz, Andrés; Gallego Molina; Gorriz Sáez, Juan Manuel; Ramírez Pérez De Inestrosa, Javier
Publisher
World Scientific Publishing Company
Subject
  • Multimodal combination
  • Deep learning
  • Medical imaging
  • Self-attention
  • Siamese neural network
Date
2022
Bibliographic reference
Published version: Vol. 33, No. 04, 2350019 (2023) [10.1142/S0129065723500193]
Sponsor
Projects PGC2018-098813-B-C32 and RTI2018-098913-B100 (Spanish "Ministerio de Ciencia, Innovación y Universidades"); UMA20-FEDERJA-086, A-TIC-080-UGR18 and P20 00525 (Consejería de Economía y Conocimiento, Junta de Andalucía); European Regional Development Funds (ERDF); Spanish "Ministerio de Universidades" through a Margarita Salas grant
Abstract
The combination of different sources of information is currently one of the most relevant aspects in the diagnostic process of several diseases. In the field of neurological disorders, different imaging modalities providing structural and functional information are frequently available. These modalities are usually analyzed separately, although a joint analysis of the features extracted from both sources can improve the classification performance of Computer-Aided Diagnosis (CAD) tools. Previous studies have computed independent models from each individual modality and combined them in a subsequent stage, which is not an optimal solution. In this work, we propose a method based on the principles of siamese neural networks to fuse information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). This framework quantifies the similarities between both modalities and relates them to the diagnostic label during the training process. The resulting latent space at the output of this network is then fed into an attention module in order to evaluate the relevance of each brain region at different stages of the development of Alzheimer's disease. The excellent results obtained and the high flexibility of the proposed method allow fusing more than two modalities, leading to a scalable methodology that can be used in a wide range of contexts.
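The record does not include code; the following NumPy sketch only illustrates the general pipeline the abstract describes — a shared (siamese) encoder applied to both modalities, a cross-modal similarity score per region, and scaled dot-product self-attention over the fused latent vectors. All dimensions, the encoder, and the fusion-by-concatenation step are assumptions for illustration, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Shared encoder: the SAME weights W map either modality to the latent
    space, which is the defining property of a siamese network."""
    return np.tanh(x @ W)

def self_attention(Z):
    """Scaled dot-product self-attention treating brain regions as tokens."""
    d = Z.shape[-1]
    scores = Z @ Z.T / np.sqrt(d)
    # Numerically stable softmax over each row.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ Z, w

# Hypothetical sizes: 8 brain regions, 16 features per region, latent dim 4.
n_regions, n_feat, d_latent = 8, 16, 4
W = rng.normal(size=(n_feat, d_latent))        # shared siamese weights
mri = rng.normal(size=(n_regions, n_feat))     # structural features (MRI)
pet = rng.normal(size=(n_regions, n_feat))     # functional features (PET)

z_mri, z_pet = encode(mri, W), encode(pet, W)

# Cross-modal similarity per region (cosine), the kind of score a siamese
# network relates to the diagnostic label during training.
sim = np.sum(z_mri * z_pet, axis=1) / (
    np.linalg.norm(z_mri, axis=1) * np.linalg.norm(z_pet, axis=1))

# Fuse the two latent representations and let self-attention weight regions.
fused = np.concatenate([z_mri, z_pet], axis=1)
attended, attn = self_attention(fused)

print(sim.shape, attended.shape, attn.shape)   # (8,) (8, 8) (8, 8)
```

The per-row attention weights in `attn` play the role of the region-relevance scores the abstract mentions: rows sum to one, so each region's output is a convex combination of all regions' fused features.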
Collections
  • DTSTC - Artículos
