A novel fine-tuning and evaluation methodology for large language models on IoT raw data summaries (LLM-RawDMeth): A joint perspective in diabetes care
Metadata
Author
Gaitán-Guerrero, Juan F.; Martínez Cruz, Carmen; Espinilla, Macarena; Díaz-Jiménez, David; López, Jose L.
Publisher
Elsevier
Subject
Diabetes management; Prompt engineering; Continuous glucose monitoring
Date
2025
Citation
Gaitán-Guerrero JF, Martínez-Cruz C, Espinilla M, Díaz-Jiménez D, López JL. A novel fine-tuning and evaluation methodology for large language models on IoT raw data summaries (LLM-RawDMeth): A joint perspective in diabetes care. Comput Methods Programs Biomed. 2025 Sep;269:108878. doi: 10.1016/j.cmpb.2025.108878
Funding
MICIU/AEI/10.13039/501100011033, TPID2021-127275OB-I00, PID2021-126363NB-I00 and PDC2023-145863-I00; European Union NextGenerationEU/PRTR; Universidad de Jaén/CBUA
Abstract
This study addresses the challenge of interpreting complex continuous glucose monitoring data in diabetes management by proposing a domain-guided fine-tuning methodology for Large Language Models. Using expert-modeled fuzzy logic datasets and task-aware prompt engineering, the approach enables LLMs to generate accurate, concise, and clinically meaningful summaries from raw glucose data. Experimental results show that fine-tuned GPT-4o achieves superior performance, demonstrating the potential of expert-aligned language models to support medical decision-making and reduce the burden on healthcare systems.
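The record does not include implementation details of the fine-tuning pipeline. Purely as an illustrative sketch, and not the authors' actual method, the following Python snippet shows one way a single training example could pair a task-aware prompt over raw continuous glucose monitoring readings with a concise, expert-style target summary in the chat-message format commonly used for fine-tuning models such as GPT-4o. The readings, thresholds, and linguistic labels below are hypothetical placeholders, not values from the paper.

import json
import statistics

# Hypothetical CGM readings (mg/dL) sampled every 5 minutes; values are illustrative only.
readings = [112, 118, 131, 150, 172, 190, 181, 160, 142, 128, 119, 110]

mean_glucose = statistics.mean(readings)
time_in_range = sum(70 <= g <= 180 for g in readings) / len(readings) * 100

# Loose linguistic label, loosely inspired by fuzzy partitioning of glucose levels;
# the cut-off points are placeholders, not those modeled by the experts in the study.
if mean_glucose < 100:
    level_label = "low-to-normal"
elif mean_glucose < 150:
    level_label = "normal-to-elevated"
else:
    level_label = "elevated"

# Task-aware prompt: the instruction embeds the clinical task and the expected summary style.
training_example = {
    "messages": [
        {"role": "system",
         "content": "You are a diabetes-care assistant. Summarize raw CGM data "
                    "concisely and in clinically meaningful language."},
        {"role": "user",
         "content": f"CGM readings (mg/dL, 5-minute interval): {readings}. "
                    "Provide a short summary of glycemic status."},
        {"role": "assistant",
         "content": f"Mean glucose was {mean_glucose:.0f} mg/dL ({level_label}), "
                    f"with {time_in_range:.0f}% of readings in the 70-180 mg/dL target range."},
    ]
}

# One JSON line per example, the usual layout of a chat-model fine-tuning file.
print(json.dumps(training_example))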