Title: A novel fine-tuning and evaluation methodology for large language models on IoT raw data summaries (LLM-RawDMeth): A joint perspective in diabetes care

Authors: Gaitán-Guerrero, Juan F.; Martínez-Cruz, Carmen; Espinilla, Macarena; Díaz-Jiménez, David; López, Jose L.

Keywords: Diabetes management; Prompt engineering; Continuous glucose monitoring

Funding: This work has been partially supported by grant PID2021-127275OB-I00 and grant PID2021-126363NB-I00 funded by MICIU/AEI/10.13039/501100011033, Spain, and by "ERDF A way of making Europe", and by grant PDC2023-145863-I00 funded by MICIU/AEI/10.13039/501100011033, Spain, and by the "European Union NextGenerationEU/PRTR". Funding for open access charge: Universidad de Jaén/CBUA.

Abstract: This study addresses the challenge of interpreting complex continuous glucose monitoring data in diabetes management by proposing a domain-guided fine-tuning methodology for Large Language Models. Using expert-modeled fuzzy logic datasets and task-aware prompt engineering, the approach enables LLMs to generate accurate, concise, and clinically meaningful summaries from raw glucose data. Experimental results show that fine-tuned GPT-4o achieves superior performance, demonstrating the potential of expert-aligned language models to support medical decision-making and reduce the burden on healthcare systems.

Date accessioned: 2026-01-21T11:05:22Z
Date available: 2026-01-21T11:05:22Z
Date issued: 2025
Type: journal article

Citation: Gaitán-Guerrero JF, Martínez-Cruz C, Espinilla M, Díaz-Jiménez D, López JL. A novel fine-tuning and evaluation methodology for large language models on IoT raw data summaries (LLM-RawDMeth): A joint perspective in diabetes care. Comput Methods Programs Biomed. 2025 Sep;269:108878. doi: 10.1016/j.cmpb.2025.108878

ISSN: 0169-2607
URI: https://hdl.handle.net/10481/110032
DOI: 10.1016/j.cmpb.2025.108878
Language: English
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Access: open access
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Publisher: Elsevier