<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>DLSI - Artículos</title>
<link href="https://hdl.handle.net/10481/15208" rel="alternate"/>
<subtitle/>
<id>https://hdl.handle.net/10481/15208</id>
<updated>2026-04-13T21:00:53Z</updated>
<dc:date>2026-04-13T21:00:53Z</dc:date>
<entry>
<title>Fractal characterization of restored paintings</title>
<link href="https://hdl.handle.net/10481/112781" rel="alternate"/>
<author>
<name>Ruiz de Miras, Juan</name>
</author>
<author>
<name>López-Montes, Ana</name>
</author>
<author>
<name>Vílchez Quero, José Luis</name>
</author>
<author>
<name>Blanc García, María Rosario</name>
</author>
<author>
<name>Martín Perandrés, Domingo</name>
</author>
<id>https://hdl.handle.net/10481/112781</id>
<updated>2026-04-13T08:42:08Z</updated>
<summary type="text">Fractal characterization of restored paintings
Ruiz de Miras, Juan; López-Montes, Ana; Vílchez Quero, José Luis; Blanc García, María Rosario; Martín Perandrés, Domingo
The fractal dimension (FD) is a quantitative measure of complexity that has been effectively used over the past two decades to analyze paintings for several purposes, including forgery detection, artist classification, characterization of pictorial genres, and analysis of historical periods. However, the potential of FD to characterize the variations that may occur during restoration processes, such as consolidation, cleaning, and reintegration, remains largely unexplored. In this study, we present a novel methodology that combines FD computation on color images with a sliding window approach to generate detailed FD maps of paintings before and after restoration. We applied this methodology to a dataset of twenty-four restored paintings, which includes three types of alterations: craquelure, paint losses, and aged varnishes. Statistical comparisons of FD distributions before and after restoration were conducted using the Wilcoxon rank-sum test and Levene’s test. Our results show a consistent decrease in FD after restoration in paintings affected by craquelure or paint losses, and an increase in FD in aged-varnish paintings following restoration. Additionally, most paintings exhibited increased variance in FD after restoration, regardless of the type of damage. The difference FD maps, obtained by subtracting the post-restoration FD map from the pre-restoration one, revealed the specific areas where restoration had the greatest impact. These findings suggest that the proposed FD-based methodology offers a valuable, image-based tool for restorers, serving as a complementary resource to traditional restoration techniques for assessing the extent of alterations and monitoring applied treatments.
Funding for open access publishing: Universidad de Granada/CBUA. This research was partially funded by the Spanish Ministry of Science, Innovation and Universities MICIU/AEI/10.13039/501100011033 and FEDER EU (grant number PID2024-161348OB-I00).
</summary>
</entry>
<entry>
<title>Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges</title>
<link href="https://hdl.handle.net/10481/112654" rel="alternate"/>
<author>
<name>Kharitonova, Ksenia</name>
</author>
<author>
<name>Pérez Fernández, David</name>
</author>
<author>
<name>Callejas Carrión, Zoraida</name>
</author>
<author>
<name>Griol Barres, David</name>
</author>
<id>https://hdl.handle.net/10481/112654</id>
<updated>2026-04-07T11:11:11Z</updated>
<summary type="text">Automating the Initial Development of Intent-Based Task-Oriented Dialog Systems Using Large Language Models: Experiences and Challenges
Kharitonova, Ksenia; Pérez Fernández, David; Callejas Carrión, Zoraida; Griol Barres, David
Building reliable intent-based, task-oriented dialog systems typically requires substantial manual effort: designers must derive intents, entities, responses, and control logic from raw conversational data, then iterate until the assistant behaves consistently. This paper investigates how far large language models (LLMs) can automate this development. We use two reference corpora, Let’s Go (English, public transport) and MEDIA (French, hotel booking), to prompt four LLM families (GPT-4o, Claude, Gemini, Mistral Small) and generate the core specifications required by the Rasa platform. These include intent sets with example utterances, entity definitions with slot mappings, response templates, and basic dialog flows. To structure this process, we introduce a model- and platform-agnostic pipeline with two phases. The first normalizes and validates LLM-generated artifacts, enforcing cross-file consistency and making slot usage explicit. The second uses a lightweight dialog harness that runs scripted tests and incrementally patches failure points until conversations complete reliably. Across eight projects, all models required some targeted repairs before training. After applying our pipeline, all reached ≥70% task completion (many above 84%), while NLU performance ranged from mid-0.6 to 1.0 macro-F1 depending on domain breadth. These results show that, with modest guidance, current LLMs can produce workable end-to-end dialog prototypes directly from raw transcripts. Our main contributions are: (i) a reusable bootstrap method aligned with industry domain-specific languages (DSLs), (ii) a small set of high-impact corrective patterns, and (iii) a simple but effective harness for closed-loop refinement across conversational platforms.
</summary>
</entry>
<entry>
<title>Developing and Evaluating With Usability and Accessibility in Mind: A Case Study on Cultural Heritage Information Systems</title>
<link href="https://hdl.handle.net/10481/112474" rel="alternate"/>
<author>
<name>Almeraj, Zainab</name>
</author>
<author>
<name>López Escudero, Luis</name>
</author>
<author>
<name>Torres Cantero, Juan Carlos</name>
</author>
<id>https://hdl.handle.net/10481/112474</id>
<updated>2026-03-25T12:32:32Z</updated>
<summary type="text">Developing and Evaluating With Usability and Accessibility in Mind: A Case Study on Cultural Heritage Information Systems
Almeraj, Zainab; López Escudero, Luis; Torres Cantero, Juan Carlos
Over the last decade, interest in creating Cultural Heritage Information Systems (CHISystems) to document conservation and preservation efforts has grown globally. However, due to their interactive, multilingual, and distributed nature, their users face various usability challenges, especially in systems developed or supervised in-house by software designers and developers who are not UX/UI experts. Ongoing efforts to promote the adoption of usability and digital accessibility best practices are slowly making a difference, but more solutions are needed. This work aims to make international standards for basic usability and accessibility reachable and easy to recognize for researchers. The first of two contributions is a simple heuristic evaluation framework, UsA11y, to guide the adoption of digital accessibility and usability principles from the early stages of information system design, development, and evaluation. To the best of the authors’ knowledge, there is no simple and concise assessment targeting designers and developers with limited UX/UI knowledge, especially in niche fields such as cultural heritage. The second contribution is a usability test and a heuristic evaluation (with UsA11y) of an existing cultural heritage system, conducted with users in the field to gain insight into their experiences and needs. This work also offers design implications and insights into effectively adopting usability and accessibility for researchers, designers, and developers, in light of universal design concepts, to ensure reliable and sustainable systems.
</summary>
</entry>
<entry>
<title>EDRS: Extremity-density representative selection for semi-supervised learning on imbalanced data</title>
<link href="https://hdl.handle.net/10481/112360" rel="alternate"/>
<author>
<name>Durán López, Alberto</name>
</author>
<author>
<name>Bolaños Martinez, Daniel</name>
</author>
<author>
<name>Bermúdez Edo, María del Campo</name>
</author>
<id>https://hdl.handle.net/10481/112360</id>
<updated>2026-04-06T10:25:59Z</updated>
<summary type="text">EDRS: Extremity-density representative selection for semi-supervised learning on imbalanced data
Durán López, Alberto; Bolaños Martinez, Daniel; Bermúdez Edo, María del Campo
Representative sample selection improves training in semi-supervised learning (SSL), where labeled data are limited and must reflect the original dataset. Recent SSL methods ignore class imbalance and lack tabular data case studies. To fill this gap, we propose Extremity-Density Representative Selection (EDRS), a preprocessing point selection method for imbalanced tabular datasets. EDRS ranks unlabeled candidates by combining two scores: density, which favors regions with many individuals, and extremity, which ensures inclusion of extreme cases likely belonging to minority classes. We first cluster the data to ensure diverse and representative coverage of the space, and then select samples with the highest density and extremity values, balancing outlier avoidance with coverage of extreme values. EDRS is used to select samples for labeling in an SSL framework and is compared with Random Sampling, Stratified Sampling, K-Means–derived methods, USL, Hybrid-CEAL, FDMat, Gaussian Mapping, and ESC-FFS. We validate EDRS on twelve synthetic and six real-world imbalanced datasets using the SSL methods VIME, Manifold Mixup, and Contrastive Mixup. EDRS achieves a class imbalance ratio (IR) close to 1, is 99% faster than other algorithms with similar IR, and improves F1-score by 3–5% in well-separated classes. We also include an ablation study evaluating the impact of the density and extremity scores.
This work was supported by Grant C-SEJ-128-UGR23, funded by Consejería de Universidad, Investigación e Innovación and by ERDF Andalusia Program 2021-2027; project PID2023-149185OBI00 funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU. Funding for open access charge: Universidad de Granada/CBUA.
</summary>
</entry>
<entry>
<title>GCT: A Granger-Causal Transformer for Multivariate Traffic Analysis in Smart Villages</title>
<link href="https://hdl.handle.net/10481/112331" rel="alternate"/>
<author>
<name>Durán López, Alberto</name>
</author>
<author>
<name>Bolaños Martinez, Daniel</name>
</author>
<author>
<name>De, Suparna</name>
</author>
<author>
<name>Bermúdez Edo, María del Campo</name>
</author>
<id>https://hdl.handle.net/10481/112331</id>
<updated>2026-03-24T09:48:22Z</updated>
<summary type="text">GCT: A Granger-Causal Transformer for Multivariate Traffic Analysis in Smart Villages
Durán López, Alberto; Bolaños Martinez, Daniel; De, Suparna; Bermúdez Edo, María del Campo
Predicting vehicle traffic optimizes transportation management and urban planning. In this paper, we combine real-time data from vehicle-detection Internet of Things (IoT) devices with external variables from Google Trends. Integrating such heterogeneous, complex data streams is challenging for traditional machine learning models that struggle to capture the dynamics of traffic patterns, which are influenced by multiple interdependent factors. To effectively model these complex, interdependent factors, we introduce the Granger-Causal Transformer (GCT), a transformer-based architecture for traffic prediction that integrates an LSTM network with a modified multi-head attention mechanism. This mechanism extends Granger causality to the spatio-temporal domain to analyze all causality relations between features consistently, while capturing long-range dependencies and temporal patterns. Before applying GCT, we generate lagged versions of the Google Trends time series to capture lead and lag effects. Tourists usually make searches about their destination weeks before traveling, so peaks in search interest occur earlier than peaks in weekly traffic volume. Using lags aligns the predictors with weekly traffic volume and allows the model to use past searches to predict future traffic. We semantically validate the Google Trends terms by comparing each term with a reference string describing the study area, using a language model aligned with the data’s linguistic context. We then apply a dual filtering process comprising Granger noncausality and correlation tests to minimize noise and redundancy. We evaluate our proposed methodology against classical statistical models, deep learning models, large foundation models, and transformers across two case studies. The results demonstrate consistently superior performance and generalizability, with GCT achieving R^2 improvements between 47% and 68% compared to the best-performing baselines across both settings, alongside substantial reductions in MAE and MSE.
</summary>
</entry>
</feed>
