<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Group: Soft Computing and Intelligent Information Systems (SCI2S)</title>
<link href="https://hdl.handle.net/10481/31276" rel="alternate"/>
<subtitle/>
<id>https://hdl.handle.net/10481/31276</id>
<updated>2026-04-06T11:06:17Z</updated>
<dc:date>2026-04-06T11:06:17Z</dc:date>
<entry>
<title>Multigranulation Supertrust Model for Attribute Reduction</title>
<link href="https://hdl.handle.net/10481/110290" rel="alternate"/>
<author>
<name>Ding, Weiping</name>
</author>
<author>
<name>Pedrycz, Witold</name>
</author>
<author>
<name>Triguero, Isaac</name>
</author>
<author>
<name>Cao, Zehong</name>
</author>
<author>
<name>Lin, Chin-Teng</name>
</author>
<id>https://hdl.handle.net/10481/110290</id>
<updated>2026-01-26T13:42:24Z</updated>
<summary type="text">Multigranulation Supertrust Model for Attribute Reduction
Ding, Weiping; Pedrycz, Witold; Triguero, Isaac; Cao, Zehong; Lin, Chin-Teng
As big data often contains a significant amount of uncertain, unstructured, and imprecise data that are structurally complex and incomplete, traditional attribute reduction methods are less effective when applied to large-scale incomplete information systems to extract knowledge. Multigranular computing provides a powerful tool for use in big data analysis conducted at different levels of information granularity. In this article, we present a novel multigranulation supertrust fuzzy-rough set-based attribute reduction (MSFAR) algorithm to support the formation of hierarchies of information granules of higher types and higher orders, which addresses newly emerging data mining problems in big data analysis. First, a multigranulation supertrust model based on the valued tolerance relation is constructed to identify the fuzzy similarity of the changing knowledge granularity with multimodality attributes. Second, an ensemble consensus compensatory scheme is adopted to calculate the multigranular trust degree based on the reputation at different granularities to create reasonable subproblems with different granulation levels. Third, an equilibrium method of multigranular coevolution is employed to ensure a wide-ranging balance of exploration and exploitation; this strategy can classify super elitists' preferences and detect noncooperative behaviors with a global convergence ability and high search accuracy. The experimental results demonstrate that the MSFAR algorithm achieves a high performance in addressing uncertain and fuzzy attribute reduction problems with a large number of multigranularity variables.
</summary>
</entry>
<entry>
<title>Multigranulation supertrust model for attribute reduction</title>
<link href="https://hdl.handle.net/10481/110289" rel="alternate"/>
<author>
<name>Ding, Weiping</name>
</author>
<author>
<name>Pedrycz, Witold</name>
</author>
<author>
<name>Triguero, Isaac</name>
</author>
<author>
<name>Cao, Zehong</name>
</author>
<author>
<name>Lin, Chin-Teng</name>
</author>
<id>https://hdl.handle.net/10481/110289</id>
<updated>2026-01-26T13:41:16Z</updated>
<summary type="text">Multigranulation supertrust model for attribute reduction
Ding, Weiping; Pedrycz, Witold; Triguero, Isaac; Cao, Zehong; Lin, Chin-Teng
As big data often contains a significant amount of uncertain, unstructured, and imprecise data that are structurally complex and incomplete, traditional attribute reduction methods are less effective when applied to large-scale incomplete information systems to extract knowledge. Multigranular computing provides a powerful tool for use in big data analysis conducted at different levels of information granularity. In this article, we present a novel multigranulation supertrust fuzzy-rough set-based attribute reduction (MSFAR) algorithm to support the formation of hierarchies of information granules of higher types and higher orders, which addresses newly emerging data mining problems in big data analysis. First, a multigranulation supertrust model based on the valued tolerance relation is constructed to identify the fuzzy similarity of the changing knowledge granularity with multimodality attributes. Second, an ensemble consensus compensatory scheme was adopted to calculate the multigranular trust degree based on the reputation at different granularities to create reasonable subproblems with different granulation levels. Third, an equilibrium method of multigranular coevolution is employed to ensure a wide range of balancing of exploration and exploitation, and this strategy can classify super elitists' preferences and detect noncooperative behaviors with a global convergence ability and high search accuracy. The experimental results demonstrate that the MSFAR algorithm achieves a high performance in addressing uncertain and fuzzy attribute reduction problems with a large number of multigranularity variables.
This work was supported in part by the National Natural Science Foundation of China under Grant 61300167 and Grant 61976120, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20151274 and Grant BK20191445, in part by the Six Talent Peaks Project of Jiangsu Province under Grant XYDXXJS-048, in part by the Jiangsu Provincial Government Scholarship Program under Grant JS-2016-065, and sponsored by the Qing Lan Project of Jiangsu Province.
</summary>
</entry>
<entry>
<title>EUSC: A clustering-based surrogate model to accelerate evolutionary undersampling in imbalanced classification</title>
<link href="https://hdl.handle.net/10481/109592" rel="alternate"/>
<author>
<name>Lam Le, Hoang</name>
</author>
<author>
<name>Landa-Silva, Dario</name>
</author>
<author>
<name>Galar, Mikel</name>
</author>
<author>
<name>García López, Salvador</name>
</author>
<author>
<name>Triguero, Isaac</name>
</author>
<id>https://hdl.handle.net/10481/109592</id>
<updated>2026-01-13T08:54:22Z</updated>
<summary type="text">EUSC: A clustering-based surrogate model to accelerate evolutionary undersampling in imbalanced classification
Lam Le, Hoang; Landa-Silva, Dario; Galar, Mikel; García López, Salvador; Triguero, Isaac
Learning from imbalanced datasets is highly demanded in real-world applications and a challenge for standard classifiers, which tend to be biased towards the classes with the majority of the examples. Undersampling approaches reduce the size of the majority class to balance the class distributions. Evolutionary-based approaches are prominent, treating undersampling as a binary optimisation problem that determines which examples are removed. However, their use is limited to small datasets due to the cost of fitness evaluations. This work proposes a two-stage clustering-based surrogate model that enables evolutionary undersampling to compute fitness values faster. The main novelty lies in the development of a surrogate model for binary optimisation that is based on the solutions' meaning (phenotype) rather than their binary representation (genotype). We conduct an evaluation on 44 imbalanced datasets, showing that, in comparison with the original evolutionary undersampling, we can save up to 83% of the runtime without significantly deteriorating the classification performance.
</summary>
</entry>
<entry>
<title>Multi-Head CNN-RNN for Multi-Time Series Anomaly Detection: An industrial case study</title>
<link href="https://hdl.handle.net/10481/109577" rel="alternate"/>
<author>
<name>Canizo, Mikel</name>
</author>
<author>
<name>Triguero Velázquez, Isaac</name>
</author>
<author>
<name>Conde, Angel</name>
</author>
<author>
<name>Onieva, Enrique</name>
</author>
<id>https://hdl.handle.net/10481/109577</id>
<updated>2026-01-13T07:43:45Z</updated>
<summary type="text">Multi-Head CNN-RNN for Multi-Time Series Anomaly Detection: An industrial case study
Canizo, Mikel; Triguero Velázquez, Isaac; Conde, Angel; Onieva, Enrique
Detecting anomalies in time series data is becoming mainstream in a wide variety of industrial applications in which sensors monitor expensive machinery. The complexity of this task increases when multiple heterogeneous sensors provide information of different nature, scales and frequencies from the same machine. Traditionally, machine learning techniques require separate data pre-processing before training, which tends to be very time-consuming and often requires domain knowledge. Recent deep learning approaches have been shown to perform well on raw time series data, eliminating the need for pre-processing. In this work, we propose a deep learning-based approach for supervised multi-time series anomaly detection that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) in different ways. Unlike other approaches, we use independent CNNs, so-called convolutional heads, to deal with anomaly detection in multi-sensor systems. We address each sensor individually, avoiding the need for data pre-processing and allowing for a more tailored architecture for each type of sensor. We refer to this architecture as Multi-head CNN–RNN. The proposed architecture is assessed against a real industrial case study, provided by an industrial partner, where a service elevator is monitored. Within this case study, three types of anomalies are considered: point, context-specific, and collective. The experimental results show that the proposed architecture is suitable for multi-time series anomaly detection, as it obtained promising results on the real industrial scenario.
The authors wish to express their thanks to the Basque Government for their financial support of this research through the Elkartek program under the TEKINTZE project (Grant agreement No. KK-2018/00104). Any opinions, findings and conclusions expressed in this article are those of the authors and do not necessarily reflect the views of funding agencies.
</summary>
</entry>
<entry>
<title>Handling uncertainty in citizen science data: Towards an improved amateur-based large-scale classification</title>
<link href="https://hdl.handle.net/10481/109576" rel="alternate"/>
<author>
<name>Jiménez, Manuel</name>
</author>
<author>
<name>Triguero, Isaac</name>
</author>
<author>
<name>John, Robert</name>
</author>
<id>https://hdl.handle.net/10481/109576</id>
<updated>2026-01-13T07:41:13Z</updated>
<summary type="text">Handling uncertainty in citizen science data: Towards an improved amateur-based large-scale classification
Jiménez, Manuel; Triguero, Isaac; John, Robert
Citizen Science, traditionally known as the engagement of amateur participants in research, is showing great potential for large-scale processing of data. In areas such as astronomy, biology, or geo-sciences, where emerging technologies generate huge volumes of data, Citizen Science projects enable image classification at a rate not possible for experts alone to accomplish. However, this approach entails the spread of biases and uncertainty in the results, since the participants involved are typically non-experts in the problem and hold variable skills. Consequently, the research community tends not to trust Citizen Science outcomes, claiming a generalised lack of accuracy and validation.
We introduce a novel multi-stage approach to handle uncertainty within data labelled by amateurs in Citizen Science projects. Firstly, our method proposes a set of transformations that leverage the uncertainty in amateur classifications. Then, a hybridisation strategy provides the best aggregation of the transformed data for improving the quality and confidence in the results. As a case study, we consider the Galaxy Zoo, a project pursuing the labelling of galaxy images. A limited set of expert classifications allows us to validate the experiments, confirming that our approach is able to greatly boost accuracy and classify more images with respect to the state of the art.
The work of M. Jiménez was funded by a Ph.D. scholarship from the School of Computer Science of the University of Nottingham.
</summary>
</entry>
</feed>
