SEG-SSC: A Framework Based on Synthetic Examples Generation for Self-Labeled Semi-Supervised Classification
Metadata
Publisher
IEEE
Subject
Self-labeled methods; Co-training; Synthetic examples; Semi-supervised classification
Date
2014
Bibliographic reference
Publisher version: I. Triguero, S. García and F. Herrera, "SEG-SSC: A Framework Based on Synthetic Examples Generation for Self-Labeled Semi-Supervised Classification," IEEE Transactions on Cybernetics, vol. 45, no. 4, pp. 622-634, April 2015, doi: 10.1109/TCYB.2014.2332003
Sponsorship
TIN2011-28488; P10-TIC-6858; P11-TIC-7765
Abstract
Self-labeled techniques are semi-supervised classification methods that address the shortage of labeled examples via a self-learning process based on supervised models. They progressively classify unlabeled data and use them to modify the hypothesis learned from the labeled samples. The most relevant current proposals are inspired by boosting schemes that iteratively enlarge the labeled set. Despite their effectiveness, these methods are constrained by the number of labeled examples and their distribution, which in many cases is sparse and scattered. The aim of this paper is to design a framework, named synthetic examples generation for self-labeled semi-supervised classification (SEG-SSC), that improves the classification performance of any given self-labeled method by using synthetic labeled data. These are generated via an oversampling technique and a positioning adjustment model that use both labeled and unlabeled examples as reference. The synthetic examples are then incorporated into the main stages of the self-labeling process. The principal aspects of the proposed framework are: 1) introducing diversity into the multiple classifiers used by means of additional (new) labeled data; 2) filling out the labeled data distribution with the aid of unlabeled data; and 3) being applicable to any kind of self-labeled method. In our empirical studies, we apply this scheme to four recent self-labeled methods, testing their capabilities on a large number of data sets. We show that this framework significantly improves the classification capabilities of self-labeled techniques.
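The abstract describes the framework only at a high level. The following is a minimal, illustrative Python sketch of the general idea: enlarging a small labeled set with synthetic examples and feeding the result into a basic self-training loop. It uses SMOTE-style interpolation between same-class labeled points as a simplified stand-in for the paper's oversampling technique, and it omits the positioning adjustment model entirely. All function names and parameters here (generate_synthetic, self_training_with_synthetics, the confidence threshold, etc.) are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only: SMOTE-like synthetic example generation plus a
# basic self-training loop. NOT the SEG-SSC implementation from the paper;
# the positioning adjustment model is omitted.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def generate_synthetic(X_lab, y_lab, n_new, k=3, seed=None):
    """Interpolate each new point between a labeled example and one of
    its k nearest same-class neighbors (SMOTE-style oversampling)."""
    rng = np.random.default_rng(seed)
    X_syn, y_syn = [], []
    for _ in range(n_new):
        i = rng.integers(len(X_lab))
        same = np.where(y_lab == y_lab[i])[0]
        d = np.linalg.norm(X_lab[same] - X_lab[i], axis=1)
        nn = same[np.argsort(d)[1:k + 1]]  # skip the point itself
        if len(nn) == 0:
            continue  # class has a single labeled example
        j = rng.choice(nn)
        lam = rng.random()  # random position on the segment [x_i, x_j]
        X_syn.append(X_lab[i] + lam * (X_lab[j] - X_lab[i]))
        y_syn.append(y_lab[i])
    return (np.array(X_syn).reshape(-1, X_lab.shape[1]),
            np.array(y_syn, dtype=y_lab.dtype))

def self_training_with_synthetics(X_lab, y_lab, X_unlab,
                                  rounds=5, per_round=10, threshold=0.9):
    """Self-training whose initial labeled set is enlarged with synthetics."""
    X_syn, y_syn = generate_synthetic(X_lab, y_lab, n_new=len(X_lab), seed=0)
    X_tr = np.vstack([X_lab, X_syn])
    y_tr = np.concatenate([y_lab, y_syn])
    clf = KNeighborsClassifier(n_neighbors=3)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        clf.fit(X_tr, y_tr)
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        picked = np.argsort(conf)[::-1][:per_round]
        picked = picked[conf[picked] >= threshold]
        if len(picked) == 0:
            break  # no confident predictions left to self-label
        X_tr = np.vstack([X_tr, X_unlab[picked]])
        y_tr = np.concatenate([y_tr, clf.classes_[proba[picked].argmax(axis=1)]])
        X_unlab = np.delete(X_unlab, picked, axis=0)
    clf.fit(X_tr, y_tr)
    return clf

# Toy usage: two Gaussian classes, 3 labeled points per class.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
lab = [0, 1, 2, 30, 31, 32]
unlab = [i for i in range(60) if i not in lab]
clf = self_training_with_synthetics(X[lab], y[lab], X[unlab])
print(clf.predict(X[:5]))
```

The synthetic points serve two of the roles named in the abstract: they add diversity to the training set and help fill out the sparse labeled distribution before self-labeling begins. The paper's actual framework additionally adjusts the position of synthetic examples using the unlabeled data, which this sketch does not attempt.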