Show simple item record

dc.contributor.author	Bang, Jaehun
dc.contributor.author	Hur, Taeho
dc.contributor.author	Kim, Dohyeong
dc.contributor.author	Huynh-The, Thien
dc.contributor.author	Lee, Jongwon
dc.contributor.author	Han, Yongkoo
dc.contributor.author	Baños Legrán, Oresti
dc.contributor.author	Kim, Jee-In
dc.contributor.author	Lee, Sungyoung
dc.date.accessioned	2019-03-27T08:59:36Z
dc.date.available	2019-03-27T08:59:36Z
dc.date.issued	2018-11-02
dc.identifier.citation	Bang, J. [et al.]. Adaptive Data Boosting Technique for Robust Personalized Speech Emotion in Emotionally-Imbalanced Small-Sample Environments. Sensors 2018, 18, 3744.	es_ES
dc.identifier.issn	1660-4601
dc.identifier.uri	http://hdl.handle.net/10481/55222
dc.description.abstract	Personalized emotion recognition provides an individual training model for each target user in order to mitigate the accuracy problem when using general training models collected from multiple users. Existing personalized speech emotion recognition research has a cold-start problem that requires a large amount of emotionally-balanced data samples from the target user when creating the personalized training model. Such research is difficult to apply in real environments due to the difficulty of collecting numerous target user speech data with emotionally-balanced label samples. Therefore, we propose the Robust Personalized Emotion Recognition Framework with the Adaptive Data Boosting Algorithm to solve the cold-start problem. The proposed framework incrementally provides a customized training model for the target user by reinforcing the dataset by combining the acquired target user speech with speech from other users, followed by applying SMOTE (Synthetic Minority Over-sampling Technique)-based data augmentation. The proposed method proved to be adaptive across a small number of target user datasets and emotionally-imbalanced data environments through iterative experiments using the IEMOCAP (Interactive Emotional Dyadic Motion Capture) database.	es_ES
dc.description.sponsorship	This research was supported by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (No. 2017-0-00655). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2017-0-01629) supervised by the IITP (Institute for Information & Communications Technology Promotion). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW supervised by the IITP (Institute for Information & Communications Technology Promotion) (2017-0-00093).	es_ES
dc.language.iso	eng	es_ES
dc.publisher	MDPI	es_ES
dc.rights	Atribución 3.0 España	*
dc.rights.uri	http://creativecommons.org/licenses/by/3.0/es/	*
dc.subject	Speech emotion recognition	es_ES
dc.subject	Personalization	es_ES
dc.subject	Machine learning	es_ES
dc.subject	Data selection	es_ES
dc.subject	Data augmentation	es_ES
dc.title	Adaptive Data Boosting Technique for Robust Personalized Speech Emotion in Emotionally-Imbalanced Small-Sample Environments	es_ES
dc.type	info:eu-repo/semantics/article	es_ES
dc.rights.accessRights	info:eu-repo/semantics/openAccess	es_ES
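The abstract describes reinforcing the target user's dataset with SMOTE (Synthetic Minority Over-sampling Technique)-based augmentation. A minimal sketch of SMOTE-style oversampling, assuming NumPy; the function name and parameters are illustrative, not the authors' implementation:

```python
import numpy as np

def smote(minority, n_new, k=5, seed=0):
    """Generate n_new synthetic minority-class samples by interpolating
    between each sample and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    m = len(minority)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each sample itself
    nn = np.argsort(d, axis=1)[:, :k]    # indices of k nearest neighbors
    out = []
    for _ in range(n_new):
        i = rng.integers(m)                       # pick a minority sample
        j = nn[i, rng.integers(min(k, m - 1))]    # pick one of its neighbors
        gap = rng.random()                        # random interpolation factor
        out.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(out)
```

Because each synthetic sample lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the original data, which is what makes SMOTE less prone to overfitting than plain duplication.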


Files in this item

[PDF]

This item appears in the following collection(s)


Atribución 3.0 España
Except where otherwise noted, this item's license is described as Atribución 3.0 España