Show simple item record

dc.contributor.author: González López, José Andrés
dc.contributor.author: González Atienza, Míriam
dc.contributor.author: Gómez Alanís, Alejandro
dc.contributor.author: Pérez-Córdoba, Alejandro
dc.contributor.author: Green, Phil D.
dc.date.accessioned: 2021-02-18T09:08:08Z
dc.date.available: 2021-02-18T09:08:08Z
dc.date.issued: 2021-01-28
dc.identifier.uri: http://hdl.handle.net/10481/66644
dc.description.abstract: Articulatory-to-acoustic (A2A) synthesis refers to the generation of audible speech from captured movement of the speech articulators. This technique has numerous applications, such as restoring oral communication to people who can no longer speak due to illness or injury. Most successful techniques so far adopt a supervised learning framework, in which time-synchronous articulatory-and-speech recordings are used to train a supervised machine learning algorithm that can be used later to map articulator movements to speech. This, however, prevents the application of A2A techniques in cases where parallel data is unavailable, e.g., a person has already lost her/his voice and only articulatory data can be captured. In this work, we propose a solution to this problem based on the theory of multi-view learning. The proposed algorithm attempts to find an optimal temporal alignment between pairs of non-aligned articulatory-and-acoustic sequences with the same phonetic content by projecting them into a common latent space where both views are maximally correlated and then applying dynamic time warping. Several variants of this idea are discussed and explored. We show that the quality of speech generated in the non-aligned scenario is comparable to that obtained in the parallel scenario.
dc.description.sponsorship: This work was funded by the Spanish State Research Agency (SRA) under the grant PID2019-108040RBC22/SRA/10.13039/501100011033. Jose A. Gonzalez-Lopez holds a Juan de la Cierva-Incorporation Fellowship from the Spanish Ministry of Science, Innovation and Universities (IJCI-2017-32926).
dc.language.iso: eng
dc.publisher: ISCA
dc.rights: Atribución-NoComercial 3.0 España
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/es/
dc.title: Multi-view Temporal Alignment for Non-parallel Articulatory-to-Acoustic Speech Synthesis
dc.type: conference output
dc.rights.accessRights: open access
dc.type.hasVersion: SMUR
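The core idea described in the abstract (project both feature streams into a shared latent space where they are maximally correlated, then align them with dynamic time warping) can be sketched with linear CCA and a basic DTW. This is a minimal illustrative sketch, not the paper's actual algorithm: the function names (`fit_cca`, `dtw_align`) and the toy data are invented here, and the paper explores several deeper variants of the multi-view alignment.

```python
import numpy as np

def _inv_sqrt(C):
    # Symmetric inverse square root via eigendecomposition.
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def fit_cca(X, Y, k=2, reg=1e-3):
    """Linear CCA: return projections (Wx, Wy) such that X @ Wx and
    Y @ Wy are maximally correlated. X, Y are (n_samples, dim) views."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Sx, Sy = _inv_sqrt(Cxx), _inv_sqrt(Cyy)
    U, _, Vt = np.linalg.svd(Sx @ Cxy @ Sy)
    return Sx @ U[:, :k], Sy @ Vt[:k].T

def dtw_align(A, B):
    """Dynamic time warping: monotone alignment path between frame
    sequences A (n, k) and B (m, k), using Euclidean frame distance."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end; the inf border cells are never selected.
    path, (i, j) = [], (n, m)
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda s: D[s])
    return path[::-1]

# Toy demo: one latent trajectory observed through two random linear maps,
# stand-ins for the articulatory and acoustic feature streams.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 120)
latent = np.stack([np.sin(t), np.cos(t)], axis=1)
X = latent @ rng.normal(size=(2, 6)) + 0.05 * rng.normal(size=(120, 6))
Y = latent @ rng.normal(size=(2, 5)) + 0.05 * rng.normal(size=(120, 5))
Wx, Wy = fit_cca(X, Y, k=2)
# Align a full-rate sequence against a half-rate one in the shared space.
path = dtw_align(X @ Wx, Y[::2] @ Wy)
```

In the non-parallel setting the paper targets, the paired data needed to fit the projections is not available up front, which is why the actual method has to estimate the alignment and the correlated subspace jointly rather than in the two independent steps shown here.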


Files in this item

[PDF]

This item appears in the following collection(s)


Atribución-NoComercial 3.0 España
Except where otherwise noted, the item's license is described as Atribución-NoComercial 3.0 España