Non-Parallel Articulatory-to-Acoustic Conversion Using Multiview-based Time Warping
Metadata
Author
González López, José Andrés; Gómez Alanís, Alejandro; Pérez Córdoba, José Luis; Green, Phil D.
Publisher
MDPI
Subject
Deep learning; Multiview learning; Dynamic time warping; Canonical correlation analysis; Silent speech interface; Latent embedding
Date
2022-01-23
Bibliographic reference
Gonzalez-Lopez, J.A.; Gomez-Alanis, A.; Pérez-Córdoba, J.L.; Green, P.D. Non-Parallel Articulatory-to-Acoustic Conversion Using Multiview-Based Time Warping. Appl. Sci. 2022, 12, 1167. https://doi.org/10.3390/app12031167
Sponsor
Spanish State Research Agency (SRA) PID2019-108040RB-C22/SRA/10.13039/501100011033; FEDER/Junta de Andalucía, Consejería de Transformación Económica, Industria, Conocimiento y Universidades, project no. B-SEJ-570-UGR20.
Abstract
In this paper, we propose a novel algorithm called multiview temporal alignment by dependence maximisation in the latent space (TRANSIENCE) for the alignment of time series consisting of sequences of feature vectors that differ in both length and dimensionality. The proposed algorithm, which is based on the theory of multiview learning, can be seen as an extension of the well-known dynamic time warping (DTW) algorithm, but, unlike DTW, it allows the sequences to have different dimensionalities. Our algorithm attempts to find an optimal temporal alignment between pairs of nonaligned sequences by first projecting their feature vectors into a common latent space where both views are maximally similar. To do this, powerful, nonlinear deep neural network (DNN) models are employed. Then, the resulting sequences of embedding vectors are aligned using DTW. Finally, the alignment paths obtained in the previous step are applied to the original sequences to align them. In the paper, we explore several variants of the algorithm that mainly differ in the way the DNNs are trained. We evaluated the proposed algorithm on an articulatory-to-acoustic (A2A) synthesis task involving the generation of audible speech from motion data captured from the lips and tongue of healthy speakers using a technique known as permanent magnet articulography (PMA). In this task, our algorithm is applied during the training stage to align pairs of nonaligned speech and PMA recordings, which are later used to train DNNs able to synthesise speech from PMA data. Our results show that the quality of the speech generated in the nonaligned scenario is comparable to that obtained in the parallel scenario.
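The three-step procedure described in the abstract (encode each view into a common latent space, run DTW on the embedding sequences, then apply the warping path to the original sequences) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the encoder architectures, the 16-dimensional latent space, the Euclidean frame cost, and the names `dtw_path`, `align_sequences`, `enc_pma`, and `enc_speech` are all hypothetical, and the encoders would in practice be trained (e.g., with a correlation- or dependence-based objective) rather than randomly initialised.

```python
# Minimal sketch of multiview-based time warping: two sequences with
# different feature dimensionalities are aligned via a shared latent space.
import numpy as np
import torch
import torch.nn as nn


def dtw_path(cost: np.ndarray):
    """Classic DTW on a precomputed frame-wise cost matrix.

    Returns the optimal alignment path as a list of (i, j) index pairs.
    """
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1]
            )
    # Backtrack from (n, m) to (1, 1), always taking the cheapest predecessor.
    path, i, j = [], n, m
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda ij: acc[ij])
    path.append((0, 0))
    return path[::-1]


def align_sequences(x, y, enc_x: nn.Module, enc_y: nn.Module):
    """Align two sequences whose feature vectors have different dimensions.

    1. Project each view into the common latent space with its encoder.
    2. Run DTW on pairwise distances between the latent embeddings.
    3. Apply the resulting warping path to the *original* feature vectors.
    """
    with torch.no_grad():
        zx = enc_x(torch.from_numpy(x)).numpy()  # (Tx, d_latent)
        zy = enc_y(torch.from_numpy(y)).numpy()  # (Ty, d_latent)
    # Euclidean cost between every pair of latent frames.
    cost = np.linalg.norm(zx[:, None, :] - zy[None, :, :], axis=-1)
    path = dtw_path(cost)
    idx_x, idx_y = zip(*path)
    return x[list(idx_x)], y[list(idx_y)]  # time-aligned original sequences


if __name__ == "__main__":
    # Toy example: a 100-frame, 9-dim "PMA" sequence and an 80-frame,
    # 25-dim "acoustic" sequence (made-up sizes, random data).
    pma = np.random.randn(100, 9).astype(np.float32)
    speech = np.random.randn(80, 25).astype(np.float32)
    enc_pma = nn.Sequential(nn.Linear(9, 32), nn.Tanh(), nn.Linear(32, 16))
    enc_speech = nn.Sequential(nn.Linear(25, 32), nn.Tanh(), nn.Linear(32, 16))
    pma_al, speech_al = align_sequences(pma, speech, enc_pma, enc_speech)
    print(pma_al.shape, speech_al.shape)  # same number of frames in both
```

Note that the aligned sequences keep their original dimensionalities (9 and 25 here); only the time axes are warped to a common length, which is what makes the aligned pairs usable for training a frame-to-frame A2A conversion model.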