
Noé Tits

TIPAA-SSL: Text Independent Phone-to-Audio Alignment based on Self-Supervised Learning and Knowledge Transfer

May 03, 2024

MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning

Oct 17, 2023

Flowchase: a Mobile Application for Pronunciation Training

Jul 05, 2023 (3 figures)

Where Is My Mind? Predicting Visual Attention from Brain Activity

Jan 11, 2022 (4 figures)

Analysis and Assessment of Controllability of an Expressive Deep Learning-based TTS system

Mar 06, 2021 (4 figures)

Modulated Fusion using Transformer for Linguistic-Acoustic Emotion Recognition

Oct 05, 2020 (4 figures)

ICE-Talk: an Interface for a Controllable Expressive Talking Machine

Aug 25, 2020 (2 figures)

Laughter Synthesis: Combining Seq2seq modeling with Transfer Learning

Aug 20, 2020 (4 figures)

A Transformer-based joint-encoding for Emotion Recognition and Sentiment Analysis

Jun 29, 2020 (4 figures)

The Theory behind Controllable Expressive Speech Synthesis: a Cross-disciplinary Approach

Oct 14, 2019 (4 figures)