
Tianrui Wang

X-VC: Zero-shot Streaming Voice Conversion in Codec Space

Apr 14, 2026

MSR-HuBERT: Self-supervised Pre-training for Adaptation to Multiple Sampling Rates

Mar 24, 2026

AudioRAG: A Challenging Benchmark for Audio Reasoning and Information Retrieval

Feb 11, 2026

EmoShift: Lightweight Activation Steering for Enhanced Emotion-Aware Speech Synthesis

Jan 30, 2026

Towards Fine-Grained and Multi-Granular Contrastive Language-Speech Pre-training

Jan 06, 2026

POTSA: A Cross-Lingual Speech Alignment Framework for Low Resource Speech-to-Text Translation

Nov 12, 2025

ASDA: Audio Spectrogram Differential Attention Mechanism for Self-Supervised Representation Learning

Jul 03, 2025

MMAR: A Challenging Benchmark for Deep Reasoning in Speech, Audio, Music, and Their Mix

May 19, 2025

EmoVoice: LLM-based Emotional Text-To-Speech Model with Freestyle Text Prompting

Apr 22, 2025

Characteristic-Specific Partial Fine-Tuning for Efficient Emotion and Speaker Adaptation in Codec Language Text-to-Speech Models

Jan 24, 2025