Shuo Ren

TESSP: Text-Enhanced Self-Supervised Speech Pre-training
Nov 24, 2022

RAPO: An Adaptive Ranking Paradigm for Bilingual Lexicon Induction
Oct 18, 2022

SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data
Sep 30, 2022

Speech Pre-training with Acoustic Piece
Apr 07, 2022

KESA: A Knowledge Enhanced Approach For Sentiment Analysis
Feb 24, 2022

WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
Oct 29, 2021

Optimizing Alignment of Speech and Language Latent Spaces for End-to-End Speech Recognition and Understanding
Oct 23, 2021

SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing
Oct 14, 2021

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Feb 09, 2021

GraphCodeBERT: Pre-training Code Representations with Data Flow
Sep 29, 2020