
Junlan Feng

China Mobile Research Institute, Beijing, China

MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing

Aug 21, 2024

Exploring Energy-Based Models for Out-of-Distribution Detection in Dialect Identification

Jun 26, 2024

On Calibration of Speech Classification Models: Insights from Energy-Based Model Investigations

Jun 26, 2024

Large Language Models Are Cross-Lingual Knowledge-Free Reasoners

Jun 24, 2024

CEC: A Noisy Label Detection Method for Speaker Recognition

Jun 19, 2024

PolySpeech: Exploring Unified Multitask Speech Models for Competitiveness with Single-task Models

Jun 12, 2024

EMERGE: Integrating RAG for Improved Multimodal EHR Predictive Modeling

May 27, 2024

Large Language Models are Good Spontaneous Multilingual Learners: Is the Multilingual Annotated Data Necessary?

May 22, 2024

The 2nd FutureDial Challenge: Dialog Systems with Retrieval Augmented Generation (FutureDial-RAG)

May 21, 2024

InjectTST: A Transformer Method of Injecting Global Information into Independent Channels for Long Time Series Forecasting

Mar 05, 2024