Chao Deng

China Mobile Research Institute, Beijing, China

FatesGS: Fast and Accurate Sparse-View Surface Reconstruction using Gaussian Splatting with Depth-Feature Consistency

Jan 08, 2025

Sparis: Neural Implicit Surface Reconstruction of Indoor Scenes from Sparse Views

Jan 02, 2025

MacLight: Multi-scene Aggregation Convolutional Learning for Traffic Signal Control

Dec 24, 2024

LongDocURL: a Comprehensive Multimodal Long Document Benchmark Integrating Understanding, Reasoning, and Locating

Dec 24, 2024

Uni-AdaFocus: Spatial-temporal Dynamic Computation for Video Recognition

Dec 15, 2024

MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing

Aug 21, 2024

Large Language Models Are Cross-Lingual Knowledge-Free Reasoners

Jun 24, 2024

PolySpeech: Exploring Unified Multitask Speech Models for Competitiveness with Single-task Models

Jun 12, 2024

Large Language Models are Good Spontaneous Multilingual Learners: Is the Multilingual Annotated Data Necessary?

May 22, 2024

InjectTST: A Transformer Method of Injecting Global Information into Independent Channels for Long Time Series Forecasting

Mar 05, 2024