Guanghui Xu

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
Nov 05, 2024

Boost Test-Time Performance with Closed-Loop Inference
Mar 26, 2022

AdaXpert: Adapting Neural Architecture for Growing Data
Jul 01, 2021

Towards Accurate Text-based Image Captioning with Content Diversity Exploration
Apr 23, 2021

How to Train Your Agent to Read and Write
Jan 04, 2021

Improving Prosody Modelling with Cross-Utterance BERT Embeddings for End-to-end Speech Synthesis
Nov 06, 2020

REFUGE Challenge: A Unified Framework for Evaluating Automated Methods for Glaucoma Assessment from Fundus Photographs
Oct 08, 2019

Building a mixed-lingual neural TTS system with only monolingual data
Apr 12, 2019

You Only Look & Listen Once: Towards Fast and Accurate Visual Grounding
Mar 17, 2019