Fuzhao Xue

MixEval-X: Any-to-Any Evaluations from Real-World Data Mixtures

Oct 17, 2024

LongVILA: Scaling Long-Context Visual Language Models for Long Videos

Aug 21, 2024

Wolf: Captioning Everything with a World Summarization Framework

Jul 26, 2024

MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures

Jun 03, 2024

OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models

Jan 29, 2024

Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline

May 22, 2023

To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis

May 22, 2023

Hierarchical Dialogue Understanding with Special Tokens and Turn-level Attention

Apr 29, 2023

Adaptive Computation with Elastic Input Sequence

Jan 30, 2023

Deeper vs Wider: A Revisit of Transformer Configuration

May 24, 2022