Haoran Que

MIO: A Foundation Model on Multimodal Tokens
Sep 26, 2024

HelloBench: Evaluating Long Text Generation Capabilities of Large Language Models
Sep 24, 2024

DDK: Distilling Domain Knowledge for Efficient Large Language Models
Jul 23, 2024

D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models
Jun 03, 2024

E^2-LLM: Efficient and Extreme Length Extension of Large Language Models
Jan 18, 2024

RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models
Oct 01, 2023