Chunting Zhou

CAT: Content-Adaptive Image Tokenization

Jan 06, 2025

LMFusion: Adapting Pretrained Language Models for Multimodal Generation

Dec 26, 2024

LlamaFusion: Adapting Pretrained Language Models for Multimodal Generation

Dec 19, 2024

Byte Latent Transformer: Patches Scale Better Than Tokens

Dec 13, 2024

ALMA: Alignment with Minimal Annotation

Dec 05, 2024

Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models

Nov 07, 2024

Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

Aug 20, 2024

Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length

Apr 12, 2024

Instruction-tuned Language Models are Better Knowledge Learners

Feb 20, 2024

MART: Improving LLM Safety with Multi-round Automatic Red-Teaming

Nov 13, 2023