
Fangcheng Fu

ByteScale: Efficient Scaling of LLM Training with a 2048K Context Length on More Than 12,000 GPUs

Feb 28, 2025

Training-free and Adaptive Sparse Attention for Efficient Long Video Generation

Feb 28, 2025

Demystifying Workload Imbalances in Large Transformer Model Training over Variable-length Sequences

Dec 10, 2024

Data-Centric and Heterogeneity-Adaptive Sequence Parallelism for Efficient LLM Training

Dec 02, 2024

Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models

Oct 08, 2024

Retrofitting Temporal Graph Neural Networks with Transformer

Sep 10, 2024

Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management

Sep 05, 2024

Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs

Jul 16, 2024

Retrieval-Augmented Generation for AI-Generated Content: A Survey

Feb 29, 2024

Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning

Oct 24, 2023