
Lu Yu

Language Guided Concept Bottleneck Models for Interpretable Continual Learning

Mar 30, 2025

MASS: Mathematical Data Selection via Skill Graphs for Pretraining Large Language Models

Mar 19, 2025

Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs

Mar 07, 2025

Advancing Wasserstein Convergence Analysis of Score-Based Models: Insights from Discretization and Second-Order Acceleration

Feb 07, 2025

Model Partition and Resource Allocation for Split Learning in Vehicular Edge Networks

Nov 11, 2024

Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models

Oct 29, 2024

Towards Multi-dimensional Explanation Alignment for Medical Classification

Oct 28, 2024

Faithful Interpretation for Graph Neural Networks

Oct 09, 2024

Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning

Aug 02, 2024

Semi-supervised Concept Bottleneck Models

Jun 27, 2024