Andi Han

On the Comparison between Multi-modal and Single-modal Contrastive Learning

Nov 05, 2024

Provably Transformers Harness Multi-Concept Word Semantics for Efficient In-Context Learning

Nov 04, 2024

Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning

Oct 08, 2024

When Graph Neural Networks Meet Dynamic Mode Decomposition

Oct 08, 2024

On the Optimization and Generalization of Two-layer Transformers with Sign Gradient Descent

Oct 07, 2024

Secondary Structure-Guided Novel Protein Sequence Generation with Latent Graph Diffusion

Jul 10, 2024

SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining

Jun 04, 2024

Riemannian coordinate descent algorithms on matrix manifolds

Jun 04, 2024

Unleash Graph Neural Networks from Heavy Tuning

May 21, 2024

A Framework for Bilevel Optimization on Riemannian Manifolds

Feb 06, 2024