Xiaohan Chen

Chasing Better Deep Image Priors between Over- and Under-parameterization

Oct 31, 2024

Expressive Power of Graph Neural Networks for Quadratic Programs

Jun 09, 2024

Learning to optimize: A tutorial for continuous and mixed-integer optimization

May 24, 2024

Rethinking the Capacity of Graph Neural Networks for Branching Strategy

Feb 11, 2024

DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee

Oct 20, 2023

Towards Constituting Mathematical Structures for Learning to Optimize

May 29, 2023

More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity

Jul 07, 2022

The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training

Feb 05, 2022

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better

Dec 18, 2021

Hyperparameter Tuning is All You Need for LISTA

Oct 29, 2021