
Tianjin Huang

Principal Eigenvalue Regularization for Improved Worst-Class Certified Robustness of Smoothed Classifiers

Mar 21, 2025

Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam

Feb 24, 2025

SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training

Jan 12, 2025

Are Sparse Neural Networks Better Hard Sample Learners?

Sep 13, 2024

(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork

Jul 24, 2024

Composable Interventions for Language Models

Jul 09, 2024

The Counterattack of CNNs in Self-Supervised Learning: Larger Kernel Size might be All You Need

Dec 12, 2023

Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective

Dec 03, 2023

Heterophily-Based Graph Neural Network for Imbalanced Classification

Oct 12, 2023

Enhancing Adversarial Training via Reweighting Optimization Trajectory

Jul 07, 2023