Chong You

HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM Inference

Feb 14, 2024

Generalized Neural Collapse for a Large Number of Classes

Oct 15, 2023

It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models

Oct 13, 2023

Functional Interpolation for Relative Positions Improves Long Context Transformers

Oct 06, 2023

Revisiting Sparse Convolutional Model for Visual Recognition

Oct 24, 2022

Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022

Are All Losses Created Equal: A Neural Collapse Perspective

Oct 08, 2022

Teacher Guided Training: An Efficient Framework for Knowledge Transfer

Aug 14, 2022

On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

Mar 12, 2022

Robust Training under Label Noise by Over-parameterization

Feb 28, 2022