Tan Nguyen

Unified Local and Global Attention Interaction Modeling for Vision Transformers

Dec 25, 2024

Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Feature Model

Jan 04, 2024

Unveiling Comparative Sentiments in Vietnamese Product Reviews: A Sequential Classification Framework

Jan 02, 2024

Touch, press and stroke: a soft capacitive sensor skin

Jul 06, 2023

Posterior Collapse in Linear Conditional and Hierarchical Variational Autoencoders

Jun 08, 2023

Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data

Jan 01, 2023

Revisiting Over-smoothing and Over-squashing using Ollivier's Ricci Curvature

Nov 28, 2022

Hierarchical Sliced Wasserstein Distance

Sep 30, 2022

Improving Generative Flow Networks with Path Regularization

Sep 29, 2022

Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization

Aug 01, 2022