Chenglin Yang

Democratizing Text-to-Image Masked Generative Models with Compact Text-Aware One-Dimensional Tokens
Jan 13, 2025

1.58-bit FLUX
Dec 24, 2024

IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers
Nov 27, 2023

MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models
Oct 04, 2022

Lite Vision Transformer with Enhanced Self-Attention
Dec 20, 2021

Locally Enhanced Self-Attention: Rethinking Self-Attention as Local and Context Terms
Jul 12, 2021

Meticulous Object Segmentation
Dec 13, 2020

Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks
Dec 01, 2020

PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning
Apr 12, 2020

Snapshot Distillation: Teacher-Student Optimization in One Generation
Dec 01, 2018