Chaofan Tao

Autoregressive Models in Vision: A Survey

Nov 08, 2024

UNComp: Uncertainty-Aware Long-Context Compressor for Efficient Large Language Model Inference

Oct 04, 2024

NAVERO: Unlocking Fine-Grained Semantics for Video-Language Compositionality

Aug 18, 2024

Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies

Jul 18, 2024

D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models

Jun 18, 2024

Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models

Apr 03, 2024

Electrocardiogram Instruction Tuning for Report Generation

Mar 13, 2024

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis

Feb 25, 2024

A Spectral Perspective towards Understanding and Improving Adversarial Robustness

Jun 25, 2023

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers

May 27, 2023