Leonidas Gee

Efficient Online Inference of Vision Transformers by Training-Free Tokenization

Nov 23, 2024

Code-Optimise: Self-Generated Preference Data for Correctness and Efficiency

Jun 18, 2024

Are Compressed Language Models Less Subgroup Robust?

Mar 26, 2024

Multi-Word Tokenization for Sequence Compression

Feb 15, 2024

Fast Vocabulary Transfer for Language Model Compression

Feb 15, 2024