
Klaudia Bałazy

LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters

May 27, 2024

Exploiting Transformer Activation Sparsity with Dynamic Inference

Oct 06, 2023

r-softmax: Generalized Softmax with Controllable Sparsity Rate

Apr 21, 2023

Step by Step Loss Goes Very Far: Multi-Step Quantization for Adversarial Text Attacks

Feb 10, 2023

Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models

Feb 08, 2023

Direction is what you need: Improving Word Embedding Compression in Large Language Models

Jun 15, 2021

Zero Time Waste: Recycling Predictions in Early Exit Neural Networks

Jun 09, 2021

Finding the Optimal Network Depth in Classification Tasks

Apr 17, 2020