Cody Blakeney

Does your data spark joy? Performance gains from domain upsampling at the end of training
Jun 05, 2024

Perplexed by Perplexity: Perplexity-Based Data Pruning With Small Reference Models
May 30, 2024

LoRA Learns Less and Forgets Less
May 15, 2024

Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation
Nov 01, 2022

Measure Twice, Cut Once: Quantifying Bias and Fairness in Deep Neural Networks
Oct 08, 2021

Simon Says: Evaluating and Mitigating Bias in Pruned Neural Networks with Knowledge Distillation
Jun 15, 2021

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Dec 05, 2020