
Virginia Smith

NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA

Nov 06, 2024

Decoding Dark Matter: Specialized Sparse Autoencoders for Interpreting Rare Concepts in Foundation Models

Nov 01, 2024

Position: LLM Unlearning Benchmarks are Weak Measures of Progress

Oct 03, 2024

Revisiting Cascaded Ensembles for Efficient Inference

Jul 02, 2024

Grass: Compute Efficient Low-Memory LLM Training with Structured Sparse Gradients

Jun 25, 2024

RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold

Jun 20, 2024

Jogging the Memory of Unlearned Models Through Targeted Relearning Attacks

Jun 19, 2024

Federated LoRA with Sparse Communication

Jun 07, 2024

Privacy Amplification for the Gaussian Mechanism via Bounded Support

Mar 07, 2024

Many-Objective Multi-Solution Transport

Mar 06, 2024