Seanie Lee

HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models

Oct 02, 2024

Optimized Speculative Sampling for GPU Hardware Accelerators

Jun 16, 2024

Learning diverse attacks on large language models for robust red-teaming and safety tuning

May 28, 2024

Effective and Efficient Conversation Retrieval for Dialogue State Tracking with Implicit Text Summaries

Feb 21, 2024

Self-Supervised Dataset Distillation for Transfer Learning

Oct 16, 2023

Drug Discovery with Dynamic Goal-aware Fragments

Oct 02, 2023

Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks

May 28, 2023

DiffusionNAG: Task-guided Neural Architecture Generation with Diffusion Models

May 26, 2023

On Divergence Measures for Bayesian Pseudocoresets

Oct 12, 2022

Self-Distillation for Further Pre-training of Transformers

Sep 30, 2022