Bhavya Kailkhura

Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis
Oct 12, 2024

Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion
Aug 10, 2024

ELFS: Enhancing Label-Free Coreset Selection via Clustering-based Pseudo-Labeling
Jun 06, 2024

Low-rank finetuning for LLMs: A fairness perspective
May 28, 2024

Transformers Can Do Arithmetic with the Right Embeddings
May 27, 2024

SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
Apr 28, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons
Apr 18, 2024

End-to-End Mesh Optimization of a Hybrid Deep Learning Black-Box PDE Solver
Apr 17, 2024

Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Apr 14, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression
Mar 18, 2024