Furong Huang

PEnGUiN: Partially Equivariant Graph NeUral Networks for Sample Efficient MARL

Mar 19, 2025

PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models

Mar 10, 2025

Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis

Feb 27, 2025

MAFE: Multi-Agent Fair Environments for Decision-Making Systems

Feb 25, 2025

MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs

Feb 04, 2025

HashEvict: A Pre-Attention KV Cache Eviction Strategy using Locality-Sensitive Hashing

Dec 24, 2024

TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies

Dec 13, 2024

Political-LLM: Large Language Models in Political Science

Dec 09, 2024

LIAR: Leveraging Alignment (Best-of-N) to Jailbreak LLMs in Seconds

Dec 06, 2024

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Nov 27, 2024