Bo Hui

From Bits to Chips: An LLM-based Hardware-Aware Quantization Agent for Streamlined Deployment of LLMs

Jan 07, 2026

Demystify Protein Generation with Hierarchical Conditional Diffusion Models

Jul 24, 2025

Sculpting Memory: Multi-Concept Forgetting in Diffusion Models via Dynamic Mask and Concept-Aware Optimization

Apr 12, 2025

Towards Distributed Backdoor Attacks with Network Detection in Decentralized Federated Learning

Jan 25, 2025

Efficient Large-Scale Traffic Forecasting with Transformers: A Spatial Data Management Perspective

Dec 13, 2024

Weak-to-Strong Generalization beyond Accuracy: a Pilot Study in Safety, Toxicity, and Legal Reasoning

Oct 16, 2024

PLeak: Prompt Leaking Attacks against Large Language Model Applications

May 14, 2024

A Survey of Lottery Ticket Hypothesis

Mar 12, 2024

Successfully Applying Lottery Ticket Hypothesis to Diffusion Model

Oct 28, 2023

SneakyPrompt: Jailbreaking Text-to-image Generative Models

May 20, 2023