
Junyuan Hong

GuideLLM: Exploring LLM-Guided Conversation with Applications in Autobiography Interviewing

Feb 10, 2025

Extracting and Understanding the Superficial Knowledge in Alignment

Feb 07, 2025

DeepOSets: Non-Autoregressive In-Context Learning of Supervised Learning Operators

Oct 11, 2024

LLM-PBE: Assessing Data Privacy in Large Language Models

Aug 23, 2024

GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning

Jun 13, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

Mar 18, 2024

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

Mar 14, 2024

On the Generalization Ability of Unsupervised Pretraining

Mar 11, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Feb 26, 2024

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

Nov 27, 2023