Junyuan Hong

DeepOSets: Non-Autoregressive In-Context Learning of Supervised Learning Operators

Oct 11, 2024

LLM-PBE: Assessing Data Privacy in Large Language Models

Aug 23, 2024

GuardAgent: Safeguard LLM Agents by a Guard Agent via Knowledge-Enabled Reasoning

Jun 13, 2024

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

Mar 18, 2024

Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk

Mar 14, 2024

On the Generalization Ability of Unsupervised Pretraining

Mar 11, 2024

Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark

Feb 26, 2024

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

Nov 27, 2023

Understanding Deep Gradient Leakage via Inversion Influence Functions

Sep 22, 2023

Safe and Robust Watermark Injection with a Single OoD Image

Sep 04, 2023