Yuekang Li

Image-Based Geolocation Using Large Vision-Language Models

Aug 18, 2024

Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models

Jul 16, 2024

Source Code Summarization in the Era of Large Language Models

Jul 09, 2024

Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation

May 20, 2024

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

Apr 19, 2024

LLM Jailbreak Attack versus Defense Techniques -- A Comprehensive Study

Feb 21, 2024

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Jan 01, 2024

ASTER: Automatic Speech Recognition System Accessibility Testing for Stutterers

Aug 30, 2023

Prompt Injection attack against LLM-integrated Applications

Jun 08, 2023

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

May 23, 2023