
Yuekang Li

Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning (Feb 19, 2025)

Indiana Jones: There Are Always Some Useful Ancient Relics (Jan 27, 2025)

Image-Based Geolocation Using Large Vision-Language Models (Aug 18, 2024)

Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models (Jul 16, 2024)

Source Code Summarization in the Era of Large Language Models (Jul 09, 2024)

Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation (May 20, 2024)

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection (Apr 19, 2024)

LLM Jailbreak Attack versus Defense Techniques -- A Comprehensive Study (Feb 21, 2024)

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training (Jan 01, 2024)

ASTER: Automatic Speech Recognition System Accessibility Testing for Stutterers (Aug 30, 2023)