Liangzhi Li

Imposter.AI: Adversarial Attacks with Hidden Intentions towards Aligned Large Language Models

Jul 22, 2024

Explainable Image Recognition via Enhanced Slot-attention Based Classifier

Jul 08, 2024

BlockPruner: Fine-grained Pruning for Large Language Models

Jun 15, 2024

Can multiple-choice questions really be useful in detecting the abilities of LLMs?

Mar 28, 2024

BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering

Dec 13, 2023

Towards Robust and Accurate Visual Prompting

Nov 18, 2023

Instruct Me More! Random Prompting for Visual In-Context Learning

Nov 07, 2023

Concatenated Masked Autoencoders as Spatial-Temporal Learner

Nov 02, 2023

MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension

Oct 27, 2023

TCRA-LLM: Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction

Oct 25, 2023