Aiwei Liu

Mark Your LLM: Detecting the Misuse of Open-Source Large Language Models via Watermarking

Mar 06, 2025

Semi-Supervised In-Context Learning: A Baseline Study

Mar 04, 2025

TabGen-ICL: Residual-Aware In-Context Example Selection for Tabular Data Generation

Feb 23, 2025

Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?

Feb 17, 2025

Cold-Start Recommendation towards the Era of Large Language Models (LLMs): A Comprehensive Survey and Roadmap

Jan 03, 2025

Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios

Nov 05, 2024

Less is More: Extreme Gradient Boost Rank-1 Adaption for Efficient Finetuning of LLMs

Oct 25, 2024

Recent Advances of Multimodal Continual Learning: A Comprehensive Survey

Oct 07, 2024

Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality

Oct 07, 2024

TIS-DPO: Token-level Importance Sampling for Direct Preference Optimization With Estimated Weights

Oct 06, 2024