Ziquan Liu

RALAD: Bridging the Real-to-Sim Domain Gap in Autonomous Driving with Retrieval-Augmented Learning

Jan 21, 2025

Get Confused Cautiously: Textual Sequence Memorization Erasure with Selective Entropy Maximization

Aug 09, 2024

The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

May 14, 2024

Borrowing Treasures from Neighbors: In-Context Learning for Multimodal Learning with Missing Modalities and Data Scarcity

Mar 26, 2024

PROSAC: Provably Safe Certification for Machine Learning Models under Adversarial Attacks

Feb 04, 2024

DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks

Apr 07, 2023

TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization

Mar 20, 2023

Boosting Adversarial Robustness From The Perspective of Effective Margin Regularization

Oct 11, 2022

An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation

May 25, 2022

Improved Fine-tuning by Leveraging Pre-training Data: Theory and Practice

Nov 24, 2021