Xingjian Li

Trustworthy Federated Learning: Privacy, Security, and Beyond

Nov 03, 2024

Multimodal Generalized Category Discovery

Sep 18, 2024

Vox-UDA: Voxel-wise Unsupervised Domain Adaptation for Cryo-Electron Subtomogram Segmentation with Denoised Pseudo Labeling

Jun 25, 2024

Photorealistic Robotic Simulation using Unreal Engine 5 for Agricultural Applications

May 28, 2024

Robust Cross-Modal Knowledge Distillation for Unconstrained Videos

Apr 27, 2023

Large-scale Knowledge Distillation with Elastic Heterogeneous Computing Resources

Jul 14, 2022

Fine-tuning Pre-trained Language Models with Noise Stability Regularization

Jun 12, 2022

Deep Active Learning with Noise Stability

May 26, 2022

Inadequately Pre-trained Models are Better Feature Extractors

Mar 09, 2022

Boosting Active Learning via Improving Test Performance

Dec 10, 2021