Jiaxin Chen

MBL-CPDP: A Multi-objective Bilevel Method for Cross-Project Defect Prediction via Automated Machine Learning
Nov 10, 2024

Centerness-based Instance-aware Knowledge Distillation with Task-wise Mutual Lifting for Object Detection on Drone Imagery
Nov 05, 2024

Adaptive Learning of Consistency and Inconsistency Information for Fake News Detection
Aug 15, 2024

AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer
Jul 17, 2024

iVPT: Improving Task-relevant Information Sharing in Visual Prompt Tuning by Cross-layer Dynamic Connection
Apr 08, 2024

Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model
Nov 23, 2023

Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning
Nov 07, 2023

The NeurIPS 2022 Neural MMO Challenge: A Massively Multiagent Competition with Specialization and Trade
Nov 07, 2023

Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO
Aug 30, 2023

DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration
Aug 23, 2023