Linfeng Ye

Difficulty-guided Sampling: Bridging the Target Gap between Dataset Distillation and Downstream Tasks
Jan 15, 2026

Normalized Conditional Mutual Information Surrogate Loss for Deep Neural Classifiers
Jan 05, 2026

Widget2Code: From Visual Widgets to UI Code via Multimodal LLMs
Dec 22, 2025

JPEG Compliant Compression for Both Human and Machine, A Report
Mar 13, 2025

Distributed Quasi-Newton Method for Fair and Fast Federated Learning
Jan 18, 2025

How to Train the Teacher Model for Effective Knowledge Distillation
Jul 25, 2024

Adversarial Training via Adaptive Knowledge Amalgamation of an Ensemble of Teachers
May 22, 2024

Robustness Against Adversarial Attacks via Learning Confined Adversarial Polytopes
Jan 20, 2024

Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information
Jan 16, 2024

Conditional Mutual Information Constrained Deep Learning for Classification
Sep 17, 2023