Chen Liang

KDSelector: A Knowledge-Enhanced and Data-Efficient Model Selector Learning Framework for Time Series Anomaly Detection

Mar 16, 2025

LLMs Can Generate a Better Answer by Aggregating Their Own Responses

Mar 06, 2025

Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs

Mar 03, 2025

COSMOS: A Hybrid Adaptive Optimizer for Memory-Efficient Training of LLMs

Feb 26, 2025

Event Argument Extraction with Enriched Prompts

Jan 12, 2025

ChatDiT: A Training-Free Baseline for Task-Agnostic Free-Form Chatting with Diffusion Transformers

Dec 17, 2024

IDEA-Bench: How Far are Generative Models from Professional Designing?

Dec 16, 2024

In-Context LoRA for Diffusion Transformers

Oct 31, 2024

Group Diffusion Transformers are Unsupervised Multitask Learners

Oct 19, 2024

ERCache: An Efficient and Reliable Caching Framework for Large-Scale User Representations in Meta's Ads System

Oct 09, 2024