Qifan Wang

Meta AI

IDInit: A Universal and Stable Initialization Method for Neural Network Training

Mar 06, 2025

LLM as GNN: Graph Vocabulary Learning for Text-Attributed Graph Foundation Models

Mar 05, 2025

Towards An Efficient LLM Training Paradigm for CTR Prediction

Mar 02, 2025

Advantage-Guided Distillation for Preference Alignment in Small Language Models

Feb 25, 2025

More for Keys, Less for Values: Adaptive KV Cache Quantization

Feb 20, 2025

PsyPlay: Personality-Infused Role-Playing Conversational Agents

Feb 06, 2025

Large Language Models for Recommendation with Deliberative User Preference Alignment

Feb 04, 2025

Error-driven Data-efficient Large Multimodal Model Tuning

Dec 20, 2024

CompCap: Improving Multimodal Large Language Models with Composite Captions

Dec 06, 2024

Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics

Nov 22, 2024