
Jinda Lu

Thinking with Frames: Generative Video Distortion Evaluation via Frame Reward Model

Jan 07, 2026

Punctuation-aware Hybrid Trainable Sparse Attention for Large Language Models

Jan 06, 2026

Accelerating Controllable Generation via Hybrid-grained Cache

Nov 14, 2025

Causal-HalBench: Uncovering LVLMs Object Hallucinations Through Causal Intervention

Nov 13, 2025

AdaViP: Aligning Multi-modal LLMs via Adaptive Vision-enhanced Preference Optimization

Apr 22, 2025

Aligning Multimodal LLM with Human Preference: A Survey

Mar 18, 2025

Accelerating Diffusion Transformer via Gradient-Optimized Cache

Mar 07, 2025

DAMO: Data- and Model-aware Alignment of Multi-modal LLMs

Feb 04, 2025

Accelerating Diffusion Transformer via Error-Optimized Cache

Jan 31, 2025

Unified Parameter-Efficient Unlearning for LLMs

Nov 30, 2024