Jiajun Li

University of Electronic Science and Technology of China, Shanghai Jiao Tong University

LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding

Dec 22, 2025

SparseRM: A Lightweight Preference Modeling with Sparse Autoencoder

Nov 11, 2025

Video-LevelGauge: Investigating Contextual Positional Bias in Large Video Language Models

Aug 28, 2025

Constraint Matters: Multi-Modal Representation for Reducing Mixed-Integer Linear Programming

Aug 26, 2025

Cut2Next: Generating Next Shot via In-Context Tuning

Aug 12, 2025

SkipVAR: Accelerating Visual Autoregressive Modeling via Adaptive Frequency-Aware Skipping

Jun 11, 2025

Context-Aware Probabilistic Modeling with LLM for Multimodal Time Series Forecasting

May 16, 2025

LazyMAR: Accelerating Masked Autoregressive Models via Feature Caching

Mar 16, 2025

Fast and Interpretable Mixed-Integer Linear Program Solving by Learning Model Reduction

Dec 31, 2024

Human-in-the-Loop Generation of Adversarial Texts: A Case Study on Tibetan Script

Dec 17, 2024