Xin Jiang

Harbin Institute of Technology, Shenzhen

ToolFlow: Boosting LLM Tool-Calling Through Natural and Coherent Dialogue Synthesis

Oct 24, 2024

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

Oct 22, 2024

Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning

Oct 18, 2024

FlatQuant: Flatness Matters for LLM Quantization

Oct 12, 2024

Why pre-training is beneficial for downstream classification tasks?

Oct 11, 2024

Subtle Errors Matter: Preference Learning via Error-injected Self-editing

Oct 09, 2024

RevisEval: Improving LLM-as-a-Judge via Response-Adapted References

Oct 07, 2024

DAPE V2: Process Attention Score as Feature Map for Length Extrapolation

Oct 07, 2024

@Bench: Benchmarking Vision-Language Models for Human-centered Assistive Technology

Sep 21, 2024

CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration

Sep 17, 2024