Xin Jiang

Harbin Institute of Technology, Shenzhen

SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator

Dec 16, 2024

CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information

Dec 13, 2024

Scaling Law for Language Models Training Considering Batch Size

Dec 02, 2024

Efficient Multi-modal Large Language Models via Visual Token Grouping

Nov 26, 2024

ToolFlow: Boosting LLM Tool-Calling Through Natural and Coherent Dialogue Synthesis

Oct 24, 2024

Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration

Oct 22, 2024

Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning

Oct 18, 2024

FlatQuant: Flatness Matters for LLM Quantization

Oct 12, 2024

Why pre-training is beneficial for downstream classification tasks?

Oct 11, 2024

Subtle Errors Matter: Preference Learning via Error-injected Self-editing

Oct 09, 2024