Qun Liu

LongSafetyBench: Long-Context LLMs Struggle with Safety Issues

Nov 11, 2024

ToolFlow: Boosting LLM Tool-Calling Through Natural and Coherent Dialogue Synthesis

Oct 24, 2024

Roadmap towards Superhuman Speech Understanding using Large Language Models

Oct 17, 2024

Subtle Errors Matter: Preference Learning via Error-injected Self-editing

Oct 09, 2024

Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning

Sep 27, 2024

EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions

Sep 26, 2024

DetectiveQA: Evaluating Long-Context Reasoning on Detective Novels

Sep 04, 2024

ToolACE: Winning the Points of LLM Function Calling

Sep 02, 2024

End-to-End Video Question Answering with Frame Scoring Mechanisms and Adaptive Sampling

Jul 23, 2024

Farewell to Length Extrapolation, a Training-Free Infinite Context with Finite Attention Scope

Jul 21, 2024