Fuzheng Zhang

Kuaishou Natural Language Processing Center and Audio Center

Breaking the Stage Barrier: A Novel Single-Stage Approach to Long Context Extension for Large Language Models

Dec 10, 2024

Video-Text Dataset Construction from Multi-AI Feedback: Promoting Weak-to-Strong Preference Learning for Video Large Language Models

Nov 25, 2024

DMQR-RAG: Diverse Multi-Query Rewriting for RAG

Nov 20, 2024

Video DataFlywheel: Resolving the Impossible Data Trinity in Video-Language Understanding

Sep 29, 2024

TSO: Self-Training with Scaled Preference Optimization

Aug 31, 2024

Towards Comprehensive Preference Data Collection for Reward Modeling

Jun 24, 2024

Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector

Jun 17, 2024

Research on Foundation Model for Spatial Data Intelligence: China's 2024 White Paper on Strategic Development of Spatial Data Intelligence

May 30, 2024

Decoding at the Speed of Thought: Harnessing Parallel Decoding of Lexical Units for LLMs

May 24, 2024

Inductive-Deductive Strategy Reuse for Multi-Turn Instructional Dialogues

Apr 17, 2024