
Yingbo Zhou

JudgeRank: Leveraging Large Language Models for Reasoning-Intensive Reranking

Oct 31, 2024

P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains

Oct 11, 2024

VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks

Oct 07, 2024

Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification

Oct 05, 2024

Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models

Oct 03, 2024

xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations

Aug 22, 2024

Diversity Empowers Intelligence: Integrating Expertise of Software Engineering Agents

Aug 13, 2024

INDICT: Code Generation with Internal Dialogues of Critiques for Both Security and Helpfulness

Jun 23, 2024

RLHF Workflow: From Reward Modeling to Online RLHF

May 13, 2024

When Foresight Pruning Meets Zeroth-Order Optimization: Efficient Federated Learning for Low-Memory Devices

May 08, 2024