Semih Yavuz

Investigating Factuality in Long-Form Text Generation: The Roles of Self-Known and Self-Unknown

Nov 24, 2024

CodeXEmbed: A Generalist Embedding Model Family for Multilingual and Multi-task Code Retrieval

Nov 19, 2024

JudgeRank: Leveraging Large Language Models for Reasoning-Intensive Reranking

Oct 31, 2024

P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains

Oct 11, 2024

VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks

Oct 07, 2024

Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification

Oct 05, 2024

Traffic Light or Light Traffic? Investigating Phrasal Semantics in Large Language Models

Oct 03, 2024

Parameter-Efficient Detoxification with Contrastive Decoding

Jan 13, 2024

Unlocking Anticipatory Text Generation: A Constrained Approach for Faithful Decoding with Large Language Models

Dec 11, 2023

DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text

Oct 31, 2023