Perouz Taslakian

Learning to Defer for Causal Discovery with Imperfect Experts

Feb 18, 2025

ReTreever: Tree-based Coarse-to-Fine Representations for Retrieval

Feb 11, 2025

AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding

Feb 03, 2025

BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks

Dec 05, 2024

InsightBench: Evaluating Business Analytics Agents Through Multi-Step Insight Generation

Jul 08, 2024

RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content

Jun 17, 2024

VCR: Visual Caption Restoration

Jun 10, 2024

XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference

Apr 23, 2024

A Sparsity Principle for Partially Observable Causal Representation Learning

Mar 13, 2024

Capture the Flag: Uncovering Data Insights with Large Language Models

Dec 21, 2023