
Valentina Zantedeschi

LHC

PairBench: A Systematic Framework for Selecting Reliable Judge VLMs

Feb 21, 2025

Learning to Defer for Causal Discovery with Imperfect Experts

Feb 18, 2025

ReTreever: Tree-based Coarse-to-Fine Representations for Retrieval

Feb 11, 2025

Performance Control in Early Exiting to Deploy Large Models at the Same Cost of Smaller Ones

Dec 26, 2024

Context is Key: A Benchmark for Forecasting with Essential Textual Information

Oct 24, 2024

Sample Compression Unleashed: New Generalization Bounds for Real Valued Losses

Sep 26, 2024

InsightBench: Evaluating Business Analytics Agents Through Multi-Step Insight Generation

Jul 08, 2024

RepLiQA: A Question-Answering Dataset for Benchmarking LLMs on Unseen Reference Content

Jun 17, 2024

XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference

Apr 23, 2024

Leveraging PAC-Bayes Theory and Gibbs Distributions for Generalization Bounds with Complexity Measures

Feb 19, 2024