Marco Valentino

Abstract Activation Spaces for Content-Invariant Reasoning in Large Language Models

Feb 02, 2026

Monotonic Reference-Free Refinement for Autoformalization

Jan 30, 2026

Decompose-and-Formalise: Recursively Verifiable Natural Language Inference

Jan 27, 2026

Inferring Latent Intentions: Attributional Natural Language Inference in LLM Agents

Jan 13, 2026

Logic-Parametric Neuro-Symbolic NLI: Controlling Logical Formalisms for Verifiable LLM Reasoning

Jan 09, 2026

Learning to Disentangle Latent Reasoning Rules with Language VAEs: A Systematic Study

Jun 24, 2025

Beyond Gold Standards: Epistemic Ensemble of LLM Judges for Formal Mathematical Reasoning

Jun 12, 2025

Faithful and Robust LLM-Driven Theorem Proving for NLI Explanations

May 30, 2025

Enhancing Logical Reasoning in Language Models via Symbolically-Guided Monte Carlo Process Supervision

May 26, 2025

Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering

May 18, 2025