
Edouardo Honig

Inference-Time Rethinking with Latent Thought Vectors for Math Reasoning

Feb 06, 2026

Scalable Language Models with Posterior Inference of Latent Thought Vectors

Feb 03, 2025

Better Prompt Compression Without Multi-Layer Perceptrons

Jan 12, 2025

Long-range gene expression prediction with token alignment of large language model

Oct 02, 2024

Dual-Space Optimization: Improved Molecule Sequence Design by Latent Prompt Transformer

Feb 27, 2024

Differentiable VQ-VAE's for Robust White Matter Streamline Encodings

Nov 18, 2023