Yutaka Matsuo

Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search (Jan 31, 2025)

Large Language Models as Theory of Mind Aware Generative Agents with Counterfactual Reflection (Jan 26, 2025)

Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words (Jan 09, 2025)

Improving Dynamic Object Interactions in Text-to-Video Generation with AI Feedback (Dec 03, 2024)

ADOPT: Modified Adam Can Converge with Any $β_2$ with the Optimal Rate (Nov 05, 2024)

Object-Centric Temporal Consistency via Conditional Autoregressive Inductive Biases (Oct 21, 2024)

Enhancing Unimodal Latent Representations in Multimodal VAEs through Iterative Amortized Inference (Oct 15, 2024)

Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance? (Oct 09, 2024)

Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning (Oct 01, 2024)

Geometric-Averaged Preference Optimization for Soft Preference Labels (Sep 10, 2024)