
Tengyu Ma

Formal Theorem Proving by Rewarding LLMs to Decompose Proofs Hierarchically

Nov 04, 2024

Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective

Oct 07, 2024

SAM 2: Segment Anything in Images and Videos

Aug 01, 2024

Linguistic Calibration of Language Models

Mar 30, 2024

Chain of Thought Empowers Transformers to Solve Inherently Serial Problems

Feb 20, 2024

Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation

Sep 07, 2023

Sharpness Minimization Algorithms Do Not Only Minimize Sharpness To Achieve Better Generalization

Jul 23, 2023

One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention

Jul 07, 2023

Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time

Jun 28, 2023

The Inductive Bias of Flatness Regularization for Deep Matrix Factorization

Jun 22, 2023