Jaehoon Lee

Partial-Multivariate Model for Forecasting

Aug 19, 2024

Training Language Models on the Knowledge Graph: Insights on Hallucinations and Their Detectability

Aug 14, 2024

Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters

Aug 06, 2024

Scaling Exponents Across Parameterizations and Optimizers

Jul 08, 2024

Training LLMs over Neurally Compressed Text

Apr 04, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

Dec 22, 2023

Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?"

Nov 15, 2023

Small-scale proxies for large-scale Transformer training instabilities

Sep 25, 2023

Replacing softmax with ReLU in Vision Transformers

Sep 15, 2023