
Michael W. Mahoney

UC Berkeley/LBNL/ICSI

Neural equilibria for long-term prediction of nonlinear conservation laws

Jan 12, 2025

Using Pre-trained LLMs for Multivariate Time Series Forecasting

Jan 10, 2025

A Statistical Framework for Ranking LLM-Based Chatbots

Dec 24, 2024

LossLens: Diagnostics for Machine Learning through Loss Landscape Visual Analytics

Dec 17, 2024

Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization

Dec 06, 2024

LLMForecaster: Improving Seasonal Event Forecasts with Unstructured Textual Data

Dec 03, 2024

Hard Constraint Guided Flow Matching for Gradient-Free Generation of PDE Solutions

Dec 02, 2024

Visualizing Loss Functions as Topological Landscape Profiles

Nov 19, 2024

Evaluating Loss Landscapes from a Topology Perspective

Nov 14, 2024

Squeezed Attention: Accelerating Long Context Length LLM Inference

Nov 14, 2024