Michael W. Mahoney

UC Berkeley/LBNL/ICSI

Enhancing Foundation Models for Time Series Forecasting via Wavelet-based Tokenization

Dec 06, 2024

LLMForecaster: Improving Seasonal Event Forecasts with Unstructured Textual Data

Dec 03, 2024

Hard Constraint Guided Flow Matching for Gradient-Free Generation of PDE Solutions

Dec 02, 2024

Visualizing Loss Functions as Topological Landscape Profiles

Nov 19, 2024

Evaluating Loss Landscapes from a Topology Perspective

Nov 14, 2024

Squeezed Attention: Accelerating Long Context Length LLM Inference

Nov 14, 2024

$\spadesuit$ SPADE $\spadesuit$ Split Peak Attention DEcomposition

Nov 06, 2024

How many classifiers do we need?

Nov 01, 2024

AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models

Oct 14, 2024

Elucidating the Design Choice of Probability Paths in Flow Matching for Forecasting

Oct 04, 2024