Mats L. Richter

CarbonSense: A Multimodal Dataset and Baseline for Carbon Flux Modelling

Jun 07, 2024

Simple and Scalable Strategies to Continually Pre-train Large Language Models

Mar 26, 2024

Continual Pre-Training of Large Language Models: How to (re)warm your model?

Aug 08, 2023

Receptive Field Refinement for Convolutional Neural Networks Reliably Improves Predictive Performance

Nov 26, 2022

Should You Go Deeper? Optimizing Convolutional Neural Network Architectures without Training by Receptive Field Analysis

Jun 23, 2021

Exploring the Properties and Evolution of Neural Network Eigenspaces during Training

Jun 18, 2021

Size Matters

Feb 09, 2021

Feature Space Saturation during Training

Jun 18, 2020

Spectral Analysis of Latent Representations

Jul 19, 2019