Peter Mattson

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024

Croissant: A Metadata Format for ML-Ready Datasets

Mar 28, 2024

DMLR: Data-centric Machine Learning Research -- Past, Present and Future

Nov 21, 2023

Benchmarking Neural Network Training Algorithms

Jun 12, 2023

Understanding metric-related pitfalls in image analysis validation

Feb 09, 2023

DataPerf: Benchmarks for Data-Centric AI Development

Jul 20, 2022

Metrics reloaded: Pitfalls and recommendations for image analysis validation

Jun 03, 2022

Dynatask: A Framework for Creating Dynamic AI Benchmark Tasks

Apr 05, 2022

MLPerf HPC: A Holistic Benchmark Suite for Scientific Machine Learning on HPC Systems

Oct 26, 2021

MedPerf: Open Benchmarking Platform for Medical Artificial Intelligence using Federated Evaluation

Oct 08, 2021