Makoto Takamoto

Active Learning for Neural PDE Solvers

Aug 02, 2024

Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing

May 23, 2024

Uncertainty-biased molecular dynamics for learning uniformly accurate interatomic potentials

Dec 03, 2023

Learning Neural PDE Solvers with Parameter-Guided Channel Attention

Apr 27, 2023

PDEBench: An Extensive Benchmark for Scientific Machine Learning

Oct 17, 2022

Integrating diverse extraction pathways using iterative predictions for Multilingual Open Information Extraction

Oct 15, 2021

An Empirical Study of the Effects of Sample-Mixing Methods for Efficient Training of Generative Adversarial Networks

Apr 08, 2021

An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation

Feb 28, 2020