Damian Podareanu

Automatic Labels are as Effective as Manual Labels in Biomedical Images Classification with Deep Learning

Jun 20, 2024

Neural Symplectic Integrator with Hamiltonian Inductive Bias for the Gravitational $N$-body Problem

Nov 28, 2021

Predicting atmospheric optical properties for radiative transfer computations using neural networks

May 06, 2020

Densifying Assumed-sparse Tensors: Improving Memory Efficiency and MPI Collective Performance during Tensor Accumulation for Parallelized Training of Neural Machine Translation Models

May 10, 2019

Scale out for large minibatch SGD: Residual network training on ImageNet-1K with improved accuracy and reduced time to train

Nov 15, 2017