Sasikanth Avancha

Generative Active Learning for the Search of Small-molecule Protein Binders

May 02, 2024

DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86 via Minibatch Sampling

Nov 11, 2022

DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks

Apr 16, 2021

Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads

Apr 14, 2021

Deep Graph Library Optimizations for Intel(R) x86 Architecture

Jul 13, 2020

Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights

Jul 02, 2020

PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives

Jun 02, 2020

PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives

Feb 06, 2020

SEERL: Sample Efficient Ensemble Reinforcement Learning

Jan 15, 2020

High Performance Scalable FPGA Accelerator for Deep Neural Networks

Aug 29, 2019