Bharat Kaul

Generative Active Learning for the Search of Small-molecule Protein Binders

May 02, 2024
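
The listing carries only the title and date. As a rough illustration of what a generative active-learning loop for binder search generally looks like (a generic sketch, not the paper's method; generate_candidates and oracle_score are hypothetical placeholders for a molecular generator and a docking / binding-affinity oracle):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical placeholders: a real pipeline would use a generative model over
    # molecules and an expensive docking / affinity oracle instead of these toys.
    def generate_candidates(n, dim=16, rng=None):
        return rng.normal(size=(n, dim))               # stand-in "molecule" features

    def oracle_score(x):
        return -np.sum(x ** 2, axis=1)                 # stand-in binding score (higher is better)

    rng = np.random.default_rng(0)
    X = generate_candidates(32, rng=rng)               # small seed set, labeled by the oracle
    y = oracle_score(X)
    surrogate = RandomForestRegressor(n_estimators=50, random_state=0)

    for round_id in range(5):
        surrogate.fit(X, y)                            # retrain the cheap surrogate
        pool = generate_candidates(512, rng=rng)       # generative step: propose fresh candidates
        ranked = np.argsort(surrogate.predict(pool))
        picked = pool[ranked[-16:]]                    # acquire the top-ranked candidates
        X = np.vstack([X, picked])
        y = np.concatenate([y, oracle_score(picked)])  # spend the expensive oracle only here
        print(f"round {round_id}: best score so far = {y.max():.3f}")

The loop alternates between retraining a cheap surrogate on all labeled points and spending the expensive oracle only on the candidates the surrogate ranks highest.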

AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

Apr 14, 2023
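
For context on the general idea of sparse training that the title refers to, a minimal magnitude-pruning sketch (generic weight masking, not the AutoSparse algorithm itself):

    import numpy as np

    def magnitude_mask(w, sparsity=0.75):
        """Generic magnitude pruning: keep the largest-|w| entries, zero the rest."""
        k = int(sparsity * w.size)                     # number of weights to drop
        if k == 0:
            return np.ones_like(w)
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        return (np.abs(w) > thresh).astype(w.dtype)

    w = np.random.randn(4, 8).astype(np.float32)
    mask = magnitude_mask(w, sparsity=0.75)
    w_sparse = w * mask                                # masked weights used in the forward pass
    print(f"kept {int(mask.sum())} of {w.size} weights")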

Efficient and Generic 1D Dilated Convolution Layer for Deep Learning

Apr 16, 2021
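
As a reminder of what a 1D dilated convolution computes (the standard definition, not this paper's optimized layer), a minimal NumPy sketch:

    import numpy as np

    def dilated_conv1d(x, w, dilation=1):
        """Valid 1D convolution (cross-correlation) with a dilated kernel:
        out[i] = sum_j x[i + j * dilation] * w[j]."""
        taps = len(w)
        receptive_field = (taps - 1) * dilation + 1
        out_len = len(x) - receptive_field + 1
        return np.array([
            sum(x[i + j * dilation] * w[j] for j in range(taps))
            for i in range(out_len)
        ])

    x = np.arange(8, dtype=float)                      # [0, 1, ..., 7]
    w = np.array([1.0, 1.0, 1.0])                      # 3-tap kernel
    print(dilated_conv1d(x, w, dilation=2))            # taps at i, i+2, i+4 -> [6. 9. 12. 15.]

A dilation of d inserts d-1 gaps between kernel taps, so a 3-tap kernel with dilation 2 covers a receptive field of 5 samples.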

MADRaS : Multi Agent Driving Simulator

Oct 02, 2020

PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives

Jun 02, 2020

PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives

Feb 06, 2020
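
The title combines two standard ideas: outer-loop transformations (such as tiling) wrapped around a small, highly tuned microkernel. A toy NumPy sketch of that loop structure for matrix multiply (illustrative only, not the code PolyScientist generates):

    import numpy as np

    def microkernel(acc, a_block, b_block):
        """Innermost compute kernel: accumulate one small block product in place."""
        acc += a_block @ b_block

    def tiled_matmul(A, B, tile=32):
        """Outer loops are tiled so every microkernel call touches cache-sized blocks."""
        M, K = A.shape
        _, N = B.shape
        C = np.zeros((M, N), dtype=A.dtype)
        for i in range(0, M, tile):
            for j in range(0, N, tile):
                for k in range(0, K, tile):
                    microkernel(C[i:i + tile, j:j + tile],
                                A[i:i + tile, k:k + tile],
                                B[k:k + tile, j:j + tile])
        return C

    A = np.random.rand(96, 96)
    B = np.random.rand(96, 96)
    assert np.allclose(tiled_matmul(A, B), A @ B)

Tiling keeps each microkernel call working on sub-blocks small enough to stay in cache; this is the kind of loop structure the title's automatic transformations target.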

SEERL: Sample Efficient Ensemble Reinforcement Learning

Jan 15, 2020

K-TanH: Hardware Efficient Activations For Deep Learning

Oct 21, 2019

High Performance Scalable FPGA Accelerator for Deep Neural Networks

Aug 29, 2019

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
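
bfloat16 itself is a standard format: 1 sign bit, 8 exponent bits, 7 mantissa bits, i.e. the top half of an IEEE-754 float32. A minimal sketch of the truncation round trip (format illustration only, not the paper's training recipe):

    import struct

    def float32_to_bfloat16_bits(x):
        """Truncate a float32 to its top 16 bits: 1 sign, 8 exponent, 7 mantissa bits."""
        bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
        return bits32 >> 16                            # round-toward-zero truncation

    def bfloat16_bits_to_float32(bits16):
        """Widen bfloat16 back to float32 by zero-padding the low mantissa bits."""
        return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

    x = 3.14159
    print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))   # 3.140625

Because the exponent width matches float32, bfloat16 keeps the same dynamic range and gives up only mantissa precision, which is what makes it attractive for deep learning training.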