Chandrashekar Lakshminarayanan

Transformers with Sparse Attention for Granger Causality

Nov 20, 2024

Half-Space Feature Learning in Neural Networks

Apr 05, 2024

Approximate Linear Programming and Decentralized Policy Improvement in Cooperative Multi-agent Markov Decision Processes

Nov 20, 2023

Explicitising The Implicit Interpretability of Deep Neural Networks Via Duality

Mar 01, 2022

Disentangling deep neural networks with rectified linear units using duality

Oct 06, 2021

Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning

Jun 11, 2020

Deep Gated Networks: A framework to understand training and generalisation in deep learning

Mar 02, 2020

Linear Stochastic Approximation: Constant Step-Size and Iterate Averaging

Sep 12, 2017