Harish G. Ramaswamy

Impact of Label Noise on Learning Complex Features
Nov 07, 2024

Graph Classification with GNNs: Optimisation, Representation and Inductive Bias
Aug 17, 2024

On the Learning Dynamics of Attention Networks
Jul 26, 2023

On the Interpretability of Attention Networks
Dec 30, 2022

Consistent Multiclass Algorithms for Complex Metrics and Constraints
Oct 19, 2022

Predicting the success of Gradient Descent for a particular Dataset-Architecture-Initialization (DAI)
Nov 25, 2021

Using noise resilience for ranking generalization of deep neural networks
Dec 16, 2020

Inductive Bias of Gradient Descent for Exponentially Weight Normalized Smooth Homogeneous Neural Nets
Oct 24, 2020

Convex Calibrated Surrogates for the Multi-Label F-Measure
Sep 16, 2020

On Controllable Sparse Alternatives to Softmax
Oct 30, 2018